{
"paper_id": "N13-1017",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:40:13.381377Z"
},
"title": "Drug Extraction from the Web: Summarizing Drug Experiences with Multi-Dimensional Topic Models",
"authors": [
{
"first": "Michael",
"middle": [
"J"
],
"last": "Paul",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"region": "MD"
}
},
"email": "mpaul@cs.jhu.edu"
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"region": "MD"
}
},
"email": "mdredze@cs.jhu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Multi-dimensional latent text models, such as factorial LDA (f-LDA), capture multiple factors of corpora, creating structured output for researchers to better understand the contents of a corpus. We consider such models for clinical research of new recreational drugs and trends, an important application for mining current information for healthcare workers. We use a \"three-dimensional\" f-LDA variant to jointly model combinations of drug (marijuana, salvia, etc.), aspect (effects, chemistry, etc.) and route of administration (smoking, oral, etc.) Since a purely unsupervised topic model is unlikely to discover these specific factors of interest, we develop a novel method of incorporating prior knowledge by leveraging user generated tags as priors in our model. We demonstrate that this model can be used as an exploratory tool for learning about these drugs from the Web by applying it to the task of extractive summarization. In addition to providing useful output for this important public health task, our prior-enriched model provides a framework for the application of f-LDA to other tasks.",
"pdf_parse": {
"paper_id": "N13-1017",
"_pdf_hash": "",
"abstract": [
{
"text": "Multi-dimensional latent text models, such as factorial LDA (f-LDA), capture multiple factors of corpora, creating structured output for researchers to better understand the contents of a corpus. We consider such models for clinical research of new recreational drugs and trends, an important application for mining current information for healthcare workers. We use a \"three-dimensional\" f-LDA variant to jointly model combinations of drug (marijuana, salvia, etc.), aspect (effects, chemistry, etc.) and route of administration (smoking, oral, etc.) Since a purely unsupervised topic model is unlikely to discover these specific factors of interest, we develop a novel method of incorporating prior knowledge by leveraging user generated tags as priors in our model. We demonstrate that this model can be used as an exploratory tool for learning about these drugs from the Web by applying it to the task of extractive summarization. In addition to providing useful output for this important public health task, our prior-enriched model provides a framework for the application of f-LDA to other tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Topic models aid exploration of the main thematic elements of large text corpora by revealing latent structure and producing a high level semantic view (Blei et al., 2003) . Topic models have been used for understanding the contents of a corpus and identifying interesting aspects of a collection for more indepth analysis (Talley et al., 2011; Mimno, 2011) . While standard topic models assume a flat semantic structure, there are potentially many dimensions of a corpus that contribute to word choice, such as sentiment, perspective and ideology (Mei et al., 2007; Paul and Girju, 2010; Eisenstein et al., 2011) . Rather than studying these factors in isolation, multi-dimensional topic models can consider multiple factors jointly. Paul and Dredze (2012b) introduced factorial LDA (f-LDA), a general framework for multidimensional text models that capture an arbitrary number of factors (explained in \u00a73). While a standard topic model learns distributions over \"topics\" in documents, f-LDA learns distributions over combinations of multiple factors (e.g. topic, perspective) called tuples (e.g. (HEALTHCARE,LIBERAL)). While f-LDA can model factors without supervision, it has not been used in situations where the user has prior information about the factors.",
"cite_spans": [
{
"start": 152,
"end": 171,
"text": "(Blei et al., 2003)",
"ref_id": null
},
{
"start": 323,
"end": 344,
"text": "(Talley et al., 2011;",
"ref_id": "BIBREF26"
},
{
"start": 345,
"end": 357,
"text": "Mimno, 2011)",
"ref_id": "BIBREF16"
},
{
"start": 548,
"end": 566,
"text": "(Mei et al., 2007;",
"ref_id": "BIBREF15"
},
{
"start": 567,
"end": 588,
"text": "Paul and Girju, 2010;",
"ref_id": "BIBREF21"
},
{
"start": 589,
"end": 613,
"text": "Eisenstein et al., 2011)",
"ref_id": "BIBREF5"
},
{
"start": 735,
"end": 758,
"text": "Paul and Dredze (2012b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we consider a setting where the user has prior knowledge about the end application: mining recreational drug trends from user forums, an important clinical research problem ( \u00a72). We show how to incorporate available information from these forums into f-LDA as a novel hierarchical prior over the model parameters, guiding the model toward the desired output ( \u00a73.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We then demonstrate the model's utility in exploring a corpus in a targeted manner by using it to automatically extract interesting sentences from the text, a simple form of extractive multi-document summarization (Goldstein et al., 2000) . In the same way that topic models can be used for aspectspecific summarization (Titov and McDonald, 2008; Haghighi and Vanderwende, 2009) , we use f-LDA to extract snippets corresponding to fine-grained information patterns. Our results demonstrate that our multi-dimensional modeling approach targets more informative text than a simpler model ( \u00a74).",
"cite_spans": [
{
"start": 214,
"end": 238,
"text": "(Goldstein et al., 2000)",
"ref_id": "BIBREF9"
},
{
"start": 320,
"end": 346,
"text": "(Titov and McDonald, 2008;",
"ref_id": "BIBREF27"
},
{
"start": 347,
"end": 378,
"text": "Haghighi and Vanderwende, 2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recreational drug use imposes a significant burden on the health infrastructure of the United States and other countries. Accurate information on drugs, usage profiles and side effects are necessary for supporting a range of healthcare activities, such as addiction treatment programs, toxin diagnosis, prevention and awareness campaigns, and public policy. These activities rely on up-to-date information on drug trends, but it is increasingly difficult to keep up with current drug information, as distribution and information-sharing of novel drugs is easier than ever via the web (Wax, 2002) . For the third consecutive year, a record number of new drugs (49) were detected in Europe in 2011 (EMCDDA, 2012) . About two-thirds of these new drugs were synthetic cannabinoids (used as legal marijuana substitutes), which led to 11,000 hospitalizations in the U.S. in 2010 (SAMHSA, 2012) . Treatment is complicated by the fact that novel substances like these may have unknown side effects and other properties.",
"cite_spans": [
{
"start": 584,
"end": 595,
"text": "(Wax, 2002)",
"ref_id": "BIBREF28"
},
{
"start": 696,
"end": 710,
"text": "(EMCDDA, 2012)",
"ref_id": null
},
{
"start": 873,
"end": 887,
"text": "(SAMHSA, 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing Drug Trends on the Web",
"sec_num": "2"
},
{
"text": "Accurate information on drug trends can be obtained by speaking directly with users, e.g. focus groups and interviews (Reyes et al., 2012; Hout and Bingham, 2012) , but such studies are slow and costly, and can fail to identify the emergence of new drug classes, such as mephedrone (Dunn et al., 2011) . More recently, researchers have begun to recognize clinical value in information obtained from the web (Corazza et al., 2011) . By (manually) analyzing YouTube videos, Drugs-Forum (discussed below), and other social media websites and online communities, researchers have uncovered details about the use, effects, and popularity of a variety of new and emerging drugs (Morgan et al., 2010; Gallagher et al., 2012) , and comprehensive drug reviews now include nonstandard sources such as web forums in addition to standard sources (Hill and Thomas, 2011).",
"cite_spans": [
{
"start": 118,
"end": 138,
"text": "(Reyes et al., 2012;",
"ref_id": "BIBREF24"
},
{
"start": 139,
"end": 162,
"text": "Hout and Bingham, 2012)",
"ref_id": "BIBREF12"
},
{
"start": 282,
"end": 301,
"text": "(Dunn et al., 2011)",
"ref_id": "BIBREF4"
},
{
"start": 407,
"end": 429,
"text": "(Corazza et al., 2011)",
"ref_id": "BIBREF2"
},
{
"start": 672,
"end": 693,
"text": "(Morgan et al., 2010;",
"ref_id": "BIBREF17"
},
{
"start": 694,
"end": 717,
"text": "Gallagher et al., 2012)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing Drug Trends on the Web",
"sec_num": "2"
},
{
"text": "Organizing and understanding forums requires significant effort. We propose automated tools to aid in the exploration and analysis of these data. While topic models are a natural fit for corpus exploration (Eisenstein et al., 2012; Chaney and Blei, 2012) , and have been used for similar public health applications (Paul and Dredze, 2011) , online forums can be organized in many ways beyond topic. Guided by do- main experts, we seek to model forums as a combination of drug type, route of intake (oral, injection, etc.) and aspect (cultural settings, drug chemistry, etc.) A multi-dimensional topic model can jointly capture these factors, providing a more informative understanding of the data, and can be used to produce fine-grained information such as the effects of taking a particular drug orally. Our hope is that models such as f-LDA can lead to exploratory tools that aide researchers in learning about new drugs.",
"cite_spans": [
{
"start": 206,
"end": 231,
"text": "(Eisenstein et al., 2012;",
"ref_id": "BIBREF6"
},
{
"start": 232,
"end": 254,
"text": "Chaney and Blei, 2012)",
"ref_id": "BIBREF1"
},
{
"start": 315,
"end": 338,
"text": "(Paul and Dredze, 2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing Drug Trends on the Web",
"sec_num": "2"
},
{
"text": "Our data set is taken from drugs-forum.com, a site active for more than 10 years with over 100,000 members and more than 1 million monthly readers. The site is an information hub where people can freely discuss recreational drugs with psychoactive effects, ranging from coffee to heroin, hosting information and discussions on specific drugs, as well as drug-related politics, law, news, recovery and addiction. With current information on a variety of drugs and an extensive archive, Drugs-Forum provides an ideal information source for public health researchers . Discussion threads are organized into numerous forums, including drugs, the law, addiction, etc. Since we are modeling drug use, we focus on the drug forums. Each thread is assigned to a specific forum or subforum (drug) and each thread has a user specified tag, which can indicate categories like \"Effects\" as well as routes of administration like \"Oral.\" We organized the tags and subforum categorizations into factors and components, as shown in Table 1 . We make use of these tags in \u00a73.1.",
"cite_spans": [],
"ref_spans": [
{
"start": 1015,
"end": 1022,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Corpus: Drugs-Forum",
"sec_num": "2.1"
},
{
"text": "Clinical researchers are interested in specific information about drug usage, including drug type, route of administration, and other aspects of drug use (e.g. dosage, side effects). Rather than considering these factors independently, we would like to model these in a way that can capture interesting interactions between all three factors, because the effects and other aspects of drugs can vary by route of administration. Oral consumption of drugs often produces longer lasting but milder effects than injection or smoking, for example. Many mephedrone users report nose bleeds and nasal pain as a health effect of snorting the drug: this could be modeled as the triple (MEPHEDRONE,SNORTING,HEALTH), a particular combination of all three factors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "To this end, we utilize the multi-dimensional text model factorial LDA (f-LDA) (Paul and Dredze, 2012b) , which jointly models multiple semantic factors or dimensions. In this section we summarize f-LDA, then we describe an extension which incorporates user-generated metadata into the model ( \u00a73.1).",
"cite_spans": [
{
"start": 79,
"end": 103,
"text": "(Paul and Dredze, 2012b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "In a standard topic model such as LDA (Blei et al., 2003) , each word token is associated with a latent \"topic\" variable. f-LDA is conceptually similar to LDA except that rather than a single topic variable, each token is associated with a K-dimensional vector of latent variables. In a three-dimensional f-LDA model, each token has three latent variablesdrug, route, and aspect in this case.",
"cite_spans": [
{
"start": 38,
"end": 57,
"text": "(Blei et al., 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "In f-LDA, each document has a distribution over all possible K-tuples (rather than topics), and each K-tuple is associated with its own word distribution. Under this model, words are generated by first sampling a tuple from the document's tuple distribution, then sampling a word from that tuple's word distribution. In our threedimensional model, we will consider triples such as (CANNABIS,SMOKING,EFFECTS).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "Formally, each document has a distribution \u03b8 (d) over triples, and each token is associated with a latent vector z of size K=3. (We'll describe the model in terms of the three factors we are modeling in this paper, but f-LDA generalizes to K dimensions.) The Cartesian product of the three factors forms a set of triples and the vector z references three discrete components to form a triple t = (t 1 , t 2 , t 3 ). The car- dinality of each dimension (denoted Z k ) is the number of drugs, routes, and aspects, as shown in Table 1 . Each triple has a corresponding word distribution \u03c6 t . The graphical model is shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 524,
"end": 532,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 623,
"end": 631,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "\u03b1 \u03b1 d \u03b8 \u03c6 \u03c9 \u03b7 b \u03b3 z w D N K K K k Z k k Z k k Z k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "One would expect that triples that have components in common should have similar word distributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "(CANNABIS,SMOKING,EFFECTS) is expected to have some commonalities with (CANNABIS,ORAL,EFFECTS). f-LDA models this intuition by sharing parameters across priors for triples which share components: all triples with CANNABIS as the drug include cannabis-specific parameters in the prior, and all triples with SMOK-ING as the route have smoking-specific parameters. Formally, \u03c6 t (the word distribution for tuple t) has a Dirichlet(\u03c9 ( t) ) prior, where for each word w in the vector,\u03c9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "( t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "w is a log-linear function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "\u03c9 ( t ) w exp \u03c9 (B) +\u03c9 (0) w +\u03c9 (drug) t 1 w +\u03c9 (route) t 2 w +\u03c9 (aspect) t 3 w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "(1) where \u03c9 (B) is a corpus-wide precision scalar (the bias), \u03c9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "(0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "w is a corpus-specific bias for word w, and \u03c9 (k) t k w is a bias parameter for word w for component t k of the kth factor. That is, each drug, route, and aspect has a weight vector over the vocabulary, and the prior for a particular triple is influenced by the weight vectors of each of the three factors. The \u03c9 parameters are all independent and normally distributed around 0 (effectively L2 regularization).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "The prior over each document's distribution over triples has a similar log-linear prior, where weights for each factor are combined to influence the distribution. Under our model, \u03b8 (d) is drawn from Dirichlet(B \u2022\u03b1 (d) ), where \u2022 denotes an element-wise product between B (described below) and\u03b1 (d) , with",
"cite_spans": [
{
"start": 182,
"end": 185,
"text": "(d)",
"ref_id": null
},
{
"start": 215,
"end": 218,
"text": "(d)",
"ref_id": null
},
{
"start": 295,
"end": 298,
"text": "(d)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "\u03b1 (d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "t for each triple t defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "\u03b1 (d) t exp \u03b1 (B) +\u03b1 (D,drug) t 1 +\u03b1 (d,drug) t 1 +\u03b1 (D,route) t 2 +\u03b1 (d,route) t 2 +\u03b1 (D,aspect) t 3 +\u03b1 (d,aspect) t 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "(2) Similar to the \u03c9 formulation, \u03b1 (B) is a global bias parameter, while the \u03b1 D vectors are corpuswide weight vectors and \u03b1 d are document-specific weight vectors over the components of each factor. Structuring the prior in this way models the intuition that if a triple with a particular component has high probability, other triples containing that component are likely to also have high probability. For example, if a message discusses triples of the form (CANNABIS,*,EFFECTS), it is more likely to discuss (CANNABIS,*,HEALTH) than (CO-CAINE,*,HEALTH), because the message is about cannabis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "Finally, B is a 3-dimensional array that encodes a sparsity pattern over the space of possible triples. This is used to accommodate triples that can be generated by the model but are not supported by the data. For example, not all routes of administration may be applicable to certain drugs, or certain aspects of a drug may happen to not be discussed in the forum. Each element b t of the array is a real-valued scalar in (0, 1) which is multiplied with\u03b1 (d) t to adjust the prior for that triple. If the b value is near 0 for a particular triple, then it will have very low prior probability. The b values have Beta(\u03b3 0 ,\u03b3 1 ) priors (\u03b3 < 1) which encourage them to be near 0 or 1, so that they function as binary variables.",
"cite_spans": [
{
"start": 456,
"end": 459,
"text": "(d)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "Posterior inference and parameter estimation consist of a Monte Carlo EM algorithm that alternates between an iteration of collapsed Gibbs sampler on the z variables (E-step), and an iteration of gradient ascent on the \u03b1 and \u03c9 hyperparameters (M-step). See Paul and Dredze (2012b) for more details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Dimensional Text Models",
"sec_num": "3"
},
{
"text": "In an unsupervised setting, there is no reason f-LDA would actually infer parameters corresponding to the three factors we have been describing. However, the forums include metadata that can help guide the model: the messages are organized into forums corresponding to drug type (factor 1), and some threads are tagged with labels corresponding to routes of administration and other aspects (factors 2 and 3). Tags for aspects are manually grouped into components: e.g. USAGE (tags: Dose, Storing, Weight). Table 1 shows the factors and components in our model. One could simply use these tags as labels in a simple supervised model-this will be our experimental baseline ( \u00a74.1). However, this approach has limitations in that most documents are missing labels (less than a third of our corpus contains one of the labels in Table 1 ) and many messages discuss several components, not just the one implied by the tag. For example, a message tagged \"Side effects\" may talk about both side effects and dosage. While a supervised classifier may attribute all words to a single tag, f-LDA learns per-token assignments.",
"cite_spans": [],
"ref_spans": [
{
"start": 507,
"end": 514,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 825,
"end": 832,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Tags and Word Priors",
"sec_num": "3.1"
},
{
"text": "We will instead use the tags to inform the priors over our f-LDA word distribution parameters. We do this with a two-stage approach. First, we use the tags to train parameters of a related but simplified model. We then use the learned parameters as priors over the corresponding f-LDA parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tags and Word Priors",
"sec_num": "3.1"
},
{
"text": "In particular, we will place priors on the \u03c9 vectors, the Dirichlet hyperparameters which influence the word distributions. Suppose that we are given a vector \u03b7 (0) which is believed to contain desirable values for \u03c9 (0) , the weight vector over words in the corpus, and similarly we are given vectors \u03b7 i . One option is to fix \u03c9 as \u03b7, forcing the component weights to match the provided weights. However, in our case \u03b7 will only be an approximation of the optimal component parameters since it is estimated from incomplete data (only some messages have tags) and the \u03b7 vectors are learned using an approximate model (see below). Instead, these weight vectors will merely guide learning as prior knowledge over model parameters \u03c9. While f-LDA assumes each \u03c9 is drawn from a 0-mean Gaussian, we alter the means of the appropriate \u03c9 parameters to use \u03b7.",
"cite_spans": [
{
"start": 217,
"end": 220,
"text": "(0)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tags and Word Priors",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c9 (0) w \u223c N (\u03b7 (0) w , \u03c3 2 ); \u03c9 (k) iw \u223c N (\u03b7 (k) iw , \u03c3 2 )",
"eq_num": "(3)"
}
],
"section": "Tags and Word Priors",
"sec_num": "3.1"
},
{
"text": "Recall that \u03c9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tags and Word Priors",
"sec_num": "3.1"
},
{
"text": "w are corpus-wide bias parameters for each word and \u03c9 (k)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tags and Word Priors",
"sec_num": "3.1"
},
{
"text": "iw are component-specific parameters for each word. This yields a hierarchical prior in which \u03b7 parameterizes the prior over \u03c9, while \u03c9 parameterizes the prior over \u03c6 (the word distributions). The resulting \u03c9 parameters can vary from the provided priors to adapt to the data. An example of learned parameters is shown in Figure 2 , illustrating the hierarchical process behind this model.",
"cite_spans": [],
"ref_spans": [
{
"start": 321,
"end": 329,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Tags and Word Priors",
"sec_num": "3.1"
},
{
"text": "Learning the Priors In various applications, priors can come from many different sources, such as labeled data (Jagarlamudi et al., 2012) . We learn the prior means \u03b7 from tagged messages. However, these parameters imply a latent division of responsibility for observed words: some are present because of the tag while others are general words in the corpus. As a result, they must be estimated.",
"cite_spans": [
{
"start": 111,
"end": 137,
"text": "(Jagarlamudi et al., 2012)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tags and Word Priors",
"sec_num": "3.1"
},
{
"text": "We learn these parameters from the tagged messages using SAGE, which model words in a document as combinations of background and topic word distributions. Eisenstein et al. (2011) present SAGE models for Naive Bayes (one class per document), admixture models (one class per token), and admixture models where tokens come from multiple factors. We combine the first and third models, such that a document has multiple factors which are given as labels across the entire document-the drug type and the tag, which could correspond to a component of either the route or aspect factors. We posit the following model of text generation per document:",
"cite_spans": [
{
"start": 155,
"end": 179,
"text": "Eisenstein et al. (2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tags and Word Priors",
"sec_num": "3.1"
},
{
"text": "P (word w|drug = i, factorf = j) (4) = exp(\u03b7 (0) w + \u03b7 (drug) iw + \u03b7 (f ) jw ) w exp(\u03b7 (0) w + \u03b7 (drug) iw + \u03b7 (f ) jw )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tags and Word Priors",
"sec_num": "3.1"
},
{
"text": "This log-linear model has a similar form as Eq. 1, but with two factors instead of three, and it is a distribution rather than a Dirichlet vector. As in SAGE, we fix \u03b7 (0) to be the observed vector of corpus log-frequencies over the vocabulary, which acts as an \"overall\" weight vector, while parameter estimation yields \u03b7 (f ) i , the logit parameters for the ith component of factor f . 1 These parameters are then used as the mean of the Gaussian priors over \u03c9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tags and Word Priors",
"sec_num": "3.1"
},
{
"text": "Standard optimization methods can be used to estimate these parameters. The partial derivative of the likelihood with respect to the parameter \u03b7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tags and Word Priors",
"sec_num": "3.1"
},
{
"text": "(drug) iw is: \u2202 \u2202\u03b7 (drug) iw = f j\u2208f c(i, j, w) \u2212 \u03c0(i, j, w)c(i, j, * ) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tags and Word Priors",
"sec_num": "3.1"
},
{
"text": "where c(i, j, w) is the number of times word w appears in documents labeled with i (drug) and j (tag), and \u03c0(i, j, w) denotes the probability given by (4). The partial derivative of each \u03b7 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tags and Word Priors",
"sec_num": "3.1"
},
{
"text": "Our corpus consists of messages from drugs-forum.com ( \u00a72.1). The site categorizes threads into many forums and subforums, including some on specific drugs, which are categorized hierarchically. We treated higher-level categories with pharmacologically similar drugs as a single drug type (e.g. OPIOIDS, AMPHETAMINES); for others we took the finest-granularity subforum as the drug type. We selected 22 popular drugs and from these forums we crawled 410K messages. We selected a subset of tags to form components for the route and aspect factors. (Some tags were too general or infrequent to be useful.) A list of the tags and drugs used appears in Table 1 . We also included a GENERAL component in the latter two factors to model word usage which does not pertain to a particular route or aspect; the prior parameters \u03b7 for these components were simply set to 0. We wish to demonstrate that our modified f-LDA model can be used to discover useful information in the text. One way to demonstrate this is by using the model to extract relevant snippets of text from the forums, which will form the basis of our evaluation experiments. Our goal is not to build a complete summarization system, but rather to use the model to direct researchers to interesting messages.",
"cite_spans": [],
"ref_spans": [
{
"start": 649,
"end": 656,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments with Topic Modeling for Extractive Summarization",
"sec_num": "4"
},
{
"text": "While we model all 22 drugs, our summarization experiments will focus on five drugs which have been studied only relatively recently: mephedrone and MDPV (\u03b2-ketones), Bromo-Dragonfly (synthetic phenethylamines), Spice/K2 (synthetic cannabinoids), and salvia divinorum. We will consider these drugs in particular because these are the five drugs for which technical reports were created by the EU Psychonaut Project (Schifano et al., 2006) , an online database of novel and emerging drugs, whose information is collected by reading drug websites, including Drugs-Forum. Extensive technical reports were written about these five popular drugs, and we can use these reports to produce reference summaries for our experiments ( \u00a74.2).",
"cite_spans": [
{
"start": 415,
"end": 438,
"text": "(Schifano et al., 2006)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with Topic Modeling for Extractive Summarization",
"sec_num": "4"
},
{
"text": "Of these five drugs, only salvia has its own subforum; the others belong to subforums representing the broader categories shown in parentheses. We simply model the drug type as a proxy for the specific drug, as most of the drugs in each category have similar effects and properties. The first two drugs are both in the same subforum, so for the purpose of our model we treat mephedrone and MDPV as the single drug type, \u03b2-ketones. These two drugs are grouped together during summarization ( \u00a74.2), but the corresponding reference summaries incorporate excepts from the technical reports on both drugs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with Topic Modeling for Extractive Summarization",
"sec_num": "4"
},
{
"text": "Of the four drug types being considered for summarization, our data set contains 12K messages with one of the tags in Table 1 and 30K without. Of those without tags, we set aside 5K as development data. There are also over 300K messages (140K tagged) from the remaining 18 drug types: some of these messages are utilized when training f-LDA. Even though we only consider four drug types in our experiments, our intuition is that it can be beneficial to model other drugs as well, because this will help to learn parameters for the various aspects and routes of administration. Our model of the effects of mephedrone can be informed by also modeling the effects of other stimulants such as cocaine.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Model Setup",
"sec_num": "4.1"
},
{
"text": "Each message was treated as a document, and we only used documents with at least five word tokens after stop words, low-frequency words, and punctuation were removed. The preprocessed data sets contained an average of 45 tokens per document. Below, we describe two f-LDA variants as well as the baseline used in our experiments. Baseline Our baseline model is a unigram language model trained on the subset of messages which are tagged. We treat the drug subforum as a label for the drug factor, and each message's tag is used as a label for either the route or aspect factor. For example, the word distribution for the pair (SALVIA,EFFECTS) is estimated as the empirical distribution from messages posted in the salvia forum and tagged with \"Effects.\" We use add-\u03bb smoothing where \u03bb is chosen to optimize likelihood on the held-out development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Setup",
"sec_num": "4.1"
},
{
"text": "This is a two-dimensional model, since we explicitly model pairs such as (MEPHEDRONE,SNORTING) or (SALVIA,EFFECTS). However, we also created word distributions for triples such as (SALVIA,ORAL,EFFECTS) by taking a mixture of the corresponding pairs: in this example, we estimate the unigram distribution from salvia documents tagged with either \"Oral\" or \"Effects.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Setup",
"sec_num": "4.1"
},
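The baseline's add-\u03bb smoothed unigram estimates, and the pooling of two tagged subsets used to approximate triples, can be sketched as follows. This is a minimal illustration, not the authors' code; function and variable names are our own, and \u03bb is tuned on held-out data as described above.

```python
from collections import Counter

def unigram_addlambda(docs, vocab, lam):
    """Add-lambda smoothed unigram distribution over a fixed vocabulary.

    docs: iterable of token lists (e.g., messages in one subforum that
    carry a given tag); lam: smoothing constant tuned on held-out data.
    """
    counts = Counter(tok for doc in docs for tok in doc)
    total = sum(counts[w] for w in vocab)
    denom = total + lam * len(vocab)
    return {w: (counts[w] + lam) / denom for w in vocab}

def triple_as_pair_mixture(docs_by_tag, vocab, lam, tag_a, tag_b):
    """Estimate a (drug, route, aspect) distribution the way the
    baseline does: pool messages tagged with either label (e.g.,
    "Oral" or "Effects") and re-estimate a smoothed unigram model."""
    pooled = docs_by_tag.get(tag_a, []) + docs_by_tag.get(tag_b, [])
    return unigram_addlambda(pooled, vocab, lam)
```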
{
"text": "Factorial LDA Because f-LDA does not rely on tagged data (the tags are only used to create priors), we can run inference on larger sets of data. The drawback is that despite these priors, it is still mostly unsupervised and we want to be careful to ensure the model will learn the patterns we care about. We thus add some reasonable constraints to the parameter space to guide the model further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Setup",
"sec_num": "4.1"
},
{
"text": "First, we treat the drug type as an observed variable based on the subforum the message comes from, just as with the baseline. For example, only tuples of the form (SALVIA, * , * ) can be assigned to tokens in the salvia forum. Second, we restrict the set of possible routes of administration that can be assigned to tokens in particular drug forums, since most drugs can be taken through only a subset of routes. For example, marijuana is typically smoked or eaten orally, but rarely injected. We therefore restrict each drug's allowable set of administration routes to those which are tagged (e.g. with \"Oral\" or \"Snorting\") in at least 1% of that drug's data. Similar ideas are used in Labeled LDA (Ramage et al., ",
"cite_spans": [
{
"start": 701,
"end": 716,
"text": "(Ramage et al.,",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Setup",
"sec_num": "4.1"
},
{
"text": "Mephedrone (\u03b2-ketones/Bath salts)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Text",
"sec_num": null
},
{
"text": "It is recommended by users that Mephedrone be taken on an empty stomach. Doses usually vary between 100mg-1g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Text",
"sec_num": null
},
{
"text": "\u2022 If it is SWIYs first time using Mephedrone SWIM recommends a 100mg oral dose on an empty stomach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Text",
"sec_num": null
},
{
"text": "Reported negative side effects include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Text",
"sec_num": null
},
{
"text": "\u2022 Loss of appetite.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Text",
"sec_num": null
},
{
"text": "\u2022 Dehydration and dry mouth \u2022 Tense jaw, mild muscle clenching, stiff neck, and bruxia (teeth grinding) \u2022 Anxiety and paranoia \u2022 Increase in mean body temperature (sweating/Mephedrone sweat and hot flushes) \u2022 Elevated heart rate (tachycardia) and blood pressure, and chest pains \u2022 Dermatitis like symptoms (Itch and rash)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Text",
"sec_num": null
},
{
"text": "\u2022 Neutral side effects: Lack of appetite, occasional loss of visual focus, [...] weight loss, possible diuretic. Negative side effects: Grinding teeth, \"Cotton mouth\", unable to acheive orgasm \u2022 Aside from his last session he has never experienced any negative symptoms at all, no raised heart beat, vasoconstriction , sweating, headaches, paranoia e.t.c nothing at all except sometimes cold hands the next day. \u2022 lot of people report that anxiety and paranoia are some of the side effects of taking mephedrone [...] is it also possible that alot of the chest pains people are experiencing is due to anxiety? \u2022 moisturize the affected areas of skin twice daily with E45 or a similar unperfumed dermatalogical lotion.",
"cite_spans": [
{
"start": 75,
"end": 80,
"text": "[...]",
"ref_id": null
},
{
"start": 511,
"end": 516,
"text": "[...]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Text",
"sec_num": null
},
{
"text": "Sublingual ingestion of the leaf (quid): reduces intensity of effects and can taste disgusting. When Salvia is consumed as a smokeable formulation the duration of the trip lasts 30 minutes or less, whereas if Salvia is consumed sublingually the effects last for 1 hour or more.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salvia divinorum",
"sec_num": null
},
{
"text": "\u2022 The taste of sublingual salvia is foul and it is easy to have a dud trip unless large amounts of it are used. \u2022 SWIM has heard from many other users that chewing the fresh leaves of the Salvia plant allow for a much longer and mellower trip. [...] SWIM has read that a trip this way can last anywhere from a half on hour or longer. Dried leaves and/or salvia extract are smoked (using a butane lighter) either by pipe (considered to be the most effective but is considered to be quite painful) or water bong.",
"cite_spans": [
{
"start": 244,
"end": 249,
"text": "[...]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Salvia divinorum",
"sec_num": null
},
{
"text": "\u2022 2. Use a water pipe. Its harsh and needs to be smoked hot so this should be self explanatory. 3. Use a torch style lighter [...] Salvinorin A has a VERY high boiling point (around 700 degrees F I believe) so a regular bic just wont do it Salvia is appealing to recreational users because of intense, unique, hallucinatory effects. Brief hallucinations occur rapidly after administration and are typically very vivid. Users report weird thoughts, feelings of unreality, feelings of immersion in bizarre non-Euclidean dimensions/geometries, feelings of floating.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salvia divinorum",
"sec_num": null
},
{
"text": "\u2022 He noticed very clear [closed eye visuals], which looked similar to patterns on a persian rug, or ethnic oriental design. SWIM felt as if he was moving around, that he had got up and run and fallen, and that falling had shattered the space around his body as if I'd fallen through many glass framed pictures [...] \u2022 I was aware of my body and my friends and my life below, but I was [...] standing outside of time and outside of space. Figure 3 : Example snippets generated by f-LDA along with the corresponding reference text. For space, the references and snippets shown have been shortened in some cases. \"SWIM\" and \"SWIY\" stand for \"someone who isn't me/you\" and are used to avoid self-incrimination on the web forum.",
"cite_spans": [],
"ref_spans": [
{
"start": 438,
"end": 446,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Salvia divinorum",
"sec_num": null
},
{
"text": "2009), in which tags are used to restrict the space of allowed topics in a document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salvia divinorum",
"sec_num": null
},
{
"text": "We use f-LDA as a three-dimensional model which explicitly models triples, but we also obtain distributions for pairs such as (SALVIA,EFFECTS) by marginalizing across all distributions of the form (SALVIA, * ,EFFECTS). We trained f-LDA on two different data sets, yielding the following models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salvia divinorum",
"sec_num": null
},
{
"text": "\u2022 f-LDA-1: We use the 12K messages with tags and fill the set out with 13K messages with tags uniformly sampled from the 18 other drugs, for a total of 25K messages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salvia divinorum",
"sec_num": null
},
{
"text": "\u2022 f-LDA-2: We use all 37K messages (many without tags) and fill the set out with 63K messages with tags uniformly sampled from the 18 other drugs, for a total of 100K messages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salvia divinorum",
"sec_num": null
},
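The marginalization described above, collapsing (drug, route, aspect) triples into (drug, aspect) pairs, can be sketched as a weighted average over routes. The paper does not pin down the weighting, so the routes' corpus proportions are assumed here; the function name is our own.

```python
def marginalize_routes(triple_dists, route_weights):
    """Collapse per-route word distributions for one (drug, aspect)
    pair into a single pair distribution.

    triple_dists: route -> {word: probability} for each triple
        (drug, route, aspect); route_weights: route -> weight,
        assumed to sum to 1 (e.g., each route's share of the data).
    """
    pair = {}
    for route, dist in triple_dists.items():
        w = route_weights[route]
        for word, p in dist.items():
            pair[word] = pair.get(word, 0.0) + w * p
    return pair
```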
{
"text": "All f-LDA instances are run for 5000 iterations, alternating between a sweep of Gibbs sampling and a step of gradient ascent on the hyperparameters. While we do not use the tags as strict labels during sampling, we initialize the Gibbs sampler so that each token in a document is assigned to the label given by its tag, when available. In the absence of tags (in f-LDA-2), we initialize tokens to the GENERAL components. We initialized \u03c9 to its prior mean (Eq. 3), while the variance \u03c3^2 and the initialization of the bias \u03c9^(B) are chosen to optimize likelihood on the held-out development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salvia divinorum",
"sec_num": null
},
{
"text": "We optimized the hyperparameters and sparsity array using gradient ascent after each Gibbs sweep. We use a decreasing step size of a/(t+1000), where t is the current iteration and a=10 for \u03b1 and 1 for \u03c9 and the sparsity values. To learn priors \u03b7, we ran our version of SAGE for 100 iterations of gradient ascent (fixed step size of 0.1). See Paul and Dredze (2012a) for examples of parameters (the top words associated with various triples) learned by this model on this corpus.",
"cite_spans": [
{
"start": 343,
"end": 366,
"text": "Paul and Dredze (2012a)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Salvia divinorum",
"sec_num": null
},
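The decreasing step-size schedule a/(t+1000) amounts to updates of the following form. This is a minimal sketch with hypothetical function names; the gradient computation itself is model-specific and assumed given.

```python
def step_size(a, t):
    # Decreasing schedule from the paper: a / (t + 1000), with t the
    # current iteration; a = 10 for alpha, 1 for omega and sparsity.
    return a / (t + 1000.0)

def ascent_step(params, grads, a, t):
    """One gradient-ascent update under the decreasing step size;
    params and grads are parallel lists of floats."""
    eta = step_size(a, t)
    return [p + eta * g for p, g in zip(params, grads)]
```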
{
"text": "We created twelve reference summaries by editing together excerpts from the five Psychonaut Project reports ((Psychonaut), 2009) . Each reference is matched to drug-specific pairs and triples. For example, a paragraph describing the differences in effects of salvia between smoking and oral routes was matched to distributions for (SALVIA,EFFECTS), (SALVIA,SMOKING,EFFECTS), (SALVIA,ORAL,EFFECTS). Descriptions of creating tinctures and blotters for oral consumption were matched to (SALVIA,ORAL,CHEMISTRY). We consider pairs in addition to triples because not all summaries correspond to particular routes or aspects.",
"cite_spans": [
{
"start": 108,
"end": 128,
"text": "((Psychonaut), 2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Generation",
"sec_num": "4.2"
},
{
"text": "For each tuple-specific word distribution (a pair or a triple), we create a \"summary\" by extracting a set of five text snippets which minimize KL-divergence to the target word distribution. We consider all overlapping text windows of widths {10,15,20} in the corpus as candidate snippets. Following Haghighi and Vanderwende (2009) , we greedily add snippets one by one with the lowest KL-divergence at each step until we have added five.",
"cite_spans": [
{
"start": 299,
"end": 330,
"text": "Haghighi and Vanderwende (2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Generation",
"sec_num": "4.2"
},
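The greedy extraction step above can be sketched as follows. This is a simplified reading of the procedure, not the authors' implementation: candidate windows are assumed given, the smoothing constant is hypothetical, and KL divergence is computed from the target word distribution to the empirical unigram distribution of the snippets chosen so far plus each candidate.

```python
import math
from collections import Counter

def kl(p, q, eps=1e-9):
    # KL(p || q); eps guards against zero-probability words in q.
    return sum(pi * math.log(pi / (q.get(w, 0.0) + eps))
               for w, pi in p.items() if pi > 0)

def summary_dist(snippets):
    # Empirical unigram distribution over all tokens in the snippets.
    counts = Counter(tok for s in snippets for tok in s)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def greedy_kl_summary(target, candidates, k=5):
    """Greedily pick k token-window snippets whose pooled unigram
    distribution minimizes KL divergence from the target distribution
    (in the spirit of Haghighi and Vanderwende, 2009)."""
    chosen = []
    pool = list(candidates)
    for _ in range(min(k, len(pool))):
        best = min(pool,
                   key=lambda s: kl(target, summary_dist(chosen + [s])))
        chosen.append(best)
        pool.remove(best)
    return chosen
```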
{
"text": "We considered only candidate snippets within the subforum for the particular drug, and snippets were drawn from the preprocessed topic model input with no stop words. Before presenting snippets to users, we mapped each snippet back to the raw text by taking all sentences at least partly spanned by its window of tokens. Because each reference may be matched to more than one tuple, more than five snippets may correspond to a reference. The \"Random\" counts have been scaled to fit the same range as the other systems, since fewer random snippets were shown to annotators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Generation",
"sec_num": "4.2"
},
{
"text": "Recall that the reports used as reference summaries were themselves created by reading web forums.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.3"
},
{
"text": "Our hypothesis is that f-LDA could be used as an exploratory tool to expedite the creation of these reports. Thus in our evaluation we want to measure how useful the extracted snippets would be in informing the writing of such reports. We performed both human and automatic evaluation on the summaries generated by f-LDA (variants 1 and 2) as well as our baseline. We also included randomly selected snippets as a control (five per reference). Example output is shown in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 471,
"end": 479,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.3"
},
{
"text": "Three annotators were presented snippets pooled from all four systems we are evaluating, alongside the corresponding reference text. Within each set corresponding to a reference summary, the snippets were shown in a random order. Annotators were asked to judge each snippet independently on a 5-point Likert scale as to how useful it would be in writing the reference text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Judgments of Quality",
"sec_num": "4.3.1"
},
{
"text": "The distribution of scores is shown in Figure 4 and summarized in Table 2 . Annotators generally agreed on the relative quality of snippets: the average correlation of scores between each pair of annotators was 0.49. Snippets produced by f-LDA were given more high scores and fewer low scores than the baseline, while the two f-LDA variants were rated comparably. The breakdown is more interesting when we compare scores for snippets that were matched to word distributions for pairs versus word distributions for triples. The gap in scores between f-LDA and the baseline increases when we look at the scores for only triples: f-LDA beats the baseline by a margin of 0.45 for snippets matched to triples and 0.21 for pairs. This suggests that we produce better triples by modeling them jointly. For triples, f-LDA-2 (which uses more data) beats f-LDA-1 (which uses only tagged data), while the reverse is true for pairs. While some of the randomly selected control snippets happened to be useful, the scores for these snippets were much lower than those extracted through model-based systems. This suggests that exploring the forums in a targeted way (e.g. through our topic model approach) would be more efficient than exploring the data in a non-targeted way (akin to the random approach).",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 47,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 66,
"end": 73,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Judgments of Quality",
"sec_num": "4.3.1"
},
{
"text": "Finally, we asked two expert annotators (faculty members in psychiatry and behavioral pharmacology, who have used drug forums in the past to study emerging drugs) to rate the snippets corresponding to mephedrone/MDPV. The best f-LDA system had an average score of 2.57 compared to a baseline score of 2.45 and random score of 1.63.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Judgments of Quality",
"sec_num": "4.3.1"
},
{
"text": "The human judgments effectively measured a form of precision, as the quality of snippets was judged by their correspondence to the reference text, without regard to how much of the reference text was covered by all snippets. We also used the automatic evaluation metric ROUGE (Lin, 2004) as a rough estimate of summary recall: this metric computes the percentage of n-grams in the reference text that appeared in the generated summaries.",
"cite_spans": [
{
"start": 277,
"end": 288,
"text": "(Lin, 2004)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation of Recall",
"sec_num": "4.3.2"
},
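The ROUGE-n recall described here, the fraction of reference n-grams covered by the generated summaries, can be sketched as below (a simplified illustration; stemming and stop-word handling are omitted, and counts are clipped as in standard ROUGE).

```python
from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-grams of a token sequence.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n_recall(reference, summary, n):
    """Fraction of reference n-grams that also appear in the summary,
    with per-n-gram counts clipped to the summary's counts."""
    ref = Counter(ngrams(reference, n))
    hyp = Counter(ngrams(summary, n))
    overlap = sum(min(c, hyp[g]) for g, c in ref.items())
    total = sum(ref.values())
    return overlap / total if total else 0.0
```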
{
"text": "We computed ROUGE for both 1-grams and 2-grams. When computing n-gram counts, we applied Porter's stemmer to all tokens. We excluded stop words from 1-gram counts but included them in 2-gram counts where we care about longer phrases. 2 Results are shown in Table 2 . We find that f-LDA-1 has the highest score for both 1- and 2-grams, suggesting that it is extracting a more diverse set of relevant snippets. When performing a paired t-test across the 12 reference summaries, we find that f-LDA is better than the baseline with p-values 0.14 and 0.10 for 1-gram and 2-gram recall, respectively. f-LDA's recall advantage may come from the fact that it learns from a larger amount of data and it may learn more diverse word distributions by directly modeling triples. f-LDA-1 had slightly better recall (under ROUGE), while f-LDA-2 was slightly better according to the human annotators.",
"cite_spans": [
{
"start": 232,
"end": 233,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Automatic Evaluation of Recall",
"sec_num": "4.3.2"
},
{
"text": "We have proposed exploratory tools for the analysis of online drug communities. Such communities are an emerging source of drug research, but manually browsing through large corpora is impractical and important information could be missed. We have demonstrated that topic models are capable of modeling informative portions of text, and in particular multi-dimensional topic models can target desired structures such as the combination of aspect and route of administration for each drug. We have presented an extension to factorial LDA tailored to a particular application and data set which was demonstrated to induce desired properties. As a technical contribution, this study lays out practical guidelines for customizing and incorporating prior knowledge into multi-dimensional text models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "SAGE models sparsity on the weights via a Laplacian prior. Such sparsity is not modeled in f-LDA, so we ignore this here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In both cases, ROUGE scores were higher when stop words were included. f-LDA beats the baseline by similar margins regardless of whether we include stop words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to Dr. Margaret S. Chisolm and Dr. Ryan Vandrey from the Johns Hopkins School of Medicine for providing the mephedrone/MDPV annotations, and Alex Lamb and Hieu Tran for assisting with the full annotations. We also thank Dr. Matthew W. Johnson for additional advice, and the anonymous reviewers for helpful feedback and suggestions. This research was partly supported by an NSF Graduate Research Fellowship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Visualizing topic models",
"authors": [
{
"first": "A",
"middle": [],
"last": "Chaney",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2012,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Chaney and D. Blei. 2012. Visualizing topic models. In ICWSM.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Designer drugs on the Internet: a phenomenon out-of-control? The emergence of hallucinogenic drug Bromo-Dragonfly",
"authors": [
{
"first": "O",
"middle": [],
"last": "Corazza",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Schifano",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Farre",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Deluca",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Davey",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Drummond",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Torrens",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Demetrovics",
"suffix": ""
},
{
"first": "L",
"middle": [
"Di"
],
"last": "Furia",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Flesland",
"suffix": ""
}
],
"year": 2011,
"venue": "Current Clinical Pharmacology",
"volume": "6",
"issue": "2",
"pages": "125--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O. Corazza, F. Schifano, M. Farre, P. Deluca, Z. Davey, C. Drummond, M. Torrens, Z. Demetrovics, L. Di Fu- ria, L. Flesland, et al. 2011. Designer drugs on the Internet: a phenomenon out-of-control? The emer- gence of hallucinogenic drug Bromo-Dragonfly. Cur- rent Clinical Pharmacology, 6(2):125-129.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Peer van der Kreeft, Daniela Zummo, and Norbert Scherbaum",
"authors": [
{
"first": "Ornella",
"middle": [],
"last": "Corazza",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Schifano",
"suffix": ""
},
{
"first": "Pierluigi",
"middle": [],
"last": "Simonato",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Fergus",
"suffix": ""
},
{
"first": "Sulaf",
"middle": [],
"last": "Assi",
"suffix": ""
},
{
"first": "Jacqueline",
"middle": [],
"last": "Stair",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Corkery",
"suffix": ""
},
{
"first": "Giuseppina",
"middle": [],
"last": "Trincas",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Deluca",
"suffix": ""
},
{
"first": "Zoe",
"middle": [],
"last": "Davey",
"suffix": ""
},
{
"first": "Ursula",
"middle": [],
"last": "Blaszko",
"suffix": ""
},
{
"first": "Zsolt",
"middle": [],
"last": "Demetrovics",
"suffix": ""
},
{
"first": "Jacek",
"middle": [],
"last": "Moskalewicz",
"suffix": ""
},
{
"first": "Aurora",
"middle": [],
"last": "Enea",
"suffix": ""
},
{
"first": "Giuditta",
"middle": [],
"last": "Di Melchiorre",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Mervo",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Di Furia",
"suffix": ""
},
{
"first": "Magi",
"middle": [],
"last": "Farre",
"suffix": ""
},
{
"first": "Liv",
"middle": [],
"last": "Flesland",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Pasinetti",
"suffix": ""
},
{
"first": "Cinzia",
"middle": [],
"last": "Pezzolesi",
"suffix": ""
},
{
"first": "Agnieszka",
"middle": [],
"last": "Pisarska",
"suffix": ""
},
{
"first": "Harry",
"middle": [],
"last": "Shapiro",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Siemann",
"suffix": ""
},
{
"first": "Arvid",
"middle": [],
"last": "Skutle",
"suffix": ""
},
{
"first": "Aurora",
"middle": [],
"last": "Enea",
"suffix": ""
},
{
"first": "Giuditta",
"middle": [],
"last": "Di Melchiorre",
"suffix": ""
},
{
"first": "Elias",
"middle": [],
"last": "Sferrazza",
"suffix": ""
}
],
"year": 2012,
"venue": "Phenomenon of new drugs on the Internet: the case of ketamine derivative methoxetamine",
"volume": "27",
"issue": "",
"pages": "145--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ornella Corazza, Fabrizio Schifano, Pierluigi Simonato, Suzanne Fergus, Sulaf Assi, Jacqueline Stair, John Corkery, Giuseppina Trincas, Paolo Deluca, Zoe Davey, Ursula Blaszko, Zsolt Demetrovics, Jacek Moskalewicz, Aurora Enea, Giuditta di Melchiorre, Barbara Mervo, Lucia di Furia, Magi Farre, Liv Fles- land, Manuela Pasinetti, Cinzia Pezzolesi, Agnieszka Pisarska, Harry Shapiro, Holger Siemann, Arvid Skutle, Aurora Enea, Giuditta di Melchiorre, Elias Sferrazza, Marta Torrens, Peer van der Kreeft, Daniela Zummo, and Norbert Scherbaum. 2012. Phenomenon of new drugs on the Internet: the case of ketamine derivative methoxetamine. Human Psychopharmacol- ogy: Clinical and Experimental, 27(2):145-149.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Effectiveness of and challenges faced by surveillance systems",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Dunn",
"suffix": ""
},
{
"first": "Raimondo",
"middle": [],
"last": "Bruno",
"suffix": ""
},
{
"first": "Lucinda",
"middle": [],
"last": "Burns",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Roxburgh",
"suffix": ""
}
],
"year": 2011,
"venue": "Drug Testing and Analysis",
"volume": "3",
"issue": "9",
"pages": "635--641",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Dunn, Raimondo Bruno, Lucinda Burns, and Amanda Roxburgh. 2011. Effectiveness of and chal- lenges faced by surveillance systems. Drug Testing and Analysis, 3(9):635-641.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Sparse additive generative models of text",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "E",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2011,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisenstein, A. Ahmed, and E. P. Xing. 2011. Sparse additive generative models of text. In ICML.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Topicviz: Semantic navigation of document collections",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Duen",
"middle": [],
"last": "Horng",
"suffix": ""
}
],
"year": 2012,
"venue": "CHI Work-in-Progress Paper",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein, Duen Horng \"Polo\" Chau, Aniket Kit- tur, and Eric P. Xing. 2012. Topicviz: Semantic navigation of document collections. In CHI Work-in- Progress Paper.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "annual report on the state of the drugs problem in Europe. European Monitoring Centre for Drugs and Drug Addiction",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "EMCDDA. 2012. 2012 annual report on the state of the drugs problem in Europe. European Monitoring Centre for Drugs and Drug Addiction, Lisbon.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "6-methylenedioxy-2-aminoindane: from laboratory curiosity to 'legal high'",
"authors": [
{
"first": "T",
"middle": [],
"last": "Cathal",
"suffix": ""
},
{
"first": "Sulaf",
"middle": [],
"last": "Gallagher",
"suffix": ""
},
{
"first": "Jacqueline",
"middle": [
"L"
],
"last": "Assi",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Stair",
"suffix": ""
},
{
"first": "Ornella",
"middle": [],
"last": "Fergus",
"suffix": ""
},
{
"first": "John",
"middle": [
"M"
],
"last": "Corazza",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Corkery",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schifano",
"suffix": ""
}
],
"year": 2012,
"venue": "Human Psychopharmacology: Clinical and Experimental",
"volume": "5",
"issue": "2",
"pages": "106--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cathal T. Gallagher, Sulaf Assi, Jacqueline L. Stair, Suzanne Fergus, Ornella Corazza, John M. Corkery, and Fabrizio Schifano. 2012. 5,6-methylenedioxy-2- aminoindane: from laboratory curiosity to 'legal high'. Human Psychopharmacology: Clinical and Experi- mental, 27(2):106-112.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Multi-document summarization by sentence extraction",
"authors": [
{
"first": "Jade",
"middle": [],
"last": "Goldstein",
"suffix": ""
},
{
"first": "Vibhu",
"middle": [],
"last": "Mittal",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Kantrowitz",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 2000 NAACL-ANLP Workshop on Automatic summarization",
"volume": "",
"issue": "",
"pages": "40--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jade Goldstein, Vibhu Mittal, Jaime Carbonell, and Mark Kantrowitz. 2000. Multi-document summarization by sentence extraction. In Proceedings of the 2000 NAACL-ANLP Workshop on Automatic summariza- tion, pages 40-48.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Exploring content models for multi-document summarization",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
}
],
"year": 2009,
"venue": "NAACL '09: Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "362--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In NAACL '09: Proceedings of Human Language Tech- nologies: The 2009 Annual Conference of the North American Chapter of the Association for Computa- tional Linguistics, pages 362-370.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Clinical toxicology of newer recreational drugs",
"authors": [
{
"first": "L",
"middle": [],
"last": "Simon",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "H",
"middle": [
"L"
],
"last": "Simon",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Thomas",
"suffix": ""
}
],
"year": 2011,
"venue": "Clinical Toxicology",
"volume": "49",
"issue": "8",
"pages": "705--719",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon L. Hill and Simon H. L. Thomas. 2011. Clin- ical toxicology of newer recreational drugs. Clinical Toxicology, 49(8):705-719.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Costly turn on: Patterns of use and perceived consequences of mephedrone based head shop products amongst Irish injectors",
"authors": [
{
"first": "Marie",
"middle": [
"Claire"
],
"last": "",
"suffix": ""
},
{
"first": "Van",
"middle": [],
"last": "Hout",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Bingham",
"suffix": ""
}
],
"year": 2012,
"venue": "International Journal of Drug Policy",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie Claire Van Hout and Tim Bingham. 2012. Costly turn on: Patterns of use and perceived consequences of mephedrone based head shop products amongst Irish injectors. International Journal of Drug Policy.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Incorporating lexical priors into topic models",
"authors": [
{
"first": "Jagadeesh",
"middle": [],
"last": "Jagarlamudi",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Raghavendra",
"middle": [],
"last": "Udupa",
"suffix": ""
}
],
"year": 2012,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jagadeesh Jagarlamudi, Hal Daum\u00e9 III, and Raghavendra Udupa. 2012. Incorporating lexical priors into topic models. In EACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out: Proceedings of the ACL-04 Workshop",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Marie-Francine Moens and Stan Szpakowicz, editors, Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74-81, Barcelona, Spain, July.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Topic sentiment mixture: modeling facets and opinions in weblogs",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Mei",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wondra",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2007,
"venue": "WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Q. Mei, X. Ling, M. Wondra, H. Su, and C. Zhai. 2007. Topic sentiment mixture: modeling facets and opinions in weblogs. In WWW.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Reconstructing Pompeian households",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
}
],
"year": 2011,
"venue": "UAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Mimno. 2011. Reconstructing Pompeian households. In UAI.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Image and video disclosure of substance use on social media websites",
"authors": [
{
"first": "Elizabeth",
"middle": [
"M"
],
"last": "Morgan",
"suffix": ""
},
{
"first": "Chareen",
"middle": [],
"last": "Snelson",
"suffix": ""
},
{
"first": "Patt",
"middle": [],
"last": "Elison-Bowers",
"suffix": ""
}
],
"year": 2010,
"venue": "Online Interactivity: Role of Technology in Behavior Change",
"volume": "26",
"issue": "",
"pages": "1405--1411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elizabeth M. Morgan, Chareen Snelson, and Patt Elison-Bowers. 2010. Image and video disclosure of substance use on social media websites. Computers in Human Behavior, 26(6):1405-1411. Online Interactivity: Role of Technology in Behavior Change.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "You are what you Tweet: Analyzing Twitter for public health",
"authors": [
{
"first": "Michael",
"middle": [
"J"
],
"last": "Paul",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2011,
"venue": "5th International AAAI Conference on Weblogs and Social Media (ICWSM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael J. Paul and Mark Dredze. 2011. You are what you Tweet: Analyzing Twitter for public health. In 5th International AAAI Conference on Weblogs and Social Media (ICWSM).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Experimenting with drugs (and topic models): Multi-dimensional exploration of recreational drug discussions",
"authors": [
{
"first": "Michael",
"middle": [
"J"
],
"last": "Paul",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2012,
"venue": "AAAI 2012 Fall Symposium on Information Retrieval and Knowledge Discovery in Biomedical Text",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael J. Paul and Mark Dredze. 2012a. Experimenting with drugs (and topic models): Multi-dimensional exploration of recreational drug discussions. In AAAI 2012 Fall Symposium on Information Retrieval and Knowledge Discovery in Biomedical Text.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Factorial LDA: Sparse multi-dimensional text models",
"authors": [
{
"first": "Michael",
"middle": [
"J"
],
"last": "Paul",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2012,
"venue": "Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael J. Paul and Mark Dredze. 2012b. Factorial LDA: Sparse multi-dimensional text models. In Neural Information Processing Systems (NIPS).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A two-dimensional topic-aspect model for discovering multi-faceted topics",
"authors": [
{
"first": "M",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Girju",
"suffix": ""
}
],
"year": 2010,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Paul and R. Girju. 2010. A two-dimensional topic-aspect model for discovering multi-faceted topics. In AAAI.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Bromo-Dragonfly, MDPV, Spice, Mephedrone, and Salvia Divinorum reports",
"authors": [],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Psychonaut WebMapping Research Group (Psychonaut). 2009. Bromo-Dragonfly, MDPV, Spice, Mephedrone, and Salvia Divinorum reports. http://www.psychonautproject.eu/technical.php. Institute of Psychiatry, King's College London.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Ramage",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2009,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "248--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Ramage, David Hall, Ramesh Nallapati, and Christopher D. Manning. 2009. Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora. In EMNLP, pages 248-256.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The emerging of xylazine as a new drug of abuse and its health consequences among drug users in Puerto Rico",
"authors": [
{
"first": "J",
"middle": [],
"last": "Reyes",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Negr\u00f3n",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Col\u00f3n",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Padilla",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mill\u00e1n",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Matos",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Robles",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Urban Health",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Reyes, J. Negr\u00f3n, H. Col\u00f3n, A. Padilla, M. Mill\u00e1n, T. Matos, and R. Robles. 2012. The emerging of xylazine as a new drug of abuse and its health consequences among drug users in Puerto Rico. Journal of Urban Health, pages 1-8. SAMHSA. 2012. The DAWN report.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Drugs on the web: the Psychonaut 2002 EU project",
"authors": [
{
"first": "Fabrizio",
"middle": [],
"last": "Schifano",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Deluca",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Baldacchino",
"suffix": ""
},
{
"first": "Teuvo",
"middle": [],
"last": "Peltoniemi",
"suffix": ""
},
{
"first": "Norbert",
"middle": [],
"last": "Scherbaum",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Torrens",
"suffix": ""
},
{
"first": "Mag\u00ed",
"middle": [],
"last": "Farr\u00e9",
"suffix": ""
},
{
"first": "Irene",
"middle": [],
"last": "Flores",
"suffix": ""
},
{
"first": "Mariangela",
"middle": [],
"last": "Rossi",
"suffix": ""
},
{
"first": "Dorte",
"middle": [],
"last": "Eastwood",
"suffix": ""
},
{
"first": "Claude",
"middle": [],
"last": "Guionnet",
"suffix": ""
},
{
"first": "Salman",
"middle": [],
"last": "Rawaf",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Agosti",
"suffix": ""
},
{
"first": "Lucia",
"middle": [
"Di"
],
"last": "Furia",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Brigada",
"suffix": ""
},
{
"first": "Aino",
"middle": [],
"last": "Majava",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Siemann",
"suffix": ""
},
{
"first": "Mauro",
"middle": [],
"last": "Leoni",
"suffix": ""
},
{
"first": "Antonella",
"middle": [],
"last": "Tomasin",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Rovetto",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hamid Ghodse",
"suffix": ""
}
],
"year": 2006,
"venue": "Progress in Neuro-Psychopharmacology and Biological Psychiatry",
"volume": "30",
"issue": "4",
"pages": "640--646",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabrizio Schifano, Paolo Deluca, Alex Baldacchino, Teuvo Peltoniemi, Norbert Scherbaum, Marta Torrens, Mag\u00ed Farr\u00e9, Irene Flores, Mariangela Rossi, Dorte Eastwood, Claude Guionnet, Salman Rawaf, Lisa Agosti, Lucia Di Furia, Raffaella Brigada, Aino Majava, Holger Siemann, Mauro Leoni, Antonella Tomasin, Francesco Rovetto, and A. Hamid Ghodse. 2006. Drugs on the web: the Psychonaut 2002 EU project. Progress in Neuro-Psychopharmacology and Biological Psychiatry, 30(4):640-646.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A database of National Institutes of Health (NIH) research using machine learned categories and graphically clustered grant awards",
"authors": [
{
"first": "Edmund",
"middle": [],
"last": "Talley",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Herr",
"suffix": "II"
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "Gully",
"middle": [],
"last": "Burns",
"suffix": ""
},
{
"first": "Miriam",
"middle": [],
"last": "Leenders",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2011,
"venue": "Nature Methods",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edmund Talley, David Newman, Bruce Herr II, Hanna Wallach, Gully Burns, Miriam Leenders, and Andrew McCallum. 2011. A database of National Institutes of Health (NIH) research using machine learned categories and graphically clustered grant awards. Nature Methods.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Modeling online reviews with multi-grain topic models",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2008,
"venue": "International World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Titov and Ryan McDonald. 2008. Modeling online reviews with multi-grain topic models. In International World Wide Web Conference (WWW), Beijing.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Just a click away: Recreational drug web sites on the Internet",
"authors": [
{
"first": "M",
"middle": [],
"last": "Wax",
"suffix": ""
}
],
"year": 2002,
"venue": "Pediatrics",
"volume": "",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Wax. 2002. Just a click away: Recreational drug web sites on the Internet. Pediatrics, 109(6).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The graphical model for f-LDA augmented with priors \u03b7 learned from labeled data ( \u00a73.1). In this work, K = 3.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Example of parameters learned by f-LDA. The highest weight words in the \u03c9 and \u03b7 vectors for three components are shown on the left. These are combined to form the prior for the word distribution \u03c6. The tripling of (COCAINE,SNORTING,HEALTH) results in high probability words about nose bleeds and nasal damage.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "The learned distributions \u03b7 (f ) i over the vocabulary for the ith component of factor f , which are believed to be good values for \u03c9 (f ) i .",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF4": {
"text": "The distribution of annotator scores ( \u00a74.3.1).",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>The forum tags shown in parentheses are grouped to-</td></tr><tr><td>gether to form aspects.</td></tr></table>",
"text": "The three factors of our model (details in \u00a73.1)."
}
}
}
}