{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:32:11.778857Z" }, "title": "A Risk Communication Event Detection Model via Contrastive Learning", "authors": [ { "first": "Mingi", "middle": [], "last": "Shin", "suffix": "", "affiliation": {}, "email": "mingi.shin@kaist.ac.kr" }, { "first": "Sungwon", "middle": [], "last": "Han", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Sungkyu", "middle": [], "last": "Park", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Meeyoung", "middle": [], "last": "Cha", "suffix": "", "affiliation": {}, "email": "meeyoungcha@kaist.ac.kr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a time-topic cohesive model describing the communication patterns on the coronavirus pandemic from three Asian countries. The strength of our model is twofold. First, it detects contextualized events based on topical and temporal information via contrastive learning. Second, it can be applied to multiple languages, enabling a comparison of risk communication across cultures. We present a case study and discuss future implications of the proposed model.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a time-topic cohesive model describing the communication patterns on the coronavirus pandemic from three Asian countries. The strength of our model is twofold. First, it detects contextualized events based on topical and temporal information via contrastive learning. Second, it can be applied to multiple languages, enabling a comparison of risk communication across cultures. We present a case study and discuss future implications of the proposed model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The novel coronavirus disease (COVID-19) is affecting public health and the economy worldwide. 
The surge in social media usage during the pandemic made online content an excellent resource for examining risk communication (Lazer et al., 2018; Beaunoyer et al., 2020) . As more people seek and share information online, NGOs and especially the WHO have warned of the danger of increasing misinformation about the pandemic. A new term, infodemic, was coined to describe this phenomenon.", "cite_spans": [ { "start": 221, "end": 241, "text": "(Lazer et al., 2018;", "ref_id": "BIBREF6" }, { "start": 242, "end": 265, "text": "Beaunoyer et al., 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Meanwhile, recent developments in the natural language processing (NLP) community enable in-depth analysis of topical changes from online resources. Latent Dirichlet Allocation (LDA) can detect major topics in unstructured text data (e.g., extracting topics in the context of a global pandemic (Park et al., 2020) ). Advanced language models like BERT can be used to learn representations (e.g., of the pandemic discourse on Twitter (M\u00fcller et al., 2020) ). A language-agnostic version of BERT further extends this capability to multiple languages (Gencoglu, 2020) .", "cite_spans": [ { "start": 300, "end": 319, "text": "(Park et al., 2020)", "ref_id": null }, { "start": 436, "end": 457, "text": "(M\u00fcller et al., 2020)", "ref_id": "BIBREF8" }, { "start": 557, "end": 573, "text": "(Gencoglu, 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Despite social media's potential for understanding risk communication patterns and the infodemic during the pandemic, several challenges remain. One of them is the temporal aspect. Topic models that exclude temporal information cannot represent how conversations and themes develop over time (Blei and Lafferty, 2006) . 
While some works refine such models by incorporating time or metadata information (Blei and Lafferty, 2006; Roberts et al., 2013) , they still dissect the data into arbitrarily chosen time chunks. Particularly in risk communication, where public attention evolves quickly, the contextual information intertwining topic and time becomes a critical component (Atefeh and Khreich, 2015) . Considering the time aspect also allows identifying topics that are highly influential yet short-lived.", "cite_spans": [ { "start": 303, "end": 328, "text": "(Blei and Lafferty, 2006)", "ref_id": "BIBREF2" }, { "start": 437, "end": 462, "text": "(Blei and Lafferty, 2006;", "ref_id": "BIBREF2" }, { "start": 463, "end": 484, "text": "Roberts et al., 2013)", "ref_id": "BIBREF11" }, { "start": 742, "end": 768, "text": "(Atefeh and Khreich, 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address these limitations, we present a time-topic cohesive model that detects contextualized events and topics over time. Our model marries key ideas from contrastive learning (hereafter CL) and multilingual BERT (hereafter mBERT). CL is a technique from machine learning and computer vision that groups similar objects by optimizing a triplet loss over one anchor and two targets (Dai and Lin, 2017) . By designing a triplet loss that measures the topical and temporal differences between anchor and target tweets, our model jointly considers temporal and topical characteristics when detecting major events about the pandemic. 
mBERT allows us to apply the model to multiple countries for comparison.", "cite_spans": [ { "start": 388, "end": 407, "text": "(Dai and Lin, 2017)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The final model is applied to a collection of Twitter messages gathered from three Asian countries: South Korea, Vietnam, and Iran. Based on the detected events, we can understand what information (or misinformation) was mainly talked about at each stage of the pandemic in each country. Unlike existing topic models, this new method also captures the temporal coherence of topics. We present a case study showing how risk communication on COVID-19 in South Korea started with several key events and then expanded to diverse domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We analyze GeoCoV19 (Qazi et al., 2020) , a multilingual dataset of tweets about COVID-19 with location information. The data comprise tweets that contain COVID-19-related keywords in multiple languages and cover 90 days, from February 1 to May 1, 2020. We only utilize tweets written in Korean, Farsi, and Vietnamese, the local languages of South Korea, Iran, and Vietnam, respectively. We did this by filtering on the lang attribute, the auto-detected language of the tweet text. The total numbers of tweets were 43,347 (79% retweets), 19,174 (34%), and 4,359 (16%) for the three languages.", "cite_spans": [ { "start": 20, "end": 39, "text": "(Qazi et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Method 2.1 Data", "sec_num": "2" }, { "text": "The studied Asian countries had different epidemic situations. South Korea was one of the first countries affected by the virus, recording a surge in confirmed cases in February and March. 
By May, the country saw a flattening trend in confirmed cases. Meanwhile, Iran was one of the most severely affected countries; within our target period, the number of confirmed cases increased rapidly and remained in the several hundreds. In Vietnam, by contrast, the numbers consistently stayed below one hundred throughout the data period.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 2.1 Data", "sec_num": "2" }, { "text": "Preprocessing Data. We excluded retweets, URLs, language-dependent stopwords, and redundant whitespace. We also replaced mentioned usernames with a UNK token prior to analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 2.1 Data", "sec_num": "2" }, { "text": "LDA Topic Modeling. We utilized the LDA model to extract topical information from the text. We first tokenized the tweet text using standard Python libraries for each language and then trained the LDA model with 50 topics. Each tweet was labeled with its most probable topic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 2.1 Data", "sec_num": "2" }, { "text": "We propose an event detection algorithm that considers word occurrence patterns and time concurrently. We regard tweets with similar word patterns but a large time discrepancy as entries from different events. The training approach is inspired by CL. By constructing triplets that reflect the temporal and topical distances among tweets, optimizing the triplet loss directly encourages the embedding to group tweets from similar events within a short period. We then perform a clustering algorithm over the trained embeddings to extract the topic clusters. 
We adopt the mBERT architecture and use its pre-trained weights as initialization to distill contextualized information while training the embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextualized Event Detection over Time", "sec_num": "2.2" }, { "text": "Our model adds a linear layer on top of the concatenation of the pooled mBERT output and the normalized timestamp (min-max scaled) to embed tweets into a fixed-size vector. We then project the concatenated vector into L2-normalized space. As a pooling strategy, we use the output of the CLS token, which is trained for next sentence prediction. The resulting model is mBERT with a spherical embedding head on top. To fine-tune this model, we construct two kinds of triplets: (1) an LDA-dependent triplet, in which the anchor and positive tweets share the same topic from the trained LDA model while the negative does not; and (2) a time-dependent triplet, in which the positive tweet's timestamp is nearer to the anchor's than the negative's. We use the sum of the two triplet losses as the objective function. 
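As an illustration, the embedding head described above can be sketched in a few lines of numpy (a minimal sketch; the dimensions, parameter values, timestamp range, and function names are illustrative assumptions, not the released implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper's code).
BERT_DIM = 768   # size of mBERT's pooled CLS output
EMBED_DIM = 128  # size of the spherical embedding head

# Hypothetical linear-layer parameters; in practice these are learned.
W = rng.normal(scale=0.02, size=(EMBED_DIM, BERT_DIM + 1))
b = np.zeros(EMBED_DIM)

def minmax_scale(t, t_min, t_max):
    """Min-max scaling of a raw timestamp into [0, 1]."""
    return (t - t_min) / (t_max - t_min)

def embed(pooled_cls, timestamp, t_min, t_max):
    """Concatenate the pooled CLS vector with the scaled timestamp,
    apply the linear layer, and L2-normalize onto the unit sphere."""
    x = np.concatenate([pooled_cls, [minmax_scale(timestamp, t_min, t_max)]])
    z = W @ x + b
    return z / np.linalg.norm(z)

# Stand-in for mBERT's pooled output on one tweet.
cls_vec = rng.normal(size=BERT_DIM)
z = embed(cls_vec, timestamp=1583020800, t_min=1580515200, t_max=1588291200)
```

Because the head is L2-normalized, the dot product between two such embeddings equals their cosine similarity, which is exactly what the triplet losses defined next operate on.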
Given an anchor tweet a, a positive sample p from the dataset, and a negative sample n, the two triplet losses (L^tri) are defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextualized Event Detection over Time", "sec_num": "2.2" }, { "text": "L^tri_topic = max{s(w_a, w_n) \u2212 s(w_a, w_p) + \u03c4_topic, 0}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextualized Event Detection over Time", "sec_num": "2.2" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextualized Event Detection over Time", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L^tri_time = max{s(w_a, w_n) \u2212 s(w_a, w_p) + \u03c4_time, 0}", "eq_num": "(2)" } ], "section": "Contextualized Event Detection over Time", "sec_num": "2.2" }, { "text": "where s is a similarity function, \u03c4_topic and \u03c4_time are margin thresholds, and w_a, w_p, w_n are the embeddings the model produces for a, p, n, respectively. Since our embeddings are L2-normalized, s is defined as the dot product, which equals the cosine similarity. The total loss to be minimized is L_total = L^tri_topic + L^tri_time. Through grid search, we found 0.1 to be the optimal value for both \u03c4_topic and \u03c4_time. Finally, we perform spherical k-means clustering to identify topic clusters. 
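The hinge losses in Eqs. (1) and (2) can be sketched directly (toy, hand-made unit vectors; `triplet_loss` and `unit` are hypothetical helper names, not from the released code):

```python
import numpy as np

def triplet_loss(w_a, w_p, w_n, tau=0.1):
    """Hinge-style triplet loss as in Eqs. (1)-(2): since the embeddings
    are L2-normalized, the dot product s(., .) is the cosine similarity."""
    s = np.dot
    return max(s(w_a, w_n) - s(w_a, w_p) + tau, 0.0)

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Toy L2-normalized embeddings for an anchor, a positive, and a negative.
w_a = unit([1.0, 0.0, 0.0])
w_p = unit([0.9, 0.1, 0.0])   # similar to the anchor
w_n = unit([0.0, 1.0, 0.0])   # dissimilar from the anchor

# One loss per triplet type; the training objective is their sum.
loss_topic = triplet_loss(w_a, w_p, w_n, tau=0.1)  # LDA-dependent triplet
loss_time = triplet_loss(w_a, w_p, w_n, tau=0.1)   # time-dependent triplet
total = loss_topic + loss_time
```

With tau = 0.1 and a well-separated triplet like this one, both hinge terms are already zero; the combined objective only penalizes triplets whose anchor is more similar to the negative than to the positive, up to the margin.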
The silhouette value is measured to determine the number of clusters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextualized Event Detection over Time", "sec_num": "2.2" }, { "text": "3 Results", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextualized Event Detection over Time", "sec_num": "2.2" }, { "text": "Figure 1 illustrates two embedding examples from different choices of \u03c4 in the case of South Korea. The embeddings are reduced to two dimensions and plotted via t-SNE. Our hyperparameter setting (i.e., \u03c4_topic = 0.1, \u03c4_time = 0.1) results in more distinctive clusters than other settings. Setting the two \u03c4 values equal suggests that topic and time information contribute equally to the embeddings. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Results", "sec_num": "3.1" }, { "text": "We then compare the clustering results for the South Korean case in an ablation study covering 1) LDA; 2) LDA+mBERT (concatenating the two normalized outputs, then running k-means clustering); and 3) LDA+mBERT+CL (our model). The statistics of the clustering results are presented in Table 1 . The number of tweets per event varies less in our model than in LDA, although our model detects fewer events. We used the average standard deviation (SD) and standard error (SE) of the timestamps to evaluate whether detected events share the same temporal information. On this metric, our model shows the smallest time dissimilarity per event on average, meaning it reflects temporal information well in practice. Combining these two observations, we conclude that our model smooths LDA by successfully incorporating temporal information, as intended. 
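The silhouette criterion used above to choose the number of clusters can be sketched as follows (a toy implementation under cosine distance, not the authors' code; in practice it would be evaluated over the spherical k-means assignments for each candidate cluster count):

```python
import numpy as np

def cosine_dist(a, b):
    """Cosine distance between two L2-normalized vectors."""
    return 1.0 - float(np.dot(a, b))

def silhouette(points, labels):
    """Mean silhouette coefficient under cosine distance (higher is
    better); the cluster count maximizing this score is selected."""
    scores = []
    for i, (p, li) in enumerate(zip(points, labels)):
        same = [cosine_dist(p, q)
                for j, (q, lj) in enumerate(zip(points, labels))
                if lj == li and j != i]
        if not same:  # singleton cluster: silhouette 0 by convention
            scores.append(0.0)
            continue
        a = float(np.mean(same))  # mean intra-cluster distance
        b = min(                  # mean distance to the nearest other cluster
            float(np.mean([cosine_dist(p, q)
                           for q, lj in zip(points, labels) if lj == lk]))
            for lk in set(labels) if lk != li)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Two well-separated toy clusters on the unit sphere.
pts = [unit([1, 0.05]), unit([1, -0.05]), unit([0.05, 1]), unit([-0.05, 1])]
good = silhouette(pts, [0, 0, 1, 1])  # matches the true grouping
bad = silhouette(pts, [0, 1, 0, 1])   # mixes the two clusters
```

A labeling that matches the true grouping scores near 1, while a labeling that mixes the clusters scores near or below 0, which is why the score can rank candidate cluster counts.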
", "cite_spans": [], "ref_spans": [ { "start": 292, "end": 299, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Clustering Results with Evaluation", "sec_num": "3.2" }, { "text": "For Iran and Vietnam, Figure 2 shows the trends of the detected events, respectively. When determining those countries' events, we have used the same hyperparameter values from the South Korean case. Particularly with South Korea, we have further qualitatively interpreted the events and merged similar topics, as depicted in Figure 3 . For instance, total 20 topics detected from the South Korea data are labeled and merged into 13 discriminative topics. Tweet volumes became larger from mid-February as the number of confirmed cases abruptly rose due to a regional church outbreak, and it lasted until the end of March. During this period, the number of events was relatively small as people focused on a few events like news about the global confirmed cases. As the situation eased from April, the events became more diverse. Also, we could see that the rumor events were relatively steady across the whole period. ", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 30, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 326, "end": 334, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Detected Events across Time by Country", "sec_num": "3.3" }, { "text": "We have extracted event trends based on the contextualized event detection model with CL in the paper. By introducing two kinds of the triplets: LDA-dependent and time-dependent triplets, our model efficiently trains the embedding to gather similar events within a short period. We also present the qualitative interpretation of the trends in South Korea as a possible application. As the proposed method can be executed by country, if we detect misinformation or disinformation prevalent in only one country first, we can quickly alarm other countries to deal with this issue preemptively. 
In future studies, we may utilize other topic modeling algorithms beyond LDA, such as BTM (Yan et al., 2013) or DocNADE (Larochelle and Lauly, 2012) , incorporating temporal traits, and compare their performance with the current model. Some research has shown the efficacy of variational autoencoders for topic modeling (Miao et al., 2016) in terms of both topic coherence and perplexity. In addition, the current framework is retrospective; we could also build a monitoring and analysis framework that detects events in real time. Investigating how risk communication on COVID-19 proceeds in real time would make it easier to react to misinformation immediately.", "cite_spans": [ { "start": 690, "end": 708, "text": "(Yan et al., 2013)", "ref_id": "BIBREF12" }, { "start": 720, "end": 748, "text": "(Larochelle and Lauly, 2012)", "ref_id": "BIBREF5" }, { "start": 924, "end": 943, "text": "(Miao et al., 2016)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" } ], "back_matter": [ { "text": "This work was supported by the Institute for Basic Science (IBS-R029-C2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A survey of techniques for event detection in twitter", "authors": [ { "first": "Farzindar", "middle": [], "last": "Atefeh", "suffix": "" }, { "first": "Wael", "middle": [], "last": "Khreich", "suffix": "" } ], "year": 2015, "venue": "Computational Intelligence", "volume": "31", "issue": "1", "pages": "132--164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Farzindar Atefeh and Wael Khreich. 2015. A survey of techniques for event detection in twitter. 
Computational Intelligence, 31(1):132-164.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Covid-19 and digital inequalities: Reciprocal impacts and mitigation strategies", "authors": [ { "first": "Elisabeth", "middle": [], "last": "Beaunoyer", "suffix": "" }, { "first": "Sophie", "middle": [], "last": "Dup\u00e9r\u00e9", "suffix": "" }, { "first": "Matthieu", "middle": [ "J" ], "last": "Guitton", "suffix": "" } ], "year": 2020, "venue": "Computers in Human Behavior", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elisabeth Beaunoyer, Sophie Dup\u00e9r\u00e9, and Matthieu J Guitton. 2020. Covid-19 and digital inequalities: Reciprocal impacts and mitigation strategies. Computers in Human Behavior, page 106424.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Dynamic topic models", "authors": [ { "first": "M", "middle": [], "last": "David", "suffix": "" }, { "first": "John D", "middle": [], "last": "Blei", "suffix": "" }, { "first": "", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 23rd international conference on Machine learning", "volume": "", "issue": "", "pages": "113--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M Blei and John D Lafferty. 2006. Dynamic topic models. In Proceedings of the 23rd international conference on Machine learning, pages 113-120.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Contrastive learning for image captioning", "authors": [ { "first": "Bo", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Dahua", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "898--907", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Dai and Dahua Lin. 2017. Contrastive learning for image captioning. 
In Advances in Neural Information Processing Systems, pages 898-907.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Large-scale, language-agnostic discourse classification of tweets during covid-19", "authors": [ { "first": "Oguzhan", "middle": [], "last": "Gencoglu", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2008.00461" ] }, "num": null, "urls": [], "raw_text": "Oguzhan Gencoglu. 2020. Large-scale, language-agnostic discourse classification of tweets during covid-19. arXiv preprint arXiv:2008.00461.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A neural autoregressive topic model", "authors": [ { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Stanislas", "middle": [], "last": "Lauly", "suffix": "" } ], "year": 2012, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2708--2716", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hugo Larochelle and Stanislas Lauly. 2012. A neural autoregressive topic model. 
In Advances in Neural Information Processing Systems, pages 2708-2716.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The science of fake news", "authors": [ { "first": "M", "middle": [ "J" ], "last": "David", "suffix": "" }, { "first": "", "middle": [], "last": "Lazer", "suffix": "" }, { "first": "A", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Yochai", "middle": [], "last": "Baum", "suffix": "" }, { "first": "", "middle": [], "last": "Benkler", "suffix": "" }, { "first": "J", "middle": [], "last": "Adam", "suffix": "" }, { "first": "", "middle": [], "last": "Berinsky", "suffix": "" }, { "first": "M", "middle": [], "last": "Kelly", "suffix": "" }, { "first": "Filippo", "middle": [], "last": "Greenhill", "suffix": "" }, { "first": "Miriam", "middle": [ "J" ], "last": "Menczer", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "Metzger", "suffix": "" }, { "first": "Gordon", "middle": [], "last": "Nyhan", "suffix": "" }, { "first": "David", "middle": [], "last": "Pennycook", "suffix": "" }, { "first": "", "middle": [], "last": "Rothschild", "suffix": "" } ], "year": 2018, "venue": "Science", "volume": "359", "issue": "6380", "pages": "1094--1096", "other_ids": {}, "num": null, "urls": [], "raw_text": "David MJ Lazer, Matthew A Baum, Yochai Benkler, Adam J Berinsky, Kelly M Greenhill, Filippo Menczer, Miriam J Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, et al. 2018. The science of fake news. 
Science, 359(6380):1094-1096.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Neural variational inference for text processing", "authors": [ { "first": "Yishu", "middle": [], "last": "Miao", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2016, "venue": "International conference on machine learning", "volume": "", "issue": "", "pages": "1727--1736", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In International conference on machine learning, pages 1727-1736.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Covid-twitter-bert: A natural language processing model to analyse covid-19 content on twitter", "authors": [ { "first": "Martin", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Marcel", "middle": [], "last": "Salath\u00e9", "suffix": "" }, { "first": "E", "middle": [], "last": "Per", "suffix": "" }, { "first": "", "middle": [], "last": "Kummervold", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.07503" ] }, "num": null, "urls": [], "raw_text": "Martin M\u00fcller, Marcel Salath\u00e9, and Per E Kummervold. 2020. Covid-twitter-bert: A natural language processing model to analyse covid-19 content on twitter. arXiv preprint arXiv:2005.07503.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "
Risk communication in asian countries: Covid-19 discourse on twitter", "authors": [ { "first": "Sungkyu", "middle": [], "last": "Park", "suffix": "" }, { "first": "Sungwon", "middle": [], "last": "Han", "suffix": "" }, { "first": "Jeongwook", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Mir", "middle": [], "last": "Majid Molaie", "suffix": "" }, { "first": "Hoang Dieu", "middle": [], "last": "Vu", "suffix": "" }, { "first": "Karandeep", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Jiyoung", "middle": [], "last": "Han", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.12218" ] }, "num": null, "urls": [], "raw_text": "Sungkyu Park, Sungwon Han, Jeongwook Kim, Mir Majid Molaie, Hoang Dieu Vu, Karandeep Singh, Jiyoung Han, Wonjae Lee, and Meeyoung Cha. 2020. Risk communication in asian countries: Covid-19 discourse on twitter. arXiv preprint arXiv:2006.12218.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Geocov19: A dataset of hundreds of millions of multilingual covid-19 tweets with location information", "authors": [ { "first": "Umair", "middle": [], "last": "Qazi", "suffix": "" }, { "first": "Muhammad", "middle": [], "last": "Imran", "suffix": "" }, { "first": "Ferda", "middle": [], "last": "Ofli", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Umair Qazi, Muhammad Imran, and Ferda Ofli. 2020. 
Geocov19: A dataset of hundreds of millions of multilingual covid-19 tweets with location information.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The structural topic model and applied social science", "authors": [ { "first": "E", "middle": [], "last": "Margaret", "suffix": "" }, { "first": "", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "M", "middle": [], "last": "Brandon", "suffix": "" }, { "first": "Dustin", "middle": [], "last": "Stewart", "suffix": "" }, { "first": "", "middle": [], "last": "Tingley", "suffix": "" }, { "first": "M", "middle": [], "last": "Edoardo", "suffix": "" }, { "first": "", "middle": [], "last": "Airoldi", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems workshop on topic models: computation, application, and evaluation", "volume": "4", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Margaret E Roberts, Brandon M Stewart, Dustin Tingley, Edoardo M Airoldi, et al. 2013. The structural topic model and applied social science. In Advances in neural information processing systems workshop on topic models: computation, application, and evaluation, volume 4. Harrahs and Harveys, Lake Tahoe.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A biterm topic model for short texts", "authors": [ { "first": "Xiaohui", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Jiafeng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Yanyan", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 22nd international conference on World Wide Web", "volume": "", "issue": "", "pages": "1445--1456", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaohui Yan, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng. 2013. A biterm topic model for short texts. 
In Proceedings of the 22nd international conference on World Wide Web, pages 1445-1456.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Embedding comparison via t-SNE between two different \u03c4 settings in the case of South Korea. \u03c4 topic = 0.1, \u03c4 time = 0.01 (left) whereas 0.1, 0.1 (our case, right).", "num": null }, "FIGREF1": { "type_str": "figure", "uris": null, "text": "Daily event trends in Iran (left) and Vietnam (right). Black bar graphs represent the number of daily confirmed cases.", "num": null }, "FIGREF2": { "type_str": "figure", "uris": null, "text": "Daily event trends after qualitatively assessed in South Korea.", "num": null }, "TABREF0": { "content": "
Statistics                LDA              LDA+mBERT        LDA+mBERT+CL (ours)
# of Events  Count        50 (fixed)       64               20
# of Tweets  Mean (SD)    182.38 (489.43)  142.48 (136.63)  455.95 (307.46)
Timestamp    Average SD   0.25             0.24             0.16
             Average SE   0.03             0.02             0.01
", "type_str": "table", "html": null, "text": "Statistics of the clustering results from three models in the case of South Korea.", "num": null } } } }