{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:35:04.706548Z" }, "title": "HITSZ-ICRC: A Report for SMM4H Shared Task 2020-Automatic Classification of Medications and Adverse Effect in Tweets", "authors": [ { "first": "Xiaoyu", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology (Shenzhen)", "location": { "country": "China" } }, "email": "zhaoxiaoyux@126.com" }, { "first": "Ying", "middle": [], "last": "Xiong", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology (Shenzhen)", "location": { "country": "China" } }, "email": "xiongying0929@gmail.com" }, { "first": "Buzhou", "middle": [], "last": "Tang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology (Shenzhen)", "location": { "country": "China" } }, "email": "tangbuzhou@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This is the system description of the Harbin Institute of Technology Shenzhen (HITSZ) team for the first and second subtasks of the fifth Social Media Mining for Health Applications (SMM4H) shared task in 2020. The first task is automatic classification of tweets that mention medications and the second task is automatic classification of tweets in English that report adverse effects. The system we propose for these tasks is based on bidirectional encoder representations from transformers (BERT) incorporating with knowledge graph and retrieving evidence from online information. Our system achieves an F1 of 0.7553 in task 1 and an F1 of 0.5455 in task 2.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This is the system description of the Harbin Institute of Technology Shenzhen (HITSZ) team for the first and second subtasks of the fifth Social Media Mining for Health Applications (SMM4H) shared task in 2020. The first task is automatic classification of tweets that mention medications and the second task is automatic classification of tweets in English that report adverse effects. The system we propose for these tasks is based on bidirectional encoder representations from transformers (BERT) incorporating with knowledge graph and retrieving evidence from online information. Our system achieves an F1 of 0.7553 in task 1 and an F1 of 0.5455 in task 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "With greatly increasing number of social media users, plenty of personal information has been posted in social networking websites such as Twitter, including massive data about health and medicine. Social media has attracted more and more attention from researchers and industrials in the health and medicine domain. To utilize the imbalanced and informal expressions of clinical concepts, a system should be able to alleviate the noise of social media posts and detect medical mentions which requires professional knowledge. In 2020, the health language processing lab at University of Pennsylvania organized the Social Media Mining for Health Applications (SMM4H) shared task about using social media mining for health monitoring, which includes 5 subtasks. We participate in the first two subtasks: (1) Automatic classification of tweets that mention medications, and (2) Automatic classification of tweets in English that report adverse effects (AEs). 
In this report, we briefly introduce our system developed for the two subtasks. The system is based on the pre-trained language model BERT (Devlin, Chang, Lee, & Toutanova, 2018) combined with external knowledge.", "cite_spans": [ { "start": 1091, "end": 1130, "text": "(Devlin, Chang, Lee, & Toutanova, 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The 2020 SMM4H organizers provide a dataset of tweets with a label indicating whether each tweet mentions a medication for task 1 (1 for yes, 0 for no), and a dataset of tweets with a label indicating whether each tweet mentions an AE for task 2. Each tweet includes its text, a date and a tweet ID. Table 1 reports the statistics of the two datasets, where #1 and #0 denote the numbers of tweets that do and do not mention medications (or AEs, for task 2), respectively, #all denotes the total number of tweets, and NA denotes that the number is not available to us. For both tasks, we pre-process all tweets by removing usernames, URLs, emoticons, non-ASCII characters and repeated punctuation.", "cite_spans": [], "ref_spans": [ { "start": 287, "end": 294, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Data and Pre-processing", "sec_num": "2" }, { "text": "Our system for both task 1 and task 2 is based on BERT, which is used to represent tweets. Below we describe the methods for the two tasks in detail, one by one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "3" }, { "text": "In this task, we first deploy BERT directly (Chen et al., 2019), and then extend it to BERT_Med by introducing potential medications from DrugBank 1 . The architecture of BERT_Med is shown in Fig. 1, where FC is a fully-connected layer. Given a tweet, we first compute its similarity to each medication, then select the medications with a similarity of no less than 0.1 and treat them as the tweet's second segment (next sentence), and finally feed the tweet and the selected medications into BERT. The similarity between tweets and medications is measured by the cosine of their TF-IDF (term frequency-inverse document frequency) representations.", "cite_spans": [], "ref_spans": [ { "start": 191, "end": 197, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Methods for task1", "sec_num": "3.1" }, { "text": "For task 2, we propose the following two models based on BERT, as shown in Fig. 2. Strategy 1: To better capture AE-related information in tweets, we append the potential AEs corresponding to the drug mentions detected in the tweets. Drug mentions in tweets are detected by exact matching against DrugBank and the drug dictionary of (Nikfarjam et al., 2015), and the AEs come from MedlinePlus 2 . Similar to BERT_Med in Fig. 1, we treat the potential AEs as the second segment of the given tweet and feed both into BERT. In addition, we add a self-attention module (Vaswani et al., 2017) after BERT to redistribute the representations of the tweet and the potential AEs. Strategy 2: To further integrate external knowledge into Strategy 1, we introduce medication representations learnt from MeSH 3 by TransE (Bordes et al., 2013), as shown in Fig. 2. A similar self-attention module is employed to extract information from the concatenated entity embeddings learnt by TransE. Finally, we feed the combination of the final representation of the text and that of the medication entities into a linear transformation layer for the final prediction.
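As a concrete illustration of the retrieval step used by BERT_Med in Section 3.1, the following minimal sketch (not our released implementation; the function and variable names, the scikit-learn usage and the way DrugBank names are loaded are assumptions, while the 0.1 threshold follows the text above) shows how candidate medications can be selected by TF-IDF cosine similarity and then appended to the tweet as its second segment:

```python
# Illustrative sketch of the TF-IDF-based medication retrieval for BERT_Med.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def select_medications(tweet, drug_names, threshold=0.1):
    """Return the DrugBank names whose TF-IDF cosine similarity to the tweet is >= threshold."""
    vectorizer = TfidfVectorizer(lowercase=True)
    # Fit on the drug names plus the tweet so that both share one vocabulary.
    matrix = vectorizer.fit_transform(list(drug_names) + [tweet])
    drug_vecs, tweet_vec = matrix[:-1], matrix[-1]
    sims = cosine_similarity(tweet_vec, drug_vecs)[0]
    return [name for name, sim in zip(drug_names, sims) if sim >= threshold]


# The selected names would then follow the tweet as the second BERT segment,
# e.g. "<tweet text> [SEP] aspirin ibuprofen".
```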
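A minimal sketch of the Strategy 2 classification head described above is given below (again illustrative rather than our exact implementation; the BERT checkpoint, the entity-embedding dimension, the number of attention heads and the mean pooling are assumptions, and the TransE medication embeddings are assumed to be pre-computed and padded to a fixed number per tweet):

```python
# Illustrative sketch of the Strategy 2 head: BERT over "tweet [SEP] potential AEs"
# fused with self-attended TransE medication embeddings.
import torch
import torch.nn as nn
from transformers import BertModel


class BertStrategy2(nn.Module):
    """Encode 'tweet [SEP] potential AEs' with BERT and fuse it with TransE medication embeddings."""

    def __init__(self, bert_name="bert-base-uncased", entity_dim=100, num_labels=2):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # Self-attention used to redistribute the concatenated entity embeddings.
        self.entity_attn = nn.MultiheadAttention(entity_dim, num_heads=4, batch_first=True)
        # Linear transformation over the combined text and entity representations.
        self.classifier = nn.Linear(hidden + entity_dim, num_labels)

    def forward(self, input_ids, attention_mask, token_type_ids, entity_embeds):
        # entity_embeds: (batch, num_entities, entity_dim) pre-computed TransE vectors.
        out = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        text_repr = out.pooler_output                         # (batch, hidden)
        attn_out, _ = self.entity_attn(entity_embeds, entity_embeds, entity_embeds)
        entity_repr = attn_out.mean(dim=1)                    # (batch, entity_dim)
        combined = torch.cat([text_repr, entity_repr], dim=-1)
        return self.classifier(combined)                      # logits for AE / no AE
```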
", "cite_spans": [ { "start": 324, "end": 372, "text": "(Nikfarjam et al., 2015)", "ref_id": "BIBREF2" }, { "start": 579, "end": 638, "text": "(Vaswani et al., 2017)", "ref_id": null } ], "ref_spans": [ { "start": 74, "end": 80, "text": "Fig. 2", "ref_id": "FIGREF1" }, { "start": 436, "end": 442, "text": "Fig. 1", "ref_id": "FIGREF0" }, { "start": 914, "end": 920, "text": "Fig. 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Methods for task2", "sec_num": "3.2" }, { "text": "For task 1, we set the batch size to 64, the learning rate to 2e-5 and the sequence length to 64 when training the models. For task 2, the batch size and sequence length of BERT are the same as those of task 1, while the learning rates of BERT and the attention layer are set to 1e-5 and 5e-5, respectively. We train all models using 5-fold cross-validation on the training set. Table 2: Results on the training and test sets for task 1 and task 2. The results of our system provided by the organizers are listed in Table 2, where NA denotes that we could not submit results because of internet problems, and BERT_Merged is the model that merges the results of BERT_Strategy1 and BERT_Strategy2 by voting. Our system achieves the highest F-scores of 0.7553 on task 1 and 0.5455 on task 2. For task 1, on the training set, BERT_Med achieves much higher recall but much lower precision than BERT. For task 2, BERT_Strategy1 and BERT_Strategy2 perform roughly the same on the training and test sets. BERT_Merged performs slightly worse than both of them on the training set, but slightly better on the test set. The difference between the performance of our system on the training set and on the test set is very large.", "cite_spans": [], "ref_spans": [ { "start": 357, "end": 364, "text": "Table 2", "ref_id": null }, { "start": 491, "end": 498, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "1 https://www.drugbank.ca/ 2 https://medlineplus.gov/ 3 https://www.nlm.nih.gov/mesh/meshhome.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. 
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "HITSZ-ICRC: A report for SMM4H shared task 2019-automatic classification and extraction of adverse effect mentions in tweets", "authors": [ { "first": "Shuai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yuanhang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xiaowei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Haoming", "middle": [], "last": "Qin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task", "volume": "", "issue": "", "pages": "47--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shuai Chen, Yuanhang Huang, Xiaowei Huang, Haoming Qin, Jun Yan, and Buzhou Tang. 2019. HITSZ-ICRC: A report for SMM4H shared task 2019-automatic classification and extraction of adverse effect mentions in tweets. In Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 47-51.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features", "authors": [ { "first": "Azadeh", "middle": [], "last": "Nikfarjam", "suffix": "" }, { "first": "Abeed", "middle": [], "last": "Sarker", "suffix": "" }, { "first": "Karen", "middle": [], "last": "O'Connor", "suffix": "" } ], "year": 2015, "venue": "Journal of the American Medical Informatics Association", "volume": "", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Azadeh Nikfarjam, Abeed Sarker, Karen O'Connor, et al. 2015. Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features. Journal of the American Medical Informatics Association.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Translating embeddings for modeling multi-relational data", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Garcia-Duran", "suffix": "" } ], "year": 2013, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "2787--2795", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, et al. 2013. Translating embeddings for modeling multi-relational data. In Proceedings of NIPS, pages 2787-2795.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "BERT with potential drugs (BERT_Med)", "num": null, "type_str": "figure", "uris": null }, "FIGREF1": { "text": "Model architectures in task 2.", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "text": "Distribution of labels over the training and test datasets of task 1 and task 2.", "content": "
Dataset              |    #1 |     #0 |   #all
Task 1, training set |   181 | 69,091 | 69,272
Task 1, test set     |    NA |     NA | 29,687
Task 2, training set | 2,374 | 23,298 | 25,672
Task 2, test set     |    NA |     NA |  4,659
", "type_str": "table", "num": null, "html": null } } } }