{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:35:21.905354Z" }, "title": "SpeechTrans@SMM4H'20: Impact of preprocessing and n-grams on Automatic Classification of Tweets that Mention Medications", "authors": [ { "first": "Mohamed", "middle": [], "last": "Lichouri", "suffix": "", "affiliation": { "laboratory": "", "institution": "CRSTDLA / Algiers", "location": { "country": "Algeria" } }, "email": "m.lichouri@crstdla.dz" }, { "first": "Mourad", "middle": [], "last": "Abbas", "suffix": "", "affiliation": {}, "email": "m.abbas@crstdla.dz" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes our system developed for automatically classifying tweets that mention medications. We used the Decision Tree classifier for this task. We have shown that using some elementary preprocessing steps and TF-IDF n-grams led to acceptable classifier performance. Indeed, the F1-score recorded was 74.58% in the development phase and 63.70% in the test phase.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This paper describes our system developed for automatically classifying tweets that mention medications. We used the Decision Tree classifier for this task. We have shown that using some elementary preprocessing steps and TF-IDF n-grams led to acceptable classifier performance. 
Indeed, the F1-score recorded was 74.58% in the development phase and 63.70% in the test phase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The 2020 Social Media Mining for Health Applications Workshop (Klein et al., 2020) launched several natural language processing tasks that apply social media mining to health monitoring: automatic classification of tweets that mention medications, of multilingual tweets that report adverse effects, and of tweets reporting a birth defect pregnancy outcome, in addition to automatic extraction and normalization of adverse effects in English tweets, and automatic characterization of chatter related to prescription medication abuse in tweets.", "cite_spans": [ { "start": 62, "end": 82, "text": "(Klein et al., 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We are interested in the automatic classification of tweets that mention medications. A binary classification system was developed to achieve the task's aim, which is to distinguish tweets reporting medications from those that do not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we describe the dataset of Task 1, whose goal is to find tweets mentioning medications. Then, we present the pre-processing steps that we applied to clean the raw texts extracted from Twitter. The publicly available dataset (Weissenbacher et al., 2019) contains, for each tweet: (i) the user ID, (ii) the tweet ID, and (iii) the binary annotation indicating the presence or absence of medication information. The dataset contains 69,272 manually tagged tweets. 
We noted that 0.26% of the dataset (181 tweets) mentioning medications are tagged as \"positive\", and 99.74% (69,091 tweets) that don't mention medication information are tagged as \"negative\".", "cite_spans": [ { "start": 240, "end": 268, "text": "(Weissenbacher et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "In our system, we applied three pre-processing steps. The first uses the Tweets Preprocessor 1 developed by the AUTH team as part of the PlasticTwist Crowdsourcing module 2 . We used this tool to remove all URLs, mentions, hashtags, Twitter reserved words (e.g. 'RT', 'via'), punctuation, single-letter words, blank spaces, stop-words, profane words, and numbers. For the second step, we used the Spacy tool 3 to parse the documents and filter out numbers, punctuation, white space, and URLs, while keeping the hashtag text. We also removed special characters, single-syllable tokens, and mentions. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System architecture", "sec_num": "3" }, { "text": "1 https://github.com/vasisouv/tweets-preprocessor 2 https://crowdsourcing.plastictwist.com/ 3 https://spacy.io/ We also handled apostrophes, contraction checking, spell correction, and lemmatization as follows 4 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System architecture", "sec_num": "3" }, { "text": "We performed a contraction check to detect any contracted form and replace it with its expanded form (\"aren't\" is replaced by \"are not\"), followed by a lemmatization process where we lemmatize each token using the Spacy method '.lemma_', except for pronouns, which are kept as they are since the Spacy lemmatizer transforms every pronoun to \"-PRON-\". 
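The contraction-expansion step just described can be sketched as follows. This is a minimal illustration in Python; the lookup table is a small hypothetical subset, not the actual list used by the tool:

```python
# Sketch of contraction expansion: a lookup table replaces contracted
# forms with their expanded equivalents before lemmatization.
import re

CONTRACTIONS = {  # illustrative subset, not the tool's actual list
    "aren't": "are not", "isn't": "is not", "can't": "cannot",
    "won't": "will not", "don't": "do not",
}

def expand_contractions(text: str) -> str:
    # Build one alternation pattern over all known contracted forms;
    # matching is case-insensitive, lookup is done on the lowercased hit.
    pattern = re.compile(
        r"\b(" + "|".join(re.escape(c) for c in CONTRACTIONS) + r")\b",
        flags=re.IGNORECASE,
    )
    return pattern.sub(lambda m: CONTRACTIONS[m.group(0).lower()], text)

print(expand_contractions("They aren't home and don't care"))
# -> They are not home and do not care
```

Lemmatization itself would then run on the expanded text (the paper uses Spacy's `.lemma_` attribute, keeping pronouns unlemmatized).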
Finally, we ran a spell correction to deal with repeated characters such as \"sooooo goooood\". For the third pre-processing step, we used some regex rules to remove punctuation and emojis. In this work, we used a Machine Learning approach. To prepare the corpus, we adopted three pre-processing steps which can be used individually or all together. After many experiments with multiple combinations of the aforementioned three pre-processing steps, we kept the two choices that gave the best performance: the 1st choice (applying steps 1 and 2 sequentially) and the 2nd choice (steps 1, 2, 3) .", "cite_spans": [], "ref_spans": [ { "start": 971, "end": 986, "text": "(steps 1, 2, 3)", "ref_id": null } ], "eq_spans": [], "section": "System architecture", "sec_num": "3" }, { "text": "After cleaning the data, we applied a TF-IDF vectorizer with n-grams for multiple values of n (ranging from 3 to 20). We performed three tokenization processes: word, character, and character with boundary (which considers the space as a character) (Lichouri et al., 2018; Abbas et al., 2019) . The classification was achieved using the Decision Tree algorithm. We applied re-sampling using k-Fold Cross-Validation (Pedregosa et al., 2011) . We set the values of k to 5 and 10.", "cite_spans": [ { "start": 242, "end": 265, "text": "(Lichouri et al., 2018;", "ref_id": "BIBREF2" }, { "start": 266, "end": 285, "text": "Abbas et al., 2019)", "ref_id": null }, { "start": 413, "end": 437, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "System architecture", "sec_num": "3" }, { "text": "As mentioned in the previous section, our system applies multiple combinations of cleaners (preprocessing steps) together with n-grams and tokenizers. We selected the three best results for the development and test phases and report them in Table 1. 
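The classification pipeline described above (TF-IDF n-gram features with a character-with-boundary analyzer, a Decision Tree classifier, and k-fold cross-validation) can be sketched with scikit-learn as follows. This is a minimal illustration: the toy corpus, labels, and exact parameter values are hypothetical, not the SMM4H data or the submitted configuration:

```python
# Sketch: TF-IDF character n-grams ("char_wb" treats space as a boundary
# character, matching the "character with boundary" tokenization) fed to
# a Decision Tree, evaluated with k-fold cross-validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy corpus, for illustration only.
tweets = ["took my aspirin this morning", "great day at the beach",
          "forgot my insulin again", "watching the game tonight"] * 5
labels = [1, 0, 1, 0] * 5  # 1 = mentions a medication

pipe = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    DecisionTreeClassifier(random_state=0),
)
# 5-fold cross-validated F1, analogous to the paper's k=5 setting.
scores = cross_val_score(pipe, tweets, labels, cv=5, scoring="f1")
print(scores.mean())
```

Swapping `analyzer` to "word" or "char" and varying `ngram_range` reproduces the other tokenization/n-gram settings explored in the paper.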
Furthermore, we report in the same table the average performance of all the task's teams on the test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4" }, { "text": "Table 1 : Performance of the system in terms of precision, recall, and F1-score (Dev and Test dataset).", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 11, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "The size of the n-grams has an impact on the system's performance. Increasing n from 3 to 5 improved Precision and F1-score by more than 15% and 1%, respectively, while Recall dropped by more than 8%, as shown in Table 1. The impact of preprocessing is noticeable in run 3 (see Table 1): applying preprocessing steps 1 and 2 sequentially with n-grams of n=15 gave the best performance in the development phase, with a precision of 91.67% and an F1-score of 74.58%. In the test phase, the third run still gives the best performance in terms of F1-score (63.70%), with a precision of 74.14%, compared to runs 1 and 2 (62% and 58%, respectively) (Table 1) .", "cite_spans": [], "ref_spans": [ { "start": 691, "end": 700, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "The approach adopted in this work relies first on a set of preprocessing steps applied to the tweet dataset supplied in this task, and second on TF-IDF n-gram features fed to a Decision Tree classifier, in addition to a tokenization module. We have shown that adequate choices of the preprocessing-step combination and of the n-gram size n led to performance improvement. 
Compared to the average task performance, our system achieves an F1-score of 63.70%, with a precision that outperforms the average by nearly 4%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://github.com/tthustla/twitter sentiment analysis part1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "St madar 2019 shared task: Arabic fine-grained dialect identification", "authors": [], "year": 2019, "venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "269--273", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mourad Abbas, Mohamed Lichouri, and Abed Alhakim Freihat. 2019. St madar 2019 shared task: Arabic fine-grained dialect identification. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 269-273.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Overview of the fifth social media mining for health applications (#smm4h) shared tasks at coling 2020", "authors": [ { "first": "Ari", "middle": [ "Z" ], "last": "Klein", "suffix": "" }, { "first": "Ilseyar", "middle": [], "last": "Alimova", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Flores", "suffix": "" }, { "first": "Arjun", "middle": [], "last": "Magge", "suffix": "" }, { "first": "Zulfat", "middle": [], "last": "Miftahutdinov", "suffix": "" }, { "first": "Anne-Lyse", "middle": [], "last": "Minard", "suffix": "" }, { "first": "Karen", "middle": [ "O" ], "last": "Connor", "suffix": "" }, { "first": "Abeed", "middle": [], "last": "Sarker", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Tutubalina", "suffix": "" }, { "first": "Davy", "middle": [], "last": "Weissenbacher", "suffix": "" }, { "first": "Graciela", "middle": [], "last": "Gonzalez-Hernandez", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 
Fifth Social Media Mining for Health Applications (SMM4H) Workshop Shared Task", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ari Z. Klein, Ilseyar Alimova, Ivan Flores, Arjun Magge, Zulfat Miftahutdinov, Anne-Lyse Minard, Karen O'Connor, Abeed Sarker, Elena Tutubalina, Davy Weissenbacher, and Graciela Gonzalez-Hernandez. 2020. Overview of the fifth social media mining for health applications (#smm4h) shared tasks at coling 2020. In Proceedings of the Fifth Social Media Mining for Health Applications (SMM4H) Workshop Shared Task.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Word-level vs sentence-level language identification: Application to algerian and arabic dialects", "authors": [ { "first": "Mohamed", "middle": [], "last": "Lichouri", "suffix": "" }, { "first": "Mourad", "middle": [], "last": "Abbas", "suffix": "" } ], "year": 2018, "venue": "Procedia Computer Science", "volume": "142", "issue": "", "pages": "246--253", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohamed Lichouri, Mourad Abbas, Abed Alhakim Freihat, and Dhiya El Hak Megtouf. 2018. Word-level vs sentence-level language identification: Application to algerian and arabic dialects. 
Procedia Computer Science, 142:246-253.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Scikitlearn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit- learn: Machine learning in Python. 
Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Deep neural networks ensemble for detecting medication mentions in tweets", "authors": [ { "first": "Davy", "middle": [], "last": "Weissenbacher", "suffix": "" }, { "first": "Abeed", "middle": [], "last": "Sarker", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Klein", "suffix": "" }, { "first": "O'", "middle": [], "last": "Karen", "suffix": "" }, { "first": "Arjun", "middle": [], "last": "Connor", "suffix": "" }, { "first": "Graciela", "middle": [], "last": "Magge", "suffix": "" }, { "first": "", "middle": [], "last": "Gonzalez-Hernandez", "suffix": "" } ], "year": 2019, "venue": "Journal of the American Medical Informatics Association", "volume": "26", "issue": "12", "pages": "1618--1626", "other_ids": {}, "num": null, "urls": [], "raw_text": "Davy Weissenbacher, Abeed Sarker, Ari Klein, Karen O'Connor, Arjun Magge, and Graciela Gonzalez-Hernandez. 2019. Deep neural networks ensemble for detecting medication mentions in tweets. Journal of the American Medical Informatics Association, 26(12):1618-1626.", "links": null } }, "ref_entries": {} } }