{ "paper_id": "U12-1020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:07:21.486633Z" }, "title": "Experiments with Clustering-based Features for Sentence Classification in Medical Publications: Macquarie Test's participation in the ALTA 2012 shared task", "authors": [ { "first": "Diego", "middle": [], "last": "Moll\u00e1", "suffix": "", "affiliation": { "laboratory": "", "institution": "Macquarie University Sydney", "location": { "postCode": "2109", "region": "NSW", "country": "Australia" } }, "email": "diego.molla-aliod@mq.edu.au" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In our contribution to the ALTA 2012 shared task we experimented with the use of cluster-based features for sentence classification. In a first stage we cluster the documents according to the distribution of sentence labels. We then use this information as a feature in standard classifiers. We observed that the cluster-based feature improved the results for Naive-Bayes classifiers but not for better-informed classifiers such as MaxEnt or Logistic Regression.", "pdf_parse": { "paper_id": "U12-1020", "_pdf_hash": "", "abstract": [ { "text": "In our contribution to the ALTA 2012 shared task we experimented with the use of cluster-based features for sentence classification. In a first stage we cluster the documents according to the distribution of sentence labels. We then use this information as a feature in standard classifiers. We observed that the cluster-based feature improved the results for Naive-Bayes classifiers but not for better-informed classifiers such as MaxEnt or Logistic Regression.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In this paper we describe the experiments that led to our participation to the ALTA 2012 shared task. The ALTA shared tasks 1 are programming competitions where all participants attempt to solve a problem based on the same data. The participants are given annotated sample data that can be used to develop their systems, and unannotated test data that is used to submit the results of their runs. There are no constraints about what techniques of information are used to produce the final results, other than that the process should be fully automatic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The 2012 task was about classifying sentences of medical publications according to the PIBOSO taxonomy. PIBOSO (Kim et al., 2011) is an alternative to PICO for the specification of the main types of information useful for evidence-based medicine. The taxonomy specifies the following types: Population, Intervention, Background, Outcome, Study design, and Other. The dataset was provided by NICTA 2 and consisted of 1,000 medical abstracts extracted from PubMed split into an annotated training set of 800 abstracts and an unannotated test set of 200 abstracts. The competition was hosted by \"Kaggle in Class\" 3 .", "cite_spans": [ { "start": 104, "end": 129, "text": "PIBOSO (Kim et al., 2011)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Each sentence of each abstract can have multiple labels, one per sentence type. The \"other\" label is special in that it applies only to sentences that cannot be categorised into any of the other categories. The \"other\" label is therefore disjoint from the other labels. 
Every sentence has at least one label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The task can be approached as a multi-label sequence classification problem. As a sequence classification problem, one can attempt to train a sequence classifier such as Conditional Random Fields (CRF), as was done by Kim et al. (2011) . As a multi-label classification problem, one can attempt to train multiple binary classifiers, one per target label. We followed the latter approach.", "cite_spans": [ { "start": 218, "end": 235, "text": "Kim et al. (2011)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "2" }, { "text": "It has been observed that the abstracts of different publication types present different characteristics that can be exploited. This led Sarker and Moll\u00e1 (2010) to implement simple but effective rule-based classifiers that determine some of the key publication types for evidence-based medicine. In our contribution to the ALTA shared task, we want to use information about different publication types to determine the actual sentence labels of the abstract.", "cite_spans": [ { "start": 138, "end": 161, "text": "Sarker and Moll\u00e1 (2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "2" }, { "text": "To recover the publication types one can attempt to use the meta-data available in PubMed. However, as mentioned by Sarker and Moll\u00e1 (2010) , only a fraction of the PubMed abstracts are annotated with the publication type. In addition, time limitations did not allow us to recover the PubMed information before the competition deadline. Alternatively, one can attempt to use a classifier to determine the abstract type, as done by Sarker and Moll\u00e1 (2010) .", "cite_spans": [ { "start": 116, "end": 139, "text": "Sarker and Moll\u00e1 (2010)", "ref_id": "BIBREF1" }, { "start": 431, "end": 454, "text": "Sarker and Moll\u00e1 (2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "2" }, { "text": "Our approach was based on a third option. We use the sentence distribution present in the abstract to determine the abstract type. In other words, we frame the task of determining the abstract type as a task of clustering. We attempt to determine natural clusters of abstracts according to the actual sentence distributions in the abstracts, and then use this information to determine the labels of the abstract sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "2" }, { "text": "Our approach runs into a chicken-and-egg problem: to cluster the abstracts we need to know the distribution of their sentence labels. But to determine the sentence labels we need to know the cluster to which the abstract belongs. To break this cycle we use the following procedure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "2" }, { "text": "At the training stage:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "2" }, { "text": "1. Use the annotated data to train a set of classifiers (one per target label) to determine a first guess of the sentence labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "2" }, { "text": "2. 
Replace the annotated information with the information predicted by these classifiers, and cluster the abstracts according to the distribution of predicted sentence labels (more on this below).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "2" }, { "text": "3. Train a new set of classifiers to determine the final prediction of the sentence labels. The classifier features include, among other features, information about the cluster ID of the abstract to which the sentence belongs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "2" }, { "text": "Then, at the prediction stage:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "2" }, { "text": "1. Use the first set of classifiers to obtain a first guess of the sentence labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "2" }, { "text": "2. Use the clusters calculated during the training stage to determine the cluster ID of the abstracts of the test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "2" }, { "text": "3. Feed the cluster ID to the second set of classifiers to obtain the final sentence type prediction (a sketch of the two-stage procedure is given below).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "2" },
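{ "text": "As an illustration, a minimal sketch of this two-stage procedure follows. It assumes NLTK-style Naive Bayes classifiers and a representation of each abstract as a list of (feature dictionary, label set) pairs; the names (LABELS, train_binary_classifiers, predict_labels) are illustrative rather than the exact code used in our experiments.

import nltk

LABELS = ['background', 'population', 'intervention', 'outcome', 'study design', 'other']

def train_binary_classifiers(abstracts, cluster_ids=None):
    # One binary classifier per target label. 'abstracts' is a list of abstracts,
    # each a list of (feature_dict, label_set) pairs, one pair per sentence.
    classifiers = {}
    for target in LABELS:
        data = []
        for i, abstract in enumerate(abstracts):
            for features, labels in abstract:
                feats = dict(features)
                if cluster_ids is not None:
                    feats['cluster'] = cluster_ids[i]  # cluster ID of the whole abstract
                data.append((feats, target in labels))
        classifiers[target] = nltk.NaiveBayesClassifier.train(data)
    return classifiers

def predict_labels(classifiers, features):
    # Labels whose binary classifier returns True for this sentence.
    return set(label for label, c in classifiers.items() if c.classify(features))

The first set of classifiers is trained without the cluster feature; its predictions drive the clustering, and the second set is trained with the cluster ID added to the features of every sentence of the abstract.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "2" },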
{ "text": "The clustering phase clusters the abstracts according to the distribution of sentence labels. In particular, each abstract is represented as a vector,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering the abstracts", "sec_num": "2.1" }, { "text": "where each vector element represents the relative frequency of a sentence label. For example, if abstract A contains 10 sentences such that there are 2 with label \"background\", 1 with label \"population\", 2 with label \"study design\", 3 with label \"intervention\", 3 with label \"outcome\", and 1 with label \"other\", then A is represented as (0.2, 0.1, 0.2, 0.3, 0.3, 0.1). Note that a sentence may have several labels, so the sum of all features of the vector is greater than or equal to 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering the abstracts", "sec_num": "2.1" }, { "text": "We use K-means to cluster the abstracts. We then use the cluster centroid information to determine the cluster ID of unseen abstracts at the prediction stage. In particular, at prediction time an abstract is assigned the ID of the cluster whose centroid is closest according to the clustering algorithm's distance measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering the abstracts", "sec_num": "2.1" }, { "text": "In preliminary experiments we divided the abstracts into different zones and computed the label distributions in each zone. The rationale is that different parts of the abstract are expected to feature different label distributions. For example, the beginning of the abstract would have a relatively larger proportion of \"background\" sentences, and the end would have a relatively larger proportion of \"outcome\" sentences. However, our preliminary experiments did not show significant differences in the results with respect to the number of zones. Therefore, in the final experiments we used the complete sentence distribution of the abstract as one unique zone, as described at the beginning of this section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering the abstracts", "sec_num": "2.1" }, { "text": "Our preliminary experiments gave best results for a cluster size of K = 4 and we used that number in the final experiments. We initially used NLTK's implementation of K-Means and submitted our results to Kaggle using this implementation. However, in subsequent experiments we replaced NLTK's implementation with our own implementation because NLTK's implementation was not stable and would often crash, especially for values of K >= 4. In our final implementation of K-Means we run 100 instances of the clustering algorithm with different initialisation values and choose the run with the lowest final cost. The chosen distance measure is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering the abstracts", "sec_num": "2.1" }, { "text": "\\sum_i (x_i - c_i)^2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering the abstracts", "sec_num": "2.1" }, { "text": ", where x_i is feature i of the abstract, and c_i is feature i of the centroid of the candidate cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering the abstracts", "sec_num": "2.1" },
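{ "text": "A minimal sketch of this clustering step follows (illustrative only, not the exact implementation used in our experiments): K-means with random restarts over the label-distribution vectors, keeping the run with the lowest total cost, and assignment of unseen abstracts to the closest centroid.

import random

def label_distribution(sentence_labels, labels):
    # Relative frequency of each label over the sentences of one abstract.
    # 'sentence_labels' is a list of label sets, one per sentence.
    n = float(len(sentence_labels))
    return [sum(1 for s in sentence_labels if l in s) / n for l in labels]

def kmeans(vectors, k, restarts=100, iterations=50):
    def dist2(x, c):
        return sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    best_cost, best_centroids = None, None
    for _ in range(restarts):
        centroids = [list(v) for v in random.sample(vectors, k)]
        for _ in range(iterations):
            assignment = [min(range(k), key=lambda j: dist2(v, centroids[j])) for v in vectors]
            for j in range(k):
                members = [v for v, a in zip(vectors, assignment) if a == j]
                if members:
                    centroids[j] = [sum(col) / len(members) for col in zip(*members)]
        cost = sum(dist2(v, centroids[a]) for v, a in zip(vectors, assignment))
        if best_cost is None or best_cost > cost:
            best_cost, best_centroids = cost, centroids
    return best_centroids

def closest_cluster(vector, centroids):
    # Cluster ID of the centroid closest to the vector (used at prediction time).
    return min(range(len(centroids)), key=lambda j: sum((x - c) ** 2 for x, c in zip(vector, centroids[j])))

A library implementation would behave similarly (for example scikit-learn, whose KMeans accepts an n_init parameter for the number of restarts).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering the abstracts", "sec_num": "2.1" },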
{ "text": "For the initial experiments we used NLTK's Naive Bayes classifiers. We experimented with the following features: p Sentence position in the abstract.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "np Normalised sentence position. The position is normalised by dividing the value of p by the total number of sentences in the abstract.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "w Word unigrams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "s Stem unigrams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "c Cluster ID as returned by the clustering algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "The results of the initial experiments are shown in Table 1 . Rows in the table indicate the first classifier, and columns indicate the second classifier. Thus, the best results (in boldface) are obtained with a first set of classifiers that use word unigrams plus the normalised sentence position, and a second set of classifiers that use the cluster information and the normalised sentence position.", "cite_spans": [], "ref_spans": [ { "start": 51, "end": 58, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "Due to time constraints we were not able to try all combinations of features, but we can observe that the cluster information generally improves the F 1 scores. We can also observe that the word information is not very useful, presumably because the correlation between some of the features degrades the performance of the Naive Bayes classifiers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "In the second round of experiments we used NLTK's MaxEnt classifier. We decided to use MaxEnt because it handles correlated features, and we therefore expected better results. As Table 1 shows, the results are considerably better. Word unigram features are now decidedly more useful, but the impact of the cluster information is reduced. MaxEnt with cluster information is only marginally better than the run without cluster information, and in fact the difference was not greater than the variation observed among repeated runs of the algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "We performed very few experiments with the MaxEnt classifier because of a practical problem: shortly after running the experiments and submitting to Kaggle, NLTK's MaxEnt classifier stopped working. We attributed this to an upgrade of our system to a newer release of Ubuntu, which presumably carried a less stable version of NLTK.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "We subsequently implemented a Logistic Regression classifier from scratch and carried out a few further experiments. The most relevant ones are included in Table 1 . We only tested the configuration that uses all the features, due to time constraints and to the presumption that using only sentence positions would likely produce results very similar to those of the Naive Bayes classifiers, as was observed with the MaxEnt method.", "cite_spans": [], "ref_spans": [ { "start": 152, "end": 159, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "The Logistic Regression classifier used a simple gradient descent optimisation algorithm. Due to time constraints, however, we forced it to stop after 50 iterations. We observed that the runs that did not use the cluster information came closer to convergence than those that did, and we attribute the slightly worse F 1 of the runs with cluster information to this. Overall the results were slightly worse than with NLTK's MaxEnt classifiers, presumably because the optimisation algorithm was stopped before convergence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3" },
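{ "text": "A minimal sketch of such a classifier follows (illustrative only; the learning rate and function names are assumptions rather than the exact code we used): binary logistic regression trained with batch gradient descent and stopped after a fixed number of iterations.

import math

def train_logistic_regression(xs, ys, learning_rate=0.1, max_iterations=50):
    # xs: list of numeric feature vectors; ys: list of 0/1 labels.
    # Returns a weight vector with the bias term stored last.
    weights = [0.0] * (len(xs[0]) + 1)
    for _ in range(max_iterations):
        gradient = [0.0] * len(weights)
        for x, y in zip(xs, ys):
            z = weights[-1] + sum(w * xi for w, xi in zip(weights, x))
            p = 1.0 / (1.0 + math.exp(-z))
            for i, xi in enumerate(x):
                gradient[i] += (p - y) * xi
            gradient[-1] += p - y
        weights = [w - learning_rate * g / len(xs) for w, g in zip(weights, gradient)]
    return weights

def predict_probability(weights, x):
    z = weights[-1] + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

Stopping after max_iterations = 50 mirrors the cap described above; with more iterations, or a proper convergence test, the weights would come closer to the optimum.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3" },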
{ "text": "The value in boldface in the MaxEnt component of Table 1 shows the best result. This corresponds to a first and second set of classifiers that use all the available features. This setup of classifiers was used for the run submitted to Kaggle that achieved our best results, with an AUC of 0.943. That placed us in third position in the overall ranking. Table 2 shows the results of several of the runs submitted to Kaggle. Note that, whereas in Table 1 we used a partition of 70% of the training set for training and 30% for testing, in Table 2 we used the complete training set for training and the unannotated test set for the submission to Kaggle. Note also that Kaggle used AUC as the evaluation measure. Column prob shows the results when we submitted class probabilities. Column threshold shows the results when we submitted labels 0 and 1 according to the classifier threshold. We observe the expected degradation of results due to the ties. Overall, F 1 and AU C (prob) preserved the same order, but AU C (threshold) presented discrepancies, again presumably because of the presence of ties.", "cite_spans": [], "ref_spans": [ { "start": 49, "end": 56, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 351, "end": 358, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 535, "end": 542, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "We tested the use of cluster-based features for the prediction of sentence labels of medical abstracts. We used multiple binary classifiers, one per sentence label, in two stages. The first stage used standard features, and the second stage incorporated cluster-based information. We observed that, whereas cluster-based information improved results in Naive Bayes classifiers, it did not improve results in better-informed classifiers such as MaxEnt or Logistic Regression. Time constraints did not allow us to perform comprehensive tests, but it appears that cluster-based information as derived in this study is not sufficiently informative. So, after all, a simple set of features based on word unigrams and sentence positions fed to multiple MaxEnt or Logistic Regression classifiers was enough to obtain reasonably good results for this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary and Conclusions", "sec_num": "4" }, { "text": "Further work on this line includes the incorporation of additional features at the clustering stage. It is also worth testing the impact of publication types as annotated by MetaMap or as generated by Sarker and Moll\u00e1 (2010) .", "cite_spans": [ { "start": 202, "end": 225, "text": "Sarker and Moll\u00e1 (2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Summary and Conclusions", "sec_num": "4" }, { "text": "http://alta.asn.au/events/sharedtask2012/ 2 http://www.nicta.com.au/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://inclass.kaggle.com/c/alta-nicta-challenge2 Diego Moll\u00e1. 2012. Experiments with Clustering-based Features for Sentence Classification in Medical Publications: Macquarie Test's participation in the ALTA 2012 shared task. In Proceedings of Australasian Language Technology Association Workshop, pages 139\u2212142.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Automatic classification of sentences to support Evidence Based Medicine", "authors": [ { "first": "Su Nam", "middle": [], "last": "Kim", "suffix": "" }, { "first": "David", "middle": [], "last": "Martinez", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Cavedon", "suffix": "" }, { "first": "Lars", "middle": [], "last": "Yencken", "suffix": "" } ], "year": 2011, "venue": "BMC bioinformatics", "volume": "12", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Su Nam Kim, David Martinez, Lawrence Cavedon, and Lars Yencken. 2011. Automatic classification of sentences to support Evidence Based Medicine. 
BMC bioinformatics, 12 Suppl 2:S5, January.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A Rule-based Approach for Automatic Identification of Publication Types of Medical Papers", "authors": [ { "first": "Abeed", "middle": [], "last": "Sarker", "suffix": "" }, { "first": "Diego", "middle": [], "last": "Moll\u00e1", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Fifteenth Australasian Document Computing Symposium", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abeed Sarker and Diego Moll\u00e1. 2010. A Rule-based Approach for Automatic Identification of Publication Types of Medical Papers. In Proceedings of the Fifteenth Australasian Document Computing Symposium.", "links": null } }, "ref_entries": { "TABREF1": { "num": null, "content": "
Rows: features of the first set of classifiers. Columns: features of the second set (\u2212 = no cluster feature, then c + p, c + np, c + w, c + w + np, c + s + np). Values whose column could not be recovered from the source layout are listed without a column label.
With Naive Bayes classifiers
p: 0.440 (\u2212), 0.572 (c + p)
np: 0.555 (\u2212), 0.577 (c + np)
w: 0.448 (\u2212), 0.610 (c + np), 0.442 (c + w)
w + np: 0.471, 0.611 (c + np; best Naive Bayes result)
s + np: 0.468, 0.485
With MaxEnt classifiers
p: (no results recorded)
np: 0.574
w: 0.646, 0.704
w + np: 0.740, 0.759 (c + w + np; best overall result)
s + np: 0.758
With Logistic Regression classifiers
w + np: 0.757 (\u2212), 0.747 (with cluster feature)
", "html": null, "text": "F 1 scores with the Naive Bayes, MaxEnt and Logistic Regression classifiers.", "type_str": "table" }, "TABREF2": { "num": null, "content": "
Runs (first classifier \u2212 second classifier) and their F 1 scores: MaxEnt w + np \u2212 c + w + np: 0.759; NB w \u2212 c + np: 0.610; NB np \u2212 c + np: 0.577; NB p \u2212 c + p: 0.572; NB w: 0.448; NB w \u2212 c + w: 0.442; NB p: 0.440.
AU C (prob) values recovered from the source, in table order (not every run was submitted in this form; 0.943 corresponds to the MaxEnt run): 0.943, 0.896, 0.888, 0.873, 0.793.
AU C (threshold) values recovered from the source, in table order: 0.673, 0.727, 0.654.