{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:36:14.418313Z" }, "title": "On the effectiveness of small, discriminatively pre-trained language representation models for biomedical text mining", "authors": [ { "first": "Ibrahim", "middle": [ "Burak" ], "last": "Ozyurt", "suffix": "", "affiliation": { "laboratory": "FDI Lab Dept. of Neuroscience UCSD La Jolla", "institution": "", "location": { "country": "USA" } }, "email": "iozyurt@ucsd.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Neural language representation models such as BERT (Devlin et al., 2019) have recently shown state of the art performance in downstream NLP tasks and bio-medical domain adaptation of BERT (Bio-BERT (Lee et al., 2019)) has shown same behavior on biomedical text mining tasks. However, due to their large model size and resulting increased computational need, practical application of models such as BERT is challenging making smaller models with comparable performance desirable for real word applications. Recently, a new language transformers based language representation model named ELECTRA (Clark et al., 2020) is introduced, that makes efficient usage of training data in a generative-discriminative neural model setting that shows performance gains over BERT. These gains are especially impressive for smaller models. Here, we introduce two small ELECTRA based model named Bio-ELECTRA and Bio-ELECTRA++ that are eight times smaller than BERT Base and Bio-BERT and achieves comparable or better performance on biomedical question answering, yes/no question answer classification, question answer candidate ranking and relation extraction tasks. Bio-ELECTRA is pre-trained from scratch on PubMed abstracts using a consumer grade GPU with only 8GB memory. Bio-ELECTRA++ is the further pre-trained version of Bio-ELECTRA trained on a corpus of open access full papers from PubMed Central. While, for biomedical named entity recognition, large BERT Base model outperforms Bio-ELECTRA++, Bio-ELECTRA and ELECTRA-Small++, with hyperparameter tuning Bio-ELECTRA++ achieves results comparable to BERT.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Neural language representation models such as BERT (Devlin et al., 2019) have recently shown state of the art performance in downstream NLP tasks and bio-medical domain adaptation of BERT (Bio-BERT (Lee et al., 2019)) has shown same behavior on biomedical text mining tasks. However, due to their large model size and resulting increased computational need, practical application of models such as BERT is challenging making smaller models with comparable performance desirable for real word applications. Recently, a new language transformers based language representation model named ELECTRA (Clark et al., 2020) is introduced, that makes efficient usage of training data in a generative-discriminative neural model setting that shows performance gains over BERT. These gains are especially impressive for smaller models. Here, we introduce two small ELECTRA based model named Bio-ELECTRA and Bio-ELECTRA++ that are eight times smaller than BERT Base and Bio-BERT and achieves comparable or better performance on biomedical question answering, yes/no question answer classification, question answer candidate ranking and relation extraction tasks. Bio-ELECTRA is pre-trained from scratch on PubMed abstracts using a consumer grade GPU with only 8GB memory. 
Bio-ELECTRA++ is the further pre-trained version of Bio-ELECTRA trained on a corpus of open access full papers from PubMed Central. While, for biomedical named entity recognition, large BERT Base model outperforms Bio-ELECTRA++, Bio-ELECTRA and ELECTRA-Small++, with hyperparameter tuning Bio-ELECTRA++ achieves results comparable to BERT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Transformers based language representation learning methods such as Bidirectional Encoder Rep-resentations from Transformers (BERT) (Devlin et al., 2019) are becoming increasingly popular for downstream biomedical NLP tasks due to their performance advantages . The performance of these models comes at a steep increase in computation cost both at training and inference time. For example, we use a BERT based re-ranker as the final step in our biomedical question answering system (Ozyurt et al., 2020) , where 60% of the question answering time latency is due to the BERT classifier with 110 million parameters. The increased size of the transformer models is correlated with the increased performance (Devlin et al., 2019) . Since the computational cost involved at inference time for large models is a bottleneck in their practical applications in the real world especially for real time applications such as semantic search and question answering, new approaches to achieve similar performance on smaller models are getting increasingly popular. A popular approach on this end is distilling BERT to a smaller classifier such as DistillBERT (Sanh et al., 2019) , TinyBERT (Jiao et al., 2019) and MobileBERT (Sun et al., 2020) . However, a small and efficient model without going through the trouble of training a large model and mimicking it in a smaller model is more preferable.", "cite_spans": [ { "start": 132, "end": 153, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 482, "end": 503, "text": "(Ozyurt et al., 2020)", "ref_id": "BIBREF12" }, { "start": 704, "end": 725, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 1145, "end": 1164, "text": "(Sanh et al., 2019)", "ref_id": "BIBREF15" }, { "start": 1176, "end": 1195, "text": "(Jiao et al., 2019)", "ref_id": null }, { "start": 1211, "end": 1229, "text": "(Sun et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "BERT uses a masked language modeling (MLM) approach by masking 15% of the training sentences and learning to guess the masked tokens in a generative manner. This results BERT using only 15% of the training data. A recent approach called ELEC-TRA (Clark et al., 2020) , introduced a new language modeling approach where a discriminative model is trained to detect whether each token in the corrupted input was replaced by a co-trained generator model sample or not. ELECTRA is computationally more efficient than BERT and outperforms BERT given the same model size, data and computation resources (Clark et al., 2020) . 
The improvements over BERT is most impressive at small model sizes, which makes it an excellent candidate in pursuit of small and efficient language representation models for biomedical text mining.", "cite_spans": [ { "start": 246, "end": 266, "text": "(Clark et al., 2020)", "ref_id": "BIBREF2" }, { "start": 596, "end": 616, "text": "(Clark et al., 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we introduce two small and efficient ELECTRA based domain-specific language representation models trained on PubMed abstracts and on PubMed Central (PMC) open-access full papers, respectively, with a domain specific vocabulary achieving comparable or (in some cases) better results on several biomedical text mining tasks to BERT Base model that have 8 times more parameters resulting in 8 times decrease in inference time. The models are trained on a modest consumer grade GPU with only 8GB RAM which is much lower bar for pre-training of domain-specific language representation models than for BERT and variants. The performance on biomedical named entity recognition (NER) of small ELECTRA models are not as impressive as in the question answering related tasks compared to BERT. However, Bio-ELECTRA++ NER performance can be significantly improved by hyperparameter tuning to achieve comparable performance to BERT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Bio-ELECTRA/Bio-ELECTRA++ Both ELECTRA and BERT are pre-trained on English Wikipedia and BooksCorpus as general purpose language models. They both also use WordPiece tokenization (Wu et al., 2016) which represents words as constructed from character ngrams of highest co-occurrence to allow out-ofvocabulary (OOV) words to be represented. Given a vocabulary size, the character n-grams (subwords) making up the vocabulary are determined from the corpus by using an objective similar to the compression algorithms to find the subwords that would generate each unique word in the corpus. OOV words are then generated by combination of subwords from the subwords vocabulary. Since the vocabulary of BERT and ELECTRA (Clark et al., 2020) are generated from general purpose corpora, a lot of biomedical domain specific words need to be composed from subwords that does not convey enough information by themselves. For example the gene BRCA1 in BERT/ELECTRA vocabulary represented as B##R##CA##1, mostly formed from single letter embedded representations. For Bio-ELECTRA, the vocabulary is generated using SentencePiece byte-pair-encoding (BPE) model (Sennrich et al., 2016) from PubMed abstract texts from 2017. Using this domain-specific vocabulary BRCA1 is represented as BRCA##1. In this case, the composition from parts conveys more information since the learned vector embedding of BRCA subword is more likely to capture, for example, its breast cancer relatedness. 19.2 million most recent PubMed abstracts (having PMID greater than 10 million) as of March 2020 are used for Bio-ELECTRA pre-training. Sentences extracted from the paper title and abstract text are used to build the pre-training corpus of about 2.5 billion words. Using the PubMed abstract corpus and 2017 PubMed abstracts generated SentencePiece vocabulary a ELECTRA-Small model (14M trainable parameters) with a maximum sequence size of 256 and batch size of 64 is pre-trained from scratch on a RTX 2070 8GB GPU in four stages for 1.8 million steps lasting 24 days. 
Original ELECTRA Small was trained on a V100 32GB GPU in 4 days with a batch size of 128 for one million steps. However, the distributed ELEC-TRA Small++ (Clark et al., 2020) , which was used for our comparison experiments, was trained on the XLNet (Yang et al., 2019) corpus (about 33 billion subword corpus) with maximum sequence size of 512 for 4 million steps. Since the batch size of Bio-ELECTRA is half the size of the ELECTRA Small due to our GPUs memory size, two million steps are equivalent to one million ELECTRA training steps. ELECTRA Small++ is trained four times more than Bio-ELECTRA and trained on much larger corpus.", "cite_spans": [ { "start": 179, "end": 196, "text": "(Wu et al., 2016)", "ref_id": null }, { "start": 713, "end": 733, "text": "(Clark et al., 2020)", "ref_id": "BIBREF2" }, { "start": 1146, "end": 1169, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF16" }, { "start": 2190, "end": 2210, "text": "(Clark et al., 2020)", "ref_id": "BIBREF2" }, { "start": 2285, "end": 2304, "text": "(Yang et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-training", "sec_num": "2.1" }, { "text": "For the second stage of pre-training, full-text papers from open access subset of PubMed Central (PMC-OAI) as of May 2020 are used. Sentences extracted from all sections except the references section of the full-length papers are used to build a 12.3 billion words corpus. Bio-ELECTRA is further pre-trained for additional 1.8 million steps using this 12.3 billion words corpus on the same RTX 2070 8GB GPU for additional 24 days. The resulting pre-trained model is called Bio-ELECTRA++ analogous to ELECTRA Small++.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-training", "sec_num": "2.1" }, { "text": "The syntactic and semantic language modeling information latently captured in the pre-trained weights of transformer models combined with a classification layer were found to provide state-of-the-art results in many NLP tasks (Devlin et al., 2019; Clark et al., 2020) . We fine-tune Bio-ELECTRA, Bio-ELECTRA++, ELECTRA Small++ and BERT Base for biomedical question answering, yes/no question answer classification, named entity recognition (NER), biomedical question answer candidate ranking and relation extraction tasks.", "cite_spans": [ { "start": 226, "end": 247, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF4" }, { "start": 248, "end": 267, "text": "Clark et al., 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning for Biomedical Text Mining Tasks", "sec_num": "2.2" }, { "text": "For biomedical question answering, we used BERT and ELECTRA architectures for SQuAD (Rajpurkar et al., 2016) for SQuAD v1.1. Similar to Wiese et al. and Lee et al. (Wiese et al., 2017; , we have combined our BioASQ (Tsatsaronis et al., 2015) 8b training data generated factoid and list questions based training set with out-of-domain SQuAD v1.1 data set to increase performance over the smaller BioASQ data.", "cite_spans": [ { "start": 84, "end": 108, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF13" }, { "start": 136, "end": 184, "text": "Wiese et al. and Lee et al. 
(Wiese et al., 2017;", "ref_id": null }, { "start": 215, "end": 241, "text": "(Tsatsaronis et al., 2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning for Biomedical Text Mining Tasks", "sec_num": "2.2" }, { "text": "The biomedical yes/no question answer classification task is similar to sentiment (hedging for biomedical literature) detection where the polarity (positive/negative) of a candidate sentence needs to be detected in the context of a question. For ELECTRA and BERT, we have used their official codebase from GitHub slightly extended for our specific classification task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning for Biomedical Text Mining Tasks", "sec_num": "2.2" }, { "text": "Named entity recognition involves detection of names of biomedical entities in sentences and usually used for downstream tasks such as information extraction and question answering. For ELEC-TRA and Bio-ELECTRA/Bio-ELECTRA++, we have used the ELECTRA architecture for entity level tasks adapted for BIO annotation scheme. For BERT, we have used HuggingFace Transformers Python library single output layer entity classification architecture.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning for Biomedical Text Mining Tasks", "sec_num": "2.2" }, { "text": "In biomedical question answering, after retrieving relevant documents, the sentences containing the answer need to be filtered and ranked for the end user. Given a set of answer candidate sentences per question, where the sentences answering the question are marked as relevant, the ranking problem can be cast as a 0/1 loss classification problem and the learned probability estimates can be used to rank the candidate sentences by relevance. Due to highly unbalanced nature of this data set (on average one positive example per 99 negative examples), we have also investigated a weighted loss function. This ranking approach is also compared to cosine distance based ranking on sentence embeddings generated by Sentence-BERT (Reimers and Gurevych, 2019) with and without domain adaptation. For Sentence-BERT domain adaptation, we had further trained Sentence-BERT Siamese BERT classifier model with the training portion of our ranking data.", "cite_spans": [ { "start": 727, "end": 755, "text": "(Reimers and Gurevych, 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning for Biomedical Text Mining Tasks", "sec_num": "2.2" }, { "text": "In biomedical relation extraction, a predetermined set of relations among two biomedical entities of interest are classified. For BERT and ELECTRA, relation extraction can be cast as a sentence classification task where the biomedical entities of interest are anonymized using pre-defined tokens to indicate to the classifier the identity of the named entities are not important compared to the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning for Biomedical Text Mining Tasks", "sec_num": "2.2" }, { "text": "For each fine-tuning experiment, ten randomly initialized models are trained and average testing performances and standard deviations are reported. Default BERT and ELECTRA hyperparameters including the number of epochs (two for QA task and three for classification/NER tasks) are used for corresponding experiments. More performance can be squeezed out of the fine-tuned models by hyperparameter tuning. 
For data sets with an explicit development set, we have investigated the effect of the hyperparameter tuning. All of the ELECTRA based fine-tuning trainings are conducted on a GTX 1060 6GB GPU, while the eight times larger BERT models required training on our RTX 2070 8GB GPU. For BERT experiments, cased BERT Base model is used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning for Biomedical Text Mining Tasks", "sec_num": "2.2" }, { "text": "For biomedical question answering and yes/no answer classification tests, we have generated training and testing data sets from the publicly available 2020 BioASQ (Tsatsaronis et al., 2015) Task B (8b) training data set. BioASQ 8b training set consists of 3243 questions together with ideal and exact answers and gold standard snippets. The questions come in four categories (i.e. factoid, list, yes/no and summary). Factoid and list questions are usually answered by a word or phrase (multiple word/phrases for list questions) making them amendable for extractive answer span detection type exact question answering for which general purpose question answering data sets are available such as SQUAD (Rajpurkar et al., 2016) . Snippets matching their corresponding exact answer(s) are selected for the bio-medical question answering labeled set generation. For about 30% of the fac-toid/list questions no snippet can be aligned with their corresponding ideal answers. We analyzed those cases and were able to recover additional 152 questions after manual inspection for synonyms and transliterations to include in our labeled data set. The labeled data set is split into 85%/15% training/testing data sets of size 9557 and 1809, respectively.", "cite_spans": [ { "start": 163, "end": 189, "text": "(Tsatsaronis et al., 2015)", "ref_id": null }, { "start": 700, "end": 724, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "For yes/no answer classification, the ideal answer text of each BioASQ yes/no questions is used as the context and the exact answer (i.e. 'yes' or 'no') as label for binary classification. The ideal answers are cleaned up to remove the exact answer (yes or no) that sometimes occur at the beginning of the ideal answer. The labeled data is split into 85%/15% training/testing data sets of size 728 and 128, respectively. BioASQ yes/no questions are skewed towards yes answers where about 80% of the answers were 'yes'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "For named entity recognition tests, we have used publicly available datasets used by Crichton et. al (Crichton et al., 2017) . Four common biomedical entity types are considered, namely disease, drug/chemical, gene/protein and species.", "cite_spans": [ { "start": 101, "end": 124, "text": "(Crichton et al., 2017)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "For our biomedical QA system, we have annotated up to 100 answer candidates per question as returned by the first answer ranker of our QA system as relevant or not (up to the first occurrence of a correct answer). 
The resulting annotated data set consists of a training set (44933 sentences for 492 questions) and a testing set (9064 sentences for 100 questions).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "For biomedical relation extraction, we have used two datasets; GAD (Bravo et al., 2015 ) (a gene-disease relation dataset) and CHEMPROT (Krallinger et al., 2017 ) (a proteinchemical multi-relation dataset).", "cite_spans": [ { "start": 67, "end": 86, "text": "(Bravo et al., 2015", "ref_id": "BIBREF1" }, { "start": 136, "end": 160, "text": "(Krallinger et al., 2017", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "For GAD, we have used the pre-processed version from Bio-BERT (Lee et al., 2019) Github repository. For CHEMPROT, we have adapted the pre-processed data from the Github repository of the relation extraction model described in (Lim and Kang, 2018) for our ELECTRA/Bio-ELECTRA/BERT experiments.", "cite_spans": [ { "start": 226, "end": 246, "text": "(Lim and Kang, 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "The datasets used in our experiments are summarized in Table 1 . The datasets and source code are available on Github (https://github.com/ SciCrunch/bio_electra). The Bio-ELECTRA models are available on Zenodo (https://doi. The effect of the increased number of training steps on the BioASQ question answering task is shown in Figure 1 on exact-match evaluation measure where the 95% confidence intervals are also shown, Even at 880K (or 440K in terms of ELECTRA Small++ pre-training with doubled batch size) training steps the performance of the Bio-ELECTRA is strong relative to BERT Base as shown in Table 2 . Similar to what is observed in general purpose downstream question answering tasks (Devlin et al., 2019; Clark et al., 2020) , more pre-training improves downstream performance in biomedical question answering.", "cite_spans": [ { "start": 696, "end": 717, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF4" }, { "start": 718, "end": 737, "text": "Clark et al., 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 55, "end": 62, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 327, "end": 335, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 603, "end": 610, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "The biomedical factoid/list question answering results are shown in Table 2 . We have used official SQUAD evaluation measures exact answer span match percentage and F 1 measure. While BERT Base model had slightly better performance, taken into account their 8 times smaller size and 45 times less training time (Clark et al., 2020) , the performance of both Bio-ELECTRA and ELEC-TRA Small++ models are impressive. With one fourth of the training of ELECTRA Small++, Bio-ELECTRA has nearly same performance as the ELECTRA Small++. The best performance among ELECTRA models is observed for the Bio-ELECTRA++ model further decreasing already small performance gap between ELECTRA Small++ and BERT. 
BioASQ yes/no question answer classification (Krallinger et al., 2015) Drug/Chemical 29478/29486/25346 BC2GM (Smith et al., 2008) Gene/Protein 15197/3061/6325 NCBI Disease (Dogan et al., 2014) Disease 5134/787/960 LINNAEUS (Gerner et al., 2010 (Bravo et al., 2015) Gene-disease 4796/-/534 CHEMPROT (Krallinger et al., 2017) Protein-chemical 16521/10361/14396 task results are shown in Table 3 . We have used the official BioASQ yes/no question evaluation measure of precision, recall and F 1 applied on both yes and no questions separately. While, both Bio-ELECTRA and Bio-ELECTRA++ outperforms BERT Base, BIO-ELECTRA++ is the clear winner due to its superior performance on questions with negative answer. The high standard deviations for Bio-ELECTRA and BERT Base are due to one random run in each case being stuck in a local minimum where the classifier always answers yes (since BioASQ yes/no questions are highly unbalanced towards the 'yes' answer (80% yes/20% no)).", "cite_spans": [ { "start": 311, "end": 331, "text": "(Clark et al., 2020)", "ref_id": "BIBREF2" }, { "start": 740, "end": 765, "text": "(Krallinger et al., 2015)", "ref_id": "BIBREF8" }, { "start": 804, "end": 824, "text": "(Smith et al., 2008)", "ref_id": "BIBREF17" }, { "start": 867, "end": 887, "text": "(Dogan et al., 2014)", "ref_id": "BIBREF5" }, { "start": 918, "end": 938, "text": "(Gerner et al., 2010", "ref_id": "BIBREF6" }, { "start": 939, "end": 959, "text": "(Bravo et al., 2015)", "ref_id": "BIBREF1" }, { "start": 993, "end": 1018, "text": "(Krallinger et al., 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 68, "end": 75, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1080, "end": 1087, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "3.3" }, { "text": "The test results for biomedical NER experiments are shown in Table 4 . Similar to BioBERT , we have used precision, recall and F 1 as evaluation measures. Here, the large BERT Base language representation model showed, the largest benefit over smaller models at the cost 8 times longer inference time. Bio-ELECTRA++ outperformed Bio-ELECTRA on all datasets and was better (in terms of mean F1 performance) than ELECTRA Small++ in three of the four NER entity types, while ELECTRA Small++ was slightly better than Bio-ELECTRA++ on the 'disease' entity type.", "cite_spans": [], "ref_spans": [ { "start": 61, "end": 68, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "3.3" }, { "text": "The test results for biomedical question answer candidate ranking experiments are shown in Table 5. We have used the mean reciprocal rank (MRR) to evaluate the ranking performance on the test set. Here, all of the ELECTRA models outperformed BERT, while Bio-ELECTRA++ being the best performing among them. Sentence-BERT sentence embeddings question-answer cosine similarity based approaches performed the worst.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "3.3" }, { "text": "The test results for biomedical relation extraction experiments are shown in Table 6 . For the multirelation dataset CHEMPROT, micro-averaged precision, recall and F 1 metrics are used. For GAD dataset, Bio-ELECTRA performed best closely followed by Bio-ELECTRA++. 
BERT showed best performance on the CHEMPROT dataset , followed by Bio-ELECTRA++.", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 84, "text": "Table 6", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "3.3" }, { "text": "Bio-ELECTRA++ outperformed Electra-SMALL++ in 8 out of 9 datasets spanning all five tasks. Against BERT, Bio-ELECTRA++ models showed, besides named entity recognition tasks, either competitive or better (in 3 out of 9 datasets) performance despite having only one eights of the BERT model's capacity (parameter size).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "3.3" }, { "text": "For all our BERT and ELECTRA experiments, we have used default parameters without any hyperparameter optimization. To investigate the effect of hyperparameter optimization on the test performance, we have selected the named entity datasets and CHEMPROT relation extraction dataset, which have a development set to use for hyperparameter optimization. Using hyperopt (Bergstra et al., 2013) Python package, we searched for the optimum F 1 value on the corresponding development set of each dataset for the following hyperparameters; the learning rate among the values 1e-5, 5e-5, 1e-4 and 5e-4, number of epochs among the values 3, 5, 15 and 20 and batch size among the values 12, 24, 32 and 64. The best performing hyperparameter combination for each data set is then used to train ten randomly initialized Bio-ELECTRA++ based classifiers. The test results of the effect of the hyperparameter optimization on Bio-ELECTRA++ are shown in Table 7 . In all datasets, hyperparameter optimization resulted in substantial improvement over Bio-ELECTRA++ classifiers without hyper-parameter optimization. For the NER datasets, the improved test performance caught up with the BERT test performance. Hyperpameter optimized bio-ELECTRA++ relation extraction classifier outperformed BERT. While BERT performance would also profit from hyperparameter optimization, BERT finetuning is more than an order of magnitude slower than Bio-ELECTRA++ finetuning impeding on its practicality.", "cite_spans": [ { "start": 366, "end": 389, "text": "(Bergstra et al., 2013)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 936, "end": 943, "text": "Table 7", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Effect of hyperparameter optimization on Bio-ELECTRA++", "sec_num": "3.3.1" }, { "text": "In this paper, we have shown that small domainspecific language representation models that make more efficient use of pre-training data can achieve comparable or better (in some cases) downstream performance on several biomedical text mining tasks to BERT Base with eight times more parameters. Two domain-specific biomedical language representation models based on recently introduced ELECTRA architecture named Bio-ELECTRA and Bio-ELECTRA++ were pre-trained on a consumer grade GPU with only 8GB memory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "While, Bio-ELECTRA performance is highly competitive to BERT Base for question answering and classification tasks, its performance lags behing BERT Base for NER tasks. To further improve the performance of Bio-ELECTRA, we pre-trained it further with a second biomedical corpus of full papers from PMC open access initiative. The resulting biomedical language representation model, Bio-ELECTRA++, outperformed Bio-ELECTRA in 8 out of 9 datasets. 
After hyperparameter finetuning, the performance lead of BERT Base over Bio-ELECTRA++ on NER tasks is drastically decreased making Bio-ELECTRA++ competitive or superior to BERT in all biomedical text mining tasks tested.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" } ], "back_matter": [ { "text": "This work was supported by the NIDDK Information Network (dkNET; http://dknet.org) via NIHs National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) award U24DK097771.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures", "authors": [ { "first": "J", "middle": [], "last": "Bergstra", "suffix": "" }, { "first": "D", "middle": [], "last": "Yamins", "suffix": "" }, { "first": "D", "middle": [ "D" ], "last": "Cox", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 30th International Conference on International Conference on Machine Learning", "volume": "28", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Bergstra, D. Yamins, and D. D. Cox. 2013. Making a science of model search: Hyperparameter optimiza- tion in hundreds of dimensions for vision architec- tures. In Proceedings of the 30th International Con- ference on International Conference on Machine Learning -Volume 28, ICML13, page I115I123. JMLR.org.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research", "authors": [ { "first": "Alex", "middle": [], "last": "Bravo", "suffix": "" }, { "first": "Janet", "middle": [], "last": "Pi\u00f1ero", "suffix": "" }, { "first": "N\u00faria", "middle": [], "last": "Queralt-Rosinach", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Rautschka", "suffix": "" }, { "first": "Laura", "middle": [ "I" ], "last": "Furlong", "suffix": "" } ], "year": 2015, "venue": "BMC bioinformatics", "volume": "16", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Bravo, Janet Pi\u00f1ero, N\u00faria Queralt-Rosinach, Michael Rautschka, and Laura I Furlong. 2015. Ex- traction of relations between genes and diseases from text and large-scale data analysis: implica- tions for translational research. BMC bioinformat- ics, 16(1):55.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Electra: Pretraining text encoders as discriminators rather than generators", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. 
Electra: Pre- training text encoders as discriminators rather than generators.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A neural network multi-task learning approach to biomedical named entity recognition", "authors": [ { "first": "G", "middle": [], "last": "Crichton", "suffix": "" }, { "first": "S", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "Chiu", "middle": [], "last": "", "suffix": "" } ], "year": 2017, "venue": "BMC Bioinformatics", "volume": "", "issue": "368", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Crichton, S. Pyysalo, and Chiu. 2017. A neural network multi-task learning approach to biomedi- cal named entity recognition. BMC Bioinformatics, 18(368).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Ncbi disease corpus: a resource for disease name recognition and concept normalization", "authors": [ { "first": "Robert", "middle": [], "last": "Rezarta Islamaj Dogan", "suffix": "" }, { "first": "Zhiyong", "middle": [], "last": "Leaman", "suffix": "" }, { "first": "", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2014, "venue": "Journal of biomedical informatics", "volume": "47", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. Ncbi disease corpus: a resource for dis- ease name recognition and concept normalization. Journal of biomedical informatics, 47:1-10.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Linnaeus: a species name identification system for biomedical literature", "authors": [ { "first": "Martin", "middle": [], "last": "Gerner", "suffix": "" }, { "first": "Goran", "middle": [], "last": "Nenadic", "suffix": "" }, { "first": "Casey M", "middle": [], "last": "Bergman", "suffix": "" } ], "year": 2010, "venue": "BMC bioinformatics", "volume": "11", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Gerner, Goran Nenadic, and Casey M Bergman. 2010. Linnaeus: a species name identification sys- tem for biomedical literature. BMC bioinformatics, 11(1):85.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Fang Wang, and Qun Liu. 2019. 
Tinybert: Distilling bert for natural language understanding", "authors": [ { "first": "Xiaoqi", "middle": [], "last": "Jiao", "suffix": "" }, { "first": "Yichun", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Lifeng", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Linlin", "middle": [], "last": "Li", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The chemdner corpus of chemicals and drugs and its annotation principles", "authors": [ { "first": "Martin", "middle": [], "last": "Krallinger", "suffix": "" }, { "first": "Obdulia", "middle": [], "last": "Rabal", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Leitner", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Vazquez", "suffix": "" }, { "first": "David", "middle": [], "last": "Salgado", "suffix": "" }, { "first": "Zhiyong", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Leaman", "suffix": "" }, { "first": "Yanan", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Donghong", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Daniel", "middle": [ "M" ], "last": "Lowe", "suffix": "" }, { "first": "Roger", "middle": [ "A" ], "last": "Sayle", "suffix": "" }, { "first": "Riza", "middle": [ "Theresa" ], "last": "Batista-Navarro", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Rak", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Huber", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "S\u00e9rgio", "middle": [], "last": "Matos", "suffix": "" }, { "first": "David", "middle": [], "last": "Campos", "suffix": "" }, { "first": "Buzhou", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Tsendsuren", "middle": [], "last": "Munkhdalai", "suffix": "" }, { "first": "Keun", "middle": [], "last": "Ho Ryu", "suffix": "" }, { "first": "Senthil", "middle": [], "last": "Sv Ramanan", "suffix": "" }, { "first": "", "middle": [], "last": "Nathan", "suffix": "" }, { "first": "Marko", "middle": [], "last": "Slavko\u017eitnik", "suffix": "" }, { "first": "Lutz", "middle": [], "last": "Bajec", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Weber", "suffix": "" }, { "first": "", "middle": [], "last": "Irmer", "suffix": "" }, { "first": "A", "middle": [], "last": "Saber", "suffix": "" }, { "first": "Jan", "middle": [ "A" ], "last": "Akhondi", "suffix": "" }, { "first": "Shuo", "middle": [], "last": "Kors", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "", "middle": [], "last": "An", "suffix": "" } ], "year": 2015, "venue": "", "volume": "7", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Krallinger, Obdulia Rabal, Florian Leit- ner, Miguel Vazquez, David Salgado, Zhiyong Lu, Robert Leaman, Yanan Lu, Donghong Ji, Daniel M. Lowe, Roger A. 
Sayle, Riza Theresa Batista-Navarro, Rafal Rak, Torsten Huber, Tim Rockt\u00e4schel, S\u00e9rgio Matos, David Campos, Buzhou Tang, Hua Xu, Tsendsuren Munkhdalai, Keun Ho Ryu, SV Ramanan, Senthil Nathan, Slavko\u017ditnik, Marko Bajec, Lutz Weber, Matthias Irmer, Saber A. Akhondi, Jan A. Kors, Shuo Xu, Xin An, Ut- pal Kumar Sikdar, Asif Ekbal, Masaharu Yoshioka, Thaer M. Dieb, Miji Choi, Karin Verspoor, Ma- dian Khabsa, C. Lee Giles, Hongfang Liu, Koman- dur Elayavilli Ravikumar, Andre Lamurias, Fran- cisco M. Couto, Hong-Jie Dai, Richard Tzong- Han Tsai, Caglar Ata, Tolga Can, Anabel Usi\u00e9, Rui Alves, Isabel Segura-Bedmar, Paloma Mart\u00ednez, Julen Oyarzabal, and Alfonso Valencia. 2015. The chemdner corpus of chemicals and drugs and its an- notation principles. Journal of Cheminformatics, 7(1).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Overview of the BioCreative VI chemical-protein interaction track", "authors": [ { "first": "Martin", "middle": [], "last": "Krallinger", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the BioCreative VI Workshop", "volume": "", "issue": "", "pages": "141--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Krallinger et al. 2017. Overview of the BioCre- ative VI chemical-protein interaction track. In Pro- ceedings of the BioCreative VI Workshop, pages 141-146, Bethesda, MD.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "BioBERT: a pretrained biomedical language representation model for biomedical text mining", "authors": [ { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Wonjin", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Sungdong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Donghyeon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sunkyu", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Chan", "middle": [], "last": "Ho So", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2019, "venue": "Bioinformatics", "volume": "36", "issue": "4", "pages": "1234--1240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre- trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Chemical-gene relation extraction using recursive neural network. Database", "authors": [ { "first": "Sangrak", "middle": [], "last": "Lim", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sangrak Lim and Jaewoo Kang. 2018. Chemical-gene relation extraction using recursive neural network. Database, 2018. Bay060.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Bio-AnswerFinder: a system to find answers to questions from biomedical texts", "authors": [ { "first": "Ibrahim", "middle": [], "last": "Burak Ozyurt", "suffix": "" }, { "first": "Anita", "middle": [], "last": "Bandrowski", "suffix": "" }, { "first": "Jeffrey", "middle": [ "S" ], "last": "Grethe", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ibrahim Burak Ozyurt, Anita Bandrowski, and Jef- frey S Grethe. 2020. 
Bio-AnswerFinder: a system to find answers to questions from biomedical texts. Database, 2020. Baz137.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3982--3992", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "authors": [ { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Overview of biocreative ii gene mention recognition", "authors": [ { "first": "Larry", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Lorraine", "middle": [ "K" ], "last": "Tanabe", "suffix": "" }, { "first": "Rie", "middle": [], "last": "Johnson Nee Ando", "suffix": "" }, { "first": "Cheng-Ju", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "I-Fang", "middle": [], "last": "Chung", "suffix": "" }, { "first": "Chun-Nan", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "Yu-Shi", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "" }, { "first": "Christoph", "middle": [ "M" ], "last": "Friedrich", "suffix": "" }, { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Manabu", "middle": [], "last": "Torii", "suffix": "" }, { "first": "Hongfang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Craig", "middle": [ "A" ], "last": "Struble", "suffix": "" }, { "first": "Richard", "middle": [ "J" ], "last": "Povinelli", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" }, { "first": "William", "middle": [ "A" ], "last": "Baumgartner", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Hunter", "suffix": "" }, { "first": ";", "middle": [], "last": "", "suffix": "" }, { "first": "W. John", "middle": [], "last": "Wilbur", "suffix": "" } ], "year": 2008, "venue": "Genome Biology", "volume": "", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Larry Smith, Lorraine K. Tanabe, Rie Johnson nee Ando, Cheng-Ju Kuo, I-Fang Chung, Chun-Nan Hsu, Yu-Shi Lin, Roman Klinger, Christoph M. Friedrich, Kuzman Ganchev, Manabu Torii, Hong- fang Liu, Barry Haddow, Craig A. Struble, Richard J. Povinelli, Andreas Vlachos, William A. Baumgart- ner, Lawrence Hunter, Bob Carpenter, Richard Tzong-Han Tsai, Hong-Jie Dai, Feng Liu, Yifei Chen, Chengjie Sun, Sophia Katrenko, Pieter Adri- aans, Christian Blaschke, Rafael Torres, Mariana Neves, Preslav Nakov, Anna Divoli, Manuel Ma\u00f1a- L\u00f3pez, Jacinto Mata, and W. John Wilbur. 2008. Overview of biocreative ii gene mention recognition. Genome Biology, 9(2).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Mobilebert: a compact task-agnostic BERT for resource-limited devices. CoRR, abs", "authors": [ { "first": "Zhiqing", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Hongkun", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Renjie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Denny", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. Mobilebert: a compact task-agnostic BERT for resource-limited devices. CoRR, abs/2004.02984.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Ion Androutsopoulos, and Georgios Paliouras. 2015. 
An overview of the bioasq large-scale biomedical semantic indexing and question answering competition", "authors": [ { "first": "George", "middle": [], "last": "Tsatsaronis", "suffix": "" }, { "first": "Georgios", "middle": [], "last": "Balikas", "suffix": "" }, { "first": "Prodromos", "middle": [], "last": "Malakasiotis", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Partalas", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Zschunke", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Michael R Alvers", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Weissenborn", "suffix": "" }, { "first": "", "middle": [], "last": "Krithara", "suffix": "" } ], "year": null, "venue": "Sergios Petridis, Dimitris Polychronopoulos, Yannis Almirantis, John Pavlopoulos, Nicolas Baskiotis, Patrick Gallinari, Thierry Artieres, Axel Ngonga", "volume": "16", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopou- los, Yannis Almirantis, John Pavlopoulos, Nico- las Baskiotis, Patrick Gallinari, Thierry Artieres, Axel Ngonga, Norman Heino, Eric Gaussier, Lil- iana Barrio-Alvers, Michael Schroeder, Ion An- droutsopoulos, and Georgios Paliouras. 2015. An overview of the bioasq large-scale biomedical se- mantic indexing and question answering competi- tion. BMC Bioinformatics, 16:138.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Neural domain adaptation for biomedical question answering", "authors": [ { "first": "Georg", "middle": [], "last": "Wiese", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Weissenborn", "suffix": "" }, { "first": "Mariana", "middle": [], "last": "Neves", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "281--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "Georg Wiese, Dirk Weissenborn, and Mariana Neves. 2017. Neural domain adaptation for biomedical question answering. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 281-289, Vancou- ver, Canada. Association for Computational Linguis- tics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "R", "middle": [], "last": "Russ", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "5753--5763", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural In- formation Processing Systems 32, pages 5753-5763. 
Curran Associates, Inc.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "org/10.5281/zenodo.3971235)." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Change in the exact match performance for BioASQ question answering as a function of increased pre-training of Bio-ELECTRA 3.2 Effect of amount of pre-training on the Bio-ELECTRA performance" }, "TABREF0": { "type_str": "table", "html": null, "text": "Bimedical text mining data sets", "num": null, "content": "
Biomedical Question Answering Dataset
Dataset | # training examples | # testing examples
BioASQ 8b-factoid | 9557 | 1809
Biomedical Yes/No Question Answer Classification Dataset
Dataset | # training examples | # testing examples
BioASQ 8b-yes/no | 728 | 128
Named Entity Recognition Datasets
Dataset | Entity Type | # training/dev/testing entities
BC4CHEMD (Krallinger et al., 2015) | Drug/Chemical | 29478/29486/25346
BC2GM (Smith et al., 2008) | Gene/Protein | 15197/3061/6325
NCBI Disease (Dogan et al., 2014) | Disease | 5134/787/960
LINNAEUS (Gerner et al., 2010) | Species | -
Relation Extraction Datasets
GAD (Bravo et al., 2015) | Gene-disease | 4796/-/534
CHEMPROT (Krallinger et al., 2017) | Protein-chemical | 16521/10361/14396
" }, "TABREF2": { "type_str": "table", "html": null, "text": "Biomedical Question Answering Test Results", "num": null, "content": "
Model | Exact Match | F1
Bio-ELECTRA (1.8M) | 57.51 (0.88) | 66.87 (0.63)
Bio-ELECTRA++ | 57.93 (0.66) | 67.48 (0.44)
ELECTRA Small++ | 57.78 (0.64) | 67.10 (0.55)
BERT | 59.98 (0.66) | 70.25 (0.48)
" }, "TABREF3": { "type_str": "table", "html": null, "text": "Biomedical Yes/No Question Answer Classification Test Results Bio-ELECTRA (1.8M) 87.99 (2.95) 97.94 (1.35) 92.66 (1.56) 77.14 (26.47) 46.92 (16.39) 58.18 (19.91) Bio-ELECTRA++ 91.24 (1.57) 95.29 (2.31) 93.19 (0.75) .57) 95.49 (2.64) 90.99 (1.00) 65.15 (22.99) 43.46 (15.20) 51.71 (17.49)", "num": null, "content": "
Model | P (Yes) | R (Yes) | F1 (Yes) | P (No) | R (No) | F1 (No)
Bio-ELECTRA (1.8M) | 87.99 (2.95) | 97.94 (1.35) | 92.66 (1.56) | 77.14 (26.47) | 46.92 (16.39) | 58.18 (19.91)
Bio-ELECTRA++ | 91.24 (1.57) | 95.29 (2.31) | 93.19 (0.75) | 78.91 (7.41) | 63.85 (7.92) | 69.84 (3.87)
ELECTRA Small++ | 88.18 (0.71) | 94.31 (1.74) | 91.14 (1.00) | 69.92 (7.34) | 50.38 (3.19) | 58.40 (3.61)
BERT Base | 87.02 (2.57) | 95.49 (2.64) | 90.99 (1.00) | 65.15 (22.99) | 43.46 (15.20) | 51.71 (17.49)
" }, "TABREF4": { "type_str": "table", "html": null, "text": "Biomedical Named Entity Recognition Test Results", "num": null, "content": "
Type | Dataset | Metrics | ELECTRA Small++ | Bio-ELECTRA | Bio-ELECTRA++ | BERT
Disease | NCBI disease | P | 76.96 (0.80) | 73.47 (0.92) | 75.44 (1.06) | 85.43 (0.62)
R | 85.79 (0.64) | 83.88 (0.64) | 85.19 (0.77) | 87.08 (0.76)
F1 | 81.13 (0.69) | 78.32 (0.52) | 80.01 (0.68) | 86.24 (0.55)
Drug/chem. | BC4CHEMD | P | 81.62 (0.53) | 82.76 (0.42) | 83.65 (0.18) | 91.36 (0.13)
R | 80.85 (0.47) | 83.51 (0.46) | 83.95 (0.27) | 89.46 (0.22)
F1 | 81.23 (0.15) | 83.13 (0.18) | 83.80 (0.19) | 90.40 (0.11)
Gene/protein | BC2GM | P | 67.92 (0.40) | 67.54 (0.48) | 69.34 (0.43) | 83.95 (0.27)
R | 75.13 (0.29) | 75.03 (0.16) | 76.09 (0.28) | 84.30 (0.31)
F1 | 71.34 (0.27) | 71.08 (0.23) | 72.55 (0.30) | 84.13 (0.23)
Species | LINNAEUS | P | 86.82 (1.16) | 85.90 (1.53) | 86.01 (1.55) | 96.01 (0.31)
R | 83.25 (1.42) | 82.38 (0.72) | 84.07 (0.92) | 93.90 (0.17)
F1 | 84.99 (1.02) | 84.10 (0.79) | 85.02 (0.59) | 94.94 (0.17)
" }, "TABREF5": { "type_str": "table", "html": null, "text": "", "num": null, "content": "
Biomedical Question Answer Candidate Reranking Test Results
Model | MRR
Electra Small++ | 0.281 (0.014)
Electra Small++ (weighted) | 0.281 (0.008)
Bio-ELECTRA | 0.325 (0.011)
Bio-ELECTRA (weighted) | 0.332 (0.013)
Bio-ELECTRA++ | 0.335 (0.017)
Bio-ELECTRA++ (weighted) | 0.332 (0.013)
BERT Base | 0.246 (0.007)
SBERT bert-base-nli-mean-tokens | 0.181
SBERT domain-adaptation | 0.163
" }, "TABREF6": { "type_str": "table", "html": null, "text": "Biomedical Relation Extraction Test Results", "num": null, "content": "
Relation | Dataset | Metrics | ELECTRA Small++ | Bio-ELECTRA | Bio-ELECTRA++ | BERT
Gene-disease | GAD | P | 71.06 (1.27) | 72.99 (1.09) | 72.48 (0.55) | 72.72 (1.08)
R | 91.35 (1.76) | 92.70 (1.58) | 91.71 (1.17) | 88.72 (2.12)
F1 | 79.92 (0.97) | 81.66 (0.73) | 80.96 (0.35) | 79.91 (1.07)
Protein-chemical | CHEMPROT | P | 59.64 (2.41) | 61.38 (1.90) | 64.66 (1.63) | 69.75 (1.18)
R | 59.34 (2.09) | 60.40 (3.12) | 63.85 (2.35) | 69.87 (1.79)
F1 | 59.41 (0.88) | 60.86 (2.22) | 64.22 (1.40) | 69.80 (1.36)
" }, "TABREF7": { "type_str": "table", "html": null, "text": "Effect of Hyperparameter Optimization on the Bio-ELECTRA++ Test Performance", "num": null, "content": "
Dataset | Metrics | Bio-ELECTRA++ | Bio-ELECTRA++ opt | BERT
BC4CHEMD | P | 83.65 (0.18) | 88.45 (0.17) | 91.36 (0.13)
R | 83.95 (0.27) | 87.44 (0.20) | 89.96 (0.22)
F1 | 83.80 (0.19) | 87.94 (0.09) | 90.40 (0.11)
BC2GM | P | 69.34 (0.43) | 77.73 (0.38) | 83.95 (0.27)
R | 76.09 (0.28) | 80.87 (0.34) | 84.30 (0.31)
F1 | 72.55 (0.30) | 79.27 (0.31) | 84.13 (0.23)
NCBI disease | P | 75.44 (1.06) | 83.40 (0.79) | 85.43 (0.62)
R | 85.19 (0.77) | 86.36 (0.65) | 87.08 (0.76)
F1 | 80.01 (0.68) | 84.85 (0.65) | 86.24 (0.55)
LINNAEUS | P | 86.01 (1.55) | 93.77 (1.25) | 96.01 (0.31)
R | 84.07 (0.92) | 96.28 (0.65) | 93.90 (0.17)
F1 | 85.02 (0.59) | 95.01 (0.84) | 94.94 (0.17)
CHEMPROT | P | 64.66 (1.63) | 73.23 (0.86) | 69.75 (1.18)
R | 63.85 (2.35) | 71.46 (0.79) | 69.87 (1.79)
F1 | 64.22 (1.40) | 72.33 (0.71) | 69.80 (1.36)
" } } } }