{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:23:31.289480Z" }, "title": "Multifaceted Domain-Specific Document Embeddings", "authors": [ { "first": "Julian", "middle": [], "last": "Risch", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Potsdam", "location": { "settlement": "Potsdam", "country": "Germany" } }, "email": "" }, { "first": "Philipp", "middle": [], "last": "Hager", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Potsdam", "location": { "settlement": "Potsdam", "country": "Germany" } }, "email": "" }, { "first": "Ralf", "middle": [], "last": "Krestel", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Potsdam", "location": { "settlement": "Potsdam", "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Current document embeddings require large training corpora but fail to learn high-quality representations when confronted with a small number of domain-specific documents and rare terms. Further, they transform each document into a single embedding vector, making it hard to capture different notions of document similarity or explain why two documents are considered similar. In this work, we propose our Faceted Domain Encoder, a novel approach to learn multifaceted embeddings for domain-specific documents. It is based on a Siamese neural network architecture and leverages knowledge graphs to further enhance the embeddings even if only a few training samples are available. The model identifies different types of domain knowledge and encodes them into separate dimensions of the embedding, thereby enabling multiple ways of finding and comparing related documents in the vector space. We evaluate our approach on two benchmark datasets and find that it achieves the same embedding quality as state-of-the-art models while requiring only a tiny fraction of their training data.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Current document embeddings require large training corpora but fail to learn high-quality representations when confronted with a small number of domain-specific documents and rare terms. Further, they transform each document into a single embedding vector, making it hard to capture different notions of document similarity or explain why two documents are considered similar. In this work, we propose our Faceted Domain Encoder, a novel approach to learn multifaceted embeddings for domain-specific documents. It is based on a Siamese neural network architecture and leverages knowledge graphs to further enhance the embeddings even if only a few training samples are available. The model identifies different types of domain knowledge and encodes them into separate dimensions of the embedding, thereby enabling multiple ways of finding and comparing related documents in the vector space. We evaluate our approach on two benchmark datasets and find that it achieves the same embedding quality as state-of-the-art models while requiring only a tiny fraction of their training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many documents have an inherently multifaceted nature, a characteristic that domain experts could exploit when searching through large document collections. 
For example, doctors could search through medical archives for documents containing similar disease descriptions or related uses of a specific drug. However, one of the major challenges of information retrieval in such document collections is domain-specific language use:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. Training datasets to learn document representations are limited in size, 2. documents might express the same information by using completely different terms (vocabulary mismatch) or different levels of granularity (granularity mismatch),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. and the lack of context knowledge prevents drawing even simple logical conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Domain-specific embeddings are available for a variety of domains, including scientific literature (Beltagy et al., 2019) , patents (Risch and Krestel, 2019) , and the biomedical domain (Kalyan and Sangeetha, 2020) . However, these approaches require large amounts of training data and computing resources. In this paper, we introduce and demonstrate our Faceted Domain Encoder, a document embedding approach that produces comparable results on considerably smaller document collections and requires fewer computing resources. Further, it provides a multifaceted view of texts while also addressing the challenges of domain-specific language use. To this end, we introduce external domain knowledge to the embedding process, tackling the problem of vocabulary and granularity mismatches. A screenshot of the demo is shown in Figure 1. The interactive demo, our source code, and the evaluation datasets are available online: https://hpi.de/naumann/s/multifaceted-embeddings and a screencast is available on YouTube: https://youtu.be/HHcsX2clEwg.", "cite_spans": [ { "start": 99, "end": 121, "text": "(Beltagy et al., 2019)", "ref_id": "BIBREF0" }, { "start": 132, "end": 157, "text": "(Risch and Krestel, 2019)", "ref_id": "BIBREF12" }, { "start": 186, "end": 214, "text": "(Kalyan and Sangeetha, 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 826, "end": 832, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A popular approach for introducing external domain knowledge to the embedding process uses retrofitting of word vectors based on a graph of semantic relationships as a post-processing step (Faruqui et al., 2015) . Similarly, Zhang et al. (2019) train fastText embeddings on biomedical journal articles and additionally on sequences of medical terms sampled from a knowledge graph. Dis2Vec uses a lexicon of medical terms to bring Word2Vec vectors of domain terms closer together and to push out-of-domain vectors further away (Ghosh et al., 2016) . Unlike Dis2Vec, which concerns only whether a word is in the domain vocabulary or not, our approach handles diverse types Figure 1 : The demo shows nearest neighbor documents and highlights entities within the same categories (\"facets\").", "cite_spans": [ { "start": 189, "end": 211, "text": "(Faruqui et al., 2015)", "ref_id": "BIBREF5" }, { "start": 225, "end": 244, "text": "Zhang et al. 
(2019)", "ref_id": "BIBREF17" }, { "start": 526, "end": 546, "text": "(Ghosh et al., 2016)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 671, "end": 679, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Stop word removal and lemmatization can be turned off for increased readability. The user interface allows to adjust the weights of the facets of the document embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "of relationships between domain terms. Nguyen et al. (2017) propose an extension of Doc2Vec, adding vectors for domain concepts as input for learning medical document embeddings. Roy et al. (2017) annotate words in the input text with a list of matching entities and relationships from a knowledge graph and extend Word2Vec to jointly learn embeddings for words and annotations. Their abstraction of the graph structure as text annotations enables the inclusion of different node types and edge connections into word embeddings. Another work (Liu et al., 2020) proposed K-BERT, which extends BERT (Devlin et al., 2019) by expanding input sentences with entities from a knowledge graph.", "cite_spans": [ { "start": 39, "end": 59, "text": "Nguyen et al. (2017)", "ref_id": "BIBREF11" }, { "start": 179, "end": 196, "text": "Roy et al. (2017)", "ref_id": "BIBREF13" }, { "start": 542, "end": 560, "text": "(Liu et al., 2020)", "ref_id": "BIBREF15" }, { "start": 597, "end": 618, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Multifaceted embeddings capture more than one view of a document. propose a multifaceted network embedding and apply community detection algorithms to learn separate embeddings for each community. Liu et al. (2019) suggest an extension to the deepwalk graph embedding, which learns separate node embeddings for different facets of nodes in a knowledge graph. Similar to our approach, they propose to concatenate the obtained facet embeddings into a single representation. We learn separate embeddings for types of domain knowledge and concatenate them into an overall document representation.", "cite_spans": [ { "start": 197, "end": 214, "text": "Liu et al. (2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our Faceted Domain Encoder is a supervised learning approach using a Siamese neural network to encode documents and a knowledge graph as a source for additional domain information. The architecture is visualized in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 215, "end": 223, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Faceted Domain Encoder", "sec_num": "3" }, { "text": "The network encodes two documents at-a-time with a bidirectional GRU layer and predicts a similarity score for each pair. By computing the pair's target similarity score based on our knowledge graph, we train the network to adjust its document representations to the relationships between domain terms in the graph. We introduce multiple facets in this process by grouping nodes in the graph into multiple categories. Our model represents different aspects of domain knowledge in different category embeddings by learning not a single embedding vector but an embedding per graph category. 
We train one embedding for each graph category per document and concatenate them into a single embedding vector to represent the entire document. This representation enables the fast discovery of related documents by performing a conventional nearest neighbor search either based on the whole document or specific category embeddings. To control which category contributes the most to the doc- Figure 2: Our model is based on a Siamese network architecture, which encodes two documents in parallel and compares them in the last (top) layer. It is trained to minimize the difference between the documents' cosine distance in the embedding space and their graph-based ground-truth distance. Colors symbolize different facets of the embeddings, which are learned based on node categories in the knowledge graph. ument vector's overall direction, we apply corpus normalization inspired by Liu et al. (2019) .", "cite_spans": [ { "start": 1473, "end": 1490, "text": "Liu et al. (2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "To cope with limited amounts of training data, our approach leverages external domain knowledge during the training process. We represent this external domain knowledge in the form of a knowledge graph. Each node in the graph represents an entity, e.g., the name of a disease. Each entity belongs to a category, modeled as a node attribute. For example, entities in a medical graph are grouped into diseases, chemicals, or body parts. Categories define the different types of domain knowledge that the model learns to embed into different subparts of the document embedding. Edges between nodes represent relationships, e.g., chemicals in the same group in the periodic table. The entity linking requires a dictionary mapping from words to entities and handles synonyms mapping to the same entity. For the demo, we created a knowledge graph from the taxonomy underlying Medical Subject Headings (MeSH). Figure 3 shows a small excerpt of the graph.", "cite_spans": [], "ref_spans": [ { "start": 903, "end": 911, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "After parsing and deduplicating the official dataset, MeSH comprises 29,641 concepts (entities) and 271,846 synonyms, which are organized in a hierarchy ranging from broad concepts to specific sub-concepts. Following previous work (Guo et al., 2020), we transform the hierarchy into a net- Figure 3: This excerpt of our graph representation of the Medical Subject Headings (MeSH) hierarchy visualizes entities as nodes with their color corresponding to categories (\"facets\"). The edges and the node numbers reveal the hierarchical relationships, e.g., the broader concept of \"Behavior\" and the specific mental illness \"Depression\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "work graph preserving the relationships between concepts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "Our approach learns separate embeddings for different categories of domain terms. However, not all categories might be useful when it comes to representing the overall document. We illustrate this problem with a fictional example from the medical domain. 
Our approach might learn that an article covers a rare form of cancer (disease category) in the lung and stomach (anatomy category), and the study originates in the United States (location category). Concatenating these three embeddings gives equal weight to each category. The closest document in embedding space needs to be similar in all three categories. This might lead to counterintuitive results, with the most closely related article covering a stomach disease in a small town in Ohio, instead of a document just covering lung cancer. When reading the text again, we might weigh the given information differently based on its specificity and expect the form of cancer to be more important than the geographic location of the study. Note that this problem is magnified when combining up to sixteen categories in the case of our medical dataset. We illustrate the problem with an actual example from our demo in Figure 4 .", "cite_spans": [], "ref_spans": [ { "start": 1174, "end": 1182, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Equally Weighted Categories", "sec_num": "3.2" }, { "text": "A second problem can arise when a single, seemingly unimportant category dominates the document embedding. Some documents mention a single term very often, e.g., the word \"patient\". A high frequency of less-informative words can lead to individual categories collecting vastly more word embeddings than others and taking over the entire document embedding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Equally Weighted Categories", "sec_num": "3.2" }, { "text": "The root cause of both issues is an unintended difference in magnitude between the category embeddings. When concatenating multiple embeddings into a new vector, the category embeddings with the highest magnitude will decide the overall direction of the embedding vector. We address this issue with a simple normalization and weighting process to control which category embeddings contribute the most to the overall direction of the document vector. This approach is similar to what Liu et al. (2019) proposed in their work on multifaceted graph embeddings but differs in that we also apply normalization and propose new weighting strategies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Equally Weighted Categories", "sec_num": "3.2" }, { "text": "We propose two strategies to compute category weights: corpus-idf and document-tfidf. The first strategy, corpus-idf, sums the inverse-document-frequency of all terms in the category across the entire vocabulary. We normalize the resulting values for all categories to sum to one. This strategy applies the same category weights to all documents in the entire corpus. The motivation is to identify categories that contain the most important words in a collection of documents. This strategy is closely related to the number of unique tokens mentioned in each category.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Category Normalization Strategies", "sec_num": "3.3" }, { "text": "The second strategy, document-tfidf, computes category weights for individual documents by summing the inverse-document-frequency value of all category terms in the document. Since terms can occur multiple times, the result is similar to the tf-idf value when computed for each category. Additionally, we sum the idf of all words without a category and split the weight equally among all categories. Thereby, we avoid zero weights for categories in the overall embedding. 
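The two weighting strategies can be summarized in a short sketch. Function names and the precomputed idf dictionary are assumptions for illustration; the resulting weights are meant to rescale the normalized category embeddings before concatenation, and the final normalization in the second function is likewise an assumption rather than a detail stated in the paper.

```python
# Sketch of the corpus-idf and document-tfidf category-weighting strategies (assumed names).
from collections import defaultdict

def corpus_idf_weights(vocabulary, idf, categories):
    # Sum the idf of every vocabulary term per knowledge-graph category,
    # then normalize the category totals to sum to one (corpus-level weights).
    totals = defaultdict(float)
    for term in vocabulary:
        cat = categories.get(term)            # category of the linked entity, None if unlinked
        if cat is not None:
            totals[cat] += idf[term]
    norm = sum(totals.values()) or 1.0
    return {cat: value / norm for cat, value in totals.items()}

def document_tfidf_weights(doc_tokens, idf, categories, all_categories):
    # Sum idf over the (possibly repeated) terms of a single document per category;
    # the idf mass of uncategorized words is split evenly to avoid zero weights.
    totals = {cat: 0.0 for cat in all_categories}
    leftover = 0.0
    for term in doc_tokens:
        cat = categories.get(term)
        if cat is None:
            leftover += idf.get(term, 0.0)
        else:
            totals[cat] += idf.get(term, 0.0)
    for cat in totals:
        totals[cat] += leftover / len(all_categories)
    norm = sum(totals.values()) or 1.0      # normalization to one is an assumption here
    return {cat: value / norm for cat, value in totals.items()}
```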
The idea behind this weighting scheme is to have a document-level proxy metric to indicate which categories are important for the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Category Normalization Strategies", "sec_num": "3.3" }, { "text": "For our experiments, we use two Semantic Textual Similarity (STS) benchmarks from the biomedical domain, BIOSSES (Soganc\u0131oglu et al., 2017) and Med-STS (Wang et al., 2020). The benchmarks comprise sentence pairs with relatedness scores assigned by domain experts. They measure embedding quality by comparing the annotator score with the embedding similarity of both sentences based on Pearson correlation.", "cite_spans": [ { "start": 113, "end": 139, "text": "(Soganc\u0131oglu et al., 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "BIOSSES contains 100 sentence pairs collected from medical articles and judged by five domain experts on a scale of 0 to 4. We perform stratified 10-fold cross-validation as proposed by the benchmark authors. We divide the dataset into ten equally-sized subsets using the annotator scores for stratification. Stratification ensures that each split has a similar distribution of related and unrelated sentence pairs. We train ten separate models on the subsets, always using one subset for testing and the remaining nine for training. Note that we still use 30 percent of the training dataset for validation and early stopping: we stop the training process after the first epoch in which the loss on the validation set stops decreasing. Med-STS contains 1,068 sentence pairs from medical records collected internally at the U.S. Mayo Clinic. Two domain experts judged each sentence pair on a scale from 0 to 5. The dataset authors proposed a train-test split of 750 to 350 sentence pairs. Additionally, we use 30 percent, or 225 pairs, of our training set for validation and early stopping.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "The experimental results listed in Table 1 show that our Faceted Domain Encoder outperforms the domain-agnostic embeddings from fastText (Bojanowski et al., 2017) and Universal Sentence Encoder (Cer et al., 2018) on both benchmarks. The corpus-idf normalization is better than the document-tfidf normalization strategy on the BIOSSES dataset but not on the Med-STS dataset. In comparison with the domain-specific embeddings from BioWordVec (Zhang et al., 2019) and BioSentVec (Chen et al., 2019), our approach achieves almost the same performance on Med-STS, which is remarkable given that our Faceted Domain Encoder requires no pre-training on large corpora in contrast to the other presented models. For BIOSSES, only BioSentVec outperforms our approach by a large margin.", "cite_spans": [ { "start": 135, "end": 160, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF1" }, { "start": 192, "end": 210, "text": "(Cer et al., 2018)", "ref_id": "BIBREF2" }, { "start": 438, "end": 458, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 33, "end": 40, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "The user interface comprises three main parts: top center, bottom center, and sidebar. In the top center, the user can select a source document and one or all of the categories (\"facets\"). 
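A hypothetical sketch of how such facet weights, whether obtained from corpus-idf normalization or adjusted by the user in the demo's sidebar, can steer nearest-neighbor retrieval over the concatenated document embeddings; the array names and the assumption of equally sized facet blocks are illustrative, not taken from the authors' code.

```python
# Sketch of facet-weighted nearest-neighbor search over concatenated embeddings (assumed layout).
import numpy as np

def weighted_neighbors(query_vec, doc_matrix, facet_dim, facet_weights, k=5):
    # Rescale each facet block of the embeddings by its weight, then rank the
    # documents by cosine similarity to the equally rescaled query document.
    weights = np.repeat(np.asarray(facet_weights), facet_dim)   # one weight per dimension
    q = query_vec * weights
    docs = doc_matrix * weights
    sims = docs @ q / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q) + 1e-9)
    return np.argsort(-sims)[:k]                                # indices of the k nearest documents
```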
Further, either a preprocessed (stop word removal, lemmatization) or a raw document version can be selected for the viewed documents and word highlighting can be ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interactive User Interface", "sec_num": "5" }, { "text": "Pre-Trained BIOSSES Med-STS Avg. fastText English (Bojanowski et al., 2017) 0.51 0.68 Universal Sentence Encoder (Cer et al., 2018) 0.35 * 0.71 * Avg. BioWordVec (Zhang et al., 2019) 0.69 * 0.75 * BioSentVec 0 ", "cite_spans": [ { "start": 50, "end": 75, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF1" }, { "start": 113, "end": 131, "text": "(Cer et al., 2018)", "ref_id": "BIBREF2" }, { "start": 162, "end": 182, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Embedding", "sec_num": null }, { "text": "Current document embeddings require large amounts of training data and provide only a single view of document similarity, which prevents searches with different notions of similarity. In this paper, we introduced and demonstrated an approach for multifaceted domain-specific document embeddings. It is tailored to small document collections of only a few hundred training samples and leverages knowledge graphs to enhance the learned embeddings. Experiments on two benchmark datasets show that our model outperforms state-of-the-art domain-agnostic embeddings and is on par with specialized biomedical document embeddings trained on extensive document collections while only using a tiny fraction of their training data. Our demo provides a faceted view into documents by learning to identify different types of domain knowledge and encoding them into specific dimensions of the embeddings. Thereby, it enables novel ways to compare documents and provides a comparatively high level of interpretability of neural-network-based document similarity measures. A promising path for future work is to remove our neural networks' reliance on ground truth data by designing a semi-supervised approach in which the model learns to update its training goal while discovering new domain terms by itself.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [ { "text": "Anchor Document Title: Years of potential life lost: another indicator of the impact of cutaneous malignant melanoma on society.year potential life lose ypll indicator premature mortality complement traditional incidence mortality rate facilitate comparison different cancer calculate ypll cutaneous melanoma 11 cancer routinely record track surveillance epidemiology end results seer ypll cutaneous melanoma rank eighth person young 65 year [\u2026] Nearest neighbor without category normalization Title: Obesity and colorectal adenomatous polyps. Figure 4 : Different weighting of the categories (\"facets\") changes the distances of the documents in the embedding space and the nearest neighbors of the anchor document. Corpus-idf normalization allows to take into account the frequency of the entities within the corpus. The impact of the most frequent words on the embeddings can thus be reduced. 
Stop word removal and lemmatization can be turned off for increased readability.", "cite_spans": [ { "start": 442, "end": 445, "text": "[\u2026]", "ref_id": null } ], "ref_spans": [ { "start": 544, "end": 552, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "SciB-ERT: A Pretrained Language Model for Scientific Text", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3606--3611", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A Pretrained Language Model for Scientific Text. In Proceedings of the Conference on Empir- ical Methods in Natural Language Processing and the International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3606- 3611.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Enriching Word Vectors with Subword Information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics (TACL)", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Transactions of the Associa- tion for Computational Linguistics (TACL), 5:135- 146.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Sheng Yi Kong", "suffix": "" }, { "first": "Nicole", "middle": [], "last": "Hua", "suffix": "" }, { "first": "Rhomni", "middle": [], "last": "Limtiaco", "suffix": "" }, { "first": "", "middle": [], "last": "St", "suffix": "" }, { "first": "Noah", "middle": [], "last": "John", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Guajardo-C\u00e9spedes", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Yun", "middle": [ "Hsuan" ], "last": "Tar", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Sung", "suffix": "" }, { "first": "Ray", "middle": [], "last": "Strope", "suffix": "" }, { "first": "", "middle": [], "last": "Kurzweil", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "169--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Cer, Yinfei Yang, Sheng yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-C\u00e9spedes, Steve Yuan, Chris Tar, Yun Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal Sentence Encoder for English. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 169-174.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "BioSentVec: Creating Sentence Embeddings for Biomedical Texts", "authors": [ { "first": "Qingyu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yifan", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Zhiyong", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the International Conference on Healthcare Informatics (ICHI)", "volume": "", "issue": "", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qingyu Chen, Yifan Peng, and Zhiyong Lu. 2019. BioSentVec: Creating Sentence Embeddings for Biomedical Texts. In Proceedings of the Interna- tional Conference on Healthcare Informatics (ICHI), pages 1-5.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)", "volume": "", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the Conference of the North American Chapter of the Association for Com- putational Linguistics (NAACL), pages 4171-4186.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Retrofitting word vectors to semantic lexicons", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "K", "middle": [], "last": "Sujay", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Jauhar", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Hovy", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)", "volume": "", "issue": "", "pages": "1606--1615", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. 
In Proceedings of the Conference of the North Ameri- can Chapter of the Association for Computational Linguistics (NAACL), pages 1606-1615.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Characterizing diseases from unstructured text: A vocabulary driven Word2vec approach", "authors": [ { "first": "Saurav", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Prithwish", "middle": [], "last": "Chakraborty", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "John", "middle": [ "S" ], "last": "Brownstein", "suffix": "" }, { "first": "Naren", "middle": [], "last": "Ramakrishnan", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Conference on Information and Knowledge Management (CIKM)", "volume": "", "issue": "", "pages": "1129--1138", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saurav Ghosh, Prithwish Chakraborty, Emily Cohn, John S. Brownstein, and Naren Ramakrishnan. 2016. Characterizing diseases from unstructured text: A vocabulary driven Word2vec approach. In Proceed- ings of the International Conference on Information and Knowledge Management (CIKM), pages 1129- 1138.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "MeSHHeading2vec: a new method for representing MeSH headings as vectors based on graph embedding algorithm", "authors": [ { "first": "Zhu-Hong", "middle": [], "last": "Zhen-Hao Guo", "suffix": "" }, { "first": "De-Shuang", "middle": [], "last": "You", "suffix": "" }, { "first": "Hai-Cheng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yi", "suffix": "" }, { "first": "Zhan-Heng", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Yan-Bin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Briefings in Bioinformatics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhen-Hao Guo, Zhu-Hong You, De-Shuang Huang, Hai-Cheng Yi, Kai Zheng, Zhan-Heng Chen, and Yan-Bin Wang. 2020. MeSHHeading2vec: a new method for representing MeSH headings as vectors based on graph embedding algorithm. Briefings in Bioinformatics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "SECNLP: A survey of embeddings in clinical natural language processing", "authors": [ { "first": "Katikapalli", "middle": [], "last": "Subramanyam Kalyan", "suffix": "" }, { "first": "S", "middle": [], "last": "Sangeetha", "suffix": "" } ], "year": 2020, "venue": "Bioinformatics", "volume": "101", "issue": "", "pages": "1--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katikapalli Subramanyam Kalyan and S. Sangeetha. 2020. SECNLP: A survey of embeddings in clinical natural language processing. Bioinformatics, 101:1- 21.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Is a single vector enough? 
Exploring node polysemy for network embedding", "authors": [ { "first": "Ninghao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Qiaoyu", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Yuening", "middle": [], "last": "Li", "suffix": "" }, { "first": "Hongxia", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jingren", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xia", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the International Conference on Knowledge Discovery and Data Mining (SIGKDD)", "volume": "", "issue": "", "pages": "932--940", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ninghao Liu, Qiaoyu Tan, Yuening Li, Hongxia Yang, Jingren Zhou, and Xia Hu. 2019. Is a single vector enough? Exploring node polysemy for network em- bedding. In Proceedings of the International Con- ference on Knowledge Discovery and Data Mining (SIGKDD), pages 932-940.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Haotang Deng, and Ping Wang. 2020. K-bert: Enabling language representation with knowledge graph", "authors": [ { "first": "Weijie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Zhiruo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Ju", "suffix": "" } ], "year": null, "venue": "Proceedings of the Conference on Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "2901--2908", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2020. K-bert: Enabling language representation with knowledge graph. In Proceedings of the Conference on Artifi- cial Intelligence (AAAI), pages 2901-2908.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning concept-driven document embeddings for medical information search", "authors": [ { "first": "Gia", "middle": [ "Hung" ], "last": "Nguyen", "suffix": "" }, { "first": "Lynda", "middle": [], "last": "Tamine", "suffix": "" }, { "first": "Laure", "middle": [], "last": "Soulier", "suffix": "" }, { "first": "Nathalie", "middle": [], "last": "Souf", "suffix": "" } ], "year": 2017, "venue": "Lecture Notes in Computer Science", "volume": "", "issue": "17", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gia Hung Nguyen, Lynda Tamine, Laure Soulier, and Nathalie Souf. 2017. Learning concept-driven doc- ument embeddings for medical information search. Lecture Notes in Computer Science, 10259(17).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Domain-specific word embeddings for patent classification", "authors": [ { "first": "Julian", "middle": [], "last": "Risch", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Krestel", "suffix": "" } ], "year": 2019, "venue": "Data Technologies and Applications", "volume": "53", "issue": "1", "pages": "108--122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julian Risch and Ralf Krestel. 2019. Domain-specific word embeddings for patent classification. 
Data Technologies and Applications, 53(1):108-122.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning Domain-Specific Word Embeddings from Sparse Cybersecurity Texts", "authors": [ { "first": "Arpita", "middle": [], "last": "Roy", "suffix": "" }, { "first": "Youngja", "middle": [], "last": "Park", "suffix": "" }, { "first": "Shimei", "middle": [], "last": "Pan", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arpita Roy, Youngja Park, and SHimei Pan. 2017. Learning Domain-Specific Word Embeddings from Sparse Cybersecurity Texts. In arXiv preprint: 1709.07470.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "BIOSSES: A semantic sentence similarity estimation system for the biomedical domain", "authors": [ { "first": "Gizem", "middle": [], "last": "Soganc\u0131oglu", "suffix": "" }, { "first": "Hakime", "middle": [], "last": "\u00d6zt\u00fcrk", "suffix": "" }, { "first": "Arzucan", "middle": [], "last": "\u00d6zg\u00fcr", "suffix": "" } ], "year": 2017, "venue": "Bioinformatics", "volume": "33", "issue": "14", "pages": "49--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gizem Soganc\u0131oglu, Hakime \u00d6zt\u00fcrk, and Arzucan \u00d6zg\u00fcr. 2017. BIOSSES: A semantic sentence sim- ilarity estimation system for the biomedical domain. Bioinformatics, 33(14):49-58.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "MedSTS: a resource for clinical semantic textual similarity", "authors": [ { "first": "Yanshan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Naveed", "middle": [], "last": "Afzal", "suffix": "" }, { "first": "Sunyang", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Liwei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Feichen", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Majid", "middle": [], "last": "Rastegar-Mojarad", "suffix": "" }, { "first": "Hongfang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Language Resources and Evaluation", "volume": "54", "issue": "1", "pages": "57--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yanshan Wang, Naveed Afzal, Sunyang Fu, Liwei Wang, Feichen Shen, Majid Rastegar-Mojarad, and Hongfang Liu. 2020. MedSTS: a resource for clini- cal semantic textual similarity. Language Resources and Evaluation, 54(1):57-72.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Multi-facet Network Embedding: Beyond the General Solution of Detection and Representation", "authors": [ { "first": "Liang", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xiaochun", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Guo", "middle": [], "last": "Yuanfang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Conference on Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "499--506", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Yang, Xiaochun Cao, and Guo Yuanfang. 2018. Multi-facet Network Embedding: Beyond the Gen- eral Solution of Detection and Representation. 
In Proceedings of the Conference on Artificial Intelli- gence (AAAI), pages 499-506.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "BioWordVec, improving biomedical word embeddings with subword information and MeSH", "authors": [ { "first": "Yijia", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qingyu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhihao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Hongfei", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Zhiyong", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2019, "venue": "Scientific Data", "volume": "6", "issue": "1", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yijia Zhang, Qingyu Chen, Zhihao Yang, Hongfei Lin, and Zhiyong Lu. 2019. BioWordVec, improving biomedical word embeddings with subword infor- mation and MeSH. Scientific Data, 6(1):1-9.", "links": null } }, "ref_entries": { "TABREF2": { "num": null, "content": "", "type_str": "table", "text": "Pearson correlation on STS benchmarks (* marks results reported by", "html": null } } } }