{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:58:11.791976Z"
},
"title": "MG-BERT: Multi-Graph Augmented BERT for Masked Language Modeling",
"authors": [
{
"first": "Parishad",
"middle": [],
"last": "Behnamghader",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sharif University of Technology Tehran",
"location": {
"country": "Iran"
}
},
"email": "pbehnamghader@ce.sharif.edu"
},
{
"first": "Hossein",
"middle": [],
"last": "Zakerinia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sharif University of Technology Tehran",
"location": {
"country": "Iran"
}
},
"email": "hzakerynia@ce.sharif.edu"
},
{
"first": "Mahdieh",
"middle": [],
"last": "Soleymani Baghshah",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sharif University of Technology Tehran",
"location": {
"country": "Iran"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Pre-trained models like Bidirectional Encoder Representations from Transformers (BERT), have recently made a big leap forward in Natural Language Processing (NLP) tasks. However, there are still some shortcomings in the Masked Language Modeling (MLM) task performed by these models. In this paper, we first introduce a multi-graph including different types of relations between words. Then, we propose Multi-Graph augmented BERT (MG-BERT) model that is based on BERT. MG-BERT embeds tokens while taking advantage of a static multi-graph containing global word co-occurrences in the text corpus beside global real-world facts about words in knowledge graphs. The proposed model also employs a dynamic sentence graph to capture local context effectively. Experimental results demonstrate that our model can considerably enhance the performance in the MLM task.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Pre-trained models like Bidirectional Encoder Representations from Transformers (BERT), have recently made a big leap forward in Natural Language Processing (NLP) tasks. However, there are still some shortcomings in the Masked Language Modeling (MLM) task performed by these models. In this paper, we first introduce a multi-graph including different types of relations between words. Then, we propose Multi-Graph augmented BERT (MG-BERT) model that is based on BERT. MG-BERT embeds tokens while taking advantage of a static multi-graph containing global word co-occurrences in the text corpus beside global real-world facts about words in knowledge graphs. The proposed model also employs a dynamic sentence graph to capture local context effectively. Experimental results demonstrate that our model can considerably enhance the performance in the MLM task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, pre-trained models have led to promising results in various Natural Language Processing (NLP) tasks. Recently, Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) has received much attention as a pre-trained model that can be easily fine-tuned for a wide range of NLP tasks. BERT is pre-trained using two unsupervised tasks, Masked Language Modeling (MLM) and Next Sentence Prediction (NSP). In (Ettinger, 2019) , some psycholinguistic diagnostics are introduced for assessing the linguistic capacities of pre-trained language models. These diagnostic tests consist of commonsense and pragmatic inferences, role-based event prediction, and negation. Ettinger (2019) observes some shortcomings in BERT's results and demonstrates that although BERT sometimes predicts the first candidate for the masked token almost correctly, some of its top candidates contradict each other. Besides, in the tests targeting commonsense and pragmatic inference, it is illustrated that BERT can not precisely fill the gaps based on just the input context (Ettinger, 2019) .",
"cite_spans": [
{
"start": 191,
"end": 212,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 445,
"end": 461,
"text": "(Ettinger, 2019)",
"ref_id": "BIBREF6"
},
{
"start": 700,
"end": 715,
"text": "Ettinger (2019)",
"ref_id": "BIBREF6"
},
{
"start": 1086,
"end": 1102,
"text": "(Ettinger, 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we incorporate co-occurrences and global information about words through graphs describing relations of words along with local contexts considered by BERT. The intention is to find more reliable and meaningful embeddings that result in better performance in MLM task. Utilizing external information about the corpus and the world in the form of graphs helps the model fill the gaps in the MLM task more easily and with more certainty. We take advantage of the rich information source accessible in knowledge graphs and also condensed information of words co-occurrence in graphs using Relational Graph Convolutional Network (R-GCN) to enrich the embedding of tokens. We also utilize the words in the current context as a dynamic complete graph using an attention mechanism. These graphs can considerably influence the performance of BERT in the MLM task as shown in the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Knowledge graphs (KGs) are valuable sources of facts about real-world entities. Many studies have been recently introduced to utilize knowledge graphs for various purposes, such as recommender systems (Wang et al., 2019a,b; He et al., 2020) or link prediction (Feng et al., 2016; Nguyen et al., 2018; Sun et al., 2019; . Recently, using BERT along with knowledge graphs has also been attended for knowledge graph completion and analysis. Yao et al. (2019) employ KG-BERT in triple classification, link prediction, and relation prediction tasks. Furthermore, knowledge graphs are used in NLP tasks such as text classification (K M et al., 2018; Ostendorff et al., 2019; , named entity recognition (Dekhili et al., 2019) , and language modeling (Ahn et al., 2016; Logan et al., 2019) . ERNIE (Zhang et al., 2019b) is an enhanced language representation model incorporating knowledge graphs. In addition to BERT's pre-training objectives, it uses an additional objective that intends to select appropriate entities from the knowledge graph to complete randomly masked entity alignments. Moreover, named entity mentions in the text are recognized and aligned to their corresponding entities in KGs.",
"cite_spans": [
{
"start": 201,
"end": 223,
"text": "(Wang et al., 2019a,b;",
"ref_id": null
},
{
"start": 224,
"end": 240,
"text": "He et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 260,
"end": 279,
"text": "(Feng et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 280,
"end": 300,
"text": "Nguyen et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 301,
"end": 318,
"text": "Sun et al., 2019;",
"ref_id": "BIBREF20"
},
{
"start": 438,
"end": 455,
"text": "Yao et al. (2019)",
"ref_id": "BIBREF27"
},
{
"start": 625,
"end": 643,
"text": "(K M et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 644,
"end": 668,
"text": "Ostendorff et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 696,
"end": 718,
"text": "(Dekhili et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 743,
"end": 761,
"text": "(Ahn et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 762,
"end": 781,
"text": "Logan et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 790,
"end": 811,
"text": "(Zhang et al., 2019b)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Other types of graphs have also been utilized in NLP tasks in some studies. For instance, Text GCN (Yao et al., 2018) applies Graph Convolutional Network (GCN) to the task of text classification. This paper's employed graph is a text graph created based on token co-occurrences and document-token relations in a corpus. Moreover, VGCN-BERT (Lu and Nie, 2019) enriches the word embeddings of an input sentence using the text graph inspired by Text GCN (Yao et al., 2018) and examines the obtained model in FIRE hate language detection tasks (Mandl et al., 2019) .",
"cite_spans": [
{
"start": 99,
"end": 117,
"text": "(Yao et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 340,
"end": 358,
"text": "(Lu and Nie, 2019)",
"ref_id": "BIBREF13"
},
{
"start": 451,
"end": 469,
"text": "(Yao et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 540,
"end": 560,
"text": "(Mandl et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this paper, we aim to improve BERT's performance (in the MLM task) by incorporating a static multi-graph that includes both the knowledge graph and global co-occurrence graphs derived from the corpus as well as a dynamic graph including input sentence tokens. Static text graphs have been recently employed in VGCN-BERT (Lu and Nie, 2019) via a modified version of GCN that extends the input by a fixed number of embeddings. However, the modification of embeddings in this work is only based on input tokens. Neither other vocabularies in the static text graphs nor real-world facts (available in KGs) affect the final embeddings of tokens. On the other hand, while ENRIE (Zhang et al., 2019b) and KEPLER (Wang et al., 2019c) utilize KGs to reach an improved model, they do not employ other graphs derived from the corpus. Also, ERNIE does not learn graph-based embedding during representation learning and only adopts embeddings trained by TransE (Bordes et al., 2013) . However, in our model, since we incorporate a multi-graph by extending BERT architecture and providing a graph layer of an R-GCN module and attention mechanism, a multi-graph augmented representation learning model is obtained.",
"cite_spans": [
{
"start": 675,
"end": 696,
"text": "(Zhang et al., 2019b)",
"ref_id": "BIBREF30"
},
{
"start": 708,
"end": 728,
"text": "(Wang et al., 2019c)",
"ref_id": "BIBREF23"
},
{
"start": 951,
"end": 972,
"text": "(Bordes et al., 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "GCN (Kipf and Welling, 2017) is one of the most popular models for graph node embedding. R-GCN (Schlichtkrull et al., 2018) extends GCN to provide node embedding of multi-relational graphs:",
"cite_spans": [
{
"start": 95,
"end": 123,
"text": "(Schlichtkrull et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},
{
"text": "h_i^{(l+1)} = \\sigma\\Big( \\sum_{r \\in R} \\sum_{j \\in N_i^r} \\frac{1}{c_{i,r}} W_r^{(l)} h_j^{(l)} + W_0^{(l)} h_i^{(l)} \\Big), where h_i^{(l)} is the l-th layer's hidden state of node v_i, W_r^{(l)} is the weight matrix for relation r in layer l, and W_0^{(l)} is the weight matrix for self-loops. N_i^r is the set of v_i's neighbours under relation r, and c_{i,r} is a normalization constant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},
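{
"text": "To make the propagation rule above concrete, the following is a minimal PyTorch sketch of a single R-GCN layer operating on pre-normalized adjacency matrices (one per relation); it is an illustrative reading of the formula rather than the implementation used in this work, and names such as RGCNLayer and adj_per_relation are assumptions.\n\nimport torch\nimport torch.nn as nn\n\nclass RGCNLayer(nn.Module):\n    # One R-GCN layer: h' = sigma(sum_r A_r h W_r + h W_0),\n    # where each A_r is assumed to be already normalized by 1 / c_{i,r}.\n    def __init__(self, in_dim, out_dim, num_relations):\n        super().__init__()\n        self.rel_weights = nn.Parameter(torch.empty(num_relations, in_dim, out_dim))\n        self.self_weight = nn.Parameter(torch.empty(in_dim, out_dim))\n        nn.init.xavier_uniform_(self.rel_weights)\n        nn.init.xavier_uniform_(self.self_weight)\n\n    def forward(self, h, adj_per_relation):\n        # h: (num_nodes, in_dim); adj_per_relation: iterable of\n        # (num_nodes, num_nodes) normalized adjacency matrices.\n        out = h @ self.self_weight\n        for a_r, w_r in zip(adj_per_relation, self.rel_weights):\n            out = out + a_r @ (h @ w_r)\n        return torch.relu(out)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},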
{
"text": "This section presents the overall architecture of our model, called Multi-Graph augmented BERT (MG-BERT). MG-BERT takes advantage of BERT's power in capturing context of an input text as well as a graph module including an R-GCN layer over a static multi-graph and a graph attention layer over a dynamic sentence graph. This static multi-graph includes global information about words available as facts in KGs in addition to dependencies between tokens of the input text and other words in the vocabulary which are discovered by computing co-occurrence statistics in the corpus. Two graphs are used to condense co-occurrences of words in the corpus inspired by Text GCN (Yao et al., 2018) that are also employed by VGCN-BERT (Lu and Nie, 2019) . One of these graphs includes local co-occurrences of terms that is computed based on point-wise mutual information (PMI) of terms i and j which is calculated by:",
"cite_spans": [
{
"start": 670,
"end": 688,
"text": "(Yao et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 725,
"end": 743,
"text": "(Lu and Nie, 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(i) = #W (i) #W , p(i, j) = #W (i, j) #W , PMI(i, j) = log p(i, j) p(i)p(j) .",
"eq_num": "(1)"
}
],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "In the above equations, #W (i) and #W (i, j) denote the number of fixed size windows containing term i and both of the terms i and j, respectively. #W is the whole number of windows in the corpus. The other graph includes the document level co-occurrence of tokens in the corpus computed based on term frequency-inverse document frequency (TF-IDF). The knowledge graph is also incorporated in this multi-graph. Formally, the weighted edges between token i and token j for three types of relations R = {KG, PMI, TF-IDF} in the multi-graph are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "\uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 A T F \u2212IDF ij = \u03bb T d\u2208docs T id T jd A KG ij = \u03bb K e\u2208KG KG ie KG ej if i, j \u2208 KG A P M I ij = \u03bb P PMI(i, j) if PMI(i, j) > 0 A * ij = 1 if i = j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "(2) where T id denotes the TF-IDF of token i in document d, PMI(i, j) shows PMI calculated by Eq. 1, and KG e 1 e 2 is nonzero when a relation between these two entities exists in the knowledge graph. Note that we add a self-connection relation to our knowledge graph for maintaining one-hop links, while also considering two hops as in Eq. 2 to employ indirect relations through paths of length two in the knowledge graph. \u03bb K , \u03bb P , and \u03bb T are also hyperparameters that can control the impact of three types of relations on tokens' embeddings. To utilize the multi-graph introduced above, we add a single-layer R-GCN described in Section 3 to the BERT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
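{
"text": "As a concrete illustration of how the static multi-graph of Eq. 2 can be assembled, the following Python sketch (using NumPy and scikit-learn) builds the TF-IDF, PMI, KG, and self-loop relations; it is not the code used in this work, and the helper names (pmi_adjacency, build_multigraph), the exact windowing, and the input formats are illustrative assumptions.\n\nimport numpy as np\nfrom collections import Counter\nfrom itertools import combinations\nfrom sklearn.feature_extraction.text import TfidfVectorizer\n\ndef pmi_adjacency(tokenized_sents, vocab, window=20):\n    # Sliding-window counts: #W(i), #W(i, j), and #W (Eq. 1).\n    idx = {w: i for i, w in enumerate(vocab)}\n    win_count, pair_count, n_windows = Counter(), Counter(), 0\n    for sent in tokenized_sents:\n        for s in range(max(1, len(sent) - window + 1)):\n            win = set(sent[s:s + window])\n            n_windows += 1\n            for t in win:\n                win_count[t] += 1\n            for a, b in combinations(sorted(win), 2):\n                pair_count[(a, b)] += 1\n    adj = np.zeros((len(vocab), len(vocab)))\n    for (a, b), c in pair_count.items():\n        if a in idx and b in idx:\n            pmi = np.log(c * n_windows / (win_count[a] * win_count[b]))\n            if pmi > 0:  # keep only positive PMI edges\n                adj[idx[a], idx[b]] = adj[idx[b], idx[a]] = pmi\n    return adj\n\ndef build_multigraph(raw_docs, tokenized_sents, kg, vocab, lam_k, lam_p, lam_t):\n    # TF-IDF relation: sum over documents of T_id * T_jd.\n    tfidf = TfidfVectorizer(vocabulary=vocab).fit_transform(raw_docs)\n    a_tfidf = lam_t * (tfidf.T @ tfidf).toarray()\n    # KG relation: two-hop reachability (kg is assumed to contain self-loops).\n    a_kg = lam_k * (kg @ kg)\n    a_pmi = lam_p * pmi_adjacency(tokenized_sents, vocab)\n    return {'TF-IDF': a_tfidf, 'PMI': a_pmi, 'KG': a_kg, 'self': np.eye(len(vocab))}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},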
{
"text": "Furthermore, we use a graph attention mechanism to capture local information via a dynamic and complete graph in which nodes represent all tokens of the input sentence. The complete dynamic graph is used in order to obtain context-dependent new embeddings while the R-GCN layer itself provides the same new embeddings for a specific token even if the token appears in different contexts. This happens because the single R-GCN layer always performs on the same static multi-graph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "As shown in Fig. 1 , the whole graph module is placed immediately after the BERT token embeddings layer since the hidden states of the whole vocabulary are available in this layer. We pass the entire multi-graph to the R-GCN module so that the global dependencies would affect embeddings of tokens properly using Eq. 3. We also use an attention mechanism as in Eq. 4 to consider the local context. The new embedding of token i in sentence s is computed as: denote the attention parameters), and h i is the i th token's embedding from the BERT token embeddings layer.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 18,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i =(1 \u2212 \u03bb dyn ) r\u2208R\u00c2 r i h i W r (3) + \u03bb dyn \uf8eb \uf8ed K k=1 j\u2208s \u03b1 k ij h j W V al k \uf8f6 \uf8f8 W O ,",
"eq_num": "(4)"
}
],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 k ij = exp((h i W Query k ).(h j W Key k )) t\u2208s exp((h i W Query k ).(h t W Key k )) ,",
"eq_num": "(5)"
}
],
"section": "Methodology",
"sec_num": "4"
},
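{
"text": "A minimal PyTorch sketch of how Eqs. 3-5 could be realized is given below; it treats the dynamic sentence graph as standard multi-head self-attention (whose output projection plays the role of W_O) and mixes it with the static multi-graph propagation via lambda_dyn. This is an illustrative reading rather than the authors' implementation, and names such as GraphModule and adj_rows are assumptions.\n\nimport torch\nimport torch.nn as nn\n\nclass GraphModule(nn.Module):\n    def __init__(self, dim, num_relations, num_heads=12, lambda_dyn=0.8):\n        super().__init__()\n        self.lambda_dyn = lambda_dyn\n        self.rel_weights = nn.Parameter(torch.empty(num_relations, dim, dim))\n        nn.init.xavier_uniform_(self.rel_weights)\n        # Multi-head attention over the complete sentence graph (Eqs. 4-5).\n        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)\n\n    def forward(self, vocab_emb, adj_rows, sent_emb):\n        # vocab_emb: (V, dim) BERT token-embedding table.\n        # adj_rows: list of (S, V) normalized adjacency rows, one per relation,\n        #           selecting the sentence tokens. sent_emb: (1, S, dim).\n        static = sum(a_r @ (vocab_emb @ w_r)\n                     for a_r, w_r in zip(adj_rows, self.rel_weights))  # Eq. 3\n        dynamic, _ = self.attn(sent_emb, sent_emb, sent_emb)           # Eqs. 4-5\n        return (1 - self.lambda_dyn) * static + self.lambda_dyn * dynamic.squeeze(0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},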
{
"text": "Next, we aggregate the obtained tokens' embeddings by the graph module with position embeddings and segment embeddings (similar to BERT). Afterward, we feed these representations to BERT encoders to find final embeddings. The proposed model architecture is shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 266,
"end": 274,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "In the training phase, a token from each sentence is randomly masked, and the model is trained to predict the masked token based on both the context and the incorporated static multi-graph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "In this section, we explain the details of training MG-BERT and conduct experiments to evaluate and compare our model with the related methods recently proposed. Datasets. During training, we use the WN18 knowledge graph, derived from WordNet, as an unlabeled graph (Bordes et al., 2014) . We also experiment MG-BERT and other recent models on CoLA, SST-2, and Brown datasets (Warstadt et al., 2019; Socher et al., 2013; Francis and Kucera, 1979) . The detailed description of these datatsets is given in Appendix A.",
"cite_spans": [
{
"start": 266,
"end": 287,
"text": "(Bordes et al., 2014)",
"ref_id": "BIBREF1"
},
{
"start": 376,
"end": 399,
"text": "(Warstadt et al., 2019;",
"ref_id": "BIBREF24"
},
{
"start": 400,
"end": 420,
"text": "Socher et al., 2013;",
"ref_id": "BIBREF19"
},
{
"start": 421,
"end": 446,
"text": "Francis and Kucera, 1979)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Parameter Setting. In order to capture word co-occurence statistics of the corpus, we use the BERT's tokenizer on sentences and set the sliding window size to 20 when calculating the PMI value. The whole BERT module in MG-BERT is first initialized with the pre-trained bert-base-uncase version of BERT in PyTorch and the model is trained on the MLM task with cross entropy loss (Wolf et al., 2019) . Regarding Eq. 2, different hyper-parameter settings have been used for each dataset. \u03bb K , \u03bb P , and \u03bb T are set to 0.01, 0.001, and 0.001, respectively in both CoLA and Brown datasets and 0.001, 1.0, and 0.001 in SST-2 dataset. The hyperparameter \u03bb dyn is also set to 0.8. The graph attention mechanism is performed with 12 heads. The R-GCN and graph attention layers' output dimension are also set to 768 that equals to the dimension of the BERT token embeddings layer to substitute easily BERT's token embeddings with the embeddings derived from the graph module. We also employ the normalization trick introduced in GCN (Kipf and Welling, 2017) to normalize each adjacency matrix in the multi-graph.",
"cite_spans": [
{
"start": 378,
"end": 397,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
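{
"text": "For reference, the dataset-specific settings above can be summarized in a small configuration sketch; the values are those reported in this section, while the dictionary layout and names are merely illustrative.\n\n# Hyper-parameters reported in this section; the layout is illustrative.\nGRAPH_WEIGHTS = {          # (lambda_K, lambda_P, lambda_T) per dataset (Eq. 2)\n    'CoLA':  (0.01, 0.001, 0.001),\n    'Brown': (0.01, 0.001, 0.001),\n    'SST-2': (0.001, 1.0, 0.001),\n}\nLAMBDA_DYN = 0.8   # weight of the dynamic sentence graph (Eq. 4)\nNUM_HEADS = 12     # graph attention heads\nGRAPH_DIM = 768    # R-GCN / attention output dim, matches BERT token embeddings\nWINDOW_SIZE = 20   # sliding window for PMI statistics\nBERT_INIT = 'bert-base-uncased'  # pre-trained initialization",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},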
{
"text": "Compared methods. To assess our model, we compare it with BERT as the baseline. Moreover, ERNIE and VGCN-BERT are being compared as the recent methods utilizing knowledge graph and text graph, respectively (Zhang et al., 2019b; Lu and Nie, 2019) . We also compare MG-BERT with MG-BERT(base) which doesn't use the dynamic graph incorporating the context according to Eq. 4. All these models are fine-tuned on the text datasets for a fair evaluation.",
"cite_spans": [
{
"start": 206,
"end": 227,
"text": "(Zhang et al., 2019b;",
"ref_id": "BIBREF30"
},
{
"start": 228,
"end": 245,
"text": "Lu and Nie, 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Results. We evaluate our model using Hits@1 and Hits@5 metrics. Hits@k shows the proportion of correct tokens appearing in the top k results for each sample. In Table 1 , we report the results of evaluations performed on the test sets of CoLA, SST-2, and Brown datasets. These results demonstrate that the proposed method outperforms other models and taking advantage of the graph module with dataset-specific hyper-parameters improves the performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 161,
"end": 168,
"text": "Table 1",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "The reason to our superiority over VGCN-BERT (Lu and Nie, 2019) is that it doesn't take advantage of real-world facts (available in KGs). Moreover, as opposed to MG-BERT, it modifies initial embeddings of tokens only based on input tokens of each sentence and other vocabularies in the text graphs don't influence the final embeddings of tokens. On the other hand, ERNIE (Zhang et al., 2019b) doesn't take full advantage of graphs since it doesn't use graphs derived from the corpus. Be-sides, it does not learn graph-based embeddings during representation learning. It is worth mentioning that the entity embedding model used in ERNIE has been trained on a huge subset of Wikidata 1 , which is almost 120 times bigger than WN18 knowledge graph employed in our method.",
"cite_spans": [
{
"start": 45,
"end": 63,
"text": "(Lu and Nie, 2019)",
"ref_id": "BIBREF13"
},
{
"start": 371,
"end": 392,
"text": "(Zhang et al., 2019b)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "The superiority of MG-BERT over MG-BERT(base) demonstrates the importance of the dynamic sentence graph and the results of MG-BERT(base) itself shows that utilizing the static multi-graph has been useful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Hits@1 Hits@5 In addition, evaluation results of different variations of MG-BERT(base) on CoLA dataset, considering different graphs, are represented in Table 2 , demonstrating the effect of each graph on the performance. The experimental results indicate the role of exploiting various graphs in language representation learning.",
"cite_spans": [],
"ref_spans": [
{
"start": 153,
"end": 160,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Graphs",
"sec_num": null
},
{
"text": "We also compare MG-BERT and MG-BERT(base) with other models using perplexity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graphs",
"sec_num": null
},
{
"text": "CoLA SST-2 Brown Hits@1 Hits@5 Hits@1 Hits@5 Hits@1 Hits@5 BERT (Devlin et al., 2019) 68 Table 3 . In this paper, the perplexity is only calculated on the masked tokens as:",
"cite_spans": [
{
"start": 64,
"end": 85,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 89,
"end": 96,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "P P L = exp n i=1 \u2212 log\u0177 [M ASK] i , where\u0177 [M ASK] i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "is the predicted probability of the masked token in the i-th sample. A model with higher perplexity allocates lower probability to the correct masked tokens, which is not desired. The results shown in Table 3 generally demonstrate the fact that both MG-BERT and ERNIE solve the MLM task with more certainty compared to BERT and VGCN-BERT.",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 208,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
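{
"text": "As a minimal illustration of the perplexity defined above (not the evaluation script used here), the metric can be computed from the predicted probabilities of the correct masked tokens as follows; the function and variable names are assumptions.\n\nimport math\n\ndef masked_perplexity(masked_token_probs):\n    # masked_token_probs: predicted probability of the correct token at the\n    # [MASK] position, one value per evaluation sample.\n    n = len(masked_token_probs)\n    nll = -sum(math.log(p) for p in masked_token_probs) / n\n    return math.exp(nll)\n\n# Example: masked_perplexity([0.9, 0.7, 0.95]) is about 1.19; lower is better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},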
{
"text": "We also illustrate some examples of MLM task performed by MG-BERT(base) and BERT in Appendix B. These examples demonstrate that realworld information of knowledge graph and global information of co-occurrence graphs remarkably compensate BERT's shortage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "In this paper, we proposed a language representation learning model that enhances BERT by augmenting it with a graph module (i.e. an R-GCN layer over a static multi-graph, including global dependencies between words, and a graph attention layer over a dynamic sentence graph). The static multi-graph utilized in this work consists of a knowledge graph as a source of information about real-world facts and two other graphs built based on word co-occurrences in local windows and documents in the corpus. Therefore, the proposed model utilizes the local context, the corpuslevel co-occurence statistics, and the global word dependencies (through incorporating a knowledge graph) to find the input tokens' embeddings. The results generally show the superiority of the proposed model in the Masked Language Modeling task compared to both the BERT model and the recent models employing knowledge or text graphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://www.wikidata.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A neural knowledge language model",
"authors": [
{
"first": "Heeyoul",
"middle": [],
"last": "Sungjin Ahn",
"suffix": ""
},
{
"first": "Tanel",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "P\u00e4rnamaa",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sungjin Ahn, Heeyoul Choi, Tanel P\u00e4rnamaa, and Yoshua Bengio. 2016. A neural knowledge language model. CoRR, abs/1608.00318.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A semantic matching energy function for learning with multi-relational data. Machine Learning",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "233--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2014. A semantic matching energy function for learning with multi-relational data. Ma- chine Learning, pages 233-259.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Translating embeddings for modeling multirelational data",
"authors": [],
"year": null,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "2787--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Translating embeddings for modeling multi- relational data. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2787-2795. Curran Associates, Inc.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Augmenting named entity recognition with commonsense knowledge",
"authors": [
{
"first": "Gaith",
"middle": [],
"last": "Dekhili",
"suffix": ""
},
{
"first": "Tan",
"middle": [
"Ngoc"
],
"last": "Le",
"suffix": ""
},
{
"first": "Fatiha",
"middle": [],
"last": "Sadat",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Workshop on Widening NLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaith Dekhili, Tan Ngoc Le, and Fatiha Sadat. 2019. Augmenting named entity recognition with com- monsense knowledge. In Proceedings of the 2019 Workshop on Widening NLP, page 142, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "What bert is not: Lessons from a new suite of psycholinguistic diagnostics for language models",
"authors": [
{
"first": "Allyson",
"middle": [],
"last": "Ettinger",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "34--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allyson Ettinger. 2019. What bert is not: Lessons from a new suite of psycholinguistic diagnostics for lan- guage models. Transactions of the Association for Computational Linguistics, 8:34-48.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "GAKE: Graph aware knowledge embedding",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Minlie Andd",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "641--651",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Feng, Yang Huang, Minlie andd Yang, and Xi- aoyan Zhu. 2016. GAKE: Graph aware knowl- edge embedding. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 641-651, Os- aka, Japan. The COLING 2016 Organizing Commit- tee.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Brown corpus manual",
"authors": [
{
"first": "W",
"middle": [
"N"
],
"last": "Francis",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Kucera",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. N. Francis and H. Kucera. 1979. Brown corpus manual. Technical report, Department of Linguis- tics, Brown University, Providence, Rhode Island, US.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Lightgcn: Simplifying and powering graph convolution network for recommendation",
"authors": [
{
"first": "K",
"middle": [
"H"
],
"last": "Xiangnan He",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Yaliang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yongdong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiangnan He, K. H. Deng, Xiang Wang, Yaliang Li, Yongdong Zhang, and Meng Wang. 2020. Lightgcn: Simplifying and powering graph convolution net- work for recommendation. ArXiv, abs/2002.02126.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning beyond datasets: Knowledge graph augmented neural networks for natural language processing",
"authors": [
{
"first": "K M",
"middle": [],
"last": "Annervaz",
"suffix": ""
},
{
"first": "Somnath",
"middle": [],
"last": "Basu Roy Chowdhury",
"suffix": ""
},
{
"first": "Ambedkar",
"middle": [],
"last": "Dukkipati",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "313--322",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1029"
]
},
"num": null,
"urls": [],
"raw_text": "Annervaz K M, Somnath Basu Roy Chowdhury, and Ambedkar Dukkipati. 2018. Learning beyond datasets: Knowledge graph augmented neural net- works for natural language processing. In Proceed- ings of the 2018 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long Papers), pages 313-322, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semi-Supervised Classification with Graph Convolutional Networks",
"authors": [
{
"first": "N",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Kipf",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas N. Kipf and Max Welling. 2017. Semi- Supervised Classification with Graph Convolutional Networks. In Proceedings of the 5th International Conference on Learning Representations, ICLR '17.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Barack's wife hillary: Using knowledge graphs for fact-aware language modeling",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Logan",
"suffix": ""
},
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5962--5971",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1598"
]
},
"num": null,
"urls": [],
"raw_text": "Robert Logan, Nelson F. Liu, Matthew E. Peters, Matt Gardner, and Sameer Singh. 2019. Barack's wife hillary: Using knowledge graphs for fact-aware lan- guage modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 5962-5971, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Raligraph at hasoc 2019: Vgcn-bert: Augmenting bert with graph embedding for offensive language detection",
"authors": [
{
"first": "Zhibin",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": ""
}
],
"year": 2019,
"venue": "FIRE (Working Notes), volume 2517 of CEUR Workshop Proceedings",
"volume": "",
"issue": "",
"pages": "221--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhibin Lu and Jian-Yun Nie. 2019. Raligraph at hasoc 2019: Vgcn-bert: Augmenting bert with graph em- bedding for offensive language detection. In FIRE (Working Notes), volume 2517 of CEUR Workshop Proceedings, pages 221-228. CEUR-WS.org.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Overview of the hasoc track at fire 2019: Hate speech and offensive content identification in indo-european languages",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Mandl",
"suffix": ""
},
{
"first": "Sandip",
"middle": [],
"last": "Modha",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Daksh",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Mohana",
"middle": [],
"last": "Dave",
"suffix": ""
},
{
"first": "Chintak",
"middle": [],
"last": "Mandlia",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Patel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 11th Forum for Information Retrieval Evaluation, FIRE '19",
"volume": "",
"issue": "",
"pages": "14--17",
"other_ids": {
"DOI": [
"10.1145/3368567.3368584"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Mandl, Sandip Modha, Prasenjit Majumder, Daksh Patel, Mohana Dave, Chintak Mandlia, and Aditya Patel. 2019. Overview of the hasoc track at fire 2019: Hate speech and offensive content identifi- cation in indo-european languages. In Proceedings of the 11th Forum for Information Retrieval Evalu- ation, FIRE '19, page 14-17, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A novel embedding model for knowledge base completion based on convolutional neural network",
"authors": [
{
"first": "Tu",
"middle": [
"Dinh"
],
"last": "Dai Quoc Nguyen",
"suffix": ""
},
{
"first": "Dat",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Dinh",
"middle": [],
"last": "Quoc Nguyen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Phung",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "327--333",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2053"
]
},
"num": null,
"urls": [],
"raw_text": "Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2018. A novel embed- ding model for knowledge base completion based on convolutional neural network. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Pa- pers), pages 327-333, New Orleans, Louisiana. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Enriching bert with knowledge graph embeddings for document classification",
"authors": [
{
"first": "Malte",
"middle": [],
"last": "Ostendorff",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bourgonje",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "Juli\u00e1n",
"middle": [
"Moreno"
],
"last": "Schneider",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Rehm",
"suffix": ""
},
{
"first": "Bela",
"middle": [],
"last": "Gipp",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malte Ostendorff, Peter Bourgonje, Maria Berger, Juli\u00e1n Moreno Schneider, Georg Rehm, and Bela Gipp. 2019. Enriching bert with knowledge graph embeddings for document classification. ArXiv, abs/1909.08402.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Modeling relational data with graph convolutional networks",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Schlichtkrull",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"N"
],
"last": "Kipf",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bloem",
"suffix": ""
},
{
"first": "Rianne",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Berg",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2018,
"venue": "ESWC 2018, Proceedings, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
"volume": "",
"issue": "",
"pages": "593--607",
"other_ids": {
"DOI": [
"10.1007/978-3-319-93417-4_38"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convo- lutional networks. In The Semantic Web -15th In- ternational Conference, ESWC 2018, Proceedings, Lecture Notes in Computer Science (including sub- series Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pages 593-607.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Rotate: Knowledge graph embedding by relational rotation in complex space",
"authors": [
{
"first": "Zhiqing",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zhi-Hong",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2019,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. CoRR, abs/1902.10197.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multi-task feature learning for knowledge graph enhanced recommendation",
"authors": [
{
"first": "Hongwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Fuzheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Miao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Minyi",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2000,
"venue": "The World Wide Web Conference, WWW '19",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3308558.3313411"
]
},
"num": null,
"urls": [],
"raw_text": "Hongwei Wang, Fuzheng Zhang, Miao Zhao, Wenjie Li, Xing Xie, and Minyi Guo. 2019a. Multi-task feature learning for knowledge graph enhanced rec- ommendation. In The World Wide Web Conference, WWW '19, page 2000-2010, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Kgat: Knowledge graph attention network for recommendation",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiangnan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yixin",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '19",
"volume": "",
"issue": "",
"pages": "950--958",
"other_ids": {
"DOI": [
"10.1145/3292500.3330989"
]
},
"num": null,
"urls": [],
"raw_text": "Xiang Wang, Xiangnan He, Yixin Cao, Meng Liu, and Tat-Seng Chua. 2019b. Kgat: Knowledge graph at- tention network for recommendation. In Proceed- ings of the 25th ACM SIGKDD International Confer- ence on Knowledge Discovery & Data Mining, KDD '19, page 950-958, New York, NY, USA. Associa- tion for Computing Machinery.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "KE-PLER: A unified model for knowledge embedding and pre-trained language representation",
"authors": [
{
"first": "Xiaozhi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Tianyu",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Zhaocheng",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Juanzi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhiyuan Liu, Juanzi Li, and Jian Tang. 2019c. KE- PLER: A unified model for knowledge embedding and pre-trained language representation. CoRR, abs/1911.06136.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Neural network acceptability judgments",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Warstadt",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "625--641",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Warstadt, Amanpreet Singh, and Samuel R Bow- man. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Graph convolutional networks for text classification",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Chengsheng",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Yao, Chengsheng Mao, and Yuan Luo. 2018. Graph convolutional networks for text classification. CoRR, abs/1809.05679.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Kgbert: Bert for knowledge graph completion",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Chengsheng",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Kg- bert: Bert for knowledge graph completion. ArXiv, abs/1909.03193.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Integrating semantic knowledge to tackle zero-shot text classification",
"authors": [
{
"first": "Jingqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Piyawat",
"middle": [],
"last": "Lertvittayakumjorn",
"suffix": ""
},
{
"first": "Yike",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1031--1040",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1108"
]
},
"num": null,
"urls": [],
"raw_text": "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo. 2019a. Integrating semantic knowledge to tackle zero-shot text classification. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1031-1040, Minneapolis, Minnesota. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Learning hierarchy-aware knowledge graph embeddings for link prediction",
"authors": [
{
"first": "Zhanqiu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianyu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Yongdong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference",
"volume": "2020",
"issue": "",
"pages": "3065--3072",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhanqiu Zhang, Jianyu Cai, Yongdong Zhang, and Jie Wang. 2020. Learning hierarchy-aware knowl- edge graph embeddings for link prediction. In The Thirty-Fourth AAAI Conference on Artificial Intelli- gence, AAAI 2020, The Thirty-Second Innovative Ap- plications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 3065- 3072. AAAI Press.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "ERNIE: Enhanced language representation with informative entities",
"authors": [
{
"first": "Zhengyan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1441--1451",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1139"
]
},
"num": null,
"urls": [],
"raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019b. ERNIE: En- hanced language representation with informative en- tities. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 1441-1451, Florence, Italy. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "The architecture of MG-BERT. The \"Aggregate\" phase includes an aggregation of new tokens' embeddings with the position embeddings and the segment embeddings of the BERT model.",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"html": null,
"type_str": "table",
"text": "where\u00c2 r refers to the normalized adjacency matrix of relation r, W s are trainable weight matrices (i.e. W r s denote parameters of the R-GCN layer and W",
"content": "<table><tr><td>Query k</td><td>, W Key k</td><td>, and W V al k</td></tr></table>",
"num": null
},
"TABREF2": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Experimental results of variations of MG-</td></tr><tr><td>BERT(base) using different graphs on CoLA dataset.</td></tr><tr><td>The symbols K, P, and T stand for employing KG, PMI,</td></tr><tr><td>and TF-IDF relations, respectively.</td></tr></table>",
"num": null
},
"TABREF4": {
"html": null,
"type_str": "table",
"text": "Hits@k esults on CoLA, SST-2, and Brown datasets. The best score is highlighted in bold and the second best score is highlighted with underline.",
"content": "<table><tr><td>Model</td><td colspan=\"3\">CoLA SST-2 Brown</td></tr><tr><td>BERT</td><td>1.33</td><td>1.43</td><td>1.66</td></tr><tr><td>(Devlin et al., 2019)</td><td>\u00b10.01</td><td>\u00b10.01</td><td>\u00b10.02</td></tr><tr><td>ERNIE</td><td>1.23</td><td>1.20</td><td>1.71</td></tr><tr><td>(Zhang et al., 2019b)</td><td>\u00b10.01</td><td>\u00b10.01</td><td>\u00b10.02</td></tr><tr><td>VGCN-BERT</td><td>1.32</td><td>1.41</td><td>1.75</td></tr><tr><td>(Lu and Nie, 2019)</td><td>\u00b10.01</td><td>\u00b10.01</td><td>\u00b10.02</td></tr><tr><td>MG-BERT(base)</td><td>1.26</td><td>1.45</td><td>1.82</td></tr><tr><td/><td>\u00b10.02</td><td>\u00b10.01</td><td>\u00b10.01</td></tr><tr><td>MG-BERT</td><td>1.23</td><td>1.25</td><td>1.63</td></tr><tr><td/><td>\u00b10.01</td><td>\u00b10.01</td><td>\u00b10.01</td></tr></table>",
"num": null
},
"TABREF5": {
"html": null,
"type_str": "table",
"text": "Perplexity results on CoLA, SST-2, and Brown datasets. The best score is highlighted in bold.",
"content": "<table/>",
"num": null
}
}
}
}