{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:01:44.370226Z" }, "title": "Generation and Evaluation of Concept Embeddings Via Fine-Tuning Using Automatically Tagged Corpus", "authors": [ { "first": "Kanako", "middle": [], "last": "Komiya", "suffix": "", "affiliation": {}, "email": "kanako.komiya.nlp@vc.ibaraki.ac.jp" }, { "first": "Daiki", "middle": [], "last": "Yaginuma", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Masayuki", "middle": [], "last": "Asahara", "suffix": "", "affiliation": {}, "email": "masayu-a@ninjal.ac.jp" }, { "first": "Hiroyuki", "middle": [], "last": "Shinnou", "suffix": "", "affiliation": {}, "email": "hiroyuki.shinnou.0828@vc.ibaraki.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Word embeddings are used in various fields of natural language processing. The use of word embeddings and concept or word sense embeddings demonstrated effectiveness in many tasks, such as machine translation and text summarization. However, it is difficult to obtain a sufficiently large concept-tagged corpus, as the annotation of concept-tags is timeconsuming. Therefore, in this paper, we propose a method for generating concept embeddings of Word List by Semantic Principles, a Japanese thesaurus, using both a corpus tagged by an all-words word sense disambiguation (WSD) system and a manually tagged corpus. We generated concept embeddings via fine-tuning using both an automatically tagged corpus and a small manually tagged corpus. In this paper, we propose a novel method of evaluating concept embeddings using the tree structure of Word List by Semantic Principles. Experiments revealed the effectiveness of fine-tuning. The best performance was achieved when the concept embeddings were initially trained with a corpus tagged by an all-words WSD system and retrained with a manually tagged corpus.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Word embeddings are used in various fields of natural language processing. The use of word embeddings and concept or word sense embeddings demonstrated effectiveness in many tasks, such as machine translation and text summarization. However, it is difficult to obtain a sufficiently large concept-tagged corpus, as the annotation of concept-tags is timeconsuming. Therefore, in this paper, we propose a method for generating concept embeddings of Word List by Semantic Principles, a Japanese thesaurus, using both a corpus tagged by an all-words word sense disambiguation (WSD) system and a manually tagged corpus. We generated concept embeddings via fine-tuning using both an automatically tagged corpus and a small manually tagged corpus. In this paper, we propose a novel method of evaluating concept embeddings using the tree structure of Word List by Semantic Principles. Experiments revealed the effectiveness of fine-tuning. The best performance was achieved when the concept embeddings were initially trained with a corpus tagged by an all-words WSD system and retrained with a manually tagged corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In this paper, we propose a technique for generating concept embeddings using fine-tuning and two types of corpora. 
In recent years, word embeddings, which are distributed representations of words with low-dimensional vectors, and concept 1 (or word sense) embeddings demonstrated their effectiveness in a number of tasks, such as machine translation and text summarization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Word embeddings are usually generated using text corpora. It is possible to generate concept embeddings by the same method used to generate word embeddings if the word sequence (i.e., text corpus) is replaced with a concept sequence constructed from a concept-tagged corpus. However, it is difficult to obtain a sufficiently large concept-tagged corpus because the annotation of concept tags is time-consuming. There have been several studies that assigned word senses using the all-words word sense disambiguation (WSD) method (Edmonds and Cotton, 2001) , (Snyder and Palmer, 2004) , (Navigli et al., 2007) , (Iacobacci et al., 2016) , (Raganato et al., 2017a) , (Raganato et al., 2017b) , , . As a result, it is possible to create a concept-tagged corpus using the methods proposed in these studies. However, the results of all-words WSD systems are not always correct; therefore, an automatically tagged corpus created via all-words WSD may not be suitable for generating concept embeddings.", "cite_spans": [ { "start": 528, "end": 554, "text": "(Edmonds and Cotton, 2001)", "ref_id": "BIBREF0" }, { "start": 557, "end": 582, "text": "(Snyder and Palmer, 2004)", "ref_id": "BIBREF13" }, { "start": 585, "end": 607, "text": "(Navigli et al., 2007)", "ref_id": "BIBREF8" }, { "start": 610, "end": 634, "text": "(Iacobacci et al., 2016)", "ref_id": "BIBREF1" }, { "start": 637, "end": 661, "text": "(Raganato et al., 2017a)", "ref_id": "BIBREF10" }, { "start": 664, "end": 688, "text": "(Raganato et al., 2017b)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we generate concept embeddings of Word List by Semantic Principles (WLSP) (National Institute for Japanese Language and Linguistics, 1964) , a Japanese thesaurus, from manually and automatically tagged corpora. First, concept embeddings are generated from a concept-tagged corpus tagged by an all-words WSD system and are finetuned using a small, highly accurate corpus in which the concept tags are manually annotated. For comparison, we also generate the following concept em-beddings: (1) concept embeddings generated from only a small, highly accurate corpus in which the concept tags are manually annotated, (2) concept embeddings generated from only a concept-tagged corpus tagged by an all-words WSD system, and (3) concept embeddings initially trained with a small, highly accurate corpus in which the concept tags are manually annotated and fine-tuned using a concepttagged corpus tagged by an all-words WSD system. The obtained concept embeddings are evaluated by rankings measured by the distances between the concept embeddings based on the tree structure of WLSP, which is a proposed evaluation method in this paper.", "cite_spans": [ { "start": 135, "end": 153, "text": "Linguistics, 1964)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In recent years, word embeddings have been widely used in various fields of natural language processing. 
In addition, there have been a number of studies on the generation of concept (or word sense) embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "For example, a study by Ouchi et al. (2016) , to construct distributed representations of word senses, the authors utilized the distributed representations of synonyms of each word sense. In addition, Yamaki et al. (2017) proposed a method for constructing sense embeddings using training data with sense tags and the multi-sense skip-gram (MSSG) model, which considers the frequency of each word sense. However, these studies did not use a sense-tagged corpus, but rather, a regular text corpus and word embeddings.", "cite_spans": [ { "start": 24, "end": 43, "text": "Ouchi et al. (2016)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Word embeddings are usually generated using a text corpus that is a word sequence. Concept or word sense embeddings can be generated using the same tools as for a sense-tagged corpus, that is, a word sense sequence or concept sequence instead of a text corpus. However, it is generally difficult to obtain a sufficiently large sense-tagged corpus, as only several are available and most are small.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "If there are insufficient tagged corpora, automatic generation of tagged corpora may be helpful. A concept-tagged corpus can be automatically created with the all-words WSD system. There are several studies on all-words WSD systems. For example, in studies by Raganato et al. (2017a) and , all-words WSD is considered a label-ing problem in which every word is assigned a concept tag. Using an automatic tagger, it is possible to create a concept-tagged corpus. However, an automatic tagger does not always produce correct results. For example, there may be cases in which concept tags are not assigned to new words. In these cases, the concept-tagged corpus would not be suitable for generating concept embeddings.", "cite_spans": [ { "start": 260, "end": 283, "text": "Raganato et al. (2017a)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Therefore, in this study, we generate concept embeddings of WLSP using two types of corpora: a large corpus in which the concept tags are assigned using the all-words WSD method and a manually tagged corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We generated four types of vectors using two corpora tagged with concepts from WLSP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generation of Concept Embeddings", "sec_num": "3" }, { "text": "WLSP is a Japanese thesaurus in which a word is classified and ordered according to its meaning. A WLSP record is composed of the record ID number, lemma number, record type, class, division, section, article, concept number, paragraph number, small paragraph number, word number, lemma with explanatory note, lemma without explanatory note, reading and reverse reading. The concept number consists of a category, medium item, and classification item. 
In WLSP, some words are polysemous; for example,\"\u5b50\u4f9b (child or children)\" is a polyseme, and two concepts are registered in WLSP: 1.2050 and 1.2130 (Table 1) .", "cite_spans": [], "ref_spans": [ { "start": 599, "end": 608, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "WLSP", "sec_num": "3.1" }, { "text": "The tree structure of WLSP is illustrated in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 45, "end": 54, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "WLSP", "sec_num": "3.1" }, { "text": "In this study, we used two concept-tagged corpora based on the Balanced Corpus of Contemporary Written Japanese (BCCWJ) (Maekawa et al., 2014) . The first corpus is a large corpus in which concept tags were automatically assigned using the all-words WSD method. We used an all-words WSD tagger proposed by . Hereinafter, this corpus is referred to as the all-words WSD corpus. The second corpus is a small corpus in which concept tags were manually assigned. We used the annotation data of WLSP by the National Institute of Japanese Language and Linguistics (Kato et al., ", "cite_spans": [ { "start": 120, "end": 142, "text": "(Maekawa et al., 2014)", "ref_id": "BIBREF3" }, { "start": 558, "end": 571, "text": "(Kato et al.,", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.2" }, { "text": "Division Section Article 1.2050", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class", "sec_num": null }, { "text": "Nominal words Agent Human Young or old 1.2130", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class", "sec_num": null }, { "text": "Nominal words Agent Family Child or descendant Table 1 : Concept tags and their corresponding class, division, section, and article of \"\u5b50\u4f9b (child or children)\" from Word List by Semantic Principles ). This corpus is in its infancy. Hereinafter, this corpus is referred to as the manual corpus. There are two types of BCCWJ: the core and non-core data. For the core data, the word tokenization is manually conducted, but for the non-core data, word tokenizer, MeCab with Unidic dictionary is used for the word tokenization. The core data includes approximately 1,300,000 words and the non-core data includes approximately 25,800,000,000 words. The core data is included in the non-core data. We used the non-core data including the core data for the allwords WSD corpus, with the concept tag annotation via the all-words WSD system. The manual corpus is the part of the core data with manual annotation of the concept tags, which includes approximately 340,000 words. Examples of the text corpus and a generated concept sequence are presented in Table 2. In the table, an original Japanese text, its English translation and concept sequence are shown. The concepts of \"\u306a \u304f\" and \"\u306a\u3044\" are both 3.1200 because they are the same words after lemmatization. Table 3 presents the number of words, vocabulary, and concepts in each corpus.", "cite_spans": [], "ref_spans": [ { "start": 47, "end": 54, "text": "Table 1", "ref_id": null }, { "start": 1045, "end": 1067, "text": "Table 2. In the table,", "ref_id": "TABREF0" }, { "start": 1251, "end": 1258, "text": "Table 3", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Class", "sec_num": null }, { "text": "In this study, word2vec 2 (Mikolov et al., 2013a; Mikolov et al., 2013b; Mikolov et al., 2013c ) was used to generate concept embeddings. Then, finetuning was performed. 
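As an illustration of this setup, the following is a minimal sketch (not the authors' implementation, which used the original word2vec tool) of how concept embeddings can be trained on one concept-tagged corpus and then fine-tuned on another, using gensim's Word2Vec. The corpus file names and the loader are hypothetical, and the hyperparameters follow Section 4.2 (200 dimensions, window 5, batch size 1,000, 5 iterations, CBOW).

```python
# Minimal sketch, not the authors' code: train concept embeddings on the
# automatically tagged corpus, then fine-tune them on the manual corpus
# (the "all-words WSD-fine" setting). File names and loader are hypothetical.
from gensim.models import Word2Vec

def load_concept_sequences(path):
    # One sentence per line; tokens are WLSP concept numbers or surface forms,
    # e.g. "1.4000 で 3.1200 1.3000 で は 3.1200 の か"
    with open(path, encoding="utf-8") as f:
        return [line.split() for line in f if line.strip()]

wsd_corpus = load_concept_sequences("bccwj_allwords_wsd_concepts.txt")  # large, auto-tagged
manual_corpus = load_concept_sequences("bccwj_manual_concepts.txt")     # small, hand-tagged

# Initial training (CBOW, 200 dimensions, window 5, batch size 1,000, 5 iterations;
# the batch size is mapped here to gensim's batch_words, an assumption).
model = Word2Vec(sentences=wsd_corpus, vector_size=200, window=5,
                 sg=0, batch_words=1000, epochs=5, min_count=5)

# Fine-tuning: keep the learned vectors as initial values and continue training
# on the new corpus; new concepts are added only if they reach min_count,
# mirroring the occurrence threshold mentioned below.
model.build_vocab(manual_corpus, update=True)
model.train(manual_corpus, total_examples=len(manual_corpus), epochs=model.epochs)

model.wv.save("allwords_wsd_fine.kv")
```

Swapping the two corpora in this sketch gives the manual-fine setting, and omitting the second stage gives the vectors trained on a single corpus.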
Fine-tuning is a method in which generated distributed representations are 2 https://code.google.com/archive/p/ word2vec/ given as initial values and retrained with a new corpus. The following four types of concept embeddings were created:", "cite_spans": [ { "start": 26, "end": 49, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF4" }, { "start": 50, "end": 72, "text": "Mikolov et al., 2013b;", "ref_id": "BIBREF5" }, { "start": 73, "end": 94, "text": "Mikolov et al., 2013c", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Vectors", "sec_num": "3.3" }, { "text": "\u2022 All-words WSD vector: concept embeddings were trained with the all-words WSD corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vectors", "sec_num": "3.3" }, { "text": "\u2022 All-words WSD-fine vector: concept embeddings were trained with the all-words WSD corpus and retrained with a manual corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vectors", "sec_num": "3.3" }, { "text": "\u2022 Manual vector: concept embeddings were trained with a manual corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vectors", "sec_num": "3.3" }, { "text": "\u2022 Manual-fine vector: concept embeddings were trained with a manual corpus and retrained with the all-words WSD corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vectors", "sec_num": "3.3" }, { "text": "When fine-tuning the embeddings, vectors of the new words in the new corpus were generated if the number of occurrences of the new words exceeded the threshold value.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vectors", "sec_num": "3.3" }, { "text": "We evaluated the concept embeddings using WLSP. Because WLSP has a tree structure, we assume that concepts that belong to the same node are similar to each other. Figure 2 presents an example of leaves of WLSP. In this figure, we assume that the concept of wolf is closer to that of hyena than that of cat or dog. Based on this assumption, evaluation of the generated concept embeddings was performed.", "cite_spans": [], "ref_spans": [ { "start": 163, "end": 171, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Evaluation of Concept Embeddings", "sec_num": "4" }, { "text": "Text", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Concept Embeddings", "sec_num": "4" }, { "text": "\u30e2\u30ce \u3067 \u306a\u304f \u5fc3 \u3067 \u306f \u306a\u3044 \u306e \u304b English translation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Concept Embeddings", "sec_num": "4" }, { "text": "It is not a thing but a heart, isn't it? Concept sequence 1.4000 \u3067 3.1200 1.3000 \u3067 \u306f 3.1200 \u306e \u304b ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Concept Embeddings", "sec_num": "4" }, { "text": "The evaluation procedures were as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "1. For each concept c of the concept embeddings e, identify a corresponding leaf node n in WLSP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "For example, if c is the concept of wolf, the corresponding node n includes concepts such as hyena. In Figure 2 , n is Leaf 1. 
In this method, we assume that every concept has at least two words so that the distance between them can be calculated.", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 111, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "2. Obtain a sibling leaf node set N of n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "A sibling leaf node set N includes a node that contains a concept such as cat and another node that contains a concept such as dog. In Figure 2 , N includes Leaves 2 and 3.", "cite_spans": [], "ref_spans": [ { "start": 135, "end": 144, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "3. Calculate d c , the average distance between e and the concept embeddings of all concepts in n except for c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "For this step, we calculated d c , the average distance between the concept embeddings of wolf and the concept embeddings of hyena and other concepts in n (Leaf 1). We used the arithmetic mean to average the distance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "4. Calculate the average distances d 1 ...d |N | between e and the concept embeddings of all concepts in each leaf node in N .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "We calculated the average distance between the concept embeddings of wolf and the concept embeddings of all concepts from the node containing cat, and obtained d 1 . Likewise, we calculated the average distance between the concept embeddings of wolf and the concept embeddings of all concepts from the node containing dog, and obtained d 2 . Following this step, we obtained the averaged distances", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "d 1 ...d |N | .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "5. Obtain the ranking of n compared with all nodes in N based on the average distance from e.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "We compared d 1 ...d |N | and d c , and obtained the ranking of d c . For example, if d c was the second shortest in d 1 ...d |N | and d c , n was in second place.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "6. Obtain the closest distance d close and the closest leaf node to e based on the average distance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "We obtained the closest leaf node to e. For example, if the closest leaf node to the concept wolf was the node that contained the concept dog, d 2 would be the shortest, which signifies that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "d close = d 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "7. Obtain d c \u2212 d close .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "We calculated d c \u2212 d close , which is the difference between the average distances from the concept in first place. 
In other words, we calculated the difference between the average distance from wolf to the concepts, such as hyena, in the node that wolf belongs to in WLSP, and the average distance from wolf to the concepts, such as dog, in the node that was in first place. If all rankings of n were first place, the difference would be zero.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "In this manner, we evaluated the generated concept embeddings using the ranking and d c \u2212 d close .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Procedure", "sec_num": "4.1" }, { "text": "For the parameters of word2vec, we used 200 dimensions, a window size of 5, a batch size of 1,000, and 5 iterations. We used CBOW as the algorithm. The training parameters used for fine-tuning were identical to the ones used when the original concept embeddings were generated in advance. Cosine similarity was used to compare the distances between the generated concept embeddings. Table 4 presents the average ranking of the correct nodes, that is, the nodes to which each concept whose embeddings were generated by this method belonged. Table 4 also displays the average difference between the closest leaf node and the correct nodes, and the average number of leaf nodes. This table indicates that the poorest average ranking among the concept embeddings was 6.868, for the manual vector. Because the average number of leaf nodes was 42, the average ranking of a randomly selected node was approximately 21. This suggests that, even when the concept embeddings were generated using the worst method, the ranking of the nodes produced better results than the random baseline. Table 4 indicates that the average ranking and difference of the all-words WSD-fine vector were smaller than those of the all-words WSD vector. In addition, the average ranking and difference of the manual-fine vector were smaller than those of the manual vector.", "cite_spans": [], "ref_spans": [ { "start": 399, "end": 406, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 556, "end": 563, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 1092, "end": 1099, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.2" }, { "text": "Smaller values of the average ranking and difference indicate better performance; therefore, these results demonstrate that fine-tuning improved the vectors. Table 4 also indicates that the results of the all-words WSD vector were superior to those of the manual vector, and that the all-words WSD-fine vector was superior to the manual-fine vector. The poorest results were associated with the manual vector. These results suggest that the all-words WSD corpus is effective for generating concept embeddings without fine-tuning or for the initial training before fine-tuning. We believe that a large corpus is necessary for generating improved embeddings. We used the same word2vec parameters for all vectors, which were tuned so that the manual vector, the method with the poorest performance, achieved its best results. The other three vectors (i.e., the all-words WSD vector, all-words WSD-fine vector, and manual-fine vector) could be improved if the parameters were tuned for each method. This is because the results often improve when the parameters are tuned according to the size and characteristics of the corpora. 
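To make the numbers reported in Table 4 (and in Table 5 below) concrete, the ranking evaluation of Section 4.1 can be sketched as follows. This is not the authors' code; it assumes the WLSP tree is available as hypothetical mappings leaf_of (concept to its leaf node), members (leaf node to its concepts), and siblings (leaf node to its sibling leaves), with emb mapping each concept number to its vector.

```python
# Minimal sketch (assumed data structures, not the authors' code) of the
# ranking-based evaluation in Section 4.1, using cosine distance.
import numpy as np

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def avg_distance(e, concepts, emb, exclude=None):
    ds = [cosine_distance(e, emb[c]) for c in concepts if c in emb and c != exclude]
    return sum(ds) / len(ds) if ds else None

def evaluate_concept(c, emb, leaf_of, members, siblings):
    e = emb[c]
    n = leaf_of[c]                                      # step 1: correct leaf node
    d_c = avg_distance(e, members[n], emb, exclude=c)   # step 3 (n is assumed to hold >= 2 concepts)
    sib = [avg_distance(e, members[m], emb) for m in siblings[n]]  # step 4
    sib = [d for d in sib if d is not None]
    ordered = sorted(sib + [d_c])
    rank = ordered.index(d_c) + 1                       # step 5: ranking of n
    d_close = ordered[0]                                # step 6: closest leaf node
    return rank, d_c - d_close                          # step 7

# Averaging rank and (d_c - d_close) over all concepts gives the two columns
# reported for each set of concept embeddings.
```

Under these assumptions, the average of rank over all concepts corresponds to the "Avg. Ranking" column, and the average of d c \u2212 d close to the "Avg. Difference from First Place" column.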
Table 5 presents the evaluation results of the all-words WSD vector and the manual vector generated with 10 iterations. The other parameters are identical to those used in the experiments presented in Table 4 . The results in Table 5 are inferior to those in Table 4 ; therefore, extensive experiments are necessary to tune the parameters suitable for each corpus.", "cite_spans": [], "ref_spans": [ { "start": 158, "end": 165, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 1149, "end": 1156, "text": "Table 5", "ref_id": "TABREF4" }, { "start": 1342, "end": 1349, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 1367, "end": 1374, "text": "Table 5", "ref_id": "TABREF4" }, { "start": 1400, "end": 1407, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "The number of words in the all-words WSD corpus was approximately 69 times larger than the number of words in the manual corpus (see Table 3 ). In addition, according to , the accuracy of the WSD system was approximately 80% for all words and approximately 70% for all ambiguous words in the test corpus (the annotation data of WLSP). In our experiments, the test corpus would be identical to the manual corpus, and to a sub-corpus of the all-words WSD corpus if its concept tags were removed and manually tagged. Therefore, we assume that the accuracy of the all-words WSD corpus would be approximately 70% or 80%. The results of the concept embeddings trained with the all-words WSD corpus were superior to the results of the concept embeddings trained with the manual corpus, regardless of whether fine-tuning was used. This demonstrates that the all-words WSD corpus was superior to the manual corpus for generating concept embeddings. In other words, our experiments revealed that the corpus that was concept-tagged with 70% or 80% accuracy and whose size was approximately 69 times larger was more suitable for generating concept embeddings. However, this does not necessarily mean that, when generating concept embeddings, the corpus size is more important than the accuracy of the concept tags. Therefore, we conducted additional experiments to investigate the effect of the size of the all-words WSD corpus. Table 6 presents the average ranking of the correct nodes, the average difference from the concept in first place, and the number of leaf nodes according to the size of the all-words WSD corpus. We tested 10% to 100% of the entire corpus in increments of 10%. This table indicates that the average ranking monotonically improved from 10% to 60%, fluctuated between 70% and 90%, and achieved the best value when the entire corpus was used.", "cite_spans": [], "ref_spans": [ { "start": 133, "end": 140, "text": "Table 3", "ref_id": "TABREF1" }, { "start": 1398, "end": 1405, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Finally, according to Table 4 , we can observe the effect of the order of the data used for training and retraining of the embeddings. The all-words WSD-fine vector and the manual-fine vector both use the manual corpus and the all-words WSD corpus; the difference between the two methods is the order of the data. 
This indicates that not only the size of the data but also the order of the data used for training and fine-tuning is important for improving the quality of the embeddings.", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 29, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "However, additional experiments are necessary to investigate the relationship between accuracy and corpus size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "For future work, the skip-gram algorithm of word2vec could be tried instead of CBOW. Other word embedding methods, such as GloVe or fastText, could also be options.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "In this study, we generated concept embeddings using a concept-tagged corpus that was tagged by an all-words WSD system, and using fine-tuning. In addition, we evaluated the concept embeddings using rankings measured by the distances between the concept embeddings based on the tree structure of WLSP. We compared four concept embeddings: 1) concept embeddings that were trained with a concept-tagged corpus tagged by an all-words WSD system, 2) concept embeddings that were trained with a small, manually tagged corpus, 3) concept embeddings of 1) that were fine-tuned with a small, manually tagged corpus, and 4) concept embeddings of 2) that were fine-tuned with a concept-tagged corpus tagged by an all-words WSD system. Experiments revealed that fine-tuning was effective in generating better concept embeddings when we utilized a small, manually tagged corpus and a corpus that was concept-tagged by an all-words WSD system. 
The all-words WSD-fine vector, which represented the concept embeddings initially trained with a large corpus automatically tagged by an all-words WSD system and fine-tuned with a small, manually tagged corpus, was superior when the concept embeddings were evaluated using the tree structure of WLSP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Percentage of corpus used 10%", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "30% 40% 50% 60% 70% 80% 90% 100% All-words WSD vector 5.458 4.531 4.156 4.055 3.843 3.707 3.848 3.705 3.770 3.455 All-words WSD-fine vector 4.689 4.004 3.750 3.663 3.449 3.474 3.470 3.447 3.613 3.087 manual-fine vector 5.184 4.694 4.331 4.205 3.896 3.888 4.054 3.917 4.017 3.619 Table 6 : Evaluation by ranking using distance according to the size of the all-words word sense disambiguation (WSD) corpus", "cite_spans": [], "ref_spans": [ { "start": 279, "end": 286, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "20%", "sec_num": null }, { "text": "Concept refers to a meaning unit of Word List by Semantic Principles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by JSPS KAKENHI Grants Number 18K11421, 17H00917, and a project of the Center for Corpus Development, NINJAL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Senseval-2: Overview", "authors": [ { "first": "Philip", "middle": [], "last": "Edmonds", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Cotton", "suffix": "" } ], "year": 2001, "venue": "Proceedings of *SEMEVAL 2001", "volume": "", "issue": "", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Edmonds and Scott Cotton. 2001. Senseval-2: Overview. In Proceedings of *SEMEVAL 2001, pages 1--5.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Embeddings for word sense disambiguation: An evaluation study", "authors": [ { "first": "Ignacio", "middle": [], "last": "Iacobacci", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Taher Pilehvar", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL 2016", "volume": "", "issue": "", "pages": "897--907", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Embeddings for word sense disambiguation: An evaluation study. In Proceedings of ACL 2016, pages 897--907.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Annotation of 'word list by semantic principles' labels for the balanced corpus of contemporary written Japanese", "authors": [ { "first": "Sachi", "middle": [], "last": "Kato", "suffix": "" }, { "first": "Masayuki", "middle": [], "last": "Asahara", "suffix": "" }, { "first": "Makoto", "middle": [], "last": "Yamazaki", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation", "volume": "", "issue": "", "pages": "1--3", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sachi Kato, Masayuki Asahara, and Makoto Yamazaki. 2018. Annotation of 'word list by semantic princi- ples' labels for the balanced corpus of contemporary written Japanese. 
In Proceedings of the 32nd Pacific Asia Conference on Language, Information and Com- putation, Hong Kong, 1-3 December. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Balanced corpus of contemporary written Japanese. Language resources and evaluation", "authors": [ { "first": "Kikuo", "middle": [], "last": "Maekawa", "suffix": "" }, { "first": "Makoto", "middle": [], "last": "Yamazaki", "suffix": "" }, { "first": "Toshinobu", "middle": [], "last": "Ogiso", "suffix": "" }, { "first": "Takehiko", "middle": [], "last": "Maruyama", "suffix": "" }, { "first": "Hideki", "middle": [], "last": "Ogura", "suffix": "" }, { "first": "Wakako", "middle": [], "last": "Kashino", "suffix": "" }, { "first": "Hanae", "middle": [], "last": "Koiso", "suffix": "" }, { "first": "Masaya", "middle": [], "last": "Yamaguchi", "suffix": "" }, { "first": "Makiro", "middle": [], "last": "Tanaka", "suffix": "" }, { "first": "Yasuharu", "middle": [], "last": "Den", "suffix": "" } ], "year": 2014, "venue": "", "volume": "48", "issue": "", "pages": "345--371", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kikuo Maekawa, Makoto Yamazaki, Toshinobu Ogiso, Takehiko Maruyama, Hideki Ogura, Wakako Kashino, Hanae Koiso, Masaya Yamaguchi, Makiro Tanaka, and Yasuharu Den. 2014. Balanced corpus of con- temporary written Japanese. Language resources and evaluation, 48(2):345-371.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of ICLR Workshop 2013", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. In Proceedings of ICLR Work- shop 2013, pages 1-12.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of NIPS 2013", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS 2013, pages 1-9.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Linguistic regularities in continuous space word representations", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Wen Tau Yih", "suffix": "" }, { "first": "", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 2013, "venue": "Proceedings of NAACL 2013", "volume": "", "issue": "", "pages": "746--751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Wen tau Yih, and Geoffrey Zweig. 
2013c. Linguistic regularities in continuous space word representations. In Proceedings of NAACL 2013, pages 746-751.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "National Institute for Japanese Language and Linguistics", "authors": [], "year": 1964, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "National Institute for Japanese Language and Linguis- tics. 1964. Word List by Semantic Principles. Shuuei Shuppan, In Japanese.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Word sense disambiguation: A unified evaluation framework and empirical comparison", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "Kenneth", "middle": [ "C" ], "last": "Litkowski", "suffix": "" }, { "first": "Orin", "middle": [], "last": "Hargraves", "suffix": "" } ], "year": 2007, "venue": "Proceedings of *SEMEVAL 2007", "volume": "", "issue": "", "pages": "30--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli, Kenneth C. Litkowski, and Orin Har- graves. 2007. Word sense disambiguation: A unified evaluation framework and empirical comparison. In Proceedings of *SEMEVAL 2007, pages 30-35.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Construction of word sense embeddings from word embeddings using synonyms", "authors": [ { "first": "Katsuyuki", "middle": [], "last": "Ouchi", "suffix": "" }, { "first": "Hiroyuki", "middle": [], "last": "Shinnou", "suffix": "" }, { "first": "Kanako", "middle": [], "last": "Komiya", "suffix": "" }, { "first": "Minoru", "middle": [], "last": "Sasaki", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NLP 2016", "volume": "", "issue": "", "pages": "99--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katsuyuki Ouchi, Hiroyuki Shinnou, Kanako Komiya, and Minoru Sasaki. 2016. Construction of word sense embeddings from word embeddings using synonyms. Proceedings of NLP 2016 (in Japanese), pages 99- 102.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Neural sequence learning models for word sense disambiguation", "authors": [ { "first": "Alessandro", "middle": [], "last": "Raganato", "suffix": "" }, { "first": "Claudio", "middle": [ "Delli" ], "last": "Bovi", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2017, "venue": "Proceedings of EMNLP 2017", "volume": "", "issue": "", "pages": "1156--1167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandro Raganato, Claudio Delli Bovi, and Roberto Navigli. 2017a. Neural sequence learning models for word sense disambiguation. In Proceedings of EMNLP 2017, pages 1156-1167.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Semeval-2007 task 07: Coarse-grained english all-words task", "authors": [ { "first": "Alessandro", "middle": [], "last": "Raganato", "suffix": "" }, { "first": "Jose", "middle": [], "last": "Camacho-Collados", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2017, "venue": "Proceedings of EACL 2017", "volume": "", "issue": "", "pages": "99--110", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandro Raganato, Jose Camacho-Collados, and Roberto Navigli. 2017b. Semeval-2007 task 07: Coarse-grained english all-words task. 
In Proceedings of EACL 2017, pages 99-110.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "All-words wsd with wlsp number as s sense label using a bidirectional lstm", "authors": [ { "first": "Hiroyuki", "middle": [], "last": "Shinnou", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Kanako", "middle": [], "last": "Komiya", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Language Resources", "volume": "", "issue": "", "pages": "2--4", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroyuki Shinnou, Rui Suzuki, and Kanako Komiya. 2018. All-words wsd with wlsp number as s sense la- bel using a bidirectional lstm. Proceedings of the Lan- guage Resources Workshop 2018 (in Japanese), pages 2-4.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The english all-words task", "authors": [ { "first": "Benjamin", "middle": [], "last": "Snyder", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2004, "venue": "Proceedings of *SEMEVAL 2004", "volume": "", "issue": "", "pages": "41--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Snyder and Martha Palmer. 2004. The english all-words task. In Proceedings of *SEMEVAL 2004, pages 41-43.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "All-words word sense disambiguation using concept embeddings. Proceedings of", "authors": [ { "first": "Rui", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Kanako", "middle": [], "last": "Komiya", "suffix": "" }, { "first": "Masayuki", "middle": [], "last": "Asahara", "suffix": "" }, { "first": "Minoru", "middle": [], "last": "Sasaki", "suffix": "" }, { "first": "Hiroyuki", "middle": [], "last": "Shinnou", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "1006--1011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rui Suzuki, Kanako Komiya, Masayuki Asahara, Minoru Sasaki, and Hiroyuki Shinnou. 2018. All-words word sense disambiguation using concept embeddings. Pro- ceedings of LREC 2018, pages 1006-1011.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Construction of word sense embeddings using training data", "authors": [ { "first": "Shoma", "middle": [], "last": "Yamaki", "suffix": "" }, { "first": "Hiroyuki", "middle": [], "last": "Shinnou", "suffix": "" }, { "first": "Kanako", "middle": [], "last": "Komiya", "suffix": "" }, { "first": "Minoru", "middle": [], "last": "Sasaki", "suffix": "" } ], "year": 2017, "venue": "Proceedings of NLP 2017", "volume": "", "issue": "", "pages": "78--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shoma Yamaki, Hiroyuki Shinnou, Kanako Komiya, and Minoru Sasaki. 2017. Construction of word sense embeddings using training data. Proceedings of NLP 2017 (in Japanese), pages 78-81.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Tree structure of Word List by Semantic Principles 2018", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "1 ...d |N | and d c , and obtained the ranking of d c . For example, if d c was the second shortest in d 1 ...d |N | and d c , n was in second place.", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "html": null, "type_str": "table", "content": "
Corpus Words Vocabulary Concepts
All-words WSD corpus 23,968,826 75,028 851
Manual corpus 347,094 3,164 916
", "text": "Example of concept-tagged corpus", "num": null }, "TABREF1": { "html": null, "type_str": "table", "content": "", "text": "Number of words, vocabulary, and concepts in each corpus", "num": null }, "TABREF2": { "html": null, "type_str": "table", "content": "
All-words WSD vector 2.945 0.059 42
All-words WSD-fine vector 2.644 0.046 42
Manual vector 6.868 0.102 42
Manual-fine vector 3.143 0.049 42
", "text": "Concept Embeddings Avg. Ranking Avg. Difference from First Place Number of Leaf Nodes", "num": null }, "TABREF3": { "html": null, "type_str": "table", "content": "
: Evaluation by ranking measured by distance
Concept Embeddings Avg. Ranking Avg. Difference from First Place Number of Leaf Nodes
All-words WSD vector 3.217 0.043 42
Manual vector 7.52 0.105 42
", "text": "", "num": null }, "TABREF4": { "html": null, "type_str": "table", "content": "", "text": "Evaluation by ranking using distance with 10 iterations", "num": null } } } }