{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:26:53.571890Z" }, "title": "CogALex-VI Shared Task: Transrelation -A Robust Multilingual Language Model for Multilingual Relation Identification", "authors": [ { "first": "Lennart", "middle": [], "last": "Wachowiak", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Vienna", "location": { "settlement": "Vienna", "country": "Austria" } }, "email": "lennartw99@univie.ac.at" }, { "first": "Christian", "middle": [], "last": "Lang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Vienna / Vienna", "location": { "country": "Austria" } }, "email": "" }, { "first": "Barbara", "middle": [], "last": "Heinisch", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Vienna / Vienna", "location": { "country": "Austria" } }, "email": "barbara.heinisch@univie.ac.at" }, { "first": "Dagmar", "middle": [], "last": "Gromann", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Vienna / Vienna", "location": { "country": "Austria" } }, "email": "dagmar.gromann@univie.ac.at" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe our submission to the CogALex-VI shared task on the identification of multilingual paradigmatic relations building on XLM-RoBERTa (XLM-R), a robustly optimized and multilingual BERT model. In spite of several experiments with data augmentation, data addition and ensemble methods with a Siamese Triple Net, Translrelation, the XLM-R model with a linear classifier adapted to this specific task, performed best in testing and achieved the best results in the final evaluation of the shared task, even for a previously unseen language.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We describe our submission to the CogALex-VI shared task on the identification of multilingual paradigmatic relations building on XLM-RoBERTa (XLM-R), a robustly optimized and multilingual BERT model. In spite of several experiments with data augmentation, data addition and ensemble methods with a Siamese Triple Net, Translrelation, the XLM-R model with a linear classifier adapted to this specific task, performed best in testing and achieved the best results in the final evaluation of the shared task, even for a previously unseen language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Determining whether a semantic relation exists between words and which type of relation it represents is a central challenge in numerous NLP tasks, such as extracting terminological concept systems and paraphrase generation. Adding a multilingual dimension renders this task at the same time more relevant and more challenging. Recent approaches rely on aligned vector spaces for individual languages (Bojanowski et al., 2017) or meta-learning approaches (Yu et al., 2020) for hypernymy detection and a Siamese Triple Net for antonymy-synonymy distinction inherent in word embeddings (Samenko et al., 2020) . However, in general a distinction of paradigmatic relations with word embeddings is difficult (im Walde, 2020) . In a multilingual scenario, frequently lexical resources are utilized to reinforce the model's transfer learning abilities (Geng et al., 2020) . 
Given relatively small training datasets and the necessity to support a previously unknown language, we decided to rely on a multilingual pretrained language model. The CogALex-VI shared task focuses on the identification of semantic relations of the types synonymy (e.g. chap and man), antonymy (e.g. big and small), hypernymy (e.g. screech and noise), or random (e.g. ink and closure) between a given word pair. Random indicates that the word pair is unrelated. The shared task comprised two subtasks. For the first subtask, participating teams were allowed to design monolingual systems and were provided training and validation data for Mandarin Chinese, German, and English. For the second subtask, participating teams were expected to design a single multilingual system that can correctly classify semantic relations in all three languages as well as in a previously unknown surprise language, which turned out to be Italian. Additional resources were permitted, with the exclusion of anything related to WordNet (Miller, 1995) or ConceptNet (Liu and Singh, 2004).", "cite_spans": [ { "start": 401, "end": 426, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF1" }, { "start": 455, "end": 472, "text": "(Yu et al., 2020)", "ref_id": "BIBREF20" }, { "start": 584, "end": 606, "text": "(Samenko et al., 2020)", "ref_id": "BIBREF12" }, { "start": 703, "end": 719, "text": "(im Walde, 2020)", "ref_id": "BIBREF6" }, { "start": 845, "end": 864, "text": "(Geng et al., 2020)", "ref_id": "BIBREF5" }, { "start": 1891, "end": 1905, "text": "(Miller, 1995)", "ref_id": "BIBREF11" }, { "start": 1920, "end": 1941, "text": "(Liu and Singh, 2004)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our initial intention was to target the second subtask with a multilingual system relying on the state-of-the-art multilingual model XLM-RoBERTa (XLM-R) (Conneau et al., 2020), adapted to the task at hand with a linear layer and the CogALex-VI training datasets, a model we call Transrelation, which we provide within the Text to Terminological Concept System (Text2TCS) 1 project. To support the model's ability to distinguish relations, we experimented with data augmentation, data addition and ensemble methods, joining Transrelation 2 with a model trained on a Siamese Triplet Net. Finally, the adapted XLM-R model outperformed all our other experiments as well as all other models submitted to CogALex-VI on both subtasks.", "cite_spans": [ { "start": 152, "end": 174, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Lexico-semantic relations, also called semantic or lexical semantic relations, represent the major organizing means for structuring lexical knowledge. A common distinction for such relations is between paradigmatic and syntagmatic relations, where the former holds between natural language expressions that could occur in the same position in a sentence and the latter refers to co-occurring elements. The importance of paradigmatic relations might differ by word class (im Walde, 2020), i.e., hypernymy is particularly central for the organization of nouns but less important for organizing verbs.
In the CogALex-VI shared task, all relations are paradigmatic; these are particularly difficult to distinguish with regular word embedding models, especially across different word classes (im Walde, 2020).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexico-Semantic Relations", "sec_num": "2.1" }, { "text": "Recent approaches to identifying hypernym relations in a multilingual setting either align fastText embeddings (Bojanowski et al., 2017) of different languages into a single vector space (Wang et al., 2019) or train models on different fastText embeddings with the help of meta-learning algorithms (Yu et al., 2020). Synonym and antonym differentiation has been a key problem for automatic relation identification and has in the past been tackled with partial success using word alignment over large multilingual corpora with statistical methods to determine distributional similarity (van der Plas and Tiedemann, 2006) or statistical translation to a pivot language for synonymy discovery (Wittmann et al., 2014). Samenko et al. (2020) utilize Siamese Triplet Nets (Bromley et al., 1994) to train so-called contrasting maps, vector representations trained on monolingual embeddings that reinforce the distinction between antonyms and synonyms. Approaches that tackle all three relations at once in a multilingual environment frequently rely on active transfer learning and lexical resources (Geng et al., 2020) or prototypical vector representations for each type of relation (im Walde, 2020).", "cite_spans": [ { "start": 110, "end": 135, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF1" }, { "start": 200, "end": 219, "text": "(Wang et al., 2019)", "ref_id": "BIBREF17" }, { "start": 340, "end": 357, "text": "(Yu et al., 2020)", "ref_id": "BIBREF20" }, { "start": 628, "end": 662, "text": "(van der Plas and Tiedemann, 2006)", "ref_id": "BIBREF16" }, { "start": 733, "end": 756, "text": "(Wittmann et al., 2014)", "ref_id": "BIBREF18" }, { "start": 759, "end": 780, "text": "Samenko et al. (2020)", "ref_id": "BIBREF12" }, { "start": 809, "end": 831, "text": "(Bromley et al., 1994)", "ref_id": "BIBREF2" }, { "start": 1135, "end": 1154, "text": "(Geng et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Relation Identification", "sec_num": "2.2" }, { "text": "Recent advances in the field of natural language processing are based on deep neural language models, which can be pretrained on large amounts of data in an unsupervised fashion and are afterwards fine-tuned on a specific task, making use of the previously learned language representations. One of the most prominent examples of such models is BERT (Devlin et al., 2018), which utilizes the now ubiquitous Transformer architecture. Compared to earlier approaches like word2vec (Mikolov et al., 2013) and fastText (Bojanowski et al., 2017), the word embeddings generated by these deep neural language models are context-specific, i.e., a word's embedding changes depending on its surrounding words. Language models do not have to be monolingual; pretraining can be extended to multiple languages at the same time, e.g. by making use of a shared subword vocabulary.
Prominent examples are multilingual BERT and the more recent XLM-R (Conneau et al., 2020).", "cite_spans": [ { "start": 348, "end": 369, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF4" }, { "start": 470, "end": 492, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF10" }, { "start": 506, "end": 531, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF1" }, { "start": 931, "end": 953, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Language Models", "sec_num": "2.3" }, { "text": "3 System Description 3.1 Architecture", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Models", "sec_num": "2.3" }, { "text": "Our system makes use of the multilingual language model XLM-R (Conneau et al., 2020). We use the implementation provided by the transformers library (Wolf et al., 2019), which offers the XLM-R model pretrained on 100 different languages using CommonCrawl data. We use the base model size, which has fewer parameters than the large version of XLM-R but performed equally well in our experiments. A linear layer is added on top of the pooled output to allow for classification into one of the four possible classes, i.e., three semantic relations or random.", "cite_spans": [ { "start": 62, "end": 84, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF3" }, { "start": 150, "end": 169, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Language Models", "sec_num": "2.3" }, { "text": "The CogALex-VI shared task provided training and validation datasets in English (Santus et al., 2015), German (Scheible and Im Walde, 2014) and Mandarin Chinese (Liu et al., 2019). The test data for the surprise language Italian were taken from Sucameli and Lenci (2017). Word pair counts for the training datasets are provided in Table 1.
English 916 998 842 2554
German 829 841 782 2430
Chinese 361 421 402 1330", "cite_spans": [ { "start": 80, "end": 101, "text": "(Santus et al., 2015)", "ref_id": "BIBREF13" }, { "start": 111, "end": 140, "text": "(Scheible and Im Walde, 2014)", "ref_id": "BIBREF14" }, { "start": 162, "end": 180, "text": "(Liu et al., 2019)", "ref_id": "BIBREF9" }, { "start": 247, "end": 272, "text": "Sucameli and Lenci (2017)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 334, "end": 341, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 344, "end": 431, "text": "English 916 998 842 2554 German 829 841 782 2430 Chinese 361 421 402 1330", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Datasets", "sec_num": "3.2" }, { "text": "The input provided to the model consists of a word pair labeled with a relation, surrounded by XLM-R-specific classification and sequence separation tokens, as well as additional padding tokens, which guarantee that all inputs have the same length. For instance, the input pair tiger and animal is encoded as '<s>', '\u2581tiger', '</s>', '</s>', '\u2581animal', '</s>', excluding the padding tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input and Preprocessing", "sec_num": "3.3" }, { "text": "This model was then trained on the training datasets (see Table 1 ) in three languages simultaneously. Hyperparameters were fine-tuned manually and via grid search on the given validation sets. The best results were achieved with the following hyperparameters: Optimizer: AdamW, Learning rate = 2e-5, Epsilon = 1e-8, Weight Decay = 0, Warm-up steps = 0, Epochs = 7, Batch size = 32.
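To make the training setup concrete, the following minimal sketch (not the authors' released code) shows how XLM-R base with a linear classification head can be fine-tuned with the reported hyperparameters using the transformers library; the two example pairs, the label order, and the maximum sequence length are placeholder assumptions.

```python
# Minimal fine-tuning sketch for the described setup (illustrative, not the authors' code).
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification

LABELS = ["SYN", "ANT", "HYP", "RANDOM"]  # label order is an assumption

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
# XLM-R base with a linear classification head on top of the pooled output.
model = XLMRobertaForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(LABELS)
)

# Placeholder word pairs; the real training data mixes English, German, and Chinese.
pairs = [("tiger", "animal"), ("big", "small")]
labels = torch.tensor([LABELS.index("HYP"), LABELS.index("ANT")])

# Encoding a word pair as a sequence pair yields '<s> word1 </s></s> word2 </s>'
# plus padding, matching the input format described in Section 3.3.
enc = tokenizer(
    [p[0] for p in pairs], [p[1] for p in pairs],
    padding="max_length", max_length=16, return_tensors="pt"
)

loader = DataLoader(
    TensorDataset(enc["input_ids"], enc["attention_mask"], labels),
    batch_size=32, shuffle=True
)

# Hyperparameters as reported: AdamW, lr 2e-5, eps 1e-8, no weight decay,
# no warm-up steps, 7 epochs, batch size 32.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, eps=1e-8, weight_decay=0.0)

model.train()
for epoch in range(7):
    for input_ids, attention_mask, y in loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()
        optimizer.step()
```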
Table 2 shows the results of our model on the four provided test sets. The computed score is a weighted F1-score excluding unrelated words labeled with RANDOM. The strongest performance can be observed for Chinese with a weighted F1-score of 0.881. English and German lag far behind with scores of 0.517 and 0.500, respectively. Interestingly, the model performs nearly as well on the Italian test set with a score of 0.477, although it had not been trained on this language, demonstrating the remarkable zero-shot learning abilities of XLM-R. Fig. 1 shows the normalized confusion matrix based on the combined results on all four test sets. Besides confusing meaningful relations with RANDOM, which can be explained by the fact that RANDOM is the majority class, the highest confusion exists between hypernyms and synonyms. For Chinese, for instance, 19 HYP/SYN-labeled test examples were confused. In 11 of these pairs, some characters in one sequence are present in the other, such as \u6d77\u6c34-\u6c34 (sea water - water) (label: HYP) and \u8239-\u8239\u8236 (ship/boat - ship) (label: SYN). This also occurred in four SYN/ANT-labeled examples, e.g. \u7121\u7dda-\u6709\u7dda (wireless - wired) (gold: ANT). For the remainder of the wrongly classified SYN/ANT examples, our model frequently selected RANDOM, e.g. \u79c1\u4eba-\u516c\u7acb (private individual - public) (gold: ANT). The learning curve shown in Fig. 2 plots the achieved weighted F1-score in relation to the number of samples in the training set. For each training set size we trained four models and reported the highest observed score. The model greatly benefits from additional training samples when the training set size is below 8,000. However, the usefulness of adding more data diminishes quickly, as the learning curve seems to plateau towards the end. This was confirmed when we added further training data to the data provided by CogALex-VI while observing the WordNet/ConceptNet exclusion.", "cite_spans": [], "ref_spans": [ { "start": 58, "end": 65, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 382, "end": 389, "text": "Table 2", "ref_id": null }, { "start": 930, "end": 936, "text": "Fig. 1", "ref_id": "FIGREF0" }, { "start": 1730, "end": 1736, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "Training and Hyperparameters", "sec_num": "3.4" }, { "text": "In additional experiments, we trained a Siamese Triplet Net (Bromley et al., 1994) to learn meta-embeddings that contrast synonyms and antonyms, which we also tried for hypernym and synonym distinction. However, an ensemble method combining this model and XLM-R performed worse than XLM-R on its own. Due to our model's strong performance in Chinese, we also experimented with data augmentation by machine-translating the training and validation sets from Chinese to the other languages. The model's performance on these translated datasets was, however, considerably worse than on the original untranslated datasets alone. Additionally, models trained on individual languages, or trained consecutively on one language after another, lagged considerably behind our final model. Given the vast differences in model performance on the different languages, we briefly analyzed the data quality. The confusion matrix in Fig. 1 makes evident that our model tended to confuse hypernyms and synonyms as well as random and antonyms.
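For reference, the weighted F1-score used in the evaluation above, which excludes the RANDOM class, can be reproduced with scikit-learn; the gold and predicted label lists below are hypothetical examples, not actual shared-task outputs.

```python
# Sketch of the scoring: weighted F1 over SYN/ANT/HYP only, ignoring RANDOM.
from sklearn.metrics import f1_score

gold = ["SYN", "HYP", "RANDOM", "ANT", "HYP"]  # hypothetical gold labels
pred = ["SYN", "SYN", "RANDOM", "ANT", "HYP"]  # hypothetical predictions

# The labels argument restricts both the per-class scores and the
# support-weighted average to the three semantic relations.
score = f1_score(gold, pred, labels=["SYN", "ANT", "HYP"], average="weighted")
print(f"Weighted F1 (excluding RANDOM): {score:.3f}")
```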
A brief check on the German data, where the model performed worst among the training languages, showed that some word pairs labeled as hypernyms, e.g. fett (fat) - dick (plump), unruhig (anxious/restless) - erregt (excited/aroused), and radikal (radical) - drastisch (radical/extreme), could instead be understood as synonyms by human classifiers. Additional training data not related to WordNet or ConceptNet that we experimented with (e.g. Kober et al. (2020)) had similar issues, and data addition did not improve the performance of either tested model. So, on the one hand, we attribute this confusion problem of our model to word pairs that might easily be confused by human users as well. On the other hand, the number of training examples was rather low, and data augmentation or data addition with high-quality data might have considerably improved performance.", "cite_spans": [ { "start": 59, "end": 81, "text": "(Bromley et al., 1994)", "ref_id": "BIBREF2" }, { "start": 1469, "end": 1488, "text": "Kober et al. (2020)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 925, "end": 931, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Given that the semantics of these examples change with context, we believe that providing words in context could be one way to alleviate this misclassification problem. One curious example underlining this issue was the result we got for the surprise language Italian, not seen during training, where farfalla (butterfly) and coccinella (ladybug) are labeled as antonyms, while our system labeled the pair as synonyms. Since both can be used to lovingly refer to a young female person in Italian, the result of our system could be regarded as correct if the words are understood in this sense. Further such examples can be found in great numbers in the training, validation and test datasets. Curiously, performance on Mandarin Chinese did not seem to be impacted as heavily by this problem, which might be because its training data were compiled from a different source of different quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "In this paper, we presented Transrelation, our system for the CogALex-VI shared task on multilingual relation identification. We experimented with data addition, data augmentation and ensemble methods joining pretrained transformer-based models with a Siamese Triplet Net. The final system, based on the multilingual pretrained language model XLM-R, turned out to be the winning submission and delivered a strong performance on all four languages, including the previously unseen surprise language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In the future, it would be interesting to apply ideas from curriculum learning (Bengio et al., 2009) or meta-learning, as already done for simpler models in the case of hypernymy detection (Yu et al., 2020), to improve the learning process of our model. This would especially apply to similar scenarios with little available training data. Furthermore, it would be interesting to evaluate the model's performance on different lexico-semantic relations as well as on languages from different language families, e.g. 
Slavic.", "cite_spans": [ { "start": 79, "end": 100, "text": "(Bengio et al., 2009)", "ref_id": "BIBREF0" }, { "start": 189, "end": 206, "text": "(Yu et al., 2020)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [ { "text": "This work has been supported by the project Text2TCS, funded by the European Language Grid H2020 (grant number 825627).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Curriculum learning", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "J\u00e9r\u00f4me", "middle": [], "last": "Louradour", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 26th annual international conference on machine learning", "volume": "", "issue": "", "pages": "41--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, J\u00e9r\u00f4me Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41-48.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Signature verification using a \"siamese\" time delay neural network", "authors": [ { "first": "Jane", "middle": [], "last": "Bromley", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Guyon", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "S\u00e4ckinger", "suffix": "" }, { "first": "Roopak", "middle": [], "last": "Shah", "suffix": "" } ], "year": 1994, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "737--744", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard S\u00e4ckinger, and Roopak Shah. 1994. Signature verification using a \"siamese\" time delay neural network. 
In Advances in neural information processing systems, pages 737-744.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, July.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Semantic relation extraction using sequential and tree-structured lstm with attention", "authors": [ { "first": "Zhiqiang", "middle": [], "last": "Geng", "suffix": "" }, { "first": "Guofei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yongming", "middle": [], "last": "Han", "suffix": "" }, { "first": "Gang", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Fang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "Information Sciences", "volume": "509", "issue": "", "pages": "183--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "ZhiQiang Geng, GuoFei Chen, YongMing Han, Gang Lu, and Fang Li. 2020. Semantic relation extraction using sequential and tree-structured lstm with attention. Information Sciences, 509:183-192.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Distinguishing between paradigmatic semantic relations across word classes: human ratings and distributional similarity", "authors": [ { "first": "Sabine", "middle": [], "last": "Schulte Im Walde", "suffix": "" } ], "year": 2020, "venue": "Journal of Language Modelling", "volume": "8", "issue": "1", "pages": "53--101", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Schulte im Walde. 2020. Distinguishing between paradigmatic semantic relations across word classes: human ratings and distributional similarity. 
Journal of Language Modelling, 8(1):53-101.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Data augmentation for hypernymy detection", "authors": [ { "first": "Thomas", "middle": [], "last": "Kober", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Weeds", "suffix": "" }, { "first": "Lorenzo", "middle": [], "last": "Bertolini", "suffix": "" }, { "first": "David", "middle": [], "last": "Weir", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Kober, Julie Weeds, Lorenzo Bertolini, and David Weir. 2020. Data augmentation for hypernymy detection. ArXiv e-prints.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Conceptnet-a practical commonsense reasoning tool-kit", "authors": [ { "first": "Hugo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Push", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2004, "venue": "BT technology journal", "volume": "22", "issue": "4", "pages": "211--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hugo Liu and Push Singh. 2004. Conceptnet-a practical commonsense reasoning tool-kit. BT technology journal, 22(4):211-226.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Semantic relata for the evaluation of distributional models in mandarin chinese", "authors": [ { "first": "Hongchao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Emmanuele", "middle": [], "last": "Chersoni", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Klyueva", "suffix": "" }, { "first": "Enrico", "middle": [], "last": "Santus", "suffix": "" }, { "first": "Chu-Ren", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "IEEE access", "volume": "7", "issue": "", "pages": "145705--145713", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongchao Liu, Emmanuele Chersoni, Natalia Klyueva, Enrico Santus, and Chu-Ren Huang. 2019. Semantic relata for the evaluation of distributional models in mandarin chinese. IEEE access, 7:145705-145713.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Wordnet: a lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. 
Communications of the ACM, 38(11):39-41.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Synonyms and Antonyms: Embedded Conflict", "authors": [ { "first": "Igor", "middle": [], "last": "Samenko", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Tikhonov", "suffix": "" }, { "first": "Ivan", "middle": [ "P" ], "last": "Yamshchikov", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.12835v1" ] }, "num": null, "urls": [], "raw_text": "Igor Samenko, Alexey Tikhonov, and Ivan P. Yamshchikov. 2020. Synonyms and Antonyms: Embedded Conflict. arXiv:2004.12835v1 [cs].", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Evalution 1.0: an evolving semantic dataset for training and evaluation of distributional semantic models", "authors": [ { "first": "Enrico", "middle": [], "last": "Santus", "suffix": "" }, { "first": "Frances", "middle": [], "last": "Yung", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Lenci", "suffix": "" }, { "first": "Chu-Ren", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 4th Workshop on Linked Data in Linguistics: Resources and Applications", "volume": "", "issue": "", "pages": "64--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Enrico Santus, Frances Yung, Alessandro Lenci, and Chu-Ren Huang. 2015. Evalution 1.0: an evolving semantic dataset for training and evaluation of distributional semantic models. In Proceedings of the 4th Workshop on Linked Data in Linguistics: Resources and Applications, pages 64-69.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A database of paradigmatic semantic relation pairs for german nouns, verbs, and adjectives", "authors": [ { "first": "Silke", "middle": [], "last": "Scheible", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "Schulte Im Walde", "suffix": "" } ], "year": 2014, "venue": "Proceedings of Workshop on Lexical and Grammatical Resources for Language Processing", "volume": "", "issue": "", "pages": "111--119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Silke Scheible and Sabine Schulte Im Walde. 2014. A database of paradigmatic semantic relation pairs for german nouns, verbs, and adjectives. In Proceedings of Workshop on Lexical and Grammatical Resources for Language Processing, pages 111-119.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Parad-it: Eliciting italian paradigmatic relations with crowdsourcing. CLiC-it", "authors": [ { "first": "Irene", "middle": [], "last": "Sucameli", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Lenci", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irene Sucameli and Alessandro Lenci. 2017. Parad-it: Eliciting italian paradigmatic relations with crowdsourcing. CLiC-it 2017, 11-12 December 2017, Rome, page 310.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Finding synonyms using automatic word alignment and measures of distributional similarity", "authors": [ { "first": "Lonneke", "middle": [], "last": "Van Der Plas", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "866--873", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lonneke van der Plas and J\u00f6rg Tiedemann. 2006. 
Finding synonyms using automatic word alignment and measures of distributional similarity. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 866-873, July.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A family of fuzzy orthogonal projection models for monolingual and cross-lingual hypernymy prediction", "authors": [ { "first": "Chengyu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Xiaofeng", "middle": [], "last": "He", "suffix": "" }, { "first": "Aoying", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2019, "venue": "The World Wide Web Conference", "volume": "", "issue": "", "pages": "1965--1976", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chengyu Wang, Yan Fan, Xiaofeng He, and Aoying Zhou. 2019. A family of fuzzy orthogonal projection models for monolingual and cross-lingual hypernymy prediction. In The World Wide Web Conference, pages 1965-1976.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Automatic extraction of synonyms for German particle verbs from parallel data with distributional similarity as a re-ranking feature", "authors": [ { "first": "Moritz", "middle": [], "last": "Wittmann", "suffix": "" }, { "first": "Marion", "middle": [], "last": "Weller", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "Schulte Im Walde", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 9th International Conference on Language Resources and Evaluation", "volume": "2014", "issue": "", "pages": "1430--1437", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moritz Wittmann, Marion Weller, and Sabine Schulte Im Walde. 2014. Automatic extraction of synonyms for German particle verbs from parallel data with distributional similarity as a re-ranking feature. In Proceedings of the 9th International Conference on Language Resources and Evaluation, LREC 2014, pages 1430-1437.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. 
ArXiv, abs/1910.03771.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Hypernymy detection for low-resource languages via meta learning", "authors": [ { "first": "Changlong", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jialong", "middle": [], "last": "Han", "suffix": "" }, { "first": "Haisong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wilfred", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3651--3656", "other_ids": {}, "num": null, "urls": [], "raw_text": "Changlong Yu, Jialong Han, Haisong Zhang, and Wilfred Ng. 2020. Hypernymy detection for low-resource languages via meta learning. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3651-3656.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Figure 1: Normalized Confusion Matrix; Figure 2: Learning Curve", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "type_str": "table", "num": null, "text": "Word pair counts of training sets", "html": null, "content": "
Language ANT HYP SYN Weighted
English 0.587 0.483 0.473 0.517
German 0.534 0.535 0.427 0.500
Chinese 0.914 0.876 0.849 0.881
Italian 0.447 0.462 0.513 0.477
Table 2: F1-scores on the test sets
" } } } }