{ "paper_id": "S01-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:35:38.843855Z" }, "title": "Framework and Results for the Spanish SENSEVAL", "authors": [ { "first": "German", "middle": [], "last": "Rigau", "suffix": "", "affiliation": {}, "email": "g.rigau@lsi.upc.es" }, { "first": "Mariona", "middle": [], "last": "Taule", "suffix": "", "affiliation": {}, "email": "rntaule@lingua.filub.es" }, { "first": "Ana", "middle": [], "last": "Fernandez", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Julio", "middle": [], "last": "Gonzalo", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we describe the structure, organisation and results of the SENSEVAL exercise for Spanish. We present the design decisions we took for the exercise, describe the creation of the gold-standard data and, finally, present the results of the evaluation. Twelve systems from five different universities were evaluated. Final scores ranged from 0.56 to 0.65. 1 The noun \"arte\" was not included in the exercise because it was provided to the competitors during the trial phase. 2 The working corpus of the HERMES project CICYT TIC2000-0335-C03-02.", "pdf_parse": { "paper_id": "S01-1010", "_pdf_hash": "", "abstract": [ { "text": "In this paper we describe the structure, organisation and results of the SENSEVAL exercise for Spanish. We present the design decisions we took for the exercise, describe the creation of the gold-standard data and, finally, present the results of the evaluation. Twelve systems from five different universities were evaluated. Final scores ranged from 0.56 to 0.65. 1 The noun \"arte\" was not included in the exercise because it was provided to the competitors during the trial phase. 
2 The working corpus of the HERMES project CICYT TIC2000-0335-C03-02.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In this paper we describe the structure, organisation and results of the Spanish exercise included within the framework of SENSEVAL-2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although we closely follow the general architecture of the SENSEVAL-2 evaluation, the final setting of the Spanish exercise involved a number of choices, detailed in section 2. In the following sections we describe the data, the manual tagging process (including the inter-tagger agreement figures), the participant systems and the accuracy results (including some baselines for comparison purposes).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For Spanish SENSEVAL, the lexical-sample variant of the task was chosen. The main reasons for this decision are the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Selection", "sec_num": "2.1" }, { "text": "\u2022 During the same tagging session, it is easier and quicker to concentrate on one word at a time, tagging multiple instances of the same word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Selection", "sec_num": "2.1" }, { "text": "\u2022 The all-words task requires access to a full dictionary. To our knowledge, there are no full Spanish dictionaries available at low or no cost. Instead, the lexical-sample task requires only as many dictionary entries as there are words in the sample.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Selection", "sec_num": "2.1" }, { "text": "The task for Spanish is a \"lexical sample\" of 39 words 1 (17 nouns, 13 verbs, and 9 adjectives). 
See table 1 for the complete list of all words selected for the Spanish lexical sample task. Each word belongs to only one syntactic category. The fourteen words selected as translation-equivalents to English words are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Selection", "sec_num": "2.2" }, { "text": "\u2022 Nouns: arte (= art), autoridad (= authority), canal (= channel), circuito (= circuit), and naturaleza (= nature).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Selection", "sec_num": "2.2" }, { "text": "\u2022 Verbs: conducir (= drive), tratar (= treat), and usar (= use).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Selection", "sec_num": "2.2" }, { "text": "\u2022 Adjectives: ciego (= blind), local (= local), natural (= natural), simple (= simple), verde (= green), and vital (= vital).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Selection", "sec_num": "2.2" }, { "text": "The corpus was collected from two different sources: \"El Periódico\" 2 (a Spanish newspaper) and LexEsp 3 (a balanced corpus of 5.5 million words). Each corpus sample is a single sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Selection", "sec_num": "2.3" }, { "text": "The lexicon provided was created specifically for the task. It consists of a definition for each sense, linked to the Spanish version of EuroWordNet and, thus, to the English WordNet 1.5. The syntactic category and, sometimes, examples and synonyms are also provided. The connections to EuroWordNet have been provided in order to have a common, language-independent conceptual structure. Neither proper nouns nor multiwords have been considered. 
We have also provided the complete mapping between the WordNet 1.5 and 1.6 versions 4 . Each dictionary entry has been constructed by consulting the corpus and multiple Spanish dictionaries (including the Spanish WordNet).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selection of Dictionary", "sec_num": "2.4" }, { "text": "The Spanish SENSEVAL annotation procedure was divided into three consecutive phases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation procedure", "sec_num": "2.5" }, { "text": "\u2022 Corpus and dictionary creation \u2022 Annotation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation procedure", "sec_num": "2.5" }, { "text": "\u2022 Referee process All these processes have been possible thanks to the effort of volunteers from three NLP groups: Universitat Politècnica de Catalunya 5 (UPC), Universitat de Barcelona 6 (UB) and Universidad Nacional de Educación a Distancia 7 (UNED).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation procedure", "sec_num": "2.5" }, { "text": "The most important and crucial task was carried out by the UB team of linguists, headed by Mariona Taule. They were responsible for the selection of the words, the creation of the dictionary entries and the selection of the corpus instances. First, this team selected the polysemous words for the task by consulting several dictionaries (including the Spanish WordNet) and by a quick inspection of the Spanish corpus. For the words selected, the dictionary entries were created simultaneously with the annotation of all occurrences of the word. This allowed the modification of the dictionary entries (i.e. adapting the dictionary to the corpus) during the annotation and the elimination of unclear corpus instances (i.e. 
adapting the corpus to the dictionary).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus and Dictionary Creation", "sec_num": "2.5.1" }, { "text": "Once the Spanish SENSEVAL dictionary and the annotated corpus were created, all the data, with the sense tags removed from the corpus, was delivered to the UPC and UNED teams. With the Spanish SENSEVAL dictionary provided by the UB team as the only semantic reference for annotation, both teams performed a new annotation of the whole corpus, in parallel and simultaneously. Both teams were allowed to report comments/problems on each of the corpus instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "2.5.2" }, { "text": "Finally, in order to provide a coherent annotation, a single referee from the UPC team collated the two annotated corpora tagged by the UPC and UNED teams. This referee had not taken part in the UPC annotation during the previous phase. The referee in fact provided a new annotation for each instance on which the sense tags provided by the UPC and UNED teams disagreed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Referee Control", "sec_num": "2.5.3" }, { "text": "3 The Spanish data 3.1 Spanish Dictionary The Spanish lexical sample is a selection of frequent nouns, verbs and adjectives of high, medium and low polysemy. The dictionary has 5.10 senses per word on average, and the polysemy degree ranges from 2 to 13. 
Nouns have 3.94 senses on average, ranging from 2 to 10; verbs 7.23, ranging from 4 to 13; and adjectives 4.22, ranging from 2 to 9 (see table 1 for further details).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Referee Control", "sec_num": "2.5.3" }, { "text": "The lexical entries of the dictionary have the following form: Figure 1 : Dictionary entry format. For instance, the dictionary for the noun headword arte (= art) is: arte#NCMS#1#Actividad humana o producto de tal actividad que expresa simbólicamente un aspecto de la realidad: el arte de la música; el arte precolombino#SIN:?#00518008n/02980374n# arte#NCMS#2#Sabiduría, destreza o habilidad de una persona en una actividad o conducta determinada: tiene mucho arte bailando; desplegó todo su arte para convencerle#SIN:?#03850627n# arte#NCMS#3#Aparato que sirve para pescar#SIN:?#02005770n# 3.2 Spanish Corpus We adopted, when possible, the guidelines proposed by the SENSEVAL organisers (Edmonds, 2000). For each selected word having n senses we provided at least 75 + 15n instances. For the adjective popular, a larger set of instances has been provided, to test performance improvement when increasing the number of examples. 
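As an illustration only, the arte entries above can be parsed mechanically. The following sketch assumes the field layout HEADWORD#POS#SENSENUMBER#DEFINITION#SIN:synonyms#offsets#, inferred from those examples; the field names are ours, not part of the task definition:

```python
def parse_entry(line):
    # Split one dictionary entry on '#'; layout inferred from the
    # arte examples: HEADWORD#POS#SENSE#DEFINITION#SIN:...#OFFSETS#
    head, pos, num, definition, syn, offsets = line.rstrip('#').split('#')
    return {
        'headword': head,
        'pos': pos,                          # e.g. NCMS
        'sense': int(num),
        'definition': definition,
        'synonyms': syn[len('SIN:'):],       # '?' marks no synonyms given
        'wn15_offsets': offsets.split('/'),  # links to WordNet 1.5 synsets
    }

entry = 'arte#NCMS#3#Aparato que sirve para pescar#SIN:?#02005770n#'
print(parse_entry(entry)['sense'])         # -> 3
print(parse_entry(entry)['wn15_offsets'])  # -> ['02005770n']
```

Under the 75 + 15n rule above, a word with 13 senses, for example, requires at least 75 + 15 × 13 = 270 instances.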
These data have then been randomly divided in a ratio of 2:1 between training and test sets.", "cite_spans": [ { "start": 688, "end": 703, "text": "(Edmonds, 2000)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 64, "end": 72, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Referee Control", "sec_num": "2.5.3" }, { "text": "<HEADWORD>#<POS>#<SENSENUMBER>#<DEFINITION>#SIN:<SINONYMWORDs>#<WORDNETOFFSETs>#", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Referee Control", "sec_num": "2.5.3" }, { "text": "The corpus was structured following the standard SENSEVAL XML format.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Referee Control", "sec_num": "2.5.3" }, { "text": "In this section we discuss the most frequent and regular types of disagreement between annotators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Major problems during annotation", "sec_num": "3.3" }, { "text": "In particular, the dictionary proved not to be sufficiently representative of the selected words to be annotated. Although the dictionary was built for the task, 48% of the problems during the second phase of the annotation were due to the lack of the appropriate sense in the corresponding dictionary entry. This portion includes 5% of metaphorical uses not explicitly described in the dictionary entry. Furthermore, 51% of the problems reported by the annotators were concentrated on only five words (pasaje, canal, bomba, usar, and saltar).", "cite_spans": [ { "start": 513, "end": 553, "text": "(pasaje, canal, bomba, usar, and saltar)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Major problems during annotation", "sec_num": "3.3" }, { "text": "Selecting only one sentence as a context during annotation was the other main problem. 
Around 26% of the problems were attributed to insufficient context to determine the appropriate sense.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Major problems during annotation", "sec_num": "3.3" }, { "text": "Other sources of minor problems included instances whose Part-of-Speech differed from the one selected for the word to be annotated, and sentences with multiple meanings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Major problems during annotation", "sec_num": "3.3" }, { "text": "In general, disagreement between annotators (and sometimes the use of multiple tags) should be interpreted as a sign of problems in the definition of the dictionary entries. The inter-tagger agreement between the UPC and UNED teams was 0.64 and the Kappa measure 0.44.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inter-tagger agreement", "sec_num": "3.4" }, { "text": "Twelve systems from five teams participated in the Spanish task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Systems", "sec_num": "4" }, { "text": "\u2022 Universidad de Alicante (UA) combined a knowledge-based method and a supervised method. The first uses WordNet and the second a Maximum Entropy model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Systems", "sec_num": "4" }, { "text": "\u2022 Johns Hopkins University (JHU) presented a metalearner of six diverse supervised learning subsystems integrated via a classifier. 
The subsystems included decision lists, transformation-based error-driven learning, cosine-based vector models, decision stumps and feature-enhanced naive Bayes systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Systems", "sec_num": "4" }, { "text": "\u2022 Stanford University (SU) presented a metalearner mainly using Naive Bayes methods, but also including vector space, n-gram, and KNN classifiers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Systems", "sec_num": "4" }, { "text": "\u2022 University of Maryland (UMD) applied a margin-based algorithm, Support Vector Machines, to the task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Systems", "sec_num": "4" }, { "text": "\u2022 University of Manitoba (d6-d10, dX-dZ) presented different combinations of classical Machine Learning algorithms. Table 1 presents the results in detail for all systems and all words. The best scores for each word are highlighted in boldface. The best average score is obtained by the JHU system. This system is the best for 12 of the 39 words and is also the best for nouns and verbs, but not for adjectives. The SU system gets the highest score for adjectives. The associated agreement and kappa measures for each system are shown in Table 2 . Again, the JHU system scores highest in both agreement and Kappa measures. 
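For reference, agreement and Kappa figures of the kind reported in Table 2 can be computed as in this sketch (pairwise observed agreement and Cohen's Kappa; the tag sequences below are invented for illustration, not taken from the corpus):

```python
from collections import Counter

def agreement_and_kappa(tags_a, tags_b):
    # Observed agreement: fraction of instances with identical sense tags.
    n = len(tags_a)
    po = sum(a == b for a, b in zip(tags_a, tags_b)) / n
    # Chance agreement from each annotation's tag distribution.
    ca, cb = Counter(tags_a), Counter(tags_b)
    pe = sum(ca[t] * cb[t] for t in ca) / (n * n)
    kappa = (po - pe) / (1 - pe)
    return po, kappa

# Invented toy annotations over six instances of one word:
sys_tags  = ['s1', 's1', 's2', 's2', 's1', 's3']
gold_tags = ['s1', 's2', 's2', 's2', 's1', 's3']
po, kappa = agreement_and_kappa(sys_tags, gold_tags)
print(round(po, 2), round(kappa, 2))  # -> 0.83 0.74
```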
This indicates that the results of the JHU system are closer to the gold-standard corpus than those of the rest of the participants.", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 120, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 537, "end": 544, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "The Systems", "sec_num": "4" }, { "text": "Obviously, an in-depth study of the strengths and weaknesses of each system with respect to the results of the evaluation should be carried out, including further analysis comparing the UPC and UNED annotations against each system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Further Work", "sec_num": "6" }, { "text": "Following the ideas described in (Escudero et al., 2000) we are also considering adding a cross-domain aspect to the evaluation in future SENSEVAL editions, allowing training on one domain and evaluation on the other, and vice versa.", "cite_spans": [ { "start": 33, "end": 56, "text": "(Escudero et al., 2000)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Further Work", "sec_num": "6" }, { "text": "In order to provide a common platform for evaluating different WSD algorithms, we are planning to process the Spanish corpus with POS tagging, using MACO (Carmona et al., 1998) and RELAX (Padro, 1998).", "cite_spans": [ { "start": 151, "end": 173, "text": "(Carmona et al., 1998)", "ref_id": "BIBREF0" }, { "start": 184, "end": 196, "text": "(Padro, 1998", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Further Work", "sec_num": "6" }, { "text": "http://www.lsi.upc.es/.-vnlp 6 http://www.ub.es/ling/labing.htm 7 http://rayuela.ieec.uned.es/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The Spanish SENSEVAL has been possible thanks to the effort of volunteers from three NLP groups from UPC, UB, and UNED universities.", 
"cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "7" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "An Environment for Morphosyntactic Processing of Unrestricted Spanish Text", "authors": [ { "first": "J", "middle": [], "last": "Carmona", "suffix": "" }, { "first": "S", "middle": [], "last": "Cervell", "suffix": "" }, { "first": "L", "middle": [], "last": "Marquez", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Marti", "suffix": "" }, { "first": "L", "middle": [], "last": "Padro", "suffix": "" }, { "first": "R", "middle": [], "last": "Placer", "suffix": "" }, { "first": "H", "middle": [], "last": "Rodriguez", "suffix": "" }, { "first": "M", "middle": [], "last": "Taule", "suffix": "" }, { "first": "J", "middle": [], "last": "Turmo", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the First International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Carmona, S. Cervell, L. Marquez, M.A. Marti, L. Padro, R. Placer, H. Rodriguez, M. Taule, and J. Turmo. 1998. An Environment for Morphosyntactic Processing of Unrestricted Spanish Text. In Proceedings of the First International Conference on Language Resources and Evaluation, LREC, Granada, Spain.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Designing a task for SENSEVAL-2. Draft, Sharp Laboratories, Oxford", "authors": [ { "first": "P", "middle": [], "last": "Edmonds", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Edmonds. 2000. Designing a task for SENSEVAL-2. 
Draft, Sharp Laboratories, Oxford.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Comparison between Supervised Learning Algorithms for Word Sense Disambiguation", "authors": [ { "first": "G", "middle": [], "last": "Escudero", "suffix": "" }, { "first": "L", "middle": [], "last": "Marquez", "suffix": "" }, { "first": "G", "middle": [], "last": "Rigau", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 4th Computational Natural Language Learning Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Escudero, L. Marquez, and G. Rigau. 2000. A Comparison between Supervised Learning Algorithms for Word Sense Disambiguation. In Proceedings of the 4th Computational Natural Language Learning Workshop, CoNLL, Lisbon, Portugal.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A Hybrid Environment for Syntax-Semantic Tagging", "authors": [ { "first": "L", "middle": [], "last": "Padro", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Padro. 1998. A Hybrid Environment for Syntax-Semantic Tagging. PhD thesis, Software Department (LSI), Technical University of Catalonia (UPC).", "links": null } }, "ref_entries": { "TABREF1": { "html": null, "content": "
System    UA    SU    JHU   UMD   d6    d7    d8    d9    d10   dX    dY     dZ
Agreement 0.51  0.63  0.65  0.61  0.55  0.57  0.59  0.53  0.59  0.55  0.51   0.57
Kappa     0.20  0.34  0.47  0.20  0.13  0.19  0.23  0.06  0.24  0.15  -0.03  0.15
", "num": null, "text": "Evaluation of Spanish words. p stands for Part-of-Speech; e for the total number of examples (including train and test sets); s for the number of senses; MF for the Most Frequent Sense classifier; and the rest are the system acronyms. Column headers: words, UA, SU, JHU, UMD, d6, d7, d8, d9, d10, dX, dY, dZ.", "type_str": "table" }, "TABREF2": { "html": null, "content": "", "num": null, "text": "Agreement and Kappa measures", "type_str": "table" } } } }