{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:11:13.685443Z" }, "title": "Zero-shot cross-lingual Meaning Representation Transfer: Annotation of Hungarian using the Prague Functional Generative Description", "authors": [ { "first": "Attila", "middle": [], "last": "Nov\u00e1k", "suffix": "", "affiliation": { "laboratory": "MTA-PPKE Hungarian Language Technology Research Group", "institution": "P\u00e1zm\u00e1ny P\u00e9ter Catholic University", "location": { "postCode": "1083", "settlement": "Budapest", "country": "Hungary" } }, "email": "" }, { "first": "Borb\u00e1la", "middle": [], "last": "Nov\u00e1k", "suffix": "", "affiliation": { "laboratory": "MTA-PPKE Hungarian Language Technology Research Group", "institution": "P\u00e1zm\u00e1ny P\u00e9ter Catholic University", "location": { "postCode": "1083", "settlement": "Budapest", "country": "Hungary" } }, "email": "" }, { "first": "Csilla", "middle": [], "last": "Nov\u00e1k", "suffix": "", "affiliation": { "laboratory": "MTA-PPKE Hungarian Language Technology Research Group", "institution": "P\u00e1zm\u00e1ny P\u00e9ter Catholic University", "location": { "postCode": "1083", "settlement": "Budapest", "country": "Hungary" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we present the results of our experiments concerning the zero-shot cross-lingual performance of the PERIN sentence-to-graph semantic parser. We applied the PTG model, trained with the PERIN parser on a 740k-token Czech newspaper corpus, to Hungarian. We evaluated the performance of the parser using the official evaluation tool of the MRP 2020 shared task. The gold standard Hungarian annotation was created by manual correction of the output of the parser following the annotation manual of the tectogrammatical level of the Prague Dependency Treebank. 
An English model trained on a larger one-million-token English newspaper corpus is also available; however, we found that the Czech model performed significantly better on Hungarian input because Hungarian is typologically more similar to Czech than to English. We have found that zero-shot transfer of the PTG meaning representation across typologically not-too-distant languages, using a neural parser model based on a multilingual contextual language model followed by manual correction by linguist experts, seems to be a viable annotation scenario.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we present the results of our experiments concerning the zero-shot cross-lingual performance of the PERIN sentence-to-graph semantic parser. We applied the PTG model, trained with the PERIN parser on a 740k-token Czech newspaper corpus, to Hungarian. We evaluated the performance of the parser using the official evaluation tool of the MRP 2020 shared task. The gold standard Hungarian annotation was created by manual correction of the output of the parser following the annotation manual of the tectogrammatical level of the Prague Dependency Treebank. An English model trained on a larger one-million-token English newspaper corpus is also available; however, we found that the Czech model performed significantly better on Hungarian input because Hungarian is typologically more similar to Czech than to English. 
We have found that zero-shot transfer of the PTG meaning representation across typologically not-too-distant languages, using a neural parser model based on a multilingual contextual language model followed by manual correction by linguist experts, seems to be a viable annotation scenario.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Two workshops on Cross-Framework Meaning Representation Parsing (MRP), hosted by the 2019 and 2020 editions of the Conference on Computational Natural Language Learning (CoNLL), featured two editions of a shared task, in which implementations of parsers turning raw text into different flavors of meaning representation graphs competed with each other. The first edition, MRP 2019, involved only English as the object language, and five frameworks of meaning representation were featured: DM, PSD, EDS, UCCA and AMR. Two of these frameworks, DM (DELPH-IN MRS Bi-Lexical Dependencies, Ivanova et al., 2012) and PSD (Prague Semantic Dependencies, Haji\u010d et al., 2012; Miyao et al., 2014), are simple bi-lexical dependency graphs generated from the core predicate-argument structure of a rich syntactic-semantic annotation based on general theories of grammar. 
The underlying linguistic theory is Head-Driven Phrase Structure Grammar (HPSG, Pollard and Sag, 1994) with Minimal Recursion Semantics (MRS, Copestake et al., 2005) for DM and Prague Functional Generative Description (FGD, Sgall et al., 1986) for PSD.", "cite_spans": [ { "start": 572, "end": 593, "text": "Ivanova et al., 2012)", "ref_id": "BIBREF13" }, { "start": 634, "end": 653, "text": "Haji\u010d et al., 2012;", "ref_id": "BIBREF10" }, { "start": 654, "end": 673, "text": "Miyao et al., 2014)", "ref_id": "BIBREF18" }, { "start": 920, "end": 942, "text": "Pollard and Sag, 1994)", "ref_id": "BIBREF21" }, { "start": 976, "end": 1005, "text": "(MRS, Copestake et al., 2005)", "ref_id": null }, { "start": 1058, "end": 1083, "text": "(FGD, Sgall et al., 1986)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In DM and PSD, the nodes are surface word forms. The other three frameworks feature more complex graphs that contain nodes not in one-to-one relation to input words. Elementary Dependency Structures (EDS) are based on English Resource Grammar (Flickinger et al., 2017), also known as English Resource Semantics (ERS) (Flickinger et al., 2014), annotation 1 that was turned into a variable-free semantic dependency graph consisting of labeled graph nodes representing logical predications and edges representing labeled argument positions. The conversion from ERS to EDS discards information on semantic scope. The nodes are anchored to spans of the input string.", "cite_spans": [ { "start": 242, "end": 267, "text": "(Flickinger et al., 2017)", "ref_id": "BIBREF7" }, { "start": 305, "end": 330, "text": "(Flickinger et al., 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Universal Conceptual Cognitive Annotation (UCCA, Abend and Rappoport, 2013) is an abstract annotation featuring only purely semantic categories and structure. 
The foundational layer of UCCA (featured in the shared task) consists of a very basic set of semantic categories like Process, Argument, State, \"Adverb\" (modifier) etc., which are used as labels on edges linking unlabeled nodes representing semantic units and surface word forms.", "cite_spans": [ { "start": 49, "end": 75, "text": "Abend and Rappoport, 2013)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Finally, Abstract Meaning Representation (AMR, Banarescu et al., 2013) features graphs comparable to EDS, but with more abstract predication labels due to the application of lexical decomposition and normalization towards verbal senses, e.g., representing 'similar' as the verbal sense 'resemble'. In contrast to EDS (and the rest), AMR nodes are not explicitly anchored to spans of the surface form. 2 The graphs in the MRP shared tasks were presented in a JSON-lines-based Uniform Graph Interchange Format, and participants were asked to design and train systems that predict sentence-level meaning representations in all frameworks in parallel to foster transfer and multi-task learning.", "cite_spans": [ { "start": 47, "end": 70, "text": "Banarescu et al., 2013)", "ref_id": "BIBREF2" }, { "start": 396, "end": 397, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the MRP 2020 shared task, DM was dropped, as EDS is a richer representation derived from the same resource. PSD was also replaced by a richer meaning representation, Prague Tectogrammatical Graphs (PTG), derived from the same Prague Functional Generative Description (FGD), but retaining more of the original Tectogrammatical annotation than PSD.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "EDS, UCCA, and AMR were retained, and Discourse Representation Graphs (DRG) were added as a new meaning representation. 
DRG is a graph encoding of Discourse Representation Structures (DRS), the meaning representations at the core of Discourse Representation Theory (DRT; Kamp and Reyle, 1993). This model handles many challenging semantic phenomena, from quantifiers to presupposition accommodation and discourse structure.", "cite_spans": [ { "start": 271, "end": 292, "text": "Kamp and Reyle, 1993)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In addition to English data, the MRP 2020 task covered new languages, one for each of four of the five covered formalisms (except EDS): Czech for PTG, German for UCCA and DRG, and Chinese for AMR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Training and evaluation data of the MRP shared task were only available to shared task participants, distributed to them by the Linguistic Data Consortium (LDC), since part of the data is based on LDC-owned material, the WSJ part of the Penn Treebank (PTB). The shared task site states that, upon completion of each competition, subsets of task data that are copyright-free (including system submissions and evaluation results) will be made available for public, open-source download. Unfortunately, we have not found a public release of the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, one of the top-performing systems at MRP 2020, the PERIN parser (Samuel and Straka, 2020), which was developed at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University, Prague, was made available at the \u00daFAL GitHub repo, 3 including pretrained models for the PERIN submission to the shared task. There is also a link to an interactive demo on Google Colab. This facilitated testing the models on various inputs. 
Positive subjective impressions of the performance of the English and especially the Czech PTG model of the parser on Hungarian input prompted us to perform the experiment described here, evaluating the zero-shot cross-lingual performance of the model. Of the models available, we selected PTG, because", "cite_spans": [ { "start": 73, "end": 98, "text": "(Samuel and Straka, 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 the categories/concepts it operates with looked immediately familiar, \u2022 the annotation it generated seemed reasonable and detailed, \u2022 the non-English model covers Czech, a language sharing many typological features with Hungarian (rich morphology, relatively free word order, pro drop etc.), \u2022 the model was trained on a sizable 740k-token corpus, \u2022 a rather detailed 1255-page annotation manual (Mikulov\u00e1 et al., 2006) of the underlying Prague tectogrammatical annotation is available in English, and \u2022 performance of the parsers (of PERIN in particular) on the Czech PTG data reported in the MRP 2020 task results was relatively high.", "cite_spans": [ { "start": 398, "end": 421, "text": "(Mikulov\u00e1 et al., 2006)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Concerning the other formalisms featured in the MRP 2020 task, we had the following impressions, further motivating our model selection:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Annotation in the UCCA foundational layer is rather coarse-grained compared to PDT (a handful of edge label types, no annotation on nodes). 4 In spite of this, all systems participating in MRP 2020 performed relatively poorly (F1 < 0.5 on edge labels) on UCCA. This might indicate consistency problems with the UCCA annotation. 
\u2022 While reported performance of the best parsers is generally good on DRG, our impression of the DRG output generated from our Hungarian test corpus was that it made relatively little sense. \u2022 Performance of the best parsers was also good on EDS. However, an EDS-style model is trained only for English. The model struggles on Hungarian input, completely misinterpreting important constructions. This seems to be due to typological differences between Hungarian and English. E.g., grammatical relations expressed in English by prepositions and word order are mainly expressed by suffixes in Hungarian, the latter being an agglutinative language. The EDS model often fails to properly recognize most of these relations (locations, times, possessive constructions, constituents not in canonical positions for English etc.), because suffixes are not independent tokens in Hungarian. 5 There is also pro drop in Hungarian, and this phenomenon affects a high proportion of clauses (see section 4.3.1), but the EDS model fails to recover all such covert pronouns. The PTG annotation the Czech PERIN model was trained on is derived from the Prague Tectogrammatical Annotation, an elaborate system of deep linguistic analysis based on a decades-long tradition of dependency-grammar-based linguistic research. The Prague Dependency Treebank (PDT) and the Prague Czech-English Dependency Treebank (PCEDT; Haji\u010d et al., 2012), from which the PTG data was derived, embody an awe-inspiringly immense amount of annotation work spanning decades. In addition to the deep syntactic annotation we review here, P(CE)DT annotation includes morphological annotation and a dependency-based shallow 'analytical' syntactic annotation of the underlying text. 
The tectogrammatical analysis was generated based on these surface-level representations and then manually checked and corrected.", "cite_spans": [ { "start": 142, "end": 143, "text": "4", "ref_id": null }, { "start": 1196, "end": 1197, "text": "5", "ref_id": null }, { "start": 1714, "end": 1733, "text": "Haji\u010d et al., 2012)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although the MRP 2020 shared task featured a \"cross-lingual\" track, in practice this only meant that parsers were trained and tested on data in more than one language for meaning representations that had such annotation available. Transfer from one language to another was not tested there.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The Prague dependency annotation scheme has been ported to languages other than Czech or English, examples including the Slovak Dependency Treebank (Gajdo\u0161ov\u00e1 et al., 2016), the PAWS Treebank (including Polish and Russian in addition to English and Czech, Nedoluzhko et al., 2018), and the Prague Arabic Dependency Treebank (Haji\u010d and Zem\u00e1nek, 2004). However, all these syntactic annotations have been created manually. At the same time, there is a significant body of research literature concerning cross-lingual transfer using deep-neural-network-based models. Multilingual pre-training of contextual language models like multilingual BERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020) facilitated this kind of knowledge transfer. 
These models have been used to train massively multilingual syntactic dependency parsers (Kondratyuk and Straka, 2019), zero-shot named entity recognizers (Wu et al., 2020), etc.; specific multilingual benchmarks have even been prepared for testing the cross-lingual generalization capability of models on various tasks such as sentence-pair classification, structured prediction (POS tagging, NER), question answering, natural language inference and sentence retrieval (Hu et al., 2020).", "cite_spans": [ { "start": 148, "end": 172, "text": "(Gajdo\u0161ov\u00e1 et al., 2016)", "ref_id": "BIBREF8" }, { "start": 257, "end": 281, "text": "Nedoluzhko et al., 2018)", "ref_id": "BIBREF19" }, { "start": 326, "end": 351, "text": "(Haji\u010d and Zem\u00e1nek, 2004)", "ref_id": "BIBREF11" }, { "start": 660, "end": 681, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 698, "end": 720, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF3" }, { "start": 921, "end": 938, "text": "(Wu et al., 2020)", "ref_id": "BIBREF24" }, { "start": 1229, "end": 1246, "text": "(Hu et al., 2020)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this paper, we examine a zero-shot approach to meaning representation transfer, which belongs to the structured prediction problem class. Few studies address this topic, because evaluation requires tedious manual work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The approach we took was the following. We had a 150-sentence Hungarian corpus annotated by the PERIN Czech PTG model. We turned this into a gold standard Hungarian annotation by manually correcting the output of the parser following the annotation manual of the tectogrammatical level of the Prague Dependency Treebank (Mikulov\u00e1 et al., 2006). 
Members of the annotation team had solid training in theoretical and computational linguistics and cognitive science, encompassing both dependency syntax and formal semantics. However, we had to understand and learn details of the annotation scheme during the process, which required substantial effort. Fortunately, the examples in the Annotation Guidelines have English translations. However, only a few illustrative examples have a full tree representation. We had to revisit and re-discuss our solutions several times to converge on an annotation that we considered consistent with what is described in the PDT annotation guidelines. 50-sentence folds were annotated, discussed and re-annotated several times, as our understanding of the annotation scheme evolved during the process. Access to the PCEDT would have been very helpful; however, only the Czech part of the treebank is available online, 6 so we could not consult the English translations or, not being speakers of Czech, efficiently search for specific constructions.", "cite_spans": [ { "start": 320, "end": 343, "text": "(Mikulov\u00e1 et al., 2006)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "When doing the manual annotation correction, we had to refrain from making modifications to the annotation scheme if we 'disliked' the way a specific phenomenon is handled (or ignored) in the original scheme. We also tried to refrain from interpreting dubious situations 'the way we would have done it'; instead, we tried to figure out how \u00daFAL experts would do it. 
We assumed that if the parser more or less consistently generates some sort of sufficiently sensible annotation for a specific construction, it reflects a deliberate annotation pattern in the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "The serialized JSON lines graph representations output by the parser cannot conveniently be edited directly, although they can be visualized using mtool, 7 the Swiss Army Knife for Graph-Based Meaning Representation. We thus created a converter from the JSON lines graph representations to CoNLL-U 8 (using anchors to project the data) and vice versa, and we edited the graphs in the CoNLL-U format. We used mtool to visualize our gold standard solutions while we edited the annotations and also to evaluate the zero-shot output of the parser against the edited gold standard version. Based on the graph configurations, mtool creates potential node-to-node mappings between the two graphs, so the reordering of nodes during conversion is not a problem from the point of view of evaluation and visualization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "The name of the parser, PERIN, is motivated by the fact that it embodies a permutation-invariant model that predicts all nodes at once in parallel and 6 https://lindat.mff.cuni.cz/services/ pmltq/#!/treebank/pcedt20_cz/query/ 7 https://github.com/cfmrp/mtool 8 The files have the same fields as CoNLL-U, but category and dependency labels are, of course, the ones coming from the PTG model rather than UD-compliant labels. We use the deps field to store graph edges, upos to store the POS label and feats to store other node features. 
We used a special mrg relation to link function words (e.g., determiners, postpositions, subordinating conjunctions) to the head content word anchored to the same graph node.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The PERIN parser", "sec_num": "3.1" }, { "text": "is trained using a permutation-invariant loss function that is not sensitive to the ordering of nodes (Samuel and Straka, 2020).", "cite_spans": [ { "start": 102, "end": 127, "text": "(Samuel and Straka, 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "The PERIN parser", "sec_num": "3.1" }, { "text": "The language model the parser uses as neural input 'features' when inferring the graph annotation based on the input tokens is XLM-RoBERTa (base). XLM-R (Conneau et al., 2020) is the encoder part of a transformer model originally pretrained on 2.5 TB of filtered CommonCrawl data in 100 languages, including Czech and Hungarian, to predict masked word forms. This underlying multilingual neural language model is what makes the decent zero-shot cross-lingual performance we encountered possible, enabling the parser to output sensible annotation for input in a language the parser itself was not originally trained to handle.", "cite_spans": [ { "start": 153, "end": 175, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "The PERIN parser", "sec_num": "3.1" }, { "text": "The model uses relative string encodings to predict node labels that map anchored token strings onto label strings. Specifically, in the PTG model, lemmata ('t-lemmata') are used as node labels. This mechanism performs well (as shown by MRP 2020 evaluation results) when parsing text in the same language the model was trained on. However, in our case, applying Czech lemmatization patterns to Hungarian input unsurprisingly resulted in funny lemmata. Nevertheless, since PTG is a 'Flavor-1' model, i.e. 
nodes are anchored to spans in the input (practically to tokens), external lemmatization can be used to fix the node labels. Since tokens could be linked to nodes, we could also evaluate the annotation ignoring the ill-formed lemmata.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The PERIN parser", "sec_num": "3.1" }, { "text": "In contrast, our initial probing of the model indicated that grammatical/semantic relations among content words (edge labels in the graph, 'functors' and 'subfunctors' in PDT terminology) seemed to carry over relatively well to Hungarian, and it was this aspect of the annotation that we wanted to concentrate on.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The PERIN parser", "sec_num": "3.1" }, { "text": "In addition to visualization, we also used mtool to evaluate the zero-shot output of both the Czech and the English PTG models against the edited gold standard version of the test corpus. The English PTG model has fewer node features than the Czech model, and the edge labels also generally contain no subfunctor annotation. The English model also uses different patterns to generate node labels (lemmata), so the performance of the models would not be comparable without applying some sort of normalization to the annotations before comparison. The normalization we performed included a) replacement of node labels (lemmata) by the sequence of tokens anchored to the node (with the exception of unanchored tokens, which retained their labels), and b) removal of subfunctor annotation from edge labels (except for subtypes of coref and bridging relations, as these also have subfunctors in the English annotation). Performance of the normalized output of the models as returned by mtool is compared in Table 1. Node properties have not been corrected or normalized (see Section 4.1), so that row can be ignored. 
The Czech model, however, clearly performs better at identifying grammatical relations (edges, attributes), and the difference in node label recall mainly reflects its advantage at identifying zero nodes (due to its handling of pro drop and richer annotation of argument coreference relations in light verb constructions).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Below we discuss specific details of the performance of the Czech model on Hungarian input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "The PTG graph representations used in the MRP 2020 shared task were created by automatic conversion from tectogrammatical trees in P(CE)DT. English data comes from the Prague Czech-English Dependency Treebank 2.0 (Haji\u010d et al., 2012), while the source of Czech data was the Prague Dependency Treebank 3.5 (Haji\u010d et al., 2020). It is stated that 'The Prague treebanks, especially the Czech PDT, contain a number of grammatemes that were assigned semiautomatically without much human intervention. Such properties were omitted and only the manually assigned (or checked) ones were carried over to PTG.' What we see, in fact, is that, with the exception of part of speech and the tfa feature (see Section 4.1.2), all we have in terms of node properties ('grammatemes' in the Prague terminology) are the features that were introduced during the upgrade of the treebanks from version 2.0/2.5 to version 3.0. These properties are consequently not described in the PDT 2.0 annotation manual (Mikulov\u00e1 et al., 2006), but their description can be found in Mikulov\u00e1 et al. (2013). The latter source states that these properties were converted from previous annotation by semiautomatic procedures. 
Moreover, some of the features that remained (typgroup, diatgram) have limited or no relevance from our cross-lingual point of view.", "cite_spans": [ { "start": 213, "end": 233, "text": "(Haji\u010d et al., 2012)", "ref_id": "BIBREF10" }, { "start": 305, "end": 325, "text": "(Haji\u010d et al., 2020)", "ref_id": "BIBREF9" }, { "start": 977, "end": 1000, "text": "(Mikulov\u00e1 et al., 2006)", "ref_id": "BIBREF16" }, { "start": 1041, "end": 1063, "text": "Mikulov\u00e1 et al. (2013)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Node properties", "sec_num": "4.1" }, { "text": "At the same time, we really miss the very important grammatical properties that were omitted (number, person, tense, modality, degree etc.). As a result, almost all relevant information is lost in the annotation of, e.g., covert pronouns (see Section 4.3.1) or modal auxiliaries (corresponding to can, must, will etc.). The latter are not represented in PDT annotation as independent nodes: they only contribute a feature to the node of the verb they combine with. This feature, however, is lost in conversion. Technical rather than practical considerations may have played the major role in the selection of the properties kept. Should another conversion of PDT to PTG ever be performed, we would be very happy to see the missing features in the new version. Moreover, the lack of crucial grammatical features in the representation may play an important role in the parser making errors like establishing coreference relations between pronouns and noun phrases of different person/number (e.g., between 'she' and 'I' in the parse of Elj\u00f6n, mert szeretem \u0151t. 'She will come because I love her.' 
instead of linking 'she' and 'her').", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Node properties", "sec_num": "4.1" }, { "text": "Since much of what we would like to see there is not there, and some of what we do have is irrelevant, we have not performed an exhaustive quantitative evaluation of the mapping of node features. Nevertheless, we make some qualitative observations concerning the performance of the parser with regard to specific node features present in the annotation in the following sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Node properties", "sec_num": "4.1" }, { "text": "Lexical nodes have at least a part-of-speech property, which is termed 'semantic' in PDT terminology, but it is much less semantic than one would expect. E.g., nominalized verbs are 'semantic' nouns. There are just a few deviations from syntactic part of speech: deadjectival adverbs corresponding to English -ly adverbs are tagged as 'semantic' adjectives, and numerals as adjectival or nominal quantifiers. Morphological negation is a feature reflected in the part of speech category set that does not apply to Hungarian. Non-ly adverbs are also sometimes tagged as adjectives, but otherwise part of speech is accurately identified by the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Part of speech", "sec_num": "4.1.1" }, { "text": "The Czech model also contains a feature related to topic-focus articulation (tfa). This is an advanced feature rarely found in computational meaning representations. However, topic-focus articulation has been one of the major research directions of the Prague school of linguistics behind PDT, so the presence of a feature like this is not so surprising after all. 
We would have, however, expected four possible values instead of the actual three: t = contextually bound expression (topic), f = contextually non-bound expression (new information), c = (contextually bound) contrastive expression. We think that it would be relevant to distinguish contextually non-bound contrastive elements (focus proper) from contextually bound contrastive elements (contrastive topic). We could not determine from the limited description in the annotation manual how specific constructions (e.g., contrasting predicates) should fit into the annotation scheme used in PDT. The parser often assigns values to this feature that seem reasonable, but there are also cases where the annotation is obviously wrong (e.g., assignment of the f value to definite expressions). The source of these problems could be, among others, that word order constraints concerning contrastive elements (focus/contrastive topic) are quite different in Czech and Hungarian (Czech: clause final, Hungarian: preverbal) and that there is no definite article in Czech.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic-focus articulation", "sec_num": "4.1.2" }, { "text": "The model is able to differentiate appeals, requests and questions from assertions; however, quite surprisingly, it often fails to identify potential ('would') and contrafactual ('would have') 9 modalities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factual and sentence modality", "sec_num": "4.1.3" }, { "text": "Although an F score of around 0.7 for edges/attributes (see Table 1) might not seem very great at first sight, it is in fact not bad (especially considering the rich variety of possible labels), and not very much worse than the performance of the same parser model for Czech input (F=0.84/0.78, Samuel and Straka, 2020). 
The model is especially good at identifying adjuncts (time, place, directional and manner adverbials).", "cite_spans": [ { "start": 298, "end": 322, "text": "Samuel and Straka, 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 60, "end": 67, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Identification of grammatical/semantic relations", "sec_num": "4.2" }, { "text": "What was a bit disappointing for us is the annotation of predicate-argument relations in PDT, which in most cases is limited to two relations called act and pat. These have nothing to do with real thematic roles such as agent or patient; they are mostly simply placeholders for the first two arguments of any predicate. E.g., the subject of the window broke is act, while the predicate argument in the valency frame of the copula (i.e., blue in my hat is blue) is marked as pat. However, PTG is not alone among the meaning representation schemes in MRP 2020 in having uninformative argument labels: others use ARG0, ARG1, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of grammatical/semantic relations", "sec_num": "4.2" }, { "text": "In addition to edge labels, the model also relatively successfully predicts empty elements, such as dropped pronouns and ellipsis, as long as similar patterns apply to both the source and the target language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empty elements", "sec_num": "4.3" }, { "text": "For example, Czech, similarly to Hungarian, features pro drop: i.e., subject pronouns may optionally be omitted in neutral sentences, as shown in the two side-by-side one-word sentences in (1). In Hungarian, overt subject pronouns are absent in most cases when the pronoun is not emphasized. It is a nice feature of the model that it includes existentially bound optional arguments in the analyses it generates (e.g., Olvasok. 
is interpreted as 'I am reading (something).') (1) Olvasok.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pro drop", "sec_num": "4.3.1" }, { "text": "read.Prs.1Sg", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pro drop", "sec_num": "4.3.1" }, { "text": "\u010ctu. read.Prs.1Sg 'I am reading. = I am reading (something).'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pro drop", "sec_num": "4.3.1" }, { "text": "In Hungarian, the same sentence with an overt pronoun has different interpretations depending on the intonation pattern (2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pro drop", "sec_num": "4.3.1" }, { "text": "(2) \u00c9n olvasok.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pro drop", "sec_num": "4.3.1" }, { "text": "'I am reading.' (neutral, rare) 'It is me who is reading.' (focus) 'As for me, I do read.' (contrastive topic)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pro drop", "sec_num": "4.3.1" }, { "text": "However, in cases where the same phenomenon (i.e. pro drop) does not apply to certain pronouns in the source language, the model always fails to predict such covert pronouns in the target language. In Hungarian, for example, object pronouns also undergo pro drop. What makes this possible is that verbal morphology encodes not only subject agreement but also the definiteness of the object, as illustrated in (3). If the morphology of the verb form implies the presence of a definite object, then the lack of an overt object implies the presence of an object pronoun (4a). In contrast, there is no object pro drop in Czech (4b); thus, the model fails to predict covert object pronouns for Hungarian. 
Instead, we get the same interpretation with an existentially bound object that we get for Olvasok (1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pro drop", "sec_num": "4.3.1" }, { "text": "The same applies to possessive constructions involving personal pronouns. The Czech (or English) version of these constructions involves a possessive pronoun determiner followed by a noun, optionally modified by adjuncts (5b). In Hungarian (and many similar agglutinating languages), the construction involves possessive suffixes attached to the noun as inflection; an overt pronoun is optional and, again, mostly limited to cases where the pronoun is emphasized (5a). Since the possessive pronoun is obligatory in Czech (it is the key element of the construction), the parser trained on Czech data always fails to predict empty personal pronouns involved in possessive constructions in Hungarian, too. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Possessive constructions involving pronouns", "sec_num": "4.3.2" }, { "text": "The model also performs reasonably well at predicting and reconstructing elliptical structures as long as a similar elliptical construction is present in the language the model was trained on. Both Czech and Hungarian feature gapping in the second clause of coordinated clauses. However, in Hungarian (similarly to, e.g., Turkish), gapping in the first clause is also a frequently used construction. As shown in Fig. 1, the parser fails to properly recognize the elliptical structure if the gap is in the first clause (not an option in Czech or English). For the given examples, we get a perfect parse only if the gap is in the second clause, and word order in the first clause is SVO (Fig. 1c).", "cite_spans": [], "ref_spans": [ { "start": 408, "end": 414, "text": "Fig. 
1", "ref_id": "FIGREF3" }, { "start": 682, "end": 691, "text": "(Fig. 1c)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Ellipsis", "sec_num": "4.3.3" }, { "text": "PDT long predates the Universal Dependencies (UD) project, and in contrast to the lexical-content-head analysis of copula constructions applied there, the copula is the head in PDT/PTG. In Hungarian, there is a zero copula in the default 3rd person singular present indicative case, so we needed to introduce a new zero copula (#zerocop) item to fit the annotation of zero copula constructions into the scheme applied in PDT and PTG. As it is, the model fails to parse zero copula constructions due to this copula-head analysis and the fact that there is always an overt copula in Czech.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Zero copula", "sec_num": "4.3.4" }, { "text": "In contrast to subordination, coordination (in the PDT annotation manual: parataxis) is problematic for dependency-based annotation schemes because it is not an endocentric construction. The solution applied in the PTG implementation of PDT structures makes coordinating conjunctions or, in the absence of these, punctuation (commas) the head of coordinate structures, as shown in Fig. 1c. The coordinated predicates there (the two olvas 'read' nodes) are pred-members of the #comma node (the technical head of the coordinate structure), and they are also linked directly by pred-effective edges to the node dominating the whole structure, here the root of the sentence graph. The technical head (#comma) is attached using a relation characterizing the paratactic structure (e.g., conjunction, disjunction, apposition, etc., here: conj) to the node dominating the coordinate structure. This solution is again different from the one applied in UD; however, it is analogous to the way coordination is represented, e.g., in EDS.", "cite_spans": [], "ref_spans": [ { "start": 381, "end": 388, "text": "Fig. 
1c", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Coordination/parataxis", "sec_num": "4.4" }, { "text": "However, two aspects of this solution are problematic. Certain types of coordination express an asymmetric relation such as cause, consequence or confrontation, and these types of relations were doubled in the annotation scheme only because they also have a subordinating variant (coordinating confr, reas vs. subordinating contrd, caus). The distinction is purely syntactic, and the vast majority of speakers would have an extremely hard time distinguishing the subordinating variant from the coordinating one. Yet the analyses are very different. Even worse, the representation of the paratactic variant of these constructions completely fails to distinguish which conjunct plays which role, e.g., what is the cause and what is the consequence. These unnecessary syntactic distinctions gave us a hard time during correction of the gold standard data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coordination/parataxis", "sec_num": "4.4" }, { "text": "Coordinated predicates involving covert subject pronouns were analyzed by the model as verb phrase coordination sharing a single covert subject pronoun rather than as two coreferring covert pronouns. We accepted this solution, assuming that similar constructions must have been analyzed analogously in the Czech PDT treebank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coordination/parataxis", "sec_num": "4.4" }, { "text": "The model sometimes fails to integrate parts of the analysis into the whole structure or, in some cases, completely ignores some part of the input. 
This often seems to be related to covert elements not attested in the source language, such as a zero copula or gapping in the first conjunct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further problems", "sec_num": "4.5" }, { "text": "Short function words are sometimes confused with short frequent function words in the source language, and this may result in a wrong analysis. E.g., Hungarian s 'and' and a 'the' are sometimes confused with Czech s 'with' and a 'and', respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further problems", "sec_num": "4.5" }, { "text": "Function words (articles, postpositions, subordinating conjunctions, auxiliaries) are normally merged with content words (the node is anchored on several tokens), but in some cases a partial merge is performed (the function word is anchored both to an independent node of its own and to the node of a content word). This is an error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further problems", "sec_num": "4.5" }, { "text": "The model tokenizes at hyphens, and the hyphen remains unanchored. This is quite problematic for Hungarian, because suffixes (e.g., case endings) are often attached with a hyphen to the stem, and such case endings become independent tokens in the analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further problems", "sec_num": "4.5" }, { "text": "In this paper, we presented our experiment investigating the zero-shot cross-lingual performance of neural parser models based on the PDT/PTG meaning representation formalism. The model yields reasonable performance, and it can feasibly be applied in a semi-automatic annotation scenario. The specific language pairs were Czech-Hungarian vs. English-Hungarian. The former model performs better because the source and the target language share more typological characteristics, such as rich morphology, free word order, and pro drop, 
even though they belong to different language families. Moreover, the PDT/PTG annotation scheme, which utilizes a rich set of dependency relations as edge labels, seems to perform much better than, e.g., EDS, where edge labels are completely abstract. Strengths of the PDT/PTG model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "\u2022 some aspects of the model are extremely rich, \u2022 detailed classification and still efficient recognition of adjuncts and covert pronouns, including control, quasi-control, other coreference relations and existentially bound arguments, \u2022 most analyses are easy to interpret. Points that could be improved:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "\u2022 important features were discarded in the PDT-to-PTG conversion, \u2022 the act and pat argument relations are semantically empty (real thematic roles would be very welcome), \u2022 problems with some asymmetric coordinating structures (unreasonable contrast between, e.g., caus and reas), \u2022 too flat structures (e.g., the attachment of 'rhematizers' to the predicate instead of to what they modify).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "When decomposition is applied, anchoring of individual component nodes becomes non-trivial.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/ufal/perin 4 On the positive side, some distinctions present in UCCA, such as state vs. 
process are orthogonal to those in other annotation schemes, and these would be worth porting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note, however, that the English PTG model, which utilizes a rich set of edge label categories to encode grammatical relations, seems to be much less affected by typological differences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In PDT the value 'irreal' is used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was implemented with support provided by grants FK 125217 and PD 125216 of the National Research, Development and Innovation Office of Hungary financed under the FK 17 and PD 17 funding schemes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "UCCA: A semantics-based grammatical annotation scheme", "authors": [ { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -Long Papers", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omri Abend and Ari Rappoport. 2013. UCCA: A semantics-based grammatical annotation scheme. In Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -Long Pa- pers, pages 1-12, Potsdam, Germany. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "DRS at MRP 2020: Dressing up discourse representation structures as graphs", "authors": [ { "first": "Lasha", "middle": [], "last": "Abzianidze", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the CoNLL 2020 Shared Task: Cross-Framework Meaning Representation Parsing", "volume": "", "issue": "", "pages": "23--32", "other_ids": { "DOI": [ "10.18653/v1/2020.conll-shared.2" ] }, "num": null, "urls": [], "raw_text": "Lasha Abzianidze, Johan Bos, and Stephan Oepen. 2020. DRS at MRP 2020: Dressing up discourse representation structures as graphs. In Proceedings of the CoNLL 2020 Shared Task: Cross-Framework Meaning Representation Parsing, pages 23-32, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Abstract Meaning Representation for sembanking", "authors": [ { "first": "Laura", "middle": [], "last": "Banarescu", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Bonial", "suffix": "" }, { "first": "Shu", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Madalina", "middle": [], "last": "Georgescu", "suffix": "" }, { "first": "Kira", "middle": [], "last": "Griffitt", "suffix": "" }, { "first": "Ulf", "middle": [], "last": "Hermjakob", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", "volume": "", "issue": "", "pages": "178--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura 
Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proceedings of the 7th Linguis- tic Annotation Workshop and Interoperability with Discourse, pages 178-186, Sofia, Bulgaria. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.747" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Minimal recursion semantics: An introduction", "authors": [ { "first": "Ann", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Pollard", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Sag", "suffix": "" } ], "year": 2005, "venue": "Reseach On Language And Computation", "volume": "3", "issue": "", "pages": "281--332", "other_ids": { "DOI": [ "10.1007/s11168-006-6327-9" ] }, "num": null, "urls": [], "raw_text": "Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan Sag. 2005. Minimal recursion semantics: An intro- duction. Reseach On Language And Computation, 3:281-332.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. 
Associ- ation for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Towards an encyclopedia of compositional semantics: Documenting the interface of the English Resource Grammar", "authors": [ { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" }, { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", "volume": "", "issue": "", "pages": "875--881", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Flickinger, Emily M. Bender, and Stephan Oepen. 2014. Towards an encyclopedia of compositional se- mantics: Documenting the interface of the English Resource Grammar. In Proceedings of the Ninth In- ternational Conference on Language Resources and Evaluation (LREC'14), pages 875-881, Reykjavik, Iceland. European Language Resources Association (ELRA).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Sustainable Development and Refinement of Complex Linguistic Annotations at Scale", "authors": [ { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" }, { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "353--377", "other_ids": { "DOI": [ "10.1007/978-94-024-0881-2_14" ] }, "num": null, "urls": [], "raw_text": "Dan Flickinger, Stephan Oepen, and Emily M. Bender. 2017. Sustainable Development and Refinement of Complex Linguistic Annotations at Scale, pages 353- 377. Springer Netherlands, Dordrecht.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Slovak dependency treebank. 
LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics", "authors": [ { "first": "Katar\u00edna", "middle": [], "last": "Gajdo\u0161ov\u00e1", "suffix": "" }, { "first": "M\u00e1ria", "middle": [], "last": "\u0160imkov\u00e1", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katar\u00edna Gajdo\u0161ov\u00e1, M\u00e1ria \u0160imkov\u00e1, and et al. 2016. Slovak dependency treebank. LINDAT/CLARIAH- CZ digital library at the Institute of Formal and Ap- plied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Prague dependency treebankconsolidated 1.0", "authors": [ { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Bej\u010dek", "suffix": "" }, { "first": "Jaroslava", "middle": [], "last": "Hlavacova", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Mikulov\u00e1", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "5208--5218", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jan Haji\u010d, Eduard Bej\u010dek, Jaroslava Hlavacova, Marie Mikulov\u00e1, Milan Straka, Jan \u0160t\u011bp\u00e1nek, and Barbora \u0160t\u011bp\u00e1nkov\u00e1. 2020. Prague dependency treebank - consolidated 1.0. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 5208-5218, Marseille, France. 
European Language Resources Association.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Announcing Prague Czech-English Dependency Treebank 2.0", "authors": [ { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Haji\u010dov\u00e1", "suffix": "" }, { "first": "Jarmila", "middle": [], "last": "Panevov\u00e1", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Sgall", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Silvie", "middle": [], "last": "Cinkov\u00e1", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Fu\u010d\u00edkov\u00e1", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Mikulov\u00e1", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Pajas", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Popelka", "suffix": "" }, { "first": "Ji\u0159\u00ed", "middle": [], "last": "Semeck\u00fd", "suffix": "" }, { "first": "Jana", "middle": [], "last": "\u0160indlerov\u00e1", "suffix": "" }, { "first": "Jan", "middle": [], "last": "\u0160t\u011bp\u00e1nek", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Toman", "suffix": "" }, { "first": "Zde\u0148ka", "middle": [], "last": "Ure\u0161ov\u00e1", "suffix": "" }, { "first": "Zden\u011bk", "middle": [], "last": "\u017dabokrtsk\u00fd", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", "volume": "", "issue": "", "pages": "3153--3160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jan Haji\u010d, Eva Haji\u010dov\u00e1, Jarmila Panevov\u00e1, Petr Sgall, Ond\u0159ej Bojar, Silvie Cinkov\u00e1, Eva Fu\u010d\u00edkov\u00e1, Marie Mikulov\u00e1, Petr Pajas, Jan Popelka, Ji\u0159\u00ed Semeck\u00fd, Jana \u0160indlerov\u00e1, Jan \u0160t\u011bp\u00e1nek, Josef Toman, Zde\u0148ka Ure\u0161ov\u00e1, and Zden\u011bk \u017dabokrtsk\u00fd. 2012. 
Announc- ing Prague Czech-English Dependency Treebank 2.0. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 3153-3160, Istanbul, Turkey. Eu- ropean Language Resources Association (ELRA).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Prague arabic dependency treebank: Development in data and tools", "authors": [ { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Zem\u00e1nek", "suffix": "" } ], "year": 2004, "venue": "Proc. of the NEMLAR Intern. Conf. on Arabic Language Resources and Tools", "volume": "", "issue": "", "pages": "110--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jan Haji\u010d and Petr Zem\u00e1nek. 2004. Prague arabic de- pendency treebank: Development in data and tools. In In Proc. of the NEMLAR Intern. Conf. on Arabic Language Resources and Tools, pages 110-117.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization", "authors": [ { "first": "Junjie", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Siddhant", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junjie Hu, Sebastian Ruder, Aditya Siddhant, Gra- ham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generaliza- tion. CoRR, abs/2003.11080.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Who did what to whom? 
a contrastive study of syntacto-semantic dependencies", "authors": [ { "first": "Angelina", "middle": [], "last": "Ivanova", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" }, { "first": "Lilja", "middle": [], "last": "\u00d8vrelid", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Sixth Linguistic Annotation Workshop", "volume": "", "issue": "", "pages": "2--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angelina Ivanova, Stephan Oepen, Lilja \u00d8vrelid, and Dan Flickinger. 2012. Who did what to whom? a contrastive study of syntacto-semantic dependencies. In Proceedings of the Sixth Linguistic Annotation Workshop, pages 2-11, Jeju, Republic of Korea. As- sociation for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "From Discourse to Logic: Introduction to Model-theoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory", "authors": [ { "first": "Hans", "middle": [], "last": "Kamp", "suffix": "" }, { "first": "Uwe", "middle": [], "last": "Reyle", "suffix": "" } ], "year": 1993, "venue": "Studies in Linguistics and Philosophy", "volume": "42", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/978-94-017-1616-1" ] }, "num": null, "urls": [], "raw_text": "Hans Kamp and Uwe Reyle. 1993. From Discourse to Logic: Introduction to Model-theoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory, volume 42 of Studies in Lin- guistics and Philosophy. 
Springer, Dordrecht.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "75 languages, 1 model: Parsing Universal Dependencies universally", "authors": [ { "first": "Dan", "middle": [], "last": "Kondratyuk", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2779--2795", "other_ids": { "DOI": [ "10.18653/v1/D19-1279" ] }, "num": null, "urls": [], "raw_text": "Dan Kondratyuk and Milan Straka. 2019. 75 lan- guages, 1 model: Parsing Universal Dependencies universally. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2779-2795, Hong Kong, China. As- sociation for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Annotation on the tectogrammatical level in the prague dependency treebank. 
annotation manual", "authors": [ { "first": "Marie", "middle": [], "last": "Mikulov\u00e1", "suffix": "" }, { "first": "Alevtina", "middle": [], "last": "B\u00e9mov\u00e1", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Haji\u010dov\u00e1", "suffix": "" }, { "first": "Ji\u0159\u00ed", "middle": [], "last": "Havelka", "suffix": "" }, { "first": "Veronika", "middle": [], "last": "Kol\u00e1\u0159ov\u00e1", "suffix": "" }, { "first": "Lucie", "middle": [], "last": "Ku\u010dov\u00e1", "suffix": "" }, { "first": "Mark\u00e9ta", "middle": [], "last": "Lopatkov\u00e1", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Pajas", "suffix": "" }, { "first": "Jarmila", "middle": [], "last": "Panevov\u00e1", "suffix": "" }, { "first": "Magda", "middle": [], "last": "Raz\u00edmov\u00e1", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Sgall", "suffix": "" } ], "year": 2006, "venue": "\u00daFAL MFF UK", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie Mikulov\u00e1, Alevtina B\u00e9mov\u00e1, Jan Haji\u010d, Eva Haji\u010dov\u00e1, Ji\u0159\u00ed Havelka, Veronika Kol\u00e1\u0159ov\u00e1, Lucie Ku\u010dov\u00e1, Mark\u00e9ta Lopatkov\u00e1, Petr Pajas, Jarmila Panevov\u00e1, Magda Raz\u00edmov\u00e1, Petr Sgall, Jan \u0160t\u011bp\u00e1nek, Zde\u0148ka Ure\u0161ov\u00e1, Kate\u0159ina Vesel\u00e1, and Zden\u011bk \u017dabokrtsk\u00fd. 2006. Annotation on the tec- togrammatical level in the prague dependency tree- bank. annotation manual. 
Technical Report 30, \u00daFAL MFF UK, Prague, Czech Rep.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "From PDT 2.0 to PDT 3.0 (modifications and complements)", "authors": [ { "first": "Marie", "middle": [], "last": "Mikulov\u00e1", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Bej\u010dek", "suffix": "" }, { "first": "Ji\u0159\u00ed", "middle": [], "last": "M\u00edrovsk\u00fd", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Nedoluzhko", "suffix": "" }, { "first": "Jarmila", "middle": [], "last": "Panevov\u00e1", "suffix": "" }, { "first": "Lucie", "middle": [], "last": "Pol\u00e1kov\u00e1", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Stra\u0148\u00e1k", "suffix": "" }, { "first": "Magda", "middle": [], "last": "\u0160ev\u010d\u00edkov\u00e1", "suffix": "" }, { "first": "Zden\u011bk", "middle": [], "last": "\u017dabokrtsk\u00fd", "suffix": "" } ], "year": 2013, "venue": "\u00daFAL MFF UK", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie Mikulov\u00e1, Eduard Bej\u010dek, Ji\u0159\u00ed M\u00edrovsk\u00fd, Anna Nedoluzhko, Jarmila Panevov\u00e1, Lucie Pol\u00e1kov\u00e1, Pavel Stra\u0148\u00e1k, Magda \u0160ev\u010d\u00edkov\u00e1, and Zden\u011bk \u017dabokrtsk\u00fd. 2013. From PDT 2.0 to PDT 3.0 (modifications and complements). 
Technical report, \u00daFAL MFF UK, Prague, Czech Rep.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "In-house: An ensemble of pre-existing off-the-shelf parsers", "authors": [ { "first": "Yusuke", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "335--340", "other_ids": { "DOI": [ "10.3115/v1/S14-2056" ] }, "num": null, "urls": [], "raw_text": "Yusuke Miyao, Stephan Oepen, and Daniel Zeman. 2014. In-house: An ensemble of pre-existing off-the-shelf parsers. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 335-340, Dublin, Ireland. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "PAWS: A multi-lingual parallel treebank with anaphoric relations", "authors": [ { "first": "Anna", "middle": [], "last": "Nedoluzhko", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Nov\u00e1k", "suffix": "" }, { "first": "Maciej", "middle": [], "last": "Ogrodniczuk", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the First Workshop on Computational Models of Reference, Anaphora and Coreference", "volume": "", "issue": "", "pages": "68--76", "other_ids": { "DOI": [ "10.18653/v1/W18-0708" ] }, "num": null, "urls": [], "raw_text": "Anna Nedoluzhko, Michal Nov\u00e1k, and Maciej Ogrodniczuk. 2018. PAWS: A multi-lingual parallel treebank with anaphoric relations. In Proceedings of the First Workshop on Computational Models of Reference, Anaphora and Coreference, pages 68-76, New Orleans, Louisiana.
Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "MRP 2020: The second shared task on cross-framework and cross-lingual meaning representation parsing", "authors": [ { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" }, { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Lasha", "middle": [], "last": "Abzianidze", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Hajic", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Hershcovich", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Tim", "middle": [], "last": "O'Gorman", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the CoNLL 2020 Shared Task: Cross-Framework Meaning Representation Parsing", "volume": "", "issue": "", "pages": "1--22", "other_ids": { "DOI": [ "10.18653/v1/2020.conll-shared.1" ] }, "num": null, "urls": [], "raw_text": "Stephan Oepen, Omri Abend, Lasha Abzianidze, Johan Bos, Jan Hajic, Daniel Hershcovich, Bin Li, Tim O'Gorman, Nianwen Xue, and Daniel Zeman. 2020. MRP 2020: The second shared task on cross-framework and cross-lingual meaning representation parsing. In Proceedings of the CoNLL 2020 Shared Task: Cross-Framework Meaning Representation Parsing, pages 1-22, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Head-Driven Phrase Structure Grammar", "authors": [ { "first": "Carl", "middle": [], "last": "Pollard", "suffix": "" }, { "first": "Ivan", "middle": [ "A" ], "last": "Sag", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. The University of Chicago Press, Chicago.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "\u00daFAL at MRP 2020: Permutation-invariant semantic parsing in PERIN", "authors": [ { "first": "David", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the CoNLL 2020 Shared Task: Cross-Framework Meaning Representation Parsing", "volume": "", "issue": "", "pages": "53--64", "other_ids": { "DOI": [ "10.18653/v1/2020.conll-shared.5" ] }, "num": null, "urls": [], "raw_text": "David Samuel and Milan Straka. 2020. \u00daFAL at MRP 2020: Permutation-invariant semantic parsing in PERIN. In Proceedings of the CoNLL 2020 Shared Task: Cross-Framework Meaning Representation Parsing, pages 53-64, Online. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "The Meaning of the Sentence in Its Semantic and Pragmatic Aspects", "authors": [ { "first": "Petr", "middle": [], "last": "Sgall", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Haji\u010dov\u00e1", "suffix": "" }, { "first": "Jarmilla", "middle": [], "last": "Panevov\u00e1", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Petr Sgall, Eva Haji\u010dov\u00e1, and Jarmilla Panevov\u00e1. 1986. The Meaning of the Sentence in Its Semantic and Pragmatic Aspects.
Reidel, Dordrecht.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Unitrans: Unifying model transfer and data transfer for cross-lingual named entity recognition with unlabeled data", "authors": [ { "first": "Qianhui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zijia", "middle": [], "last": "Lin", "suffix": "" }, { "first": "B\u00f6rje", "middle": [ "F" ], "last": "Karlsson", "suffix": "" }, { "first": "Biqing", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Jian-Guang", "middle": [], "last": "Lou", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20", "volume": "", "issue": "", "pages": "3926--3932", "other_ids": { "DOI": [ "10.24963/ijcai.2020/543" ] }, "num": null, "urls": [], "raw_text": "Qianhui Wu, Zijia Lin, B\u00f6rje F. Karlsson, Biqing Huang, and Jian-Guang Lou. 2020. Unitrans: Unifying model transfer and data transfer for cross-lingual named entity recognition with unlabeled data. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 3926-3932. International Joint Conferences on Artificial Intelligence Organization. Main track.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "FGD at MRP 2020: Prague tectogrammatical graphs", "authors": [ { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Hajic", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the CoNLL 2020 Shared Task: Cross-Framework Meaning Representation Parsing", "volume": "", "issue": "", "pages": "33--39", "other_ids": { "DOI": [ "10.18653/v1/2020.conll-shared.3" ] }, "num": null, "urls": [], "raw_text": "Daniel Zeman and Jan Hajic. 2020. FGD at MRP 2020: Prague tectogrammatical graphs. In Proceedings of the CoNLL 2020 Shared Task: Cross-Framework Meaning Representation Parsing, pages 33-39, Online.
Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "P\u00e9ter \u00fajs\u00e1got olvas, Mari k\u00e9preg\u00e9nyt. 'Peter is reading a newspaper, Mary a comic.' \u2013 Gap in second clause, SOV word order in first clause. Minor error in the analysis.", "uris": null }, "FIGREF1": { "num": null, "type_str": "figure", "text": "P\u00e9ter \u00fajs\u00e1got, Mari k\u00e9preg\u00e9nyt olvas. \u2013 Gap in first clause, SOV word order in second clause. Wrong analysis of first clause. P\u00e9ter olvas \u00fajs\u00e1got, Mari k\u00e9preg\u00e9nyt. \u2013 Gap in second clause, SVO word order in first clause. Perfect analysis.", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "\u00dajs\u00e1got P\u00e9ter, k\u00e9preg\u00e9nyt Mari olvas. \u2013 Gap in first clause, OSV word order in second clause. Wrong analysis.", "uris": null }, "FIGREF3": { "num": null, "type_str": "figure", "text": "Output of the Czech PTG grammar for various Hungarian gapping constructions. Only the analysis in 1c is completely correct.", "uris": null }, "TABREF1": { "num": null, "content":