{ "paper_id": "P19-1013", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:30:56.779901Z" }, "title": "Automatic Generation of High Quality CCGbanks for Parser Domain Adaptation", "authors": [ { "first": "Masashi", "middle": [], "last": "Yoshikawa", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nara Institute of Science and Technology", "location": { "settlement": "Nara", "country": "Japan" } }, "email": "" }, { "first": "Yoshikawa", "middle": [ "Masashi" ], "last": "Yh8@", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Koji", "middle": [], "last": "Mineshima", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ochanomizu University", "location": { "settlement": "Tokyo", "country": "Japan" } }, "email": "mineshima.koji@ocha.ac.jp" }, { "first": "Daisuke", "middle": [], "last": "Bekki", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ochanomizu University", "location": { "settlement": "Tokyo", "country": "Japan" } }, "email": "bekki@is.ocha.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose a new domain adaptation method for Combinatory Categorial Grammar (CCG) parsing, based on the idea of automatic generation of CCG corpora exploiting cheaper resources of dependency trees. Our solution is conceptually simple, and not relying on a specific parser architecture, making it applicable to the current best-performing parsers. We conduct extensive parsing experiments with detailed discussion; on top of existing benchmark datasets on (1) biomedical texts and (2) question sentences, we create experimental datasets of (3) speech conversation and (4) math problems. 
When applied to the proposed method, an off-the-shelf CCG parser shows significant performance gains, improving from 90.7% to 96.6% on speech conversation, and from 88.5% to 96.8% on math problems.", "pdf_parse": { "paper_id": "P19-1013", "_pdf_hash": "", "abstract": [ { "text": "We propose a new domain adaptation method for Combinatory Categorial Grammar (CCG) parsing, based on the idea of automatic generation of CCG corpora exploiting cheaper resources of dependency trees. Our solution is conceptually simple, and not relying on a specific parser architecture, making it applicable to the current best-performing parsers. We conduct extensive parsing experiments with detailed discussion; on top of existing benchmark datasets on (1) biomedical texts and (2) question sentences, we create experimental datasets of (3) speech conversation and (4) math problems. When applied to the proposed method, an off-the-shelf CCG parser shows significant performance gains, improving from 90.7% to 96.6% on speech conversation, and from 88.5% to 96.8% on math problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The recent advancement of Combinatory Categorial Grammar (CCG; Steedman (2000) ) parsing Yoshikawa et al., 2017) , combined with formal semantics, has enabled high-performing natural language inference systems (Abzianidze, 2017; Mart\u00ednez-G\u00f3mez et al., 2017) . 
We are interested in transferring this success to a range of applications, e.g., inference systems on scientific papers and speech conversation.", "cite_spans": [ { "start": 63, "end": 78, "text": "Steedman (2000)", "ref_id": "BIBREF36" }, { "start": 89, "end": 112, "text": "Yoshikawa et al., 2017)", "ref_id": "BIBREF39" }, { "start": 210, "end": 228, "text": "(Abzianidze, 2017;", "ref_id": "BIBREF0" }, { "start": 229, "end": 257, "text": "Mart\u00ednez-G\u00f3mez et al., 2017)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To achieve this goal, it is essential to improve CCG parsing accuracy on new domains, i.e., to solve the notorious problem of domain adaptation for statistical parsers, which has long been addressed in the literature. Especially in CCG parsing, prior work (Rimell and Clark, 2008; has taken advantage of highly informative categories, which determine most of the sentence structure once correctly assigned to words. It has been demonstrated that annotating only pre-terminal categories is sufficient to adapt a CCG parser to new domains. However, the solution is tied to a specific parser architecture, making it non-trivial to apply the method to the current state-of-the-art parsers Yoshikawa et al., 2017; Stanojevi\u0107 and Steedman, 2019) , which require full parse annotation. Additionally, some ambiguities remain unresolved with supertags alone, especially in languages other than English (as discussed in Yoshikawa et al. (2017) ), to which the method is not portable.", "cite_spans": [ { "start": 253, "end": 277, "text": "(Rimell and Clark, 2008;", "ref_id": "BIBREF33" }, { "start": 699, "end": 722, "text": "Yoshikawa et al., 2017;", "ref_id": "BIBREF39" }, { "start": 723, "end": 753, "text": "Stanojevi\u0107 and Steedman, 2019)", "ref_id": "BIBREF35" }, { "start": 923, "end": 946, "text": "Yoshikawa et al. 
(2017)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Distributional embeddings have proven to be powerful tools for addressing the issue of domain adaptation, with wide-ranging applications in NLP, not to mention syntactic parsing (Lewis and Steedman, 2014b; Mitchell and Steedman, 2015; . Among others, reports huge performance boosts in constituency parsing using contextualized word embeddings , which is orthogonal to our work, and the combination shows huge gains. Including Joshi et al. (2018) , there are studies that learn from partially annotated trees (Mirroshandel and Nasr, 2011; Li et al., 2016; , again, most of which exploit a specific parser architecture.", "cite_spans": [ { "start": 175, "end": 202, "text": "(Lewis and Steedman, 2014b;", "ref_id": "BIBREF22" }, { "start": 203, "end": 231, "text": "Mitchell and Steedman, 2015;", "ref_id": "BIBREF29" }, { "start": 414, "end": 443, "text": "Including Joshi et al. (2018)", "ref_id": null }, { "start": 504, "end": 533, "text": "(Mirroshandel and Nasr, 2011;", "ref_id": "BIBREF28" }, { "start": 534, "end": 550, "text": "Li et al., 2016;", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we propose a conceptually simpler approach to the issue, which is agnostic to the parser architecture, namely, automatic generation of CCGbanks (i.e., CCG treebanks) 1 for new domains, by exploiting cheaper resources of dependency trees. Specifically, we train a deep conversion model to map a dependency tree to a CCG tree, on aligned annotations of the Penn Treebank (Marcus et al., 1993) and the English CCGbank (Hockenmaier and Steedman, 2007) (Figure 1a) . When we need a CCG parser tailored for Figure 1 : Overview of the proposed method. 
(a) A neural network-based model is trained to convert a dependency tree to a CCG one using aligned annotations on WSJ part of the Penn Treebank and the English CCGbank. (b) The trained converter is applied to an existing dependency corpus (e.g., the Genia corpus) to generate a CCGbank, (c) which is then used to fine-tune the parameters of an off-the-shelf CCG parser.", "cite_spans": [ { "start": 383, "end": 404, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF24" }, { "start": 429, "end": 461, "text": "(Hockenmaier and Steedman, 2007)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 462, "end": 473, "text": "(Figure 1a)", "ref_id": null }, { "start": 515, "end": 523, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "a new domain, the trained converter is applied to a dependency corpus in that domain to obtain a new CCGbank (1b), which is then used to fine-tune an off-the-shelf CCG parser (1c). The assumption that we have a dependency corpus in that target domain is not demanding given the abundance of existing dependency resources along with its developed annotation procedure, e.g., Universal Dependencies (UD) project (Nivre et al., 2016) , and the cheaper cost to train an annotator.", "cite_spans": [ { "start": 410, "end": 430, "text": "(Nivre et al., 2016)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One of the biggest bottlenecks of syntactic parsing is handling of countless unknown words. It is also true that there exist such unfamiliar input data types to our converter, e.g., disfluencies in speech and symbols in math problems. We address these issues by constrained decoding ( \u00a74), enabled by incorporating a parsing technique into our converter. 
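The three-step recipe of Figure 1 can be sketched in a few lines of code. This is a minimal, runnable illustration only: every name in it (`Converter`, `generate_ccgbank`, `fine_tune_data`) is a hypothetical stand-in for the components described here, not the API of depccg or any released tool.

```python
# Minimal sketch of the Figure 1 pipeline. All names are hypothetical
# stand-ins for the described components, not a real API.

class Converter:
    """Stub for the trained dependency-to-CCG model (step a)."""
    def convert(self, dep_tree):
        # A real model would run the TreeLSTM encoder and A* decoder here.
        return {"source": dep_tree, "formalism": "CCG"}

def generate_ccgbank(converter, dep_corpus):
    # (b) Apply the trained converter to a target-domain dependency
    # corpus (e.g., the Genia corpus) to obtain a new CCGbank.
    return [converter.convert(tree) for tree in dep_corpus]

def fine_tune_data(original_bank, new_ccgbank):
    # (c) Fine-tune an off-the-shelf parser on the mixture of the
    # generated trees and its original training data.
    return original_bank + new_ccgbank

genia_dep_trees = ["dep_tree_1", "dep_tree_2"]   # toy stand-ins
new_bank = generate_ccgbank(Converter(), genia_dep_trees)
mixture = fine_tune_data(["wsj_ccg_tree"], new_bank)
print(len(mixture))  # 3
```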
Nevertheless, syntactic structures exhibit less variance across textual domains than words do; our proposed converter suffers less from such unseen events, and expectedly produces high-quality CCGbanks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The work closest to ours is Jiang et al. (2018) , where a conversion model is trained to map dependency treebanks of different annotation principles, which is used to increase the amount of labeled data in the target-side treebank. Our work extends theirs and solves a more challenging task; the mapping to learn is to more complex CCG trees, and it is applied to datasets coming from plainly different natures (i.e., domains). Some prior studies design conversion algorithms to induce CCGbanks for languages other than English from dependency treebanks (Bos et al., 2009; Ambati et al., 2013) . Though the methods may be applied to our problem, they usually cannot cover the entire dataset, consequently discarding sentences with characteristic features. On top of that, unavoidable information gaps between the two syntactic formalisms may at most be addressed probabilistically.", "cite_spans": [ { "start": 28, "end": 47, "text": "Jiang et al. (2018)", "ref_id": "BIBREF13" }, { "start": 554, "end": 572, "text": "(Bos et al., 2009;", "ref_id": "BIBREF5" }, { "start": 573, "end": 593, "text": "Ambati et al., 2013)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To verify the generalizability of our approach, on top of the existing benchmarks on (1) biomedical texts and (2) question sentences (Rimell and Clark, 2008) , we conduct parsing experiments on (3) speech conversation texts, which exhibit other challenges such as handling informal expressions and lengthy sentences. 
We create a CCG version of the Switchboard corpus (Godfrey et al., 1992) , consisting of full train/dev/test sets of automatically generated trees and manually annotated 100 sentences for a detailed evaluation. Additionally, we manually construct experimental data for parsing (4) math problems (Seo et al., 2015) , for which the importance of domain adaptation is previously demonstrated . We observe huge additive gains in the performance of the depccg parser (Yoshikawa et al., 2017) , by combining contextualized word embeddings and our domain adaptation method: in terms of unlabeled F1 scores, 90.68% to 95.63% on speech conversation, and 88.49% to 95.83% on math problems, respectively. 2 Figure 2 : Example CCG derivation tree for phrase cats that Kyle wants to see. Categories are combined using rules such as an application rule (marked with \">\", X/Y Y \u21d2 X) and a composition rule (\">B\": X/Y Y/Z \u21d2 X/Z). See Steedman (2000) for the detail.", "cite_spans": [ { "start": 133, "end": 157, "text": "(Rimell and Clark, 2008)", "ref_id": "BIBREF33" }, { "start": 367, "end": 389, "text": "(Godfrey et al., 1992)", "ref_id": "BIBREF10" }, { "start": 612, "end": 630, "text": "(Seo et al., 2015)", "ref_id": "BIBREF34" }, { "start": 779, "end": 803, "text": "(Yoshikawa et al., 2017)", "ref_id": "BIBREF39" }, { "start": 1235, "end": 1250, "text": "Steedman (2000)", "ref_id": "BIBREF36" } ], "ref_spans": [ { "start": 1013, "end": 1021, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "cats N cats NP cats un that (NP x \\NP x )/(S/NP x ) Kyle NP kyle S y /(S y \\NP kyle ) T wants (S wants \\NP z,1 )/(S w,2 \\NP z ) to (S u \\NP v )/(S u \\NP v ) see (S see \\NP s,1 )/NP t,2 (S see \\NP v )/NP t : u = see, v = s >B (S wants \\NP z )/NP t : w = see, z = v >B S y /NP t : y = wants, z = kyle >B NP x \\NP x : x = t > NP cats : x = cats <", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", 
"sec_num": "1" }, { "text": "CCG is a lexicalized grammatical formalism, where words and phrases are assigned categories with complex internal structures. A category X/Y (or X\\Y) represents a phrase that combines with a Y phrase on its right (or left), and becomes an X phrase. As such, a category (S\\NP)/NP represents an English transitive verb which takes NPs on both sides and becomes a sentence (S).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "The semantic structure of a sentence can be extracted using the functional nature of CCG categories. Figure 2 shows an example CCG derivation of a phrase cats that Kyle wants to see, where categories are marked with variables and constants (e.g., kyle in NP kyle ), and argument ids in the case of verbs (subscripts in (S see \\NP s,1 )/NP t,2 ). Unification is performed on these variables and constants in the course of derivation, resulting in chains of equations s = v = z = kyle, and t = x = cats, successfully recovering the first and second argument of see: Kyle and cats (i.e., capturing long-range dependencies). What is demonstrated here is performed in the standard evaluation of CCG parsing, where the number of such correctly predicted predicate-argument relations is calculated (for the detail, see Clark et al. (2002) ). Remarkably, it is also the basis of CCG-based semantic parsing (Abzianidze, 2017; Mart\u00ednez-G\u00f3mez et al., 2017; Matsuzaki et al., 2017) , where the above simple unification rule is replaced with more sophisticated techniques such as \u03bb-calculus.", "cite_spans": [ { "start": 812, "end": 831, "text": "Clark et al. 
(2002)", "ref_id": "BIBREF7" }, { "start": 898, "end": 916, "text": "(Abzianidze, 2017;", "ref_id": "BIBREF0" }, { "start": 917, "end": 945, "text": "Mart\u00ednez-G\u00f3mez et al., 2017;", "ref_id": "BIBREF26" }, { "start": 946, "end": 969, "text": "Matsuzaki et al., 2017)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 101, "end": 109, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "There are two major resources in CCG: the English CCGbank (Hockenmaier and Steedman, 2007) for news texts, and the Groningen Meaning Bank (Bos et al., 2017) for wider domains, including Aesop's fables. However, when one wants a CCG parser tuned for a specific domain, he or she faces the issue of its high annotation cost:", "cite_spans": [ { "start": 58, "end": 90, "text": "(Hockenmaier and Steedman, 2007)", "ref_id": "BIBREF11" }, { "start": 138, "end": 156, "text": "(Bos et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "\u2022 The annotation requires linguistic expertise, being able to keep track of semantic composition performed during a derivation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "\u2022 An annotated tree must strictly conform to the grammar, e.g., inconsistencies such as combining N and S\\NP result in ill-formed trees and hence must be disallowed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "We relax these assumptions by using dependency tree, which is a simpler representation of the syntactic structure, i.e., it lacks information of longrange dependencies and conjunct spans of a coordination structure. 
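How such categories combine can be made concrete with a toy implementation of the rules shown in Figure 2 (forward/backward application and forward composition). This is an illustrative sketch, not the parser's actual code:

```python
# Toy CCG category combinators: ">" (forward application),
# "<" (backward application), ">B" (forward composition).

class Cat:
    """A CCG category: either atomic (S, NP, N) or functional (X/Y, X\\Y)."""
    def __init__(self, result=None, slash=None, arg=None, atom=None):
        self.atom, self.result, self.slash, self.arg = atom, result, slash, arg

    def __eq__(self, other):
        return str(self) == str(other)

    def __repr__(self):
        return self.atom if self.atom else f"({self.result}{self.slash}{self.arg})"

def atomic(name):
    return Cat(atom=name)

def func(result, slash, arg):
    return Cat(result=result, slash=slash, arg=arg)

def forward_application(left, right):
    # ">":  X/Y  Y  =>  X
    return left.result if left.slash == "/" and left.arg == right else None

def backward_application(left, right):
    # "<":  Y  X\Y  =>  X
    return right.result if right.slash == "\\" and right.arg == left else None

def forward_composition(left, right):
    # ">B":  X/Y  Y/Z  =>  X/Z
    if left.slash == "/" and right.slash == "/" and left.arg == right.result:
        return func(left.result, "/", right.arg)
    return None

S, NP = atomic("S"), atomic("NP")
transitive = func(func(S, "\\", NP), "/", NP)   # (S\NP)/NP, e.g. "see"

vp = forward_application(transitive, NP)        # S\NP, by ">"
sentence = backward_application(NP, vp)         # S, by "<"
print(vp, sentence)  # (S\NP) S
```

Applying the transitive verb to an object NP on the right and then a subject NP on the left derives S, mirroring the bottom of the Figure 2 derivation.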
However, due to its simplicity and flexibility, it is easier to train an annotator, and there exist plenty of accessible dependency-based resources, which we exploit in this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "3 Dependency-to-CCG Converter", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "We propose a domain adaptation method based on the automatic generation of a CCGbank out of a dependency treebank in the target domain. This is achieved by our dependency-to-CCG converter, a neural network model consisting of a dependency tree encoder and a CCG tree decoder. In the encoder, higher-order interactions among dependency edges are modeled with a bidirectional TreeLSTM (Miwa and Bansal, 2016) , which is important to facilitate mapping from a dependency tree to a more complex CCG tree. Due to the strict nature of CCG grammar, we model the output space of CCG trees explicitly 3 ; our decoder is inspired by the recent success of A* CCG parsing (Lewis and Steedman, 2014a; Yoshikawa et al., 2017) , where the most probable valid tree is found using A* parsing (Klein and D. Manning, 2003) . In the following, we describe the details of the proposed converter.", "cite_spans": [ { "start": 383, "end": 406, "text": "(Miwa and Bansal, 2016)", "ref_id": "BIBREF30" }, { "start": 660, "end": 687, "text": "(Lewis and Steedman, 2014a;", "ref_id": "BIBREF21" }, { "start": 688, "end": 711, "text": "Yoshikawa et al., 2017)", "ref_id": "BIBREF39" }, { "start": 775, "end": 803, "text": "(Klein and D. Manning, 2003)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "Firstly, we define a probabilistic model of the dependency-to-CCG conversion process. According to Yoshikawa et al. 
(2017) , the structure of a CCG tree y for sentence", "cite_spans": [ { "start": 99, "end": 122, "text": "Yoshikawa et al. (2017)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "x = (x 1 , ..., x N )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "is almost uniquely determined 4 if a sequence of the pre-terminal CCG categories (supertags) c = (c 1 , ..., c N ) and a dependency structure", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "d = (d 1 , ..., d N ), where d i \u2208 {0, .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": ".., N } is an index of dependency parent of x i (0 represents a root node), are provided. Note that the dependency structure d is generally different from an input dependency tree. 5 While supertags are highly informative about the syntactic structure (Bangalore and Joshi, 1999) , remaining ambiguities such as attachment ambiguities need to be modeled using dependencies. 
Let the input dependency tree of sentence x be z = (p, d , ), where p i is a part-of-speech tag of x i , d i an index of its dependency parent, i is the label of the corresponding dependency edge, then the conversion process is expressed as follows:", "cite_spans": [ { "start": 252, "end": 279, "text": "(Bangalore and Joshi, 1999)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "6 P (y|x, z) = N i=1 p tag (c i |x, z) N i=1 p dep (d i |x, z).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "Based on this formulation, we model c i and d i conditioned on a dependency tree z, and search for y that maximizes P (y|x, z) using A* parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "Encoder A bidirectional TreeLSTM consists of two distinct TreeLSTMs (Tai et al., 2015) . A bottom-up TreeLSTM recursively computes a hidden vector h \u2191 i for each x i , from vector representation e i of the word and hidden vectors of its dependency children {h \u2191 j |d j = i}. A top-down TreeL-STM, in turn, computes h \u2193 i using e i and a hidden vector of the dependency parent h \u2193 d i . 
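The factored probability defined above is simple enough to state directly in code. The sketch below uses invented toy distributions; in the converter, p_tag and p_dep come from the neural scorers described in this section.

```python
# Sketch of the factored tree score:
#   P(y | x, z) = prod_i p_tag(c_i | x, z) * prod_i p_dep(d_i | x, z),
# computed in log space. The distributions below are toy values for
# illustration only.
import math

def log_prob_tree(supertags, heads, p_tag, p_dep):
    """supertags[i]: category of word i; heads[i]: its head index (0 = root).
    p_tag[i] / p_dep[i] map candidate tags / heads to probabilities."""
    return sum(math.log(p_tag[i][c]) for i, c in enumerate(supertags)) + \
           sum(math.log(p_dep[i][h]) for i, h in enumerate(heads))

# Toy distributions for a two-word input (invented for illustration).
p_tag = [{"NP": 0.9, "N": 0.1}, {"(S\\NP)/NP": 0.8, "S\\NP": 0.2}]
p_dep = [{2: 0.7, 0: 0.3}, {0: 0.6, 1: 0.4}]
lp = log_prob_tree(["NP", "(S\\NP)/NP"], [2, 0], p_tag, p_dep)
print(round(math.exp(lp), 3))  # 0.9 * 0.8 * 0.7 * 0.6 = 0.302 (rounded)
```

A* decoding then searches for the supertag/head assignment maximizing this product subject to the tree being a valid CCG derivation.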
In total, a bidirectional TreeLSTM returns concatenations of hidden vectors for all words:", "cite_spans": [ { "start": 68, "end": 86, "text": "(Tai et al., 2015)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "h i = h \u2191 i \u2295 h \u2193 i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "We encode a dependency tree as follows, where e v denotes the vector representation of variable v, and \u2126 and \u039e d are shorthand notations of the series of operations of sequential and tree bidirectional LSTMs, respectively:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "e 1 , ..., e N = \u2126(e p 1 \u2295 e x 1 , ..., e p N \u2295 e x N ), h 1 , ..., h N = \u039e d (e 1 \u2295 e 1 , ..., e N \u2295 e N ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "Decoder The decoder part adopts the same architecture as in Yoshikawa et al. (2017) , where p dep|tag probabilities are computed on top of {h i } i\u2208[0,N ] , using a biaffine layer (Dozat and Manning, 2017 ) and a bilinear layer, respectively, which are then used in A* parsing to find the most probable CCG tree.", "cite_spans": [ { "start": 60, "end": 83, "text": "Yoshikawa et al. 
(2017)", "ref_id": "BIBREF39" }, { "start": 180, "end": 204, "text": "(Dozat and Manning, 2017", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "Firstly a biaffine layer is used to compute unigram head probabilities p dep as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "r i = \u03c8 dep child (h i ), r j = \u03c8 dep head (h j ), s i,j = r T i W r j + w T r j , p dep (d i = j|x, z) \u221d exp(s i,j ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "where \u03c8 denotes a multi-layer perceptron. The probabilities p tag are computed by a bilinear transformation of vector encodings x i and xd i , whered i is the most probable dependency head of x i with respect to p dep :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "d i = arg max j p dep (d i = j|x, z). q i = \u03c8 tag child (h i ), qd i = \u03c8 tag head (hd i ), s i,c = q T i W c qd i + v T c q i + u T c qd i + b c , p tag (c i = c|x, z) \u221d exp(s i,c ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "A* Parsing Since the probability P (y|x, z) of a CCG tree y is simply decomposable into probabilities of subtrees, the problem of finding the most probable tree can be solved with a chart-based algorithm. In this work, we use one of such algorithms, A* parsing (Klein and D. Manning, 2003) . A* parsing is a generalization of A* search for shortest path problem on a graph, and it controls subtrees (corresponding to a node in a graph case) to visit next using a priority queue. We follow Yoshikawa et al. 
(2017) exactly in formulating our A* parsing, and adopt an admissible heuristic by taking the sum of the max p tag|dep probabilities outside a subtree. The advantage of employing an A* parsing-based decoder is not limited to the optimality guarantee of the decoded tree; it enables constrained decoding, which is described next.", "cite_spans": [ { "start": 261, "end": 289, "text": "(Klein and D. Manning, 2003)", "ref_id": "BIBREF16" }, { "start": 489, "end": 512, "text": "Yoshikawa et al. (2017)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2" }, { "text": "While our method is a fully automated treebank generation method, there are often cases where we want to control the form of output trees by using external language resources. For example, when generating a CCGbank for biomedical domain, it will be convenient if a disease dictionary is utilized to ensure that a complex disease name in a text is always assigned the category NP. In our decoder based on A* parsing, it is possible to perform such a controlled generation of a CCG tree by imposing constraints on the space of trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Decoding", "sec_num": "4" }, { "text": "A constraint is a triplet (c, i, j) representing a constituent of category c spanning over words x i , ..., x j . 
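As an illustrative sketch (not the decoder's actual code), such a span constraint can veto candidate chart items with a check along the following lines; `unary_rules` here is a toy stand-in for the grammar's unary rules.

```python
# Candidate chart items are triples (category, k, l); a constraint is
# (c, i, j). A candidate is rejected when it crosses the constraint
# span, or covers the same span with an incompatible category.
unary_rules = {("N", "NP")}   # N => NP, so (NP (N dog)) stays allowed

def violates(constraint, item):
    (c, i, j), (cat, k, l) = constraint, item
    # 1. Crossing spans: i < k <= j < l  or  k < i <= l < j.
    if i < k <= j < l or k < i <= l < j:
        return True
    # 2. Identical span, different category, no unary rule cat => c.
    if (i, j) == (k, l) and cat != c and (cat, c) not in unary_rules:
        return True
    return False

np_constraint = ("NP", 3, 5)               # words 3..5 must form an NP
print(violates(np_constraint, ("S", 4, 7)))      # True: crosses the span
print(violates(np_constraint, ("S/NP", 3, 5)))   # True: same span, bad cat
print(violates(np_constraint, ("N", 3, 5)))      # False: N => NP exists
```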
The constrained decoding is achieved by refusing to add a subtree (denoted as (c , k, l), likewise, with its category and span) to the priority queue when it meets one of the conditions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Decoding", "sec_num": "4" }, { "text": "\u2022 The spans overlap: i < k \u2264 j < l or k < i \u2264 l < j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Decoding", "sec_num": "4" }, { "text": "\u2022 The spans are identical (i = k and j = l), while the categories are different (c = c ) and no category c exists such that c \u21d2 c is a valid unary rule.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Decoding", "sec_num": "4" }, { "text": "The last condition on unary rule is necessary to prevent structures such as (NP (N dog)) from being accidentally discarded, when using a constraint to make a noun phrase to be NP. A set of multiple constraints are imposed by checking the above conditions for each of the constraints when adding a new item to the priority queue. When one wants to constrain a terminal category to be c, that is achieved by manipulating p tag : p tag (c|x, z) = 1 and for all categories c = c, p tag (c |x, z) = 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Decoding", "sec_num": "4" }, { "text": "We evaluate our method in terms of performance gain obtained by fine-tuning an off-the-shelf CCG parser depccg (Yoshikawa et al., 2017) , on a variety of CCGbanks obtained by converting existing dependency resources using the method. In short, the method of depccg is equivalent to omitting the dependence on a dependency tree z from P (y|x, z) of our converter model, and running an A* parsing-based decoder on p tag|dep calculated on h 1 , ..., h N = \u2126(e x 1 , ..., e x N ), as in our method. 
In the plain depccg, the word representation e x i is a concatenation of GloVe 7 vectors and vector representations of affixes. As in the previous work, the parser is trained on both the English CCGbank (Hockenmaier and Steedman, 2007) and the tri-training dataset by Yoshikawa et al. (2017) . In this work, on top of that, we include as a baseline a setting where the affix vectors ), 8 which we find marks the current best scores in the English CCGbank parsing ( Table 1) .", "cite_spans": [ { "start": 111, "end": 135, "text": "(Yoshikawa et al., 2017)", "ref_id": "BIBREF39" }, { "start": 698, "end": 730, "text": "(Hockenmaier and Steedman, 2007)", "ref_id": "BIBREF11" }, { "start": 763, "end": 786, "text": "Yoshikawa et al. (2017)", "ref_id": "BIBREF39" } ], "ref_spans": [ { "start": 960, "end": 968, "text": "Table 1)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental Settings", "sec_num": "5.1" }, { "text": "The evaluation is based on the standard evaluation metric, where the number of correctly predicted predicate argument relations is calculated ( \u00a72), where labeled metrics take into account the category through which the dependency is constructed, while unlabeled ones do not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "5.1" }, { "text": "The input word representations to the converter are the concatenation of GloVe and ELMo representations. Each of e p i and e i is randomly initialized 50-dimensional vectors, and the two-layer sequential LSTMs \u2126 outputs 300 dimensional vectors, as well as bidirectional TreeLSTM \u039e d , whose outputs are then fed into 1-layer 100-dimensional MLPs with ELU non-linearity (Clevert et al., 2016) . 
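As a toy illustration of these \u03c8 layers (one linear layer followed by ELU), with tiny dimensions chosen for readability rather than the 100/300 dimensions used in the paper; this is a from-scratch sketch, not the actual implementation:

```python
# Toy version of a 1-layer MLP with ELU non-linearity applied to an
# LSTM output vector. Dimensions are tiny for readability only.
import math

def elu(x, alpha=1.0):
    # ELU (Clevert et al., 2016): identity for x > 0, alpha*(e^x - 1) otherwise.
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def mlp(h, W, b):
    """One linear layer followed by ELU: psi(h) in the paper's notation."""
    return [elu(sum(w * x for w, x in zip(row, h)) + b_i)
            for row, b_i in zip(W, b)]

h = [0.5, -1.0, 2.0]                     # a toy hidden vector
W = [[0.1, 0.2, 0.3], [-0.4, 0.5, -0.6]]
b = [0.0, 0.1]
out = mlp(h, W, b)
print(len(out))  # 2
```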
The training is done by minimizing the sum of negative log likelihood of p tag|dep using the Adam optimizer (with \u03b2 1 = \u03b2 2 = 0.9), on a dataset detailed below.", "cite_spans": [ { "start": 369, "end": 391, "text": "(Clevert et al., 2016)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": null }, { "text": "Data Processing In this work, the input tree to the converter follows Universal Dependencies (UD) v1 (Nivre et al., 2016) . Constituency-based treebanks are converted using the Stanford Converter 9 to obtain UD trees. The output dependency structure d follows Head First dependency tree (Yoshikawa et al., 2017) , where a dependency arc is always from left to right. The conversion model is trained to map UD trees in the Wall Street Journal (WSJ) portion 2-21 of the Penn Treebank (Marcus et al., 1993) to its corresponding CCG trees in the English CCGbank (Hockenmaier and Steedman, 2007) . Table 2 : Per-relation F1 scores of the proposed converter and depccg + ELMo (Parser). 
\"#\" column shows the number of occurrence of the phenomenon.", "cite_spans": [ { "start": 101, "end": 121, "text": "(Nivre et al., 2016)", "ref_id": "BIBREF31" }, { "start": 287, "end": 311, "text": "(Yoshikawa et al., 2017)", "ref_id": "BIBREF39" }, { "start": 482, "end": 503, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF24" }, { "start": 575, "end": 590, "text": "Steedman, 2007)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 593, "end": 600, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Implementation Details", "sec_num": null }, { "text": "Fine-tuning the CCG Parser In each of the following domain adaptation experiments, newly obtained CCGbanks are used to fine-tune the parameters of the baseline parser described above, by re-training it on the mixture of labeled examples from the new target-domain CCGbank, the English CCGbank, and the tri-training dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": null }, { "text": "First, we examine whether the trained converter can produce high-quality CCG trees, by applying it to dependency trees in the test portion (WSJ23) of Penn Treebank and then calculating the standard evaluation metrics between the resulting trees and the corresponding gold trees (Table 1) . This can be regarded as evaluating the upper bound of the conversion quality, since the evaluated data comes from the same domain as the converter's training data. 
Our converter shows much higher scores than the current best-performing depccg combined with ELMo (1.5% and 2.17% up in unlabeled/labeled F1 scores), suggesting that, using the proposed converter, we can obtain CCGbanks of high quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating Converter's Performance", "sec_num": "5.2" }, { "text": "Inspecting the details, the improvement is observed across the board (Table 2) ; the converter precisely handles PP-attachment (2a), a notoriously hard parsing problem, by utilizing the input's pobj dependency edges, as well as relative clauses (2b), one of the well-known sources of long-range dependencies, for which the converter has to learn from the non-local combinations of edges, their labels and part-of-speech tags surrounding the phenomenon.", "cite_spans": [], "ref_spans": [ { "start": 69, "end": 78, "text": "(Table 2)", "ref_id": null } ], "eq_spans": [], "section": "Evaluating Converter's Performance", "sec_num": "5.2" }, { "text": "Previous work (Rimell and Clark, 2008) provides CCG parsing benchmark datasets in biomedical texts and question sentences, which represent two contrasting challenges for a newswire-trained parser, i.e., a large number of out-of-vocabulary words (biomedical texts), and rare or even unseen grammatical constructions (questions).", "cite_spans": [ { "start": 14, "end": 38, "text": "(Rimell and Clark, 2008)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Biomedical Domain and Questions", "sec_num": "5.3" }, { "text": "Since the work also provides small training datasets for each domain, we utilize them as well: GENIA1000 with 1,000 sentences and Questions with 1,328 sentences, both annotated with pre-terminal CCG categories. 
Since pre-terminal categories are not sufficient to train depccg, we automatically annotate Head First dependencies using the RBG parser (Lei et al., 2014), trained to produce this type of tree (we follow Yoshikawa et al. (2017)'s tri-training setup).", "cite_spans": [ { "start": 344, "end": 362, "text": "(Lei et al., 2014)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Biomedical Domain and Questions", "sec_num": "5.3" }, { "text": "Following the previous work, the evaluation is based on the Stanford grammatical relations (GR; Marneffe et al. (2006)), a deep syntactic representation that can be recovered from a CCG tree. 10", "cite_spans": [ { "start": 96, "end": 118, "text": "Marneffe et al. (2006)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Biomedical Domain and Questions", "sec_num": "5.3" }, { "text": "Biomedical Domain By converting the Genia corpus (Tateisi et al., 2005), we obtain a new CCGbank of 4,432 sentences from biomedical papers annotated with CCG trees. During the process, we successfully assigned the category NP to all occurrences of complex biomedical terms by imposing the constraint (\u00a74) that NP spans in the original corpus be assigned the category NP in the resulting CCG trees as well. Table 3 shows the results of the parsing experiment, where the scores of previous work (C&C (Clark and Curran, 2007) and EasySRL ) are included for reference. The plain depccg already achieves higher scores than these methods, and improves further when combined with ELMo (an improvement of 2.73 points in terms of F1). Fine-tuning the parser on GENIA1000 yields mixed results, with slightly lower scores. This is presumably because the automatically annotated Head First dependencies are not accurate.
Finally, by fine-tuning on the Genia CCGbank, we observe another improvement, resulting in the highest F1 score of 86.52.", "cite_spans": [ { "start": 49, "end": 71, "text": "(Tateisi et al., 2005)", "ref_id": "BIBREF38" }, { "start": 506, "end": 530, "text": "(Clark and Curran, 2007)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 414, "end": 421, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Biomedical Domain and Questions", "sec_num": "5.3" }, { "text": "Questions In this experiment, we obtain a CCG version of the QuestionBank (Judge et al., 2006), consisting of 3,622 question sentences, excluding those contained in the evaluation data. Table 4 compares the performance of depccg fine-tuned on the QuestionBank, along with other baselines. Contrary to our expectation, the plain depccg retrained on the Questions data performs the best, with neither ELMo nor the proposed method having any effect. We hypothesize that, since the evaluation set contains sentences with similar constructions, the contributions of the latter two methods are less observable on top of the Questions data. Inspection of the output trees reveals that this is actually the case; the majority of differences among the parser's configurations are irrelevant to question constructions, suggesting that the models capture the syntax of questions in the data well. 11", "cite_spans": [ { "start": 74, "end": 94, "text": "(Judge et al., 2006)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 186, "end": 193, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Biomedical Domain and Questions", "sec_num": "5.3" }, { "text": "Setup We apply the proposed method to a new domain, transcription texts of speech conversation, with new applications of CCG parsing in mind. We create the CCG version of the Switchboard corpus (Godfrey et al., 1992), with which, to the best of our knowledge, we conduct the first CCG parsing experiments on speech conversation.
12 We obtain a new CCGbank of 59,029/3,799/7,681 sentences for the train/test/development sets, where the data split follows prior work on dependency parsing on this dataset (Honnibal and Johnson, 2014). 11 Due to the many-to-many nature of the mapping to GRs, the evaluation set contains relations that are not recoverable from the gold supertags using the provided script; for example, we find that from the annotated supertags of the sentence How many battles did she win?, the (amod battle many) relation is obtained instead of the gold det relation. This illustrates one of the difficulties in obtaining further improvement on this set.", "cite_spans": [ { "start": 194, "end": 216, "text": "(Godfrey et al., 1992)", "ref_id": "BIBREF10" }, { "start": 325, "end": 327, "text": "12", "ref_id": null }, { "start": 379, "end": 381, "text": "11", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Speech Conversation", "sec_num": "5.4" }, { "text": "12 Since the annotated part-of-speech tags are noisy, we automatically reannotate them using the core web sm model of spaCy (https://spacy.io/), version 2.0.16. (Table 5 examples: a. we should cause it does help; b. the only problem i see with term limitations is that i think that the bureaucracy in our government as is with most governments is just so complex that there is a learning curve and that you can't just send someone off to washington and expect his first day to be an effective congress)", "cite_spans": [ { "start": 622, "end": 649, "text": "(Honnibal and Johnson, 2014", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Speech Conversation", "sec_num": "5.4" }, { "text": "In the conversion, we have to handle one of the characteristics of speech transcription texts: disfluencies. In real applications, it is ideal to remove disfluencies such as interjections and repairs (e.g., I want a flight to Boston um to Denver) prior to performing CCG-based semantic composition.
Since this corpus contains a layer of annotation that labels disfluency occurrences, we perform constrained decoding to mark the gold disfluencies in a tree with a dummy category X, which can combine with any category from both sides (i.e., for every category C, both C X \u21d2 C and X C \u21d2 C are allowed). In this work, we perform parsing experiments on texts that are clean of disfluencies, by removing X-marked words from sentences (i.e., a pipeline system setting with an oracle disfluency detection preprocessor). 13 Another issue in conducting experiments on this dataset is evaluation. Since there exists no evaluation protocol for CCG parsing on speech texts, we evaluate the quality of the output trees by two procedures. In the first experiment, we parse the entire test set and convert the output to constituency trees using a method by Kummerfeld et al. (2012) . 14 (Figure 3: Parse output by the re-trained parser for the sentence \"if CD = 8 and BE = 2, find AE.\" from math problems.) We report labeled bracket F1 scores between the resulting trees and the gold trees in the true Switchboard corpus, using the EVALB script. 15 However, the reported scores suffer from the compound effect of failures in CCG parsing as well as those that occur in the conversion to constituency trees. To evaluate the parsing performance in detail, the first author manually annotated a subset of 100 randomly sampled sentences from the test set. Sentences with fewer than four words are excluded, to rule out short phrases such as nodding. Using this test set, we report the standard CCG parsing metrics. Sentences from this domain exhibit other challenging aspects (Table 5), such as less formal expressions (e.g., use of cause instead of because) (5a), and lengthy sentences with many embedded phrases (5b).
16 Results On the whole test set, depccg shows consistent improvements when combined with ELMo and the proposed method, in the constituency-based metrics (Whole columns in Table 7). 14 https://github.com/jkkummerfeld/berkeley-ccg2pst", "cite_spans": [ { "start": 800, "end": 802, "text": "13", "ref_id": null }, { "start": 1235, "end": 1259, "text": "Kummerfeld et al. (2012)", "ref_id": "BIBREF17" }, { "start": 1262, "end": 1264, "text": "14", "ref_id": null }, { "start": 2078, "end": 2080, "text": "16", "ref_id": null } ], "ref_spans": [ { "start": 1106, "end": 1114, "text": "Figure 3", "ref_id": null }, { "start": 1933, "end": 1942, "text": "(Table 5)", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Speech Conversation", "sec_num": "5.4" }, { "text": "if ((S\\NP)/(S\\NP))/S dcl CD N NP un = (S dcl \\NP)/NP 8 N NP un S dcl \\NP > S dcl < and conj BE N NP un = (S dcl \\NP)/NP 2 N NP un S dcl \\NP > S dcl < S dcl \\S dcl \u03a6 S dcl < (S\\NP)/(S\\NP) > , , find (S dcl \\NP)/NP AE N NP un S dcl \\NP > S dcl \\NP rp . . S dcl \\NP rp S dcl \\NP >", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Conversation", "sec_num": "5.4" }, { "text": "15 https://nlp.cs.nyu.edu/evalb/ 16 Following Honnibal and Johnson (2014), sentences in this data are fully lower-cased and contain no punctuation. Though the overall scores are relatively low, the result suggests that the proposed method is effective in this domain on the whole.
By directly evaluating the parser's performance in terms of predicate-argument relations (Subset columns), we observe that it actually recovers most of the dependencies, with the fine-tuned depccg achieving an unlabeled F1 score as high as 95.63%.", "cite_spans": [ { "start": 46, "end": 73, "text": "Honnibal and Johnson (2014)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 149, "end": 156, "text": "Table 7", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "We further investigate error cases of the fine-tuned depccg on the subset dataset (Table 6). The distribution of error types is in accordance with the other domains, with frequent errors in PP-attachment and predicate-argument structure, and seemingly more attachment errors of adverbial phrases (11 cases), which occur in lengthy sentences such as the one in Table 5b. Other types of error are failures to recognize that a sentence is in the imperative form (2 cases) and failures in handling informal function words such as cause (Table 5a). We conclude that the performance on this domain is high enough to be usable in applications. Since the remaining errors are general ones, they should be solved by improving general parsing techniques.", "cite_spans": [], "ref_spans": [ { "start": 83, "end": 90, "text": "Table 6", "ref_id": "TABREF7" }, { "start": 353, "end": 361, "text": "Table 5b", "ref_id": "TABREF6" }, { "start": 523, "end": 533, "text": "(Table 5a)", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "Setup Finally, we conduct another experiment on parsing math problems. Following previous work on constituency parsing of math problems , we use the same train/test sets as Seo et al. (2015), consisting of 63/62 sentences respectively, and see whether a CCG parser can be adapted with these small training samples.
Again, the first author annotated both sets: dependency trees for the train set and CCG trees for the test set. In the annotation, we follow the manuals of the English CCGbank and the UD. We regard extending the annotation to include fine-grained feature values in categories, e.g., marking a distinction between integers and real numbers (Matsuzaki et al., 2017), as important future work. Figure 3 shows an example CCG tree from this domain, successfully parsed by the fine-tuned depccg.", "cite_spans": [ { "start": 172, "end": 189, "text": "Seo et al. (2015)", "ref_id": "BIBREF34" }, { "start": 699, "end": 723, "text": "(Matsuzaki et al., 2017)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 726, "end": 734, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Math Problems", "sec_num": "5.5" }, { "text": "Results Table 8 shows the F1 scores of depccg in the respective settings. Remarkably, we observe huge additive performance improvements. While, in terms of labeled F1, ELMo contributes about 4 points on top of the plain depccg, adding the new training set (converted from dependency trees) improves it by more than 10 points. 17 Examining the resulting trees, we observe that the huge gain primarily involves expressions unique to math. Figure 3 is one such case, which the plain depccg incorrectly analyzes as one huge NP phrase.
However, after fine-tuning, it successfully produces the correct \"If S1 and S2, S3\" structure, recognizing that the equal sign is a predicate.", "cite_spans": [ { "start": 319, "end": 321, "text": "17", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 8, "end": 15, "text": "Table 8", "ref_id": "TABREF10" }, { "start": 438, "end": 446, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Math Problems", "sec_num": "5.5" }, { "text": "In this work, we have proposed a domain adaptation method for CCG parsing, based on the automatic generation of new CCG treebanks from dependency resources. We have conducted experiments to verify the effectiveness of the proposed method on diverse domains: on top of existing benchmarks on biomedical texts and question sentences, we newly conduct parsing experiments on speech conversation and math problems. Remarkably, when our domain adaptation method is applied, the improvements in the latter two domains are significant, with gains of more than 5 points in the unlabeled metric.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In this paper, we call a treebank based on CCG grammar a CCGbank, and refer to the specific one constructed in Hockenmaier and Steedman (2007) as the English CCGbank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "All the programs and resources used in this work are available at: https://github.com/masashi-y/depccg.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The strictness and the large number of categories make it still hard to leave everything to neural networks to learn.
We trained the constituency-based RSP parser on the English CCGbank by disguising the trees as constituency ones; its performance could not be evaluated, since most of the output trees violated the grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The uniqueness is broken if a tree contains a unary node.5 In this work, the input dependency tree is based on Universal Dependencies (Nivre et al., 2016), while the dependency structure d of a CCG tree is the Head First dependency tree introduced in Yoshikawa et al. (2017). See \u00a7 5 for details.6 Here, the independence of each c_i and d_i is assumed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used the \"original\" ELMo model, with 1,024-dimensional word vector outputs (https://allennlp.org/elmo).9 https://nlp.stanford.edu/software/stanford-dependencies.shtml. We used version 3.9.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used their public script (https://www.cl.cam.ac.uk/\u02dcsc609/candc-1.00.html).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We regard developing a joint disfluency detection and syntactic parsing method based on CCG as future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the three anonymous reviewers for their insightful comments.
This work was in part supported by JSPS KAKENHI Grant Number JP18J12945, and also by JST AIP-PRISM Grant Number JPMJCR18Y1, Japan.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "LangPro: Natural Language Theorem Prover", "authors": [ { "first": "", "middle": [], "last": "Lasha Abzianidze", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "115--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lasha Abzianidze. 2017. LangPro: Natural Language Theorem Prover. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 115-120. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "they evaluate on partially annotated (unlabeled) trees, we perform the \"full\" CCG parsing evaluation, employing the standard evaluation metrics", "authors": [ { "first": "", "middle": [], "last": "Joshi", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Note that, while in the experiment on this dataset in the previous constituency parsing work (Joshi et al., 2018), they evaluate on partially annotated (unlabeled) trees, we perform the \"full\" CCG parsing evaluation, employing the standard evaluation metrics.
Given that, the improvement is even more significant.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Using CCG categories to improve Hindi dependency parsing", "authors": [ { "first": "Bharat", "middle": [ "Ram" ], "last": "Ambati", "suffix": "" }, { "first": "Tejaswini", "middle": [], "last": "Deoskar", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "604--609", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bharat Ram Ambati, Tejaswini Deoskar, and Mark Steedman. 2013. Using CCG categories to improve Hindi dependency parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 604-609. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Supertagging: An Approach to Almost Parsing", "authors": [ { "first": "Srinivas", "middle": [], "last": "Bangalore", "suffix": "" }, { "first": "Aravind", "middle": [ "K" ], "last": "Joshi", "suffix": "" } ], "year": 1999, "venue": "Computational Linguistics", "volume": "25", "issue": "2", "pages": "237--265", "other_ids": {}, "num": null, "urls": [], "raw_text": "Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: An Approach to Almost Parsing.
Computational Linguistics, 25(2):237-265.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The Groningen Meaning Bank", "authors": [ { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" }, { "first": "Valerio", "middle": [], "last": "Basile", "suffix": "" }, { "first": "Kilian", "middle": [], "last": "Evang", "suffix": "" }, { "first": "Noortje", "middle": [ "J" ], "last": "Venhuizen", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Bjerva", "suffix": "" } ], "year": 2017, "venue": "Handbook of Linguistic Annotation", "volume": "", "issue": "", "pages": "463--496", "other_ids": { "DOI": [ "10.1007/978-94-024-0881-2_18" ] }, "num": null, "urls": [], "raw_text": "Johan Bos, Valerio Basile, Kilian Evang, Noortje J. Venhuizen, and Johannes Bjerva. 2017. The Groningen Meaning Bank. In Handbook of Linguistic Annotation, pages 463-496. Springer Netherlands.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Converting a Dependency Treebank to a Categorial Grammar Treebank for Italian", "authors": [ { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" }, { "first": "Cristina", "middle": [], "last": "Bosco", "suffix": "" }, { "first": "Mazzei", "middle": [], "last": "Alessandro", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Eighth International Workshop on Treebanks and Linguistic Theories", "volume": "", "issue": "", "pages": "27--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan Bos, Bosco Cristina, and Mazzei Alessandro. 2009. Converting a Dependency Treebank to a Categorial Grammar Treebank for Italian.
In Proceedings of the Eighth International Workshop on Treebanks and Linguistic Theories, pages 27-38.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Wide-Coverage Efficient Statistical Parsing with CCG and Log-Linear Models", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "4", "pages": "493--552", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Clark and James R. Curran. 2007. Wide-Coverage Efficient Statistical Parsing with CCG and Log-Linear Models. Computational Linguistics, 33(4):493-552.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Building Deep Dependency Structures with a Wide-coverage CCG Parser", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "327--334", "other_ids": { "DOI": [ "10.3115/1073083.1073138" ] }, "num": null, "urls": [], "raw_text": "Stephen Clark, Julia Hockenmaier, and Mark Steedman. 2002. Building Deep Dependency Structures with a Wide-coverage CCG Parser. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 327-334.
Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)", "authors": [ { "first": "Djork-Arn\u00e9", "middle": [], "last": "Clevert", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Unterthiner", "suffix": "" }, { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Djork-Arn\u00e9 Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2016. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). ICLR.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Deep Biaffine Attention for Neural Dependency Parsing", "authors": [ { "first": "Timothy", "middle": [], "last": "Dozat", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Dozat and Christopher D. Manning. 2017. Deep Biaffine Attention for Neural Dependency Parsing. ICLR.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "SWITCHBOARD: Telephone Speech Corpus for Research and Development", "authors": [ { "first": "John", "middle": [ "J" ], "last": "Godfrey", "suffix": "" }, { "first": "Edward", "middle": [ "C" ], "last": "Holliman", "suffix": "" }, { "first": "Jane", "middle": [], "last": "Mc-Daniel", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 1992 IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "517--520", "other_ids": {}, "num": null, "urls": [], "raw_text": "John J. Godfrey, Edward C. Holliman, and Jane McDaniel. 1992. SWITCHBOARD: Telephone Speech Corpus for Research and Development.
In Proceedings of the 1992 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 517-520. IEEE Computer Society.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "CCGbank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank", "authors": [ { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "3", "pages": "355--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank. Computational Linguistics, 33(3):355-396.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Joint Incremental Disfluency Detection and Dependency Parsing. Transactions of the Association for Computational Linguistics", "authors": [ { "first": "Matthew", "middle": [], "last": "Honnibal", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2014, "venue": "", "volume": "2", "issue": "", "pages": "131--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Honnibal and Mark Johnson. 2014. Joint Incremental Disfluency Detection and Dependency Parsing.
Transactions of the Association for Computational Linguistics, 2:131-142.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Supervised Treebank Conversion: Data and Approaches", "authors": [ { "first": "Xinzhou", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Zhenghua", "middle": [], "last": "Li", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Luo", "middle": [], "last": "Si", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2706--2716", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinzhou Jiang, Zhenghua Li, Bo Zhang, Min Zhang, Sheng Li, and Luo Si. 2018. Supervised Treebank Conversion: Data and Approaches. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 2706-2716. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples", "authors": [ { "first": "Vidur", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Hopkins", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1190--1199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vidur Joshi, Matthew Peters, and Mark Hopkins. 2018. Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 1190-1199.
Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "QuestionBank: Creating a Corpus of Parse-Annotated Questions", "authors": [ { "first": "John", "middle": [], "last": "Judge", "suffix": "" }, { "first": "Aoife", "middle": [], "last": "Cahill", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "497--504", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Judge, Aoife Cahill, and Josef van Genabith. 2006. QuestionBank: Creating a Corpus of Parse-Annotated Questions. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 497-504. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A* Parsing: Fast Exact Viterbi Parse Selection", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "40--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein and Christopher D. Manning. 2003. A* Parsing: Fast Exact Viterbi Parse Selection. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 40-47.
Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Robust Conversion of CCG Derivations to Phrase Structure Trees", "authors": [ { "first": "Jonathan", "middle": [ "K" ], "last": "Kummerfeld", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "105--109", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan K. Kummerfeld, Dan Klein, and James R. Curran. 2012. Robust Conversion of CCG Derivations to Phrase Structure Trees. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 105-109. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Global Neural CCG Parsing with Optimality Guarantees", "authors": [ { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2366--2376", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2016. Global Neural CCG Parsing with Optimality Guarantees. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2366-2376.
Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Low-Rank Tensors for Scoring Dependency Structures", "authors": [ { "first": "Tao", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Xin", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1381--1391", "other_ids": { "DOI": [ "10.3115/v1/P14-1130" ] }, "num": null, "urls": [], "raw_text": "Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-Rank Tensors for Scoring Dependency Structures. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1381-1391. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "LSTM CCG Parsing", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "221--231", "other_ids": { "DOI": [ "10.18653/v1/N16-1026" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Kenton Lee, and Luke Zettlemoyer. 2016. LSTM CCG Parsing. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 221-231.
Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A* CCG Parsing with a Supertag-factored Model", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "990--1000", "other_ids": { "DOI": [ "10.3115/v1/D14-1107" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis and Mark Steedman. 2014a. A* CCG Parsing with a Supertag-factored Model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 990-1000. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Improved CCG Parsing with Semi-supervised Supertagging", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2014, "venue": "Transactions of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "327--338", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Lewis and Mark Steedman. 2014b. Improved CCG Parsing with Semi-supervised Supertagging.
Transactions of the Association for Computational Linguistics, 2:327-338.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Active Learning for Dependency Parsing with Partial Annotation", "authors": [ { "first": "Zhenghua", "middle": [], "last": "Li", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhanyi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wenliang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "344--354", "other_ids": { "DOI": [ "10.18653/v1/P16-1033" ] }, "num": null, "urls": [], "raw_text": "Zhenghua Li, Min Zhang, Yue Zhang, Zhanyi Liu, Wenliang Chen, Hua Wu, and Haifeng Wang. 2016. Active Learning for Dependency Parsing with Partial Annotation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 344-354. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Building a Large Annotated Corpus of English: The Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "314--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: The Penn Treebank.
Computational Linguistics, 19(2):314-330.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Generating Typed Dependency Parses from Phrase Structure Parses", "authors": [ { "first": "M", "middle": [], "last": "Marneffe", "suffix": "" }, { "first": "B", "middle": [], "last": "Maccartney", "suffix": "" }, { "first": "C", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "449--454", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Marneffe, B. Maccartney, and C. Manning. 2006. Generating Typed Dependency Parses from Phrase Structure Parses. In Proceedings of the Fifth International Conference on Language Resources and Evaluation, pages 449-454. European Language Resources Association.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "On-demand Injection of Lexical Knowledge for Recognising Textual Entailment", "authors": [ { "first": "Pascual", "middle": [], "last": "Mart\u00ednez-G\u00f3mez", "suffix": "" }, { "first": "Koji", "middle": [], "last": "Mineshima", "suffix": "" }, { "first": "Yusuke", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "Daisuke", "middle": [], "last": "Bekki", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "710--720", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascual Mart\u00ednez-G\u00f3mez, Koji Mineshima, Yusuke Miyao, and Daisuke Bekki. 2017. On-demand Injection of Lexical Knowledge for Recognising Textual Entailment. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 710-720.
Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Semantic Parsing of Pre-university Math Problems", "authors": [ { "first": "Takuya", "middle": [], "last": "Matsuzaki", "suffix": "" }, { "first": "Takumi", "middle": [], "last": "Ito", "suffix": "" }, { "first": "Hidenao", "middle": [], "last": "Iwane", "suffix": "" }, { "first": "Hirokazu", "middle": [], "last": "Anai", "suffix": "" }, { "first": "Noriko", "middle": [ "H" ], "last": "Arai", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2131--2141", "other_ids": { "DOI": [ "10.18653/v1/P17-1195" ] }, "num": null, "urls": [], "raw_text": "Takuya Matsuzaki, Takumi Ito, Hidenao Iwane, Hirokazu Anai, and Noriko H. Arai. 2017. Semantic Parsing of Pre-university Math Problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2131-2141. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Active Learning for Dependency Parsing Using Partially Annotated Sentences", "authors": [ { "first": "Abolghasem", "middle": [], "last": "Seyed", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Mirroshandel", "suffix": "" }, { "first": "", "middle": [], "last": "Nasr", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 12th International Conference on Parsing Technologies", "volume": "", "issue": "", "pages": "140--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seyed Abolghasem Mirroshandel and Alexis Nasr. 2011. Active Learning for Dependency Parsing Using Partially Annotated Sentences. In Proceedings of the 12th International Conference on Parsing Technologies, pages 140-149.
Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Parser Adaptation to the Biomedical Domain without Re-Training", "authors": [ { "first": "Jeff", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Sixth International Workshop on Health Text Mining and Information Analysis", "volume": "", "issue": "", "pages": "79--89", "other_ids": { "DOI": [ "10.18653/v1/W15-2610" ] }, "num": null, "urls": [], "raw_text": "Jeff Mitchell and Mark Steedman. 2015. Parser Adaptation to the Biomedical Domain without Re-Training. In Proceedings of the Sixth International Workshop on Health Text Mining and Information Analysis, pages 79-89. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures", "authors": [ { "first": "Makoto", "middle": [], "last": "Miwa", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1105--1116", "other_ids": { "DOI": [ "10.18653/v1/P16-1105" ] }, "num": null, "urls": [], "raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1105-1116.
Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Universal Dependencies v1: A Multilingual Treebank Collection", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Hajic", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "1659--1666", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A Multilingual Treebank Collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation, pages 1659-1666.
European Language Resources Association.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Deep Contextualized Word Representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/N18-1202" ] }, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2227-2237. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Adapting a Lexicalized-Grammar Parser to Contrasting Domains", "authors": [ { "first": "Laura", "middle": [], "last": "Rimell", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "475--484", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura Rimell and Stephen Clark. 2008. Adapting a Lexicalized-Grammar Parser to Contrasting Domains.
In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 475-484. Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Solving geometry problems: Combining text and diagram interpretation", "authors": [ { "first": "Minjoon", "middle": [], "last": "Seo", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "Clint", "middle": [], "last": "Malcolm", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1466--1476", "other_ids": { "DOI": [ "10.18653/v1/D15-1171" ] }, "num": null, "urls": [], "raw_text": "Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren Etzioni, and Clint Malcolm. 2015. Solving geometry problems: Combining text and diagram interpretation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1466-1476. Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "CCG Parsing Algorithm with Incremental Tree Rotation", "authors": [ { "first": "Milo\u0161", "middle": [], "last": "Stanojevi\u0107", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "228--239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Milo\u0161 Stanojevi\u0107 and Mark Steedman. 2019. CCG Parsing Algorithm with Incremental Tree Rotation.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 228-239. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "The Syntactic Process", "authors": [ { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Steedman. 2000. The Syntactic Process. The MIT Press.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks", "authors": [ { "first": "Kai Sheng", "middle": [], "last": "Tai", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1556--1566", "other_ids": { "DOI": [ "10.3115/v1/P15-1150" ] }, "num": null, "urls": [], "raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1556-1566.
Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Syntax Annotation for the GENIA Corpus", "authors": [ { "first": "Yuka", "middle": [], "last": "Tateisi", "suffix": "" }, { "first": "Akane", "middle": [], "last": "Yakushiji", "suffix": "" }, { "first": "Tomoko", "middle": [], "last": "Ohta", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2005, "venue": "Companion Volume to the Proceedings of Conference including Posters/Demos and tutorial abstracts", "volume": "", "issue": "", "pages": "220--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuka Tateisi, Akane Yakushiji, Tomoko Ohta, and Jun'ichi Tsujii. 2005. Syntax Annotation for the GENIA Corpus. In Companion Volume to the Pro- ceedings of Conference including Posters/Demos and tutorial abstracts, pages 220-225. Association for Computational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "A* CCG Parsing with a Supertag and Dependency Factored Model", "authors": [ { "first": "Masashi", "middle": [], "last": "Yoshikawa", "suffix": "" }, { "first": "Hiroshi", "middle": [], "last": "Noji", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "277--287", "other_ids": { "DOI": [ "10.18653/v1/P17-1026" ] }, "num": null, "urls": [], "raw_text": "Masashi Yoshikawa, Hiroshi Noji, and Yuji Mat- sumoto. 2017. A* CCG Parsing with a Supertag and Dependency Factored Model. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics, pages 277-287. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF0": { "content": "
[Figure residue, recoverable information only: panel (a), 'Training the converter': a dependency tree (e.g., 'Circadian rhythm in glucocorticoid ...', with arcs such as det, nsubj, mark, amod, case, nmod) is encoded by a Bidirectional TreeLSTM (Miwa et al., 2016) into vector encodings and decoded by an A* parsing decoder into a CCG tree, converting the Genia Dep. Corpus into a Genia CCG Corpus; the trained converter then maps sentences such as 'the government reported that ...' (with lexical categories such as NP/N) to CCG trees.]
", "num": null, "type_str": "table", "text": "Trained Converterthe government reported that ...", "html": null }, "TABREF1": { "content": "
7 https://nlp.stanford.edu/projects/glove/
", "num": null, "type_str": "table", "text": "The performance of baseline CCG parsers and the proposed converter on WSJ23, where UF1 and LF1 represents unlabeled and labeled F1, respectively.", "html": null }, "TABREF2": { "content": "
Relation Parser Converter #
(a) PPs attaching to NP / VP
(NP\\NP)/NP 90.62 97.46 2,561
((S\\NP)\\(S\\NP))/NP 81.15 88.63 1,074
(b)
", "num": null, "type_str": "table", "text": "Subject / object relative clauses (N P \\NP)/(S dcl \\N P ) 93.44 98.71 307 (N P \\NP)/(S dcl /N P ) 90.48 93.02 20", "html": null }, "TABREF4": { "content": "
Method P R F1
C&C - - 86.8
EasySRL 88.2 87.9 88.0
depccg 90.42 90.15 90.29
+ ELMo 90.55 89.86 90.21
+ Proposed 90.27 89.97 90.12
", "num": null, "type_str": "table", "text": "Results on the biomedical domain dataset ( \u00a75.3). P and R represent precision and recall, respectively. The scores of C&C and EasySRL fine-tuned on the GENIA1000 is included for comparison (excerpted from).", "html": null }, "TABREF5": { "content": "", "num": null, "type_str": "table", "text": "Results on question sentences ( \u00a75.3). All of baseline C&C, EasySRL and depccg parsers are retrained on Questions data.", "html": null }, "TABREF6": { "content": "
Error type #
PP-attachment 3
Adverbs attaching to the wrong place 11
Predicate-argument 5
Imperative 2
Informal functional words 2
Others 11
", "num": null, "type_str": "table", "text": "Example sentences from the manually annotated subset of Switchboard test set.", "html": null }, "TABREF7": { "content": "", "num": null, "type_str": "table", "text": "Error types observed in the manually annotated Switchboard subset data.", "html": null }, "TABREF9": { "content": "
Method UF1 LF1
depccg 88.49 66.15
+ ELMo 89.32 70.74
+ Proposed 95.83 80.53
", "num": null, "type_str": "table", "text": "Results on speech conversation texts ( \u00a75.4), on the whole test set and the manually annotated subset.", "html": null }, "TABREF10": { "content": "", "num": null, "type_str": "table", "text": "Results on math problems ( \u00a75.5).", "html": null } } } }