{ "paper_id": "N10-1002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:49:28.063171Z" }, "title": "Chart Mining-based Lexical Acquisition with Precision Grammars", "authors": [ { "first": "Yi", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Saarland University", "location": { "country": "Germany" } }, "email": "yzhang@coli.uni-sb.de" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Melbourne", "location": { "country": "Australia" } }, "email": "" }, { "first": "Valia", "middle": [], "last": "Kordoni", "suffix": "", "affiliation": { "laboratory": "", "institution": "Saarland University", "location": { "country": "Germany" } }, "email": "kordoni@dfki.de" }, { "first": "David", "middle": [], "last": "Martinez", "suffix": "", "affiliation": { "laboratory": "NICTA Victoria Research Laboratory", "institution": "", "location": {} }, "email": "davidm@csse.unimelb.edu.au" }, { "first": "Jeremy", "middle": [], "last": "Nicholson", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Melbourne", "location": { "country": "Australia" } }, "email": "jeremymn@csse.unimelb.edu.au" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we present an innovative chart mining technique for improving parse coverage based on partial parse outputs from precision grammars. The general approach of mining features from partial analyses is applicable to a range of lexical acquisition tasks, and is particularly suited to domain-specific lexical tuning and lexical acquisition using lowcoverage grammars. As an illustration of the functionality of our proposed technique, we develop a lexical acquisition model for English verb particle constructions which operates over unlexicalised features mined from a partial parsing chart. The proposed technique is shown to outperform a state-of-the-art parser over the target task, despite being based on relatively simplistic features.", "pdf_parse": { "paper_id": "N10-1002", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we present an innovative chart mining technique for improving parse coverage based on partial parse outputs from precision grammars. The general approach of mining features from partial analyses is applicable to a range of lexical acquisition tasks, and is particularly suited to domain-specific lexical tuning and lexical acquisition using lowcoverage grammars. As an illustration of the functionality of our proposed technique, we develop a lexical acquisition model for English verb particle constructions which operates over unlexicalised features mined from a partial parsing chart. The proposed technique is shown to outperform a state-of-the-art parser over the target task, despite being based on relatively simplistic features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Parsing with precision grammars is increasingly achieving broad coverage over open-domain texts for a range of constraint-based frameworks (e.g., TAG, LFG, HPSG and CCG), and is being used in real-world applications including information extraction, question answering, grammar checking and machine translation (Uszkoreit, 2002; Oepen et al., 2004; Frank et al., 2006; Zhang and Kordoni, 2008; MacKinlay et al., 2009) . 
In this context, a \"precision grammar\" is a grammar which has been engineered to model grammaticality, and contrasts with a treebank-induced grammar, for example.", "cite_spans": [ { "start": 311, "end": 328, "text": "(Uszkoreit, 2002;", "ref_id": "BIBREF26" }, { "start": 329, "end": 348, "text": "Oepen et al., 2004;", "ref_id": "BIBREF19" }, { "start": 349, "end": 368, "text": "Frank et al., 2006;", "ref_id": "BIBREF10" }, { "start": 369, "end": 393, "text": "Zhang and Kordoni, 2008;", "ref_id": "BIBREF29" }, { "start": 394, "end": 417, "text": "MacKinlay et al., 2009)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Inevitably, however, such applications demand complete parsing outputs, based on the assumption that the text under investigation will be completely analysable by the grammar. As precision grammars generally make strong assumptions about complete lexical coverage and grammaticality of the input, their utility is limited over noisy or domain-specific data. This lack of complete coverage can make parsing with precision grammars less attractive than parsing with shallower methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One technique that has been successfully applied to improve parser and grammar coverage over a given corpus is error mining (van Noord, 2004; de Kok et al., 2009) , whereby n-grams with low \"parsability\" are gathered from the large-scale output of a parser as an indication of parser or (precision) grammar errors. However, error mining is very much oriented towards grammar engineering: its results are a mixture of different (mistreated) linguistic phenomena together with engineering errors for the grammar engineer to work through and act upon. Additionally, it generally does not provide any insight into the cause of the parser failure, and it is difficult to identify specific language phenomena from the output.", "cite_spans": [ { "start": 124, "end": 141, "text": "(van Noord, 2004;", "ref_id": null }, { "start": 142, "end": 162, "text": "de Kok et al., 2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we instead propose a chart mining technique that works on intermediate parsing results from a parsing chart. In essence, the method analyses the validity of different analyses for words or constructions based on the \"lifetime\" and probability of each within the chart, combining the constraints of the grammar with probabilities to evaluate the plausibility of each.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For purposes of exemplification of the proposed technique, we apply chart mining to a deep lexical acquisition (DLA) task, using a maximum entropybased prediction model trained over a seed lexicon and treebank. The experimental set up is the following: given a set of sentences containing putative instances of English verb particle constructions, extract a list of non-compositional VPCs optionally with valence information. For comparison, we parse the same sentence set using a state-of-the-art statistical parser, and extract the VPCs from the parser output. 
Our results show that the chart mining method produces a model which is superior to the treebank parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To our knowledge, the only other work that has looked at partial parsing results of precision grammars as a means of linguistic error analysis is that of Kiefer et al. (1999) and Zhang et al. (2007a) , where partial parsing models were proposed to select a set of passive edges that together cover the input sequence. Compared to these approaches, our proposed chart mining technique is more general and can be adapted to specific tasks and domains. While we experiment exclusively with an HPSG grammar in this paper, it is important to note that the proposed method can be applied to any grammar formalism which is compatible with chart parsing, and where it is possible to describe an unlexicalised lexical entry for the different categories of lexical item that are to be extracted (see Section 3.2 for details).", "cite_spans": [ { "start": 154, "end": 174, "text": "Kiefer et al. (1999)", "ref_id": "BIBREF13" }, { "start": 179, "end": 199, "text": "Zhang et al. (2007a)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of the paper is organised as follows. Section 2 defines the task of VPC extraction. Section 3 presents the chart mining technique and the feature extraction process for the VPC extraction task. Section 4 evaluates the model performance in comparison with two competitor models over several different measures. Section 5 further discusses the general applicability of chart mining. Finally, Section 6 concludes the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The particular construction type we target for DLA in this paper is English Verb Particle Constructions (henceforth VPCs). VPCs consist of a head verb and one or more obligatory particles, in the form of intransitive prepositions (e.g., hand in), adjectives (e.g., cut short) or verbs (e.g., let go) (Villavicencio and Copestake, 2002; Huddleston and Pullum, 2002; Baldwin and Kim, 2009) ; for the purposes of our dataset, we assume that all particles are prepositional (by far the most common and productive of the three types), and further restrict our attention to single-particle VPCs (i.e., we ignore VPCs such as get along together).", "cite_spans": [ { "start": 300, "end": 335, "text": "(Villavicencio and Copestake, 2002;", "ref_id": "BIBREF27" }, { "start": 336, "end": 364, "text": "Huddleston and Pullum, 2002;", "ref_id": "BIBREF11" }, { "start": 365, "end": 387, "text": "Baldwin and Kim, 2009)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Verb Particle Constructions", "sec_num": "2" }, { "text": "One aspect of VPCs that makes them a particularly challenging target for lexical acquisition is that the verb and particle can be non-contiguous (for instance, hand the paper in and battle right on). This sets them apart from conventional collocations and terminology (cf., Manning and Sch\u00fctze (1999) , Smadja (1993) and McKeown and Radev (2000) ) in that they cannot be captured effectively using n-grams, due to their variability in the number and type of words potentially interceding between the verb and the particle. 
Also, while conventional collocations generally take the form of compound nouns or adjective-noun combinations with relatively simple syntactic structure, VPCs occur with a range of valences. Furthermore, VPCs are highly productive in English and vary in use across domains, making them a prime target for lexical acquisition (Deh\u00e9, 2002; Baldwin, 2005; Baldwin and Kim, 2009) .", "cite_spans": [ { "start": 274, "end": 300, "text": "Manning and Sch\u00fctze (1999)", "ref_id": null }, { "start": 303, "end": 316, "text": "Smadja (1993)", "ref_id": "BIBREF23" }, { "start": 321, "end": 345, "text": "McKeown and Radev (2000)", "ref_id": "BIBREF17" }, { "start": 848, "end": 860, "text": "(Deh\u00e9, 2002;", "ref_id": "BIBREF8" }, { "start": 861, "end": 875, "text": "Baldwin, 2005;", "ref_id": "BIBREF3" }, { "start": 876, "end": 898, "text": "Baldwin and Kim, 2009)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Verb Particle Constructions", "sec_num": "2" }, { "text": "In the VPC dataset we use, there is an additional distinction between compositional and non-compositional VPCs. With compositional VPCs, the semantics of the verb and particle both correspond to the semantics of the respective simplex words, including the possibility of the semantics being specific to the VPC construction in the case of particles. For example, battle on would be classified as compositional, as the semantics of battle is identical to that for the simplex verb, and the semantics of on corresponds to the continuative sense of the word as occurs productively in VPCs (cf., walk/dance/drive/govern/... on). With non-compositional VPCs, on the other hand, the semantics of the VPC is somehow removed from that of the parts. In the dataset we used for evaluation, we are interested in extracting exclusively non-compositional VPCs, as they require lexicalisation; compositional VPCs can be captured via lexical rules and are hence not the target of extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verb Particle Constructions", "sec_num": "2" }, { "text": "English VPCs can occur with a number of valences, with the two most prevalent and productive valences being the simple transitive (e.g., hand in the paper) and intransitive (e.g., back off). For the purposes of our target task, we focus exclusively on these two valence types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verb Particle Constructions", "sec_num": "2" }, { "text": "Given the above, we define the English VPC extraction task to be the production of triples of the form \u27e8v, p, s\u27e9, where v is a verb lemma, p is a prepositional particle, and s \u2208 {intrans, trans} is the valence; additionally, each triple has to be semantically non-compositional. The triples are extracted relative to a set of putative token instances for each of the intransitive and transitive valences for a given VPC. That is, a given triple should be classified as positive if and only if it is associated with at least one non-compositional token instance in the provided token-level data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verb Particle Constructions", "sec_num": "2" }, { "text": "The dataset used in this research is the one used in the LREC 2008 Multiword Expression Workshop Shared Task (Baldwin, 2008) . 1 In the dataset, there is a single file for each of 4,090 candidate VPC triples, containing up to 50 sentences from the British National Corpus that contain the given VPC. 
When the valence of the VPC is ignored, the dataset contains 440 unique VPCs among 2,898 VPC candidates. In order to be able to fairly compare our method with a state-of-the-art lexicalised parser trained over the WSJ training sections of the Penn Treebank, we remove any VPC types from the test set which are attested in the WSJ training sections. This removes 696 VPC types from the test set, and makes the task even more difficult, as the remaining testing VPC types are generally less frequent ones. At the same time, it unfortunately means that our results are not directly comparable to those for the original shared task. 2", "cite_spans": [ { "start": 109, "end": 124, "text": "(Baldwin, 2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Verb Particle Constructions", "sec_num": "2" }, { "text": "", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chart Mining for Parsing with a Large Precision Grammar", "sec_num": "3" }, { "text": "The chart mining technique we use in this paper is couched in a constituent-based bottom-up chart parsing paradigm. A parsing chart is a data structure that records all the (complete or incomplete) intermediate parsing results. Every passive edge on the parsing chart represents a complete local analysis covering a sub-string of the input, while each active edge predicts a potential local analysis. In this view, a full analysis is merely a passive edge that spans the whole input and satisfies certain root conditions. The bottom-up chart parser starts with edges instantiated from lexical entries corresponding to the input words. The grammar rules are used to incrementally create longer edges from smaller ones until no more edges can be added to the chart. Standardly, the parser returns only outputs that correspond to passive edges in the parsing chart that span the full input string. For those inputs without a full-spanning edge, no output is generated, and the chart becomes the only source of parsing information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Technique", "sec_num": "3.1" }, { "text": "A parsing chart takes the form of a hierarchy of edges. Where only passive edges are concerned, each non-lexical edge corresponds to exactly one grammar rule, and is connected with one or more daughter edge(s), and zero or more parent edge(s). Therefore, traversing the chart is relatively straightforward.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Technique", "sec_num": "3.1" }, { "text": "There are two potential challenges for the chart mining technique. First, there is potentially a huge number of parsing edges in the chart. For instance, when parsing with a large precision grammar like the HPSG English Resource Grammar (ERG, Flickinger (2002) ), it is not unusual for a 20-word sentence to receive over 10,000 passive edges. In order to achieve high efficiency in parsing (as well as generation), ambiguity packing is usually used to reduce the number of productive passive edges on the parsing chart (Tomita, 1985) . For constraint-based grammar frameworks like LFG and HPSG, subsumption-based packing is used to achieve a higher packing ratio (Oepen and Carroll, 2000) , but this might also potentially lead to an inconsistent packed parse forest that does not unpack successfully. For chart mining, this means that not all passive edges are directly accessible from the chart. Some of them are packed into others, and the derivatives of the packed edges are not generated. 
Because of the ambiguity packing, zero or more local analyses may exist for each passive edge on the chart, and the cross-combination of the packed daughter edges is not guaranteed to be compatible. As a result, expensive unification operations must be reapplied during the unpacking phase. Carroll and Oepen (2005) and Zhang et al. (2007b) have proposed efficient k-best unpacking algorithms that can selectively extract the most probable readings from the packed parse forest according to a discriminative parse disambiguation model, by minimising the number of potential unifications. The algorithm can be applied to unpack any passive edge. Because of the dynamic programming used in the algorithm and the hierarchical structure of the edges, the cost of the unpacking routine is empirically linear in the number of desired readings, and O(1) when invoked more than once on the same edge.", "cite_spans": [ { "start": 236, "end": 259, "text": "(ERG, Flickinger (2002)", "ref_id": "BIBREF9" }, { "start": 518, "end": 532, "text": "(Tomita, 1985)", "ref_id": "BIBREF24" }, { "start": 662, "end": 687, "text": "(Oepen and Carroll, 2000)", "ref_id": "BIBREF18" }, { "start": 1284, "end": 1308, "text": "Carroll and Oepen (2005)", "ref_id": "BIBREF5" }, { "start": 1313, "end": 1333, "text": "Zhang et al. (2007b)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "The Technique", "sec_num": "3.1" }, { "text": "The other challenge concerns the selection of informative and representative pieces of knowledge from the massive sea of partial analyses in the parsing chart. How to effectively extract the indicative features for a specific language phenomenon is a very task-specific question, as we will show in the context of the VPC extraction task in Section 3.2. However, general strategies can be applied to generate parse ranking scores on each passive edge. The most widely used parse ranking model is the log-linear model (Abney, 1997; Johnson et al., 1999; Toutanova et al., 2002) . When the model does not use non-local features, the accumulated score on a sub-tree under a certain (unpacked) passive edge can be used to approximate the probability of the partial analysis conditioned on the sub-string within that span. 3", "cite_spans": [ { "start": 516, "end": 529, "text": "(Abney, 1997;", "ref_id": "BIBREF0" }, { "start": 530, "end": 551, "text": "Johnson et al., 1999;", "ref_id": "BIBREF12" }, { "start": 552, "end": 575, "text": "Toutanova et al., 2002)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "The Technique", "sec_num": "3.1" }, { "text": "As stated above, the target task we use to illustrate the capabilities of our chart mining method is VPC extraction. The grammar we apply our chart mining method to in this paper is the English Resource Grammar (ERG, Flickinger (2002) ), a large-scale precision HPSG for English. Note, however, that the method is equally compatible with any grammar or grammar formalism which is compatible with chart parsing.", "cite_spans": [ { "start": 211, "end": 234, "text": "(ERG, Flickinger (2002)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "The Application: Acquiring Features for VPC Extraction", "sec_num": "3.2" }, { "text": "The lexicon of the ERG has been semi-automatically extended with VPCs extracted by Baldwin (2005) . 
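To make the passive-edge traversal of Section 3.1 concrete, the following is a minimal Python sketch of the kind of chart representation the mining procedure assumes; the Edge class and the ancestors() helper are illustrative names of our own, not part of the ERG or PET APIs:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Edge:
    # A passive edge: a complete local analysis over the input span [start, end).
    start: int
    end: int
    rule: str                      # grammar rule (or lexical type) that built the edge
    daughters: List['Edge'] = field(default_factory=list)
    parents: List['Edge'] = field(default_factory=list)
    score: float = 0.0             # accumulated log-linear disambiguation score

def ancestors(edge):
    # Walk upward through parent edges; the chart is a DAG (an edge can have
    # several parents), so visited edges are deduplicated.
    seen, stack = set(), list(edge.parents)
    while stack:
        e = stack.pop()
        if id(e) in seen:
            continue
        seen.add(id(e))
        yield e
        stack.extend(e.parents)

Each passive edge thus records its daughters (the sub-analyses it was built from) and its parents (the larger analyses it participates in), which is all that the chart mining below relies on. 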
In order to show the effectiveness of chart mining in discovering \"unknowns\" and to remove any lexical probabilities associated with pre-existing lexical entries, we block the lexical entries for the verb in the candidate VPC by substituting the input token with a DUMMY-V token, which is coupled with four candidate lexical entries of type: (1) intransitive simplex verb (v -le), (2) transitive simplex verb (v np le), (3) intransitive VPC (v p le), and (4) transitive VPC (v p-np le), respectively. These four lexical entries represent the two VPC valences we wish to distinguish between in the VPC extraction task, and the competing simplex verb candidates. Based on these lexical types, the features we extract with chart mining are summarised in Table 1 . The maximal constituent (MAXCONS) of a lexical entry is defined to be the passive edge which is an ancestor of the lexical entry edge, and which: (i) spans over the particle, and (ii) has maximal span length. In the case of a tie, the edge with the highest disambiguation score is selected as the MAXCONS. If there is no edge found on the chart that spans over both the verb and the particle, the MAXCONS is set to be NULL, with a MAXSPAN of 0, MAXLEVEL of 0 and MAXCRANK of 4 (see Table 1 ). The stem of the particle is also collected as a feature.", "cite_spans": [ { "start": 82, "end": 96, "text": "Baldwin (2005)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 846, "end": 853, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 1335, "end": 1342, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "The Application: Acquiring Features for VPC Extraction", "sec_num": "3.2" }, { "text": "One important characteristic of these features is that they are completely unlexicalised on the verb. This not only leads to a fair evaluation with the ERG by excluding the influence from the lexical coverage of VPCs in the grammar, but it also demonstrates that complete grammatical coverage over simplex verbs is not a prerequisite for chart mining.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Application: Acquiring Features for VPC Extraction", "sec_num": "3.2" }, { "text": "To illustrate how our method works, we present the unpacked parsing chart for the candidate VPC show off and input sentence The boy shows off his new toys in Figure 1 . The non-terminal edges are marked with their syntactic categories, i.e., HPSG rules (e.g., subjh for the subject-head-rule, hadj for the head-adjunct-rule, etc.), and optionally their disambiguation scores. By traversing upward through parent edges from the DUMMY-V edge, all features can be efficiently extracted (see the third column in Table 1 ).", "cite_spans": [], "ref_spans": [ { "start": 158, "end": 166, "text": "Figure 1", "ref_id": null }, { "start": 508, "end": 515, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "The Application: Acquiring Features for VPC Extraction", "sec_num": "3.2" }, { "text": "It should be noted that none of these features are used to deterministically dictate the predicted VPC category. Instead, the acquired features are used as inputs to a statistical classifier for predicting the type of the VPC candidate at the token level (in the context of the given sentence). In our experiment, we used a maximum entropy-based model to do a 3-category classification: non-VPC, transitive VPC, or intransitive VPC. [Figure 1: Example of a parsing chart in chart-mining for VPC extraction with the ERG.] 
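As a sketch of how the MAXCONS-centred features could be mined from such a chart, the following builds on the illustrative Edge/ancestors() helpers above; since the full feature inventory of Table 1 is not reproduced here, the exact readings of MAXLEVEL and MAXCRANK below are our own assumptions:

def score_rank(edge, competitors):
    # Assumed reading of MAXCRANK: 1-based rank of the edge among the
    # competing spanning edges, ordered by descending disambiguation score.
    return 1 + sum(1 for e in competitors if e.score > edge.score)

def depth_between(ancestor, descendant):
    # Assumed reading of MAXLEVEL: number of rule applications separating
    # the lexical-entry edge from its maximal constituent.
    frontier, depth = [ancestor], 0
    while frontier:
        if any(e is descendant for e in frontier):
            return depth
        frontier = [d for e in frontier for d in e.daughters]
        depth += 1
    return 0

def maxcons_features(lex_edge, prt_start, prt_end):
    # MAXCONS: the ancestor of the candidate lexical-entry edge that spans
    # the particle and has maximal span length, breaking ties by score.
    spanning = [e for e in ancestors(lex_edge)
                if e.start <= prt_start and e.end >= prt_end]
    if not spanning:
        # No edge covers both verb and particle: the NULL defaults apply.
        return {'MAXCONS': None, 'MAXSPAN': 0, 'MAXLEVEL': 0, 'MAXCRANK': 4}
    best = max(spanning, key=lambda e: (e.end - e.start, e.score))
    return {'MAXCONS': best.rule,
            'MAXSPAN': best.end - best.start,
            'MAXLEVEL': depth_between(best, lex_edge),
            'MAXCRANK': score_rank(best, spanning)}

One such feature vector is mined per candidate lexical entry (the four DUMMY-V entries above), and the vectors are passed on to the classifier described next. 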
For the parameter estimation of the ME model, we use the TADM open source toolkit (Malouf, 2002) . The token-level predictions are then combined with simple majority voting to derive the type-level prediction for the VPC candidate. In the case of a tie, the method backs off to the na\u00efve baseline model described in Section 4.2, which relies on the combined probability of the verb and particle forming a VPC.", "cite_spans": [ { "start": 602, "end": 616, "text": "(Malouf, 2002)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 363, "end": 371, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Application: Acquiring Features for VPC Extraction", "sec_num": "3.2" }, { "text": "We have also experimented with other ways of deriving type-level predictions from token-level classification results. For instance, we trained a separate classifier that takes the token-level prediction as input in order to determine the type-level VPC prediction. Our results indicate no significant difference between these methods and the basic majority voting approach, so we present results exclusively for this simplistic approach in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Application: Acquiring Features for VPC Extraction", "sec_num": "3.2" }, { "text": "To evaluate the proposed chart mining-based VPC extraction model, we use the dataset from the LREC 2008 Multiword Expression Workshop shared task (see Section 2). We use this dataset to perform three distinct DLA tasks, as detailed in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 235, "end": 242, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4.1" }, { "text": "Table 2 (Task / Description). GOLD VPC: determine the valence for a verb-preposition combination which is known to occur as a non-compositional VPC (i.e. known VPC, with unknown valence(s)). FULL: determine whether each verb-preposition combination is a VPC or not, and further predict its valence(s) (i.e. unknown if VPC, and unknown valence(s)). VPC: determine whether each verb-preposition combination is a VPC or not, ignoring valence (i.e. unknown if VPC, and don't care about valence).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4.1" }, { "text": "The chart mining feature extraction is implemented as an extension to the PET parser (Callmeier, 2001). We use a slightly modified version of the ERG in our experiments, based on the nov-06 release. The modifications include 4 newly-added dummy lexical entries for the verb DUMMY-V and the corresponding inflectional rules, and a lexical type prediction model (Zhang and Kordoni, 2006) trained on the LOGON Treebank (Oepen et al., 2004) for unknown word handling. The parse disambiguation model we use is also trained on the LOGON Treebank. Since the parser has no access to any of the verbs under investigation (due to the DUMMY-V substitution), those VPC types attested in the LOGON Treebank do not directly impact on the model's performance. The chart mining feature extraction process took over 10 CPU days, and collected a total of 44K events for 4,090 candidate VPC triples. 4 5-fold cross validation is used to train/test the model. 
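In code form, the type-level decision procedure of Section 3.2 amounts to the following minimal sketch (the function name and the backoff callable are our own illustration, not the actual implementation):

from collections import Counter

def type_level_prediction(token_predictions, backoff):
    # token_predictions holds one label per putative token instance,
    # e.g. ['non-VPC', 'trans', 'trans']; backoff() implements the naive
    # baseline of Section 4.2 and is only consulted on ties.
    counts = Counter(token_predictions).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return backoff()
    return counts[0][0]

For example, type_level_prediction(['trans', 'trans', 'non-VPC'], lambda: 'non-VPC') returns 'trans', while an evenly split vote defers to the baseline. 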
As stated above (Section 2), the VPC triples attested in the WSJ training sections of the Penn Treebank are excluded in each testing fold for comparison with the Charniak parser-based model (see Section 4.2).", "cite_spans": [ { "start": 85, "end": 102, "text": "(Callmeier, 2001)", "ref_id": "BIBREF4" }, { "start": 360, "end": 385, "text": "(Zhang and Kordoni, 2006)", "ref_id": "BIBREF28" }, { "start": 415, "end": 435, "text": "(Oepen et al., 2004)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4.1" }, { "text": "For comparison, we first built a na\u00efve baseline model using the combined probabilities of the verb and particle being part of a VPC. More specifically, P (s|v) and P (s|p) are the probabilities of a given verb v and particle p being part of a VPC candidate of type s \u2208 {intrans, trans, null}, for intransitive VPC, transitive VPC, and non-VPC, respectively. P (s|v, p) = P (s|v) \u2022 P (s|p) is used to approximate the joint probability of verb-particle (v, p) being of type s, and the prediction type is chosen randomly based on this probabilistic distribution. Both P (s|v) and P (s|p) can be estimated from a list of VPC candidate types. If v is unseen, P (s|v) is set to be (1/|V |) \u2211_{v_i \u2208 V} P (s|v_i), estimated over all |V | verbs seen in the list of VPC candidates. The na\u00efve baseline performed poorly, mainly because there is not enough knowledge about the context of use of VPCs. This also indicates that the task of VPC extraction is non-trivial, and that context (evidence from sentences in which the VPC putatively occurs) must be incorporated in order to make more accurate predictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline and Benchmark", "sec_num": "4.2" }, { "text": "As a benchmark VPC extraction system, we use the Charniak parser (Charniak, 2000) . This statistical parser induces a context-free grammar and a generative parsing model from a training set of gold-standard parse trees. Traditionally, it has been trained over the WSJ component of the Penn Treebank, and for this work we decided to take the same approach and train over sections 1 to 22, and use section 23 for parameter-tuning. After parsing, we simply search for the VPC triples in each token instance with tgrep2, 5 and decide on the classification of the candidate by majority voting over all instances, breaking ties randomly.", "cite_spans": [ { "start": 65, "end": 81, "text": "(Charniak, 2000)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline and Benchmark", "sec_num": "4.2" }, { "text": "The results of our experiments are summarised in Table 3 . For the na\u00efve baseline and the chart mining-based models, the results are averaged over 5-fold cross validation.", "cite_spans": [], "ref_spans": [ { "start": 49, "end": 56, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "We evaluate the methods in the form of the three tasks described in Table 2 . Formally, GOLD VPC equates to extracting \u27e8v, p, s\u27e9 tuples from the subset of gold-standard \u27e8v, p\u27e9 tuples; FULL equates to extracting \u27e8v, p, s\u27e9 tuples for all VPC candidates; and VPC equates to extracting \u27e8v, p\u27e9 tuples (ignoring valence) over all VPC candidates. 
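Returning to the na\u00efve baseline of Section 4.2, a minimal sketch of its decision rule, under the stated backoff for unseen verbs (applied symmetrically to unseen particles; all names are illustrative):

import random

TYPES = ('intrans', 'trans', 'null')   # intransitive VPC, transitive VPC, non-VPC

def baseline_predict(v, p, P_v, P_p):
    # P_v[v][s] approximates P(s|v) and P_p[p][s] approximates P(s|p),
    # both estimated from the list of VPC candidate types; a type is then
    # sampled from the product P(s|v) * P(s|p).
    def lookup(table, key):
        if key in table:
            return table[key]
        # unseen item: back off to the average distribution over seen items
        n = len(table)
        return {s: sum(d[s] for d in table.values()) / n for s in TYPES}
    pv, pp = lookup(P_v, v), lookup(P_p, p)
    weights = [pv[s] * pp[s] for s in TYPES]
    if sum(weights) == 0:
        weights = [1.0] * len(TYPES)   # degenerate case: fall back to uniform
    return random.choices(TYPES, weights=weights)[0]

This is the model referred to as the na\u00efve baseline in the results below. 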
In each case, we present the precision (P), recall (R) and F-score (\u03b2 = 1: F). For multi-category classifications (i.e. the two tasks where we predict the valence s, indicated as \"All\" in Table 3 ), we micro-average the precision and recall over the two VPC categories, and calculate the F-score as their harmonic mean.", "cite_spans": [], "ref_spans": [ { "start": 68, "end": 75, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 520, "end": 527, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "From the results, it is obvious that the chart mining-based model performs best overall, and indeed for most of the measures presented. The Charniak parser-based extraction method performs reasonably well, especially in VPC+valence extraction over the FULL task, where its recall was higher than that of the chart mining method. Although not reported here, we observe a marked improvement in the results for the Charniak parser when the VPC types attested in the WSJ are not filtered from the test set. This indicates that the statistical parser relies heavily on lexicalised VPC information, while the chart mining model is much more syntax-oriented. In error analysis of the data, we observed that the Charniak parser was noticeably more accurate at extracting VPCs where the verb was frequent (our method, of course, did not have access to the base frequency of the simplex verb), underlining again the power of lexicalisation. This points to two possibilities: (1) the potential for our method to similarly benefit from lexicalisation if we were to remove the constraint on ignoring any pre-existing lexical entries for the verb; and (2) the possibility of hybridising between lexicalised models for frequent verbs and unlexicalised models for infrequent verbs. Having said this, it is important to reinforce that lexical acquisition is usually performed in the absence of lexicalised probabilities: if we have prior knowledge of the lexical item, there is no need to extract it. In this sense, the first set of results in Table 3 over Gold VPCs is the most informative, and illustrates the potential of the proposed approach.", "cite_spans": [], "ref_spans": [ { "start": 1532, "end": 1539, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "From the results of all the models, it would appear that intransitive VPCs are more difficult to extract than transitive VPCs. This is partly because the dataset we use is unbalanced: the number of transitive VPC types is about twice the number of intransitive VPCs. Also, the much lower numbers over the FULL set compared to the GOLD VPC set are due to the fact that only 1/8 of the candidates are true VPCs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "The inventory of features we propose for VPC extraction is just one illustration of how partial parse results can be used in lexical acquisition tasks. The general chart mining technique can easily be adapted to learn other challenging linguistic phenomena, such as the countability of nouns (Baldwin and Bond, 2003) , subcategorization properties of verbs or nouns (Korhonen, 2002) , and general multiword expression (MWE) extraction (Baldwin and Kim, 2009) . 
In MWE extraction, for example, even though some MWEs, such as ad hoc, are fixed and have no internal syntactic variability, a very large proportion of idioms allow various degrees of internal variability, with a variable number of elements. For example, the idiom spill the beans allows internal modification (spill mountains of beans), passivisation (The beans were spilled in the latest edition of the report), topicalisation (The beans, the opposition spilled), and so forth (Sag et al., 2002) . In general, however, the exact degree of variability of an idiom is difficult to predict (Riehemann, 2001 ). The chart mining technique we propose here, which makes use of partial parse results, may facilitate the automatic recognition of even more flexible idioms, based on the encouraging results for VPCs.", "cite_spans": [ { "start": 292, "end": 316, "text": "(Baldwin and Bond, 2003)", "ref_id": "BIBREF1" }, { "start": 366, "end": 382, "text": "(Korhonen, 2002)", "ref_id": "BIBREF14" }, { "start": 435, "end": 458, "text": "(Baldwin and Kim, 2009)", "ref_id": "BIBREF2" }, { "start": 952, "end": 970, "text": "(Sag et al., 2002)", "ref_id": "BIBREF22" }, { "start": 1062, "end": 1078, "text": "(Riehemann, 2001", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "5" }, { "text": "The main advantage, though, of chart mining is that parsing with precision grammars no longer has to assume complete coverage, as has traditionally been the case. As an immediate consequence, the possibility of applying our chart mining technique to evolving medium-sized grammars makes it especially interesting for lexical acquisition over low-density languages, for instance, where there is a real need for rapid prototyping of language resources. [Table 3: Results for the different methods over the three VPC extraction tasks detailed in Table 2.]", "cite_spans": [], "ref_spans": [ { "start": 345, "end": 352, "text": "Table 3", "ref_id": null }, { "start": 437, "end": 444, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "5" }, { "text": "The chart mining approach we propose in this paper is couched in the bottom-up chart parsing paradigm, based exclusively on passive edges. As future work, we would also like to look into the top-level active edges (those active edges that are never completed), as an indication of failed assumptions. Moreover, it would be interesting to investigate the applicability of the technique in other parsing strategies, e.g., head-corner or left-corner parsing. Finally, it would also be interesting to investigate whether, by enhancing the features we acquire from chart mining with information on the prevalence of certain patterns, we could achieve performance improvements over broader-coverage treebank parsers such as the Charniak parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "5" }, { "text": "We have proposed a chart mining technique for lexical acquisition based on partial parsing with precision grammars. We applied the proposed method to the task of extracting English verb particle constructions from a prescribed set of corpus instances. 
Our results showed that simple unlexicalised features mined from the chart can be used to effectively extract VPCs, and that the model outperforms a probabilistic baseline and the Charniak parser at VPC extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Downloadable from http://www.csse.unimelb.edu.au/research/lt/resources/vpc/vpc.tgz. 2 In practice, there was only one team who participated in the original VPC task (Ramisch et al., 2008), who used a variety of web- and dictionary-based features suited more to high-frequency instances in high-density languages, so a simplistic comparison would not have been meaningful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "To have a consistent ranking model on any sub-analysis, one would have to retrain the disambiguation model on every passive edge. In practice, we find this to be intractable. Also, the approximation based on the full-parse ranking model works reasonably well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Not all sentences in the dataset are successfully chart-mined. Due to the complexity of the precision grammar we use, the parser is unlikely to complete the parsing chart for extremely long sentences (over 50 words). Moreover, sentences which do not receive any spanning edge over the verb and the particle do not contribute an indicative event. Nevertheless, the coverage of the chart mining is much higher than the full-parse coverage of the grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Noting that the Penn POS tagset captures essentially the compositional vs. non-compositional VPC distinction required in the extraction task, through the use of the RP (prepositional particle, for non-compositional VPCs) and RB (adverb, for compositional VPCs) tags.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Stochastic attribute-value grammars", "authors": [ { "first": "Steven", "middle": [], "last": "Abney", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "", "pages": "597--618", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Abney. 1997. Stochastic attribute-value grammars. Computational Linguistics, 23:597-618.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning the countability of English nouns from corpus data", "authors": [ { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Bond", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003)", "volume": "", "issue": "", "pages": "463--470", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Baldwin and Francis Bond. 2003. Learning the countability of English nouns from corpus data. 
In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003), pages 463-470, Sapporo, Japan.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Multiword expressions", "authors": [ { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" } ], "year": 2009, "venue": "Handbook of Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Baldwin and Su Nam Kim. 2009. Multiword expressions. In Nitin Indurkhya and Fred J. Damerau, editors, Handbook of Natural Language Processing. CRC Press, Boca Raton, USA, 2nd edition.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The deep lexical acquisition of English verb-particle constructions", "authors": [ { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2005, "venue": "Computer Speech and Language, Special Issue on Multiword Expressions", "volume": "19", "issue": "4", "pages": "398--414", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Baldwin. 2005. The deep lexical acquisition of English verb-particle constructions. Computer Speech and Language, Special Issue on Multiword Expressions, 19(4):398-414.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A resource for evaluating the deep lexical acquisition of English verb-particle constructions", "authors": [ { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the LREC 2008 Workshop: Towards a Shared Task for Multiword Expressions (MWE 2008)", "volume": "", "issue": "", "pages": "1--2", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Baldwin. 2008. A resource for evaluating the deep lexical acquisition of English verb-particle constructions. In Proceedings of the LREC 2008 Workshop: Towards a Shared Task for Multiword Expressions (MWE 2008), pages 1-2, Marrakech, Morocco. Ulrich Callmeier. 2001. Efficient parsing with large-scale unification grammars. Master's thesis, Universit\u00e4t des Saarlandes, Saarbr\u00fccken, Germany.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "High efficiency realization for a wide-coverage unification grammar", "authors": [ { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 2nd International Joint Conference on Natural Language Processing (IJCNLP 2005)", "volume": "", "issue": "", "pages": "165--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Carroll and Stephan Oepen. 2005. High efficiency realization for a wide-coverage unification grammar. In Proceedings of the 2nd International Joint Conference on Natural Language Processing (IJCNLP 2005), pages 165-176, Jeju Island, Korea.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A maximum-entropy-inspired parser", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 1st Annual Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL 2000)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. 
In Proceedings of the 1st Annual Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL 2000), Seattle, USA.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A generalized method for iterative error mining in parsing results", "authors": [ { "first": "Daniel", "middle": [], "last": "de Kok", "suffix": "" }, { "first": "Jianqiang", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Gertjan", "middle": [], "last": "van Noord", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the ACL2009 Workshop on Grammar Engineering Across Frameworks (GEAF)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel de Kok, Jianqiang Ma, and Gertjan van Noord. 2009. A generalized method for iterative error mining in parsing results. In Proceedings of the ACL2009 Workshop on Grammar Engineering Across Frameworks (GEAF), Singapore.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Particle Verbs in English: Syntax, Information, Structure and Intonation", "authors": [ { "first": "Nicole", "middle": [], "last": "Deh\u00e9", "suffix": "" } ], "year": 2002, "venue": "John Benjamins", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicole Deh\u00e9. 2002. Particle Verbs in English: Syntax, Information, Structure and Intonation. John Benjamins, Amsterdam, Netherlands/Philadelphia, USA.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "On building a more efficient grammar by exploiting types", "authors": [ { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2002, "venue": "Collaborative Language Engineering", "volume": "", "issue": "", "pages": "1--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Flickinger. 2002. On building a more efficient grammar by exploiting types. In Stephan Oepen, Dan Flickinger, Jun'ichi Tsujii, and Hans Uszkoreit, editors, Collaborative Language Engineering, pages 1-17. CSLI Publications.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Question answering from structured knowledge sources", "authors": [ { "first": "Anette", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Hans-Ulrich", "middle": [], "last": "Krieger", "suffix": "" }, { "first": "Feiyu", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Hans", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Berthold", "middle": [], "last": "Crysmann", "suffix": "" }, { "first": "Brigitte", "middle": [], "last": "J\u00f6rg", "suffix": "" }, { "first": "Ulrich", "middle": [], "last": "Sch\u00e4fer", "suffix": "" } ], "year": 2006, "venue": "Journal of Applied Logic, Special Issue on Questions and Answers: Theoretical and Applied Perspectives", "volume": "5", "issue": "1", "pages": "20--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anette Frank, Hans-Ulrich Krieger, Feiyu Xu, Hans Uszkoreit, Berthold Crysmann, Brigitte J\u00f6rg, and Ulrich Sch\u00e4fer. 2006. Question answering from structured knowledge sources. 
Journal of Applied Logic, Special Issue on Questions and Answers: Theoretical and Applied Perspectives, 5(1):20-48.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The Cambridge Grammar of the English Language", "authors": [ { "first": "Rodney", "middle": [], "last": "Huddleston", "suffix": "" }, { "first": "Geoffrey", "middle": [ "K" ], "last": "Pullum", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rodney Huddleston and Geoffrey K. Pullum. 2002. The Cambridge Grammar of the English Language. Cambridge University Press, Cambridge, UK.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Estimators for stochastic unification-based grammars", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Geman", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Canon", "suffix": "" }, { "first": "Zhiyi", "middle": [], "last": "Chi", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL 1999)", "volume": "", "issue": "", "pages": "535--541", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson, Stuart Geman, Stephen Canon, Zhiyi Chi, and Stefan Riezler. 1999. Estimators for stochastic unification-based grammars. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL 1999), pages 535-541, Maryland, USA.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A Bag of Useful Techniques for Efficient and Robust Parsing", "authors": [ { "first": "Bernd", "middle": [], "last": "Kiefer", "suffix": "" }, { "first": "Hans-Ulrich", "middle": [], "last": "Krieger", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Malouf", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "473--480", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernd Kiefer, Hans-Ulrich Krieger, John Carroll, and Rob Malouf. 1999. A Bag of Useful Techniques for Efficient and Robust Parsing. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 473-480, Maryland, USA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Subcategorization Acquisition", "authors": [ { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Korhonen. 2002. Subcategorization Acquisition. Ph.D. thesis, University of Cambridge.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Biomedical event annotation with CRFs and precision grammars", "authors": [ { "first": "Andrew", "middle": [], "last": "MacKinlay", "suffix": "" }, { "first": "David", "middle": [], "last": "Martinez", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2009, "venue": "Proceedings of BioNLP 2009: Shared Task", "volume": "", "issue": "", "pages": "77--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew MacKinlay, David Martinez, and Timothy Baldwin. 2009. Biomedical event annotation with CRFs and precision grammars. 
In Proceedings of BioNLP 2009: Shared Task, pages 77-85, Boulder, USA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A comparison of algorithms for maximum entropy parameter estimation", "authors": [ { "first": "Robert", "middle": [], "last": "Malouf", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 6th Conference on Natural Language Learning (CoNLL 2002)", "volume": "", "issue": "", "pages": "49--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proceedings of the 6th Conference on Natural Language Learning (CoNLL 2002), pages 49-55, Taipei, Taiwan. Christopher D. Manning and Hinrich Sch\u00fctze. 1999. Foundations of Statistical Natural Language Processing. MIT Press.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Collocations", "authors": [ { "first": "Kathleen", "middle": [ "R" ], "last": "McKeown", "suffix": "" }, { "first": "Dragomir", "middle": [ "R" ], "last": "Radev", "suffix": "" } ], "year": 2000, "venue": "Handbook of Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kathleen R. McKeown and Dragomir R. Radev. 2000. Collocations. In Robert Dale, Hermann Moisl, and Harold Somers, editors, Handbook of Natural Language Processing.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Ambiguity packing in constraint-based parsing - practical results", "authors": [ { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 1st Annual Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL 2000)", "volume": "", "issue": "", "pages": "162--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Oepen and John Carroll. 2000. Ambiguity packing in constraint-based parsing - practical results. In Proceedings of the 1st Annual Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL 2000), pages 162-169, Seattle, USA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Som \u00e5 kapp-ete med trollet? 
Towards MRS-Based Norwegian-English Machine Translation", "authors": [ { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" }, { "first": "Helge", "middle": [], "last": "Dyvik", "suffix": "" }, { "first": "Jan", "middle": [ "Tore" ], "last": "L\u00f8nning", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Velldal", "suffix": "" }, { "first": "Dorothee", "middle": [], "last": "Beermann", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" }, { "first": "Lars", "middle": [], "last": "Hellan", "suffix": "" }, { "first": "Janne", "middle": [], "last": "Bondi Johannessen", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Meurer", "suffix": "" }, { "first": "Torbj\u00f8rn", "middle": [], "last": "Nordg\u00e5rd", "suffix": "" }, { "first": "Victoria", "middle": [], "last": "Ros\u00e9n", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 10th International Conference on Theoretical and Methodological Issues in Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Oepen, Helge Dyvik, Jan Tore L\u00f8nning, Erik Velldal, Dorothee Beermann, John Carroll, Dan Flickinger, Lars Hellan, Janne Bondi Johannessen, Paul Meurer, Torbj\u00f8rn Nordg\u00e5rd, and Victoria Ros\u00e9n. 2004. Som \u00e5 kapp-ete med trollet? Towards MRS-Based Norwegian-English Machine Translation. In Proceedings of the 10th International Conference on Theoretical and Methodological Issues in Machine Translation, Baltimore, USA.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "An evaluation of methods for the extraction of multiword expressions", "authors": [ { "first": "Carlos", "middle": [], "last": "Ramisch", "suffix": "" }, { "first": "Paulo", "middle": [], "last": "Schreiner", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Idiart", "suffix": "" }, { "first": "Aline", "middle": [], "last": "Villavicencio", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the LREC 2008 Workshop: Towards a Shared Task for Multiword Expressions (MWE 2008)", "volume": "", "issue": "", "pages": "50--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlos Ramisch, Paulo Schreiner, Marco Idiart, and Aline Villavicencio. 2008. An evaluation of methods for the extraction of multiword expressions. In Proceedings of the LREC 2008 Workshop: Towards a Shared Task for Multiword Expressions (MWE 2008), pages 50-53, Marrakech, Morocco.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A Constructional Approach to Idioms and Word Formation", "authors": [ { "first": "Susanne", "middle": [], "last": "Riehemann", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Susanne Riehemann. 2001. A Constructional Approach to Idioms and Word Formation. Ph.D. 
thesis, Stanford University, CA, USA.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Multiword expressions: A pain in the neck for NLP", "authors": [ { "first": "Ivan", "middle": [ "A" ], "last": "Sag", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Bond", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 3rd International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-2002)", "volume": "", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan A. Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002. Multiword expressions: A pain in the neck for NLP. In Proceedings of the 3rd International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-2002), pages 1-15, Mexico City, Mexico.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Retrieving collocations from text: Xtract", "authors": [ { "first": "Frank", "middle": [], "last": "Smadja", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "143--178", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frank Smadja. 1993. Retrieving collocations from text: Xtract. Computational Linguistics, 19(1):143-178.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "An efficient context-free parsing algorithm for natural languages", "authors": [ { "first": "Masaru", "middle": [], "last": "Tomita", "suffix": "" } ], "year": 1985, "venue": "Proceedings of the 9th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "756--764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masaru Tomita. 1985. An efficient context-free parsing algorithm for natural languages. In Proceedings of the 9th International Joint Conference on Artificial Intelligence, pages 756-764, Los Angeles, USA.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Parse ranking for a rich HPSG grammar", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Stuart", "middle": [ "M" ], "last": "Shieber", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 1st Workshop on Treebanks and Linguistic Theories (TLT 2002)", "volume": "", "issue": "", "pages": "253--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Christopher D. Manning, Stuart M. Shieber, Dan Flickinger, and Stephan Oepen. 2002. Parse ranking for a rich HPSG grammar. 
In Proceedings of the 1st Workshop on Treebanks and Linguistic Theories (TLT 2002), pages 253-263, Sozopol, Bulgaria.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "New chances for deep linguistic processing", "authors": [ { "first": "Hans", "middle": [], "last": "Uszkoreit", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 19th International Conference on Computational Linguistics (COLING 2002)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hans Uszkoreit. 2002. New chances for deep linguistic processing. In Proceedings of the 19th International Conference on Computational Linguistics (COLING 2002), Taipei, Taiwan. Gertjan van Noord. 2004. Error mining for wide-coverage grammar engineering. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 446-453, Barcelona, Spain.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Verb-particle constructions in a computational grammar of English", "authors": [ { "first": "Aline", "middle": [], "last": "Villavicencio", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Copestake", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 9th International Conference on Head-Driven Phrase Structure Grammar", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aline Villavicencio and Ann Copestake. 2002. Verb-particle constructions in a computational grammar of English. In Proceedings of the 9th International Conference on Head-Driven Phrase Structure Grammar (HPSG-2002), Seoul, Korea.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Automated deep lexical acquisition for robust open texts processing", "authors": [ { "first": "Yi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Valia", "middle": [], "last": "Kordoni", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006)", "volume": "", "issue": "", "pages": "275--280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Zhang and Valia Kordoni. 2006. Automated deep lexical acquisition for robust open texts processing. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006), pages 275-280, Genoa, Italy.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Robust parsing with a large HPSG grammar", "authors": [ { "first": "Yi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Valia", "middle": [], "last": "Kordoni", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Sixth International Language Resources and Evaluation (LREC'08)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Zhang and Valia Kordoni. 2008. Robust parsing with a large HPSG grammar.
In Proceedings of the Sixth International Language Resources and Evaluation (LREC'08), Marrakech, Morocco.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Partial parse selection for robust deep processing", "authors": [ { "first": "Yi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Valia", "middle": [], "last": "Kordoni", "suffix": "" }, { "first": "Erin", "middle": [], "last": "Fitzgerald", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL 2007 Workshop on Deep Linguistic Processing", "volume": "", "issue": "", "pages": "128--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Zhang, Valia Kordoni, and Erin Fitzgerald. 2007a. Partial parse selection for robust deep processing. In Proceedings of ACL 2007 Workshop on Deep Linguistic Processing, pages 128-135, Prague, Czech Republic.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Efficiency in unification-based N-best parsing", "authors": [ { "first": "Yi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 10th International Conference on Parsing Technologies (IWPT 2007)", "volume": "", "issue": "", "pages": "48--59", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Zhang, Stephan Oepen, and John Carroll. 2007b. Efficiency in unification-based N-best parsing. In Proceedings of the 10th International Conference on Parsing Technologies (IWPT 2007), pages 48-59, Prague, Czech Republic.", "links": null } }, "ref_entries": { "TABREF1": { "content": "
[Figure: partial parsing chart for the sentence 'the boy (0-2) shows (2-3) off (3-4) his new toys (4-7)', over chart vertices 0, 2, 3, 4, 7. Lexical edges: v_-_le, v_np_le, v_p_le, v_p-np_le, DUMMY-V, NP1, PRTL, PREP, NP2. Phrasal edges: VP1-hadj, VP2-hadj (.325), VP3-hcomp, VP4-hcomp, VP5-hcomp, PP-hcomp. Spanning edges: S1-subjh (.125), S2-subjh (.925), S3-subjh (.875).]
", "text": "Chart mining features used for VPC extraction", "num": null, "html": null, "type_str": "table" }, "TABREF2": { "content": "", "text": "Definitions of the three DLA tasks", "num": null, "html": null, "type_str": "table" }, "TABREF3": { "content": "
Task  VPC Type      Na\u00efve Baseline       Charniak Parser       Chart-Mining
                    P      R      F        P      R      F        P      R      F
GOLD  Intrans-VPC   0.300  0.018  0.034    0.549  0.753  0.635    0.845  0.621  0.716
      Trans-VPC     0.676  0.348  0.459    0.829  0.648  0.728    0.877  0.956  0.915
      All           0.576  0.236  0.335    0.691  0.686  0.688    0.875  0.859  0.867
VPC   VPC           0.123  0.348  0.182    0.173  0.782  0.284    0.259  0.332  0.291
FULL  Intrans-VPC   0.060  0.018  0.028    0.102  0.593  0.174    0.153  0.155  0.154
      Trans-VPC     0.083  0.348  0.134    0.179  0.448  0.256    0.179  0.362  0.240
      All           0.080  0.236  0.119    0.136  0.500  0.213    0.171  0.298  0.218
", "text": "Results for the three DLA tasks (P: precision, R: recall, F: F-score)", "num": null, "html": null, "type_str": "table" } } } }