{ "paper_id": "P12-1022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:28:19.184791Z" }, "title": "Discriminative Strategies to Integrate Multiword Expression Recognition and Parsing", "authors": [ { "first": "Matthieu", "middle": [], "last": "Constant", "suffix": "", "affiliation": { "laboratory": "", "institution": "CNRS", "location": { "country": "France" } }, "email": "mconstan@univ-mlv.fr" }, { "first": "Anthony", "middle": [], "last": "Sigogne", "suffix": "", "affiliation": { "laboratory": "", "institution": "CNRS", "location": { "country": "France" } }, "email": "sigogne@univ-mlv.fr" }, { "first": "Patrick", "middle": [], "last": "Watrin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit\u00e9 de Louvain CENTAL", "location": { "country": "Belgium" } }, "email": "patrick.watrin@uclouvain.be" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The integration of multiword expressions in a parsing procedure has been shown to improve accuracy in an artificial context where such expressions have been perfectly pre-identified. This paper evaluates two empirical strategies to integrate multiword units in a real constituency parsing context and shows that the results are not as promising as has sometimes been suggested. Firstly, we show that pregrouping multiword expressions before parsing with a state-of-the-art recognizer improves multiword recognition accuracy and unlabeled attachment score. However, it has no statistically significant impact in terms of F-score as incorrect multiword expression recognition has important side effects on parsing. Secondly, integrating multiword expressions in the parser grammar followed by a reranker specific to such expressions slightly improves all evaluation metrics.", "pdf_parse": { "paper_id": "P12-1022", "_pdf_hash": "", "abstract": [ { "text": "The integration of multiword expressions in a parsing procedure has been shown to improve accuracy in an artificial context where such expressions have been perfectly pre-identified. This paper evaluates two empirical strategies to integrate multiword units in a real constituency parsing context and shows that the results are not as promising as has sometimes been suggested. Firstly, we show that pregrouping multiword expressions before parsing with a state-of-the-art recognizer improves multiword recognition accuracy and unlabeled attachment score. However, it has no statistically significant impact in terms of F-score as incorrect multiword expression recognition has important side effects on parsing. Secondly, integrating multiword expressions in the parser grammar followed by a reranker specific to such expressions slightly improves all evaluation metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The integration of Multiword Expressions (MWE) in real-life applications is crucial because such expressions have the particularity of having a certain level of idiomaticity. They form complex lexical units which, if they are considered, should significantly help parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "From a theoretical point of view, the integration of multiword expressions in the parsing procedure has been studied for different formalisms: Head-Driven Phrase Structure Grammar , Tree Adjoining Grammars (Schuler and Joshi, 2011) , etc. 
From an empirical point of view, their incorporation has also been considered, for instance in (Nivre and Nilsson, 2004) for dependency parsing and in (Arun and Keller, 2005) for constituency parsing. Although experiments always relied on a corpus where the MWEs were perfectly pre-identified, they showed that pre-grouping such expressions could significantly improve parsing accuracy. Recently, Green et al. (2011) proposed integrating the multiword expressions directly in the grammar without pre-recognizing them. The grammar was trained with a reference treebank where MWEs were annotated with a specific non-terminal node.", "cite_spans": [ { "start": 206, "end": 231, "text": "(Schuler and Joshi, 2011)", "ref_id": "BIBREF24" }, { "start": 328, "end": 353, "text": "(Nivre and Nilsson, 2004)", "ref_id": "BIBREF15" }, { "start": 384, "end": 407, "text": "(Arun and Keller, 2005)", "ref_id": "BIBREF1" }, { "start": 629, "end": 648, "text": "Green et al. (2011)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our proposal is to evaluate two discriminative strategies in a real constituency parsing context: (a) pre-grouping MWEs before parsing; this would be done with a state-of-the-art recognizer based on Conditional Random Fields; (b) parsing with a grammar including MWE identification and then reranking the output parses thanks to a Maximum Entropy model integrating MWE-dedicated features. (a) is the direct realistic implementation of the standard approach that was shown to reach the best results (Arun and Keller, 2005) . We will evaluate whether real MWE recognition (MWER) still positively impacts parsing, i.e., whether incorrect MWER does not negatively impact the overall parsing system. (b) is a more innovative approach to MWER (despite not being new in parsing): we select the final MWE segmentation after parsing in order to explore as many parses as possible (as opposed to method (a)). The experiments were carried out on the French Treebank (Abeill\u00e9 et al., 2003) where MWEs are annotated.", "cite_spans": [ { "start": 497, "end": 520, "text": "(Arun and Keller, 2005)", "ref_id": "BIBREF1" }, { "start": 949, "end": 971, "text": "(Abeill\u00e9 et al., 2003)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The paper is organized as follows: section 2 is an overview of multiword expressions and their identification in texts; section 3 presents the two different strategies and their associated models; section 4 describes the resources used for our experiments (the corpus and the lexical resources); section 5 details the features that are incorporated in the models; section 6 reports on the results obtained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Multiword expressions are lexical items made up of multiple lexemes that undergo idiosyncratic constraints and therefore offer a certain degree of idiomaticity. They cover a wide range of linguistic phenomena: fixed and semi-fixed expressions, light verb constructions, phrasal verbs, named entities, etc. They may be contiguous (e.g. traffic light) or discontinuous (e.g. John took your argument into account). They are often divided into two main classes: multiword expressions defined through linguistic idiomaticity criteria (lexicalized phrases in the terminology of Sag et al. (2002)) and those defined by statistical ones (i.e. simple collocations). 
Most linguistic criteria used to determine whether a combination of words is a MWE are based on syntactic and semantic tests such as the ones described in (Gross, 1986) . For instance, the utterance at night is a MWE because it displays a strict lexical restriction (*at day, *at afternoon) and does not accept any inserted material (*at cold night, *at present night). Such linguistically defined expressions may overlap with collocations, i.e. combinations of two or more words that co-occur more often than by chance. Collocations are usually identified through statistical association measures. A detailed description of MWEs can be found in (Baldwin and Nam, 2010) .", "cite_spans": [ { "start": 795, "end": 808, "text": "(Gross, 1986)", "ref_id": "BIBREF12" }, { "start": 1301, "end": 1324, "text": "(Baldwin and Nam, 2010)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Multiword expressions 2.1 Overview", "sec_num": "2" }, { "text": "In this paper, we focus on contiguous MWEs that form a lexical unit which can be marked by a part-of-speech tag (e.g. at night is an adverb, because of is a preposition). They can undergo limited morphological and lexical variations -e.g. traffic (light+lights), (apple+orange+...) juice -and usually do not allow syntactic variations 1 such as inserts (e.g. *at cold night). Such expressions can be analyzed at the lexical level. In what follows, we use the term compounds to denote such expressions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiword expressions 2.1 Overview", "sec_num": "2" }, { "text": "The idiomaticity property of MWEs makes them both crucial for Natural Language Processing applications and difficult to predict. Their actual identification in texts is therefore fundamental. There are different ways of achieving this objective. The simplest approach is lexicon-driven and consists in looking the MWEs up in an existing lexicon, such as in (Silberztein, 2000) . The main drawback is that this procedure entirely relies on a lexicon and is unable to discover unknown MWEs. The use of collocation statistics is therefore useful. For instance, for each candidate in the text, Watrin and Fran\u00e7ois (2011) compute on the fly its association score from an external n-gram base learnt from a large raw corpus, and tag it as a MWE if the association score is greater than a threshold. They reach excellent scores in the framework of a keyword extraction task. Within a validation framework (i.e. with the use of a reference corpus annotated in MWEs), Ramisch et al. (2010) developed a Support Vector Machine classifier integrating features corresponding to different collocation association measures. The results were rather low on the Genia corpus, and Green et al. (2011) confirmed these poor results on the French Treebank. This can be explained by the fact that such a method does not make any distinction between the different types of MWEs, and the reference corpora are usually limited to certain types of MWEs. Furthermore, the lexicon-driven and collocation-driven approaches do not take the context into account, and therefore cannot discard some of the incorrect candidates. A recent trend is to couple MWE recognition with a linguistic analyzer: a POS tagger (Constant and Sigogne, 2011) or a parser (Green et al., 2011) . The accuracy of the former approach was not clearly evaluated, but seemed to reach around 70-80% F-score. Green et al. (2011) proposed to include the MWER in the grammar of the parser. 
To do so, the MWEs in the training treebank were annotated with specific non-terminal nodes. They used a Tree Substitution Grammar instead of a Probabilistic Context-free Grammar (PCFG) with latent annotations in order to capture lexicalized rules as well as general rules. They showed that this formalism was more relevant to MWER than PCFG (71% F-score vs. 69.5%). Both methods have the advantage of being able to discover new MWEs on the basis of lexical and syntactic contexts. In this paper, we will take advantage of the methods described in this section by integrating them as features of a MWER model.", "cite_spans": [ { "start": 357, "end": 376, "text": "(Silberztein, 2000)", "ref_id": "BIBREF25" }, { "start": 956, "end": 977, "text": "Ramisch et al. (2010)", "ref_id": "BIBREF18" }, { "start": 1675, "end": 1703, "text": "(Constant and Sigogne, 2011)", "ref_id": "BIBREF6" }, { "start": 1716, "end": 1735, "text": "(Green et al., 2011)", "ref_id": "BIBREF11" }, { "start": 1798, "end": 1817, "text": "Green et al. (2011)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Identification", "sec_num": "2.2" }, { "text": "3 Two strategies, two discriminative models 3.1 Pre-grouping Multiword Expressions MWER can be seen as a sequence labelling task (like chunking) by using an IOB-like annotation scheme (Ramshaw and Marcus, 1995) . This implies a theoretical limitation: recognized MWEs must be contiguous. The proposed annotation scheme is therefore theoretically weaker than the one proposed by Green et al. (2011) , which integrates the MWER in the grammar and allows for discontinuous MWEs. Nevertheless, in practice, the compounds we are dealing with are very rarely discontinuous and, if so, they solely contain a single-word insert that can easily be integrated in the MWE sequence. Constant and Sigogne (2011) proposed to combine MWE segmentation and part-of-speech tagging into a single sequence labelling task by assigning to each token a tag of the form TAG+X, where TAG is the part-of-speech (POS) of the lexical unit the token belongs to and X is either B (i.e. the token is at the beginning of the lexical unit) or I (i.e. for the remaining positions): John/N+B hates/V+B traffic/N+B jams/N+I. In this paper, as our task consists in jointly locating and tagging MWEs, we limited the POS tagging to MWEs only (TAG+B/TAG+I), simple words being tagged by O (outside): John/O hates/O traffic/N+B jams/N+I. For such a task, we used Linear-chain Conditional Random Fields (CRF), discriminative probabilistic models introduced by Lafferty et al. (2001) for sequential labelling. Given an input sequence of tokens x = (x_1, x_2, ..., x_N) and an output sequence of labels y = (y_1, y_2, ..., y_N), the model is defined as follows:", "cite_spans": [ { "start": 184, "end": 210, "text": "(Ramshaw and Marcus, 1995)", "ref_id": "BIBREF19" }, { "start": 378, "end": 397, "text": "Green et al. (2011)", "ref_id": "BIBREF11" }, { "start": 667, "end": 694, "text": "Constant and Sigogne (2011)", "ref_id": "BIBREF6" }, { "start": 1421, "end": 1443, "text": "Lafferty et al. (2001)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Identification", "sec_num": "2.2" }, { "text": "P_\u03bb(y|x) = (1/Z(x)) exp( \u2211_{t=1}^{N} \u2211_{k=1}^{K} \u03bb_k f_k(t, y_t, y_{t\u22121}, x) )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification", "sec_num": "2.2" }, { "text": "where Z(x) is a normalization factor depending on x. 
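To make the labelling scheme above concrete, here is a minimal sketch (our illustration, not the authors' implementation) that encodes gold MWE spans into the O / TAG+B / TAG+I tags just described:

# Minimal sketch (illustration only): encode gold MWE spans into the
# O / TAG+B / TAG+I scheme described in this section.
def encode_mwe_tags(tokens, mwe_spans):
    # mwe_spans: list of (start, end, pos) triples, end exclusive
    labels = ['O'] * len(tokens)
    for start, end, pos in mwe_spans:
        labels[start] = pos + '+B'  # first token of the compound
        for i in range(start + 1, end):
            labels[i] = pos + '+I'  # remaining positions
    return list(zip(tokens, labels))

tokens = 'John hates traffic jams'.split()
print(encode_mwe_tags(tokens, [(2, 4, 'N')]))
# [('John', 'O'), ('hates', 'O'), ('traffic', 'N+B'), ('jams', 'N+I')]
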
The model is based on K features, each defined by a binary function f_k depending on the current position t in x, the current label y_t, the preceding one y_{t\u22121} and the whole input sequence x. Each token x_i of x integrates the lexical value of the token but can also integrate basic properties which are computable from this value (for example: whether it begins with an upper case, whether it contains a number, its tags in an external lexicon, etc.). The feature is activated if a given configuration between t, y_t, y_{t\u22121} and x is satisfied (i.e. f_k(t, y_t, y_{t\u22121}, x) = 1). Each feature f_k is associated with a weight \u03bb_k. The weights are the parameters of the model, to be estimated. The features used for MWER will be described in section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification", "sec_num": "2.2" }, { "text": "Discriminative reranking consists in reranking the n-best parses of a baseline parser with a discriminative model, hence integrating features associated with each node of the candidate parses. Charniak and Johnson (2005) introduced different features that showed significant improvement in general parsing accuracy (e.g. around +1 point in English). Formally, given a sentence s, the reranker selects the best candidate parse p among a set of candidates P(s) with respect to a scoring function V_\u03b8:", "cite_spans": [ { "start": 192, "end": 219, "text": "Charniak and Johnson (2005)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Reranking", "sec_num": "3.2" }, { "text": "p* = argmax_{p \u2208 P(s)} V_\u03b8(p)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reranking", "sec_num": "3.2" }, { "text": "The set of candidates P(s) corresponds to the n-best parses generated by the baseline parser. The scoring function V_\u03b8 is the scalar product of a parameter vector \u03b8 and a feature vector f:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reranking", "sec_num": "3.2" }, { "text": "V_\u03b8(p) = \u03b8 \u00b7 f(p) = \u2211_{j=1}^{m} \u03b8_j f_j(p)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reranking", "sec_num": "3.2" }, { "text": "where f_j(p) corresponds to the number of occurrences of the feature f_j in the parse p. According to Charniak and Johnson (2005) , the first feature f_1 is the probability of p provided by the baseline parser. The vector \u03b8 is estimated during the training stage from a reference treebank and the baseline parser outputs.", "cite_spans": [ { "start": 103, "end": 130, "text": "Charniak and Johnson (2005)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Reranking", "sec_num": "3.2" }, { "text": "In this paper, we slightly deviate from the original reranker usage by focusing on improving MWER in the context of parsing. Given the n-best parses, we want to select the one with the best MWE segmentation while keeping the overall parsing accuracy as high as possible. We therefore used MWE-dedicated features that we describe in section 5. The training stage was performed by using a Maximum Entropy algorithm as in (Charniak and Johnson, 2005) .", "cite_spans": [ { "start": 417, "end": 445, "text": "(Charniak and Johnson, 2005)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Reranking", "sec_num": "3.2" }, { "text": "The French Treebank 2 [FTB] (Abeill\u00e9 et al., 2003) is a syntactically annotated corpus made up of journalistic articles from Le Monde newspaper. 
We used the latest edition of the corpus (June 2010), which we preprocessed with the Stanford Parser preprocessing tools (Green et al., 2011) . It contains 473,904 tokens and 15,917 sentences. One benefit of this corpus is that its compounds are marked. Their annotation was driven by linguistic criteria such as the ones in (Gross, 1986) . Compounds are identified with a specific non-terminal symbol \"MWX\", where X is the part-of-speech of the expression. They have a flat structure made of the part-of-speech of their components, as shown in figure 1. The French Treebank is composed of 435,860 lexical units (34,178 types). Among them, 5.3% are compounds (20.8% for types). In addition, 12.9% of the tokens belong to a MWE, which, on average, has 2.7 tokens. The non-terminal tagset is composed of 14 part-of-speech labels and 24 phrasal ones (including 11 MWE labels). The train/dev/test split is the same as in (Green et al., 2011) : 1,235 sentences for test, 1,235 for development and 13,347 for training. The development and test sections are the same as those generally used for experiments in French, e.g. (Candito and Crabb\u00e9, 2009) .", "cite_spans": [ { "start": 28, "end": 50, "text": "(Abeill\u00e9 et al., 2003)", "ref_id": "BIBREF0" }, { "start": 264, "end": 284, "text": "(Green et al., 2011)", "ref_id": "BIBREF11" }, { "start": 468, "end": 481, "text": "(Gross, 1986)", "ref_id": "BIBREF12" }, { "start": 1058, "end": 1078, "text": "(Green et al., 2011)", "ref_id": "BIBREF11" }, { "start": 1257, "end": 1283, "text": "(Candito and Crabb\u00e9, 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Resources 4.1 Corpus", "sec_num": "4" }, { "text": "French is a resource-rich language, as attested by the existing morphological dictionaries which include compounds. In this paper, we use two large-coverage general-purpose dictionaries: the Dela (Courtois, 1990; Courtois et al., 1997) and the Lefff (Sagot, 2010) . The Dela was manually developed in the 1990s by a team of linguists. We used the distribution freely available in the platform Unitex 3 (Paumier, 2011) . It is composed of 840,813 lexical entries including 104,350 multiword ones (91,030 multiword nouns). The compounds present in the resources respect the linguistic criteria defined in (Gross, 1986) . The Lefff is a freely available dictionary 4 that has been automatically compiled by drawing from different sources and that has been manually validated. We used a version with 553,138 lexical entries including 26,311 multiword ones (22,673 multiword nouns). Their different modes of acquisition make these two resources complementary. In both, lexical entries are composed of an inflected form, a lemma, a part-of-speech and morphological features. The Dela has an additional feature for most of the multiword entries: their syntactic surface form. 
For instance, eau de vie (brandy) has the feature NDN because it has the internal flat structure noun - preposition de - noun.", "cite_spans": [ { "start": 190, "end": 206, "text": "(Courtois, 1990;", "ref_id": "BIBREF8" }, { "start": 207, "end": 229, "text": "Courtois et al., 1997)", "ref_id": "BIBREF9" }, { "start": 240, "end": 253, "text": "(Sagot, 2010)", "ref_id": "BIBREF21" }, { "start": 391, "end": 406, "text": "(Paumier, 2011)", "ref_id": "BIBREF16" }, { "start": 592, "end": 605, "text": "(Gross, 1986)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical resources", "sec_num": "4.2" }, { "text": "In order to compare the compounds in these lexical resources with the ones in the French Treebank, we applied the dictionaries, as well as the lexicon extracted from the training corpus, to the development corpus. By a simple look-up, we obtained a preliminary lexicon-based MWE segmentation. The results are provided in table 1. They show that the use of external resources may improve recall, but leads to a decrease in precision, as numerous MWEs in the dictionaries are not encoded as such in the reference corpus; in addition, the FTB suffers from some inconsistency in the MWE annotations. In terms of statistical collocations, Watrin and Fran\u00e7ois (2011) described a system that lists all the potential nominal collocations of a given sentence along with their association measure. The authors provided us with a list of 17,315 candidate nominal collocations occurring in the French Treebank with their log-likelihood and their internal flat structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical resources", "sec_num": "4.2" }, { "text": "The two discriminative models described in section 3 require MWE-dedicated features. In order to make these models comparable, we use two comparable sets of feature templates: one adapted to sequence labelling (CRF-based MWER) and the other one adapted to reranking (MaxEnt-based reranker). The MWER templates are instantiated at each position of the input sequence. The reranker templates are instantiated only for the nodes of the candidate parse tree that are leaves dominated by a MWE node (i.e. the node has a MWE ancestor). We define a template T as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MWE-dedicated Features", "sec_num": "5" }, { "text": "\u2022 MWER: for each position n in the input sequence x, T = f(x, n)/y_n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MWE-dedicated Features", "sec_num": "5" }, { "text": "\u2022 RERANKER: for each leaf (in position n) dominated by a MWE node m in the current parse tree p,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MWE-dedicated Features", "sec_num": "5" }, { "text": "T = f(p, n)/label(m)/pos(p, n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MWE-dedicated Features", "sec_num": "5" }, { "text": "where f is a function to be defined; y_n is the output label at position n; label(m) is the label of node m; and pos(p, n) indicates the position of the word corresponding to n in the MWE sequence: B (starting position), I (remaining positions).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MWE-dedicated Features", "sec_num": "5" }, { "text": "Endogenous features are features directly extracted from properties of the words themselves or from a tool learnt from the training corpus (e.g. a tagger). Word n-grams. 
We use word unigrams and bigrams in order to capture multiword expressions present in the training section and to extract lexical cues to discover new MWEs. For instance, the bigram coup de is often the prefix of compounds such as coup de pied (kick), coup de foudre (love at first sight), coup de main (help). POS n-grams. We use part-of-speech unigrams and bigrams in order to capture MWEs with irregular syntactic structures that might indicate the idiomaticity of a word sequence. For instance, the POS sequence preposition - adverb associated with the compound depuis peu (recently) is very unusual in French. We also integrated mixed bigrams made up of a word and a part-of-speech. Specific features. Due to their different uses, each model integrates some specific features. In order to deal with unknown words and special tokens, we incorporate standard tagging features in the CRF: lowercase forms of the words, word prefixes of length 1 to 4, word suffixes of length 1 to 4, whether the word is capitalized, whether the token contains a digit, whether it is a hyphen. We also add label bigrams. The reranker models integrate features associated with each MWE node, the value of which is the compound itself.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Endogenous Features", "sec_num": "5.1" }, { "text": "Exogenous features are features that are not entirely derived from the (reference) corpus itself. They are computed from external data (in our case, our lexical resources). The lexical resources might be useful to discover new expressions: usually, expressions that have a standard syntax, like nominal compounds, and are difficult to predict from the endogenous features. The resources are applied to the corpus through a lexical analysis that generates, for each sentence, a finite-state automaton TFSA which represents all the possible analyses. The features are computed from the automaton TFSA. Lexicon-based features. We associate each word with its part-of-speech tags found in our external morphological lexicon. All tags of a word constitute an ambiguity class ac. If the word belongs to a compound, the compound tag is also incorporated in the ambiguity class. For instance, the word night (either a simple noun or a simple adjective) in the context at night is associated with the class adj noun adv+I, as it is located inside a compound adverb. This feature is directly computed from TFSA. The lexical analysis can lead to a preliminary MWE segmentation by using a shortest-path algorithm that gives priority to compound analyses. This segmentation is also a source of features: a word belonging to a compound segment is assigned different properties such as the segment part-of-speech mwt, its syntactic structure mws encoded in the lexical resource, and its relative position mwpos in the segment ('B' or 'I'). Collocation-based features. In our collocation resource, each candidate collocation of the French Treebank is associated with its internal syntactic structure and its association score (log-likelihood). We divided these candidates into two classes: those whose score is greater than a threshold, and the others. Therefore, a given word in the corpus can be associated with different properties when it belongs to a potential collocation: the class c and the internal structure cs of the collocation it belongs to, and its position cpos in the collocation (B: beginning; I: remaining positions; O: outside). 
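For illustration, here is a minimal sketch (ours, not the authors' code; the candidate-list format is a hypothetical simplification and the score value below is invented) of how these collocation-based features can be derived:

# Minimal sketch (illustration only): derive the collocation-based
# features c (score class), cs (internal structure) and cpos (B/I/O)
# from a list of candidate collocations with log-likelihood scores.
def collocation_features(tokens, candidates, threshold):
    # candidates: list of (start, end, structure, loglik), end exclusive
    feats = [{'c': 'none', 'cs': 'none', 'cpos': 'O'} for _ in tokens]
    for start, end, structure, loglik in candidates:
        score_class = 'high' if loglik > threshold else 'low'
        for i in range(start, end):
            feats[i]['c'] = score_class
            feats[i]['cs'] = structure
            feats[i]['cpos'] = 'B' if i == start else 'I'
    return feats

tokens = 'une part de march\u00e9 importante'.split()
# hypothetical candidate: part de march\u00e9 (structure NPN), log-likelihood 320.5
print(collocation_features(tokens, [(1, 4, 'NPN', 320.5)], threshold=150))
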
We manually set this threshold to 150 after some tuning on the development corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Exogenous Features", "sec_num": "5.2" }, { "text": "All feature templates are given in table 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Exogenous Features", "sec_num": "5.2" }, { "text": "Endogenous features: w(n+i), i \u2208 {\u22122, \u22121, 0, 1, 2}; w(n+i)/w(n+i+1), i \u2208 {\u22122, \u22121, 0, 1}; t(n+i), i \u2208 {\u22122, \u22121, 0, 1, 2}; t(n+i)/t(n+i+1), i \u2208 {\u22122, \u22121, 0, 1}; w(n+i)/t(n+j), (i, j) \u2208 {(1, 0), (0, 1), (\u22121, 0), (0, \u22121)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Exogenous Features", "sec_num": "5.2" }, { "text": "Exogenous features: ac(n); mwt(n)/mwpos(n); mws(n)/mwpos(n); c(n)/cs(n)/cpos(n). Table 2 : Feature templates (f) used both in the MWER and the reranker models: n is the current position in the sentence; w(i) is the word at position i; t(i) is the part-of-speech tag of w(i); if the word at absolute position i is part of a compound in the Shortest Path Segmentation, mwt(i) and mws(i) are respectively the part-of-speech tag and the internal structure of the compound, and mwpos(i) indicates its relative position in the compound (B or I).", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 83, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Exogenous Features", "sec_num": "5.2" }, { "text": "We carried out three different experiments. We first tested a standalone MWE recognizer based on CRF.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "6.1" }, { "text": "We then combined MWE pre-grouping based on this recognizer and the Berkeley parser 5 (Petrov et al., 2006) trained on the FTB where the compounds were concatenated (BKYc). Finally, we combined the Berkeley parser trained on the FTB where the compounds are annotated with specific non-terminals (BKY), and the reranker. In all experiments, we varied the set of features: endo are all endogenous features; coll and lex include all endogenous features plus collocation-based features and lexicon-based ones, respectively; all is composed of both endogenous and exogenous features. The CRF recognizer relies on the software Wapiti 6 (Lavergne et al., 2010) to train and apply the model, and on the software Unitex (Paumier, 2011) to apply lexical resources. The part-of-speech tagger used to extract POS features was lgtagger 7 (Constant and Sigogne, 2011) . To train the reranker, we used a MaxEnt algorithm 8 as in (Charniak and Johnson, 2005) .", "cite_spans": [ { "start": 84, "end": 105, "text": "(Petrov et al., 2006)", "ref_id": "BIBREF17" }, { "start": 628, "end": 651, "text": "(Lavergne et al., 2010)", "ref_id": "BIBREF14" }, { "start": 709, "end": 724, "text": "(Paumier, 2011)", "ref_id": "BIBREF16" }, { "start": 823, "end": 851, "text": "(Constant and Sigogne, 2011)", "ref_id": "BIBREF6" }, { "start": 912, "end": 940, "text": "(Charniak and Johnson, 2005)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "6.1" }, { "text": "Results are reported using several standard measures: the F1 score, the unlabeled attachment score and the Leaf-Ancestor score. The labeled F1 score [F1] 9 , defined by the standard protocol called PARSEVAL (Black et al., 1991) , takes into account the bracketing and labeling of nodes. 
The unlabeled attachment score [UAS] evaluates the quality of unlabeled dependencies between words of the sentence 10 . 5 We used the version adapted to French in the software Bonsai (Candito and Crabb\u00e9, 2009) : http://alpage.inria.fr/statgram/frdep/fr_stat_dep_parsing.html. The original version is available at: http://code.google.com/p/berkeleyparser/.", "cite_spans": [ { "start": 196, "end": 216, "text": "(Black et al., 1991)", "ref_id": "BIBREF2" }, { "start": 349, "end": 350, "text": "5", "ref_id": null }, { "start": 412, "end": 438, "text": "(Candito and Crabb\u00e9, 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "6.1" }, { "text": "We trained the parser as follows: right binarization, no parent annotation, six split-merge cycles and default random seed initialisation (8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "6.1" }, { "text": "6 Wapiti can be found at http://wapiti.limsi.fr/. It was configured as follows: rprop algorithm, default L1-penalty value (0.5), default L2-penalty value (0.00001), default stopping criterion value (0.02%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "6.1" }, { "text": "7 Available at http://igm.univ-mlv.fr/\u02dcmconstan/research/software/. 8 We used the mathematical libraries PETSc and TAO, freely available at http://www.mcs.anl.gov/petsc/ and http://www.mcs.anl.gov/research/projects/tao/. 9 Evalb tool available at http://nlp.cs.nyu.edu/evalb/. We also used the evaluation by category implemented in the class EvalbByCat in the Stanford Parser. And finally, the Leaf-Ancestor score [LA] 11 (Sampson, 2003) computes the similarity between all paths (sequences of nodes) from each terminal node to the root node of the tree. The global score of a generated parse is equal to the average score of all terminal nodes. Punctuation tokens are ignored in all metrics. The quality of MWE identification was evaluated by computing the F1 score on MWE nodes. We also evaluated the MWE segmentation by using the unlabeled F1 score (U). In order to compare both approaches, parse trees generated by BKYc were automatically transformed into trees with the same MWE annotation scheme as the trees generated by BKY.", "cite_spans": [ { "start": 67, "end": 68, "text": "8", "ref_id": null }, { "start": 477, "end": 492, "text": "(Sampson, 2003)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "6.1" }, { "text": "In order to establish the statistical significance of results between two parsing experiments in terms of F1 and UAS, we used a unidirectional t-test for two independent samples 12 . The statistical significance between two MWE identification experiments was established by using McNemar's test (Gillick and Cox, 1989) . The results of two experiments are considered statistically significant when the computed value p < 0.01.", "cite_spans": [ { "start": 300, "end": 323, "text": "(Gillick and Cox, 1989)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "6.1" }, { "text": "The results of the standalone MWE recognizer are given in table 3. They show that the lexicon-based system (lex) reaches the best score. Accuracy is improved by an absolute gain of +6.7 points as compared with the BKY parser. The strictly endogenous system has a +4.9 point absolute gain, +5.4 points when collocations are added. 
That shows that most of the work is done by fully automatically acquired features (as opposed to features coming from a manually constructed lexicon). As expected, lexicon-based features lead to a 5.3 point recall improvement (with respect to non-lexicon-based features) whereas precision is stable. The most precise system is the base one, because it almost solely detects compounds present in the training corpus; nevertheless, it is unable to capture new MWEs (it has the lowest recall). 10 This score is computed by using the tool available at http://ilk.uvt.nl/conll/software.html. The constituent trees are automatically converted into dependency trees with the tool Bonsai.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Standalone Multiword recognition", "sec_num": "6.2" }, { "text": "11 Leaf-ancestor assessment tool available at http://www.grsampson.net/Resources.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Standalone Multiword recognition", "sec_num": "6.2" }, { "text": "12 Dan Bikel's tool available at http://www.cis.upenn.edu/\u02dcdbikel/software.html. The BKY parser has the best recall among the non-lexicon-based systems, i.e. it is the best one to discover new compounds, as it is able to precisely detect irregular syntactic structures that are likely to be MWEs. Nevertheless, as it does not have a lexicalized strategy, it is not able to filter out incorrect candidates; its precision is therefore very low (the worst).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Standalone Multiword recognition", "sec_num": "6.2" }, { "text": "We tested and compared the two proposed discriminative strategies by varying the sets of MWE-dedicated features. The results are reported in table 4. Firstly, we note that the accuracy of the best realistic parsers is much lower than that of a parser with a gold MWE segmentation 13 (-2.65 and -5.92 respectively in terms of F-score and UAS), which shows the importance of not neglecting MWE recognition in the framework of parsing. Furthermore, pre-grouping has no statistically significant impact on the F-score 14 , whereas reranking leads to a statistically significant improvement (except for collocations). Both strategies also lead to a statistically significant UAS increase. Whereas both strategies improve the MWE recognition, pre-grouping is much more accurate (+2-4%); this might be due to the fact that an unlexicalized parser is limited in terms of compound identification, even within n-best analyses (cf. Oracle in table 6). The benefits of lexicon-based features are confirmed in this experiment, whereas the use of collocations in the reranking strategy appears detrimental. Table 7 shows the results by category. It indicates that both discriminative strategies are of interest in locating multiword adjectives, determiners and prepositions; the pre-grouping method appears to be particularly relevant for multiword nouns and adverbs. However, it performs very poorly in multiword verb recognition. 
In terms of standard parsing accuracy, the pre-grouping approach has a very heterogeneous impact: Adverbial and Adjective Modifier phrases tend to be more accurate; verbal kernels and higher-level constituents such as relative and subordinate clauses see their accuracy level drop, which shows that pre-recognition of MWEs can have a negative impact on general parsing accuracy, as MWE errors propagate to higher-level constituents. ", "cite_spans": [], "ref_spans": [ { "start": 1204, "end": 1211, "text": "Table 7", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Combination of Multiword Expression Recognition and Parsing", "sec_num": "6.3" }, { "text": "In this paper, we evaluated two discriminative strategies to integrate Multiword Expression Recognition in probabilistic parsing: (a) pre-grouping MWEs with a state-of-the-art recognizer and (b) MWE identification with a reranker after parsing. We showed that MWE pre-grouping significantly improves compound recognition and unlabeled dependency annotation, which implies that this strategy could be useful for dependency parsing. The reranking procedure evenly improves all evaluation scores. Future work could consist in combining both strategies: pre-grouping could suggest a set of potential MWE segmentations in order to make it more flexible for a parser; final decisions would then be made by the reranker.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "7" }, { "text": "Such MWEs may very rarely accept inserts, often limited to single word modifiers: e.g. in the short term, in the very short term.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.llf.cnrs.fr/Gens/Abeille/French-Treebankfr.php", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://igm.univ-mlv.fr/\u02dcunitex 4 http://atoll.inria.fr/\u02dcsagot/lefff.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The F1(MWE) is not 100% with a gold segmentation because of tagging errors by the parser. 14 Note that we observe an increase of +0.5 in F1 on the development corpus with lexicon-based features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors are very grateful to Spence Green for his useful help on the treebank, and to Jennifer Thewissen for her careful proof-reading.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Building a treebank for French. Treebanks", "authors": [ { "first": "A", "middle": [], "last": "Abeill\u00e9", "suffix": "" }, { "first": "L", "middle": [], "last": "Cl\u00e9ment", "suffix": "" }, { "first": "F", "middle": [], "last": "Toussenel", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Abeill\u00e9, L. Cl\u00e9ment and F. Toussenel. 2003. Building a treebank for French. Treebanks. In A. Abeill\u00e9 (Ed.). Kluwer. 
Dordrecht.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Lexicalization in crosslinguistic probabilistic parsing: The case of French", "authors": [ { "first": "A", "middle": [], "last": "Arun", "suffix": "" }, { "first": "F", "middle": [], "last": "Keller", "suffix": "" } ], "year": 2005, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Arun and F. Keller. 2005. Lexicalization in crosslin- guistic probabilistic parsing: The case of French. In ACL.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A procedure for quantitatively comparing the syntactic coverage of English grammars", "authors": [ { "first": "E", "middle": [], "last": "Black", "suffix": "" }, { "first": "S", "middle": [], "last": "Abney", "suffix": "" }, { "first": "D", "middle": [], "last": "Flickinger", "suffix": "" }, { "first": "C", "middle": [], "last": "Gdaniec", "suffix": "" }, { "first": "R", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "P", "middle": [], "last": "Harrison", "suffix": "" }, { "first": "D", "middle": [], "last": "Hindle", "suffix": "" }, { "first": "R", "middle": [], "last": "Ingria", "suffix": "" }, { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "J", "middle": [], "last": "Klavans", "suffix": "" }, { "first": "M", "middle": [], "last": "Liberman", "suffix": "" }, { "first": "M", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "B", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "T", "middle": [], "last": "Strzalkowski", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the DARPA Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Black, S. Abney, D. Flickinger, C. Gdaniec, R. Gr- ishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, M. Marcus, S. Roukos, B. Santorini and T. Strzalkowski. 1991. A procedure for quantitatively comparing the syntactic coverage of En- glish grammars. In Proceedings of the DARPA Speech and Natural Language Workshop.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Multiword Expressions. Handbook of Natural Language Processing", "authors": [ { "first": "T", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "K", "middle": [ "S" ], "last": "Nam", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Baldwin and K.S. Nam. 2010. Multiword Ex- pressions. Handbook of Natural Language Process- ing, Second Edition. CRC Press, Taylor and Francis Group.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Improving generative statistical parsing with semi-supervised word clustering", "authors": [ { "first": "M. -H", "middle": [], "last": "Candito", "suffix": "" }, { "first": "B", "middle": [], "last": "Crabb\u00e9", "suffix": "" } ], "year": 2009, "venue": "Proceedings of IWPT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. -H. Candito and B. Crabb\u00e9. 2009. Improving gen- erative statistical parsing with semi-supervised word clustering. 
Proceedings of IWPT 2009.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Coarse-to-Fine n-Best Parsing and MaxEnt Discriminative Reranking", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "M", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak and M. Johnson. 2005. Coarse-to-Fine n- Best Parsing and MaxEnt Discriminative Reranking. Proceedings of the 43rd Annual Meeting of the Asso- ciation for Computational Linguistics (ACL'05).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "MWU-aware Partof-Speech Tagging with a CRF model and lexical resources", "authors": [ { "first": "M", "middle": [], "last": "Constant", "suffix": "" }, { "first": "A", "middle": [], "last": "Sigogne", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Workshop on Multiword Expressions: from Parsing and Generation to the Real World (MWE'11)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Constant and A. Sigogne. 2011. MWU-aware Part- of-Speech Tagging with a CRF model and lexical re- sources. In Proceedings of the Workshop on Multi- word Expressions: from Parsing and Generation to the Real World (MWE'11).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Multiword Expressions: Linguistic Precision and Reusability", "authors": [ { "first": "A", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "F", "middle": [], "last": "Lambeau", "suffix": "" }, { "first": "A", "middle": [], "last": "Villavicencio", "suffix": "" }, { "first": "F", "middle": [], "last": "Bond", "suffix": "" }, { "first": "T", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "I", "middle": [], "last": "Sag", "suffix": "" }, { "first": "D", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Third International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Copestake, F. Lambeau, A. Villavicencio, F. Bond, T. Baldwin, I. Sag, D. Flickinger. 2002. Multi- word Expressions: Linguistic Precision and Reusabil- ity. Proceedings of the Third International Conference on Language Resources and Evaluation (LREC 2002).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Un syst\u00e8me de dictionnaire\u015b electroniques pour les mots simples du fran\u00e7ais", "authors": [ { "first": "B", "middle": [], "last": "Courtois", "suffix": "" } ], "year": 1990, "venue": "Langue Fran\u00e7aise", "volume": "87", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Courtois. 1990. Un syst\u00e8me de dictionnaire\u015b electroniques pour les mots simples du fran\u00e7ais. Langue Fran\u00e7aise. Vol. 
87.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Dictionnaire\u00e9lectronique DELAC : les mots compos\u00e9s binaires", "authors": [ { "first": "B", "middle": [], "last": "Courtois", "suffix": "" }, { "first": "M", "middle": [], "last": "Garrigues", "suffix": "" }, { "first": "G", "middle": [], "last": "Gross", "suffix": "" }, { "first": "M", "middle": [], "last": "Gross", "suffix": "" }, { "first": "R", "middle": [], "last": "Jung", "suffix": "" }, { "first": "M", "middle": [], "last": "Mathieu-Colas", "suffix": "" }, { "first": "A", "middle": [], "last": "Monceaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Poncet-Montange", "suffix": "" }, { "first": "M", "middle": [], "last": "Silberztein", "suffix": "" }, { "first": "R", "middle": [], "last": "Viv\u00e9s", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Courtois, M. Garrigues, G. Gross, M. Gross, R. Jung, M. Mathieu-Colas, A. Monceaux, A. Poncet- Montange, M. Silberztein and R. Viv\u00e9s. 1997. Dic- tionnaire\u00e9lectronique DELAC : les mots compos\u00e9s bi- naires. Technical Report. n. 56. LADL, University Paris 7.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Some statistical issues in the comparison of speech recognition algorithms", "authors": [ { "first": "L", "middle": [], "last": "Gillick", "suffix": "" }, { "first": "S", "middle": [], "last": "Cox", "suffix": "" } ], "year": 1989, "venue": "Proceedings of ICASSP'89", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Gillick and S. Cox. 1989. Some statistical issues in the comparison of speech recognition algorithms. In Proceedings of ICASSP'89.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Multiword Expression Identification with Tree Substitution Grammars: A Parsing tour de force with French", "authors": [ { "first": "S", "middle": [], "last": "Green", "suffix": "" }, { "first": "M.-C", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "J", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "Empirical Method for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Green, M.-C. de Marneffe, J. Bauer and C. D. Man- ning. 2011. Multiword Expression Identification with Tree Substitution Grammars: A Parsing tour de force with French. In Empirical Method for Natural Lan- guage Processing (EMNLP'11).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Lexicon Grammar. The Representation of Compound Words", "authors": [ { "first": "M", "middle": [], "last": "Gross", "suffix": "" } ], "year": 1986, "venue": "Proceedings of Computational Linguistics (COLING'86)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Gross. 1986. Lexicon Grammar. The Representa- tion of Compound Words. 
In Proceedings of Compu- tational Linguistics (COLING'86).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Conditional random Fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Eighteenth International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Lafferty and A. McCallum and F. Pereira. 2001. Con- ditional random Fields: Probabilistic models for seg- menting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Practical Very Large Scale CRFs", "authors": [ { "first": "T", "middle": [], "last": "Lavergne", "suffix": "" }, { "first": "O", "middle": [], "last": "Capp\u00e9", "suffix": "" }, { "first": "F", "middle": [], "last": "Yvon", "suffix": "" } ], "year": 2010, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Lavergne, O. Capp\u00e9 and F. Yvon. 2010. Practical Very Large Scale CRFs. In ACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Multiword units in syntactic parsing", "authors": [ { "first": "J", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "J", "middle": [], "last": "Nilsson", "suffix": "" } ], "year": 2004, "venue": "Methodologies and Evaluation of Multiword Units in Real-World Applications (MEMURA)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Nivre and J. Nilsson. 2004. Multiword units in syntac- tic parsing. In Methodologies and Evaluation of Mul- tiword Units in Real-World Applications (MEMURA).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Unitex 3.9 documentation", "authors": [ { "first": "S", "middle": [], "last": "Paumier", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Paumier. 2011. Unitex 3.9 documentation.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Learning accurate, compact and interpretable tree annotation", "authors": [ { "first": "S", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "L", "middle": [], "last": "Barrett", "suffix": "" }, { "first": "R", "middle": [], "last": "Thibaux", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Petrov, L. Barrett, R. Thibaux and D. Klein. 2006. Learning accurate, compact and interpretable tree an- notation. In ACL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "mwetoolkit: a framework for multiword expression identification", "authors": [ { "first": "C", "middle": [], "last": "Ramisch", "suffix": "" }, { "first": "A", "middle": [], "last": "Villavicencio", "suffix": "" }, { "first": "C", "middle": [], "last": "Boitet", "suffix": "" } ], "year": 2010, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Ramisch, A. Villavicencio and C. Boitet. 2010. mwe- toolkit: a framework for multiword expression identi- fication. 
In LREC.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Text chunking using transformation-based learning", "authors": [ { "first": "L", "middle": [ "A" ], "last": "Ramshaw", "suffix": "" }, { "first": "M", "middle": [ "P" ], "last": "Marcus", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 3rd Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. A. Ramshaw and M. P. Marcus. 1995. Text chunking using transformation-based learning. In Proceedings of the 3rd Workshop on Very Large Corpora.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Multiword Expressions: A Pain in the Neck for NLP", "authors": [ { "first": "I", "middle": [ "A" ], "last": "Sag", "suffix": "" }, { "first": "T", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "F", "middle": [], "last": "Bond", "suffix": "" }, { "first": "A", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "D", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. A. Sag, T. Baldwin, F. Bond, A. Copestake and D. Flickinger. 2002. Multiword Expressions: A Pain in the Neck for NLP. In CICLING 2002. Springer.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The Lefff, a freely available, accurate and large-coverage lexicon for French", "authors": [ { "first": "B", "middle": [], "last": "Sagot", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC'10)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Sagot. 2010. The Lefff, a freely available, accurate and large-coverage lexicon for French. In Proceed- ings of the 7th International Conference on Language Resources and Evaluation (LREC'10).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A test of the leafancestor metric for parsing accuracy", "authors": [ { "first": "G", "middle": [], "last": "Sampson", "suffix": "" }, { "first": "A", "middle": [], "last": "Babarczy", "suffix": "" } ], "year": 2003, "venue": "Natural Language Engineering", "volume": "9", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Sampson and A. Babarczy. 2003. A test of the leaf- ancestor metric for parsing accuracy. Natural Lan- guage Engineering. Vol. 9 (4).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Crossparser evaluation and tagset variation: a French treebank study", "authors": [ { "first": "D", "middle": [], "last": "Seddah", "suffix": "" }, { "first": "M.-H", "middle": [], "last": "Candito", "suffix": "" }, { "first": "B", "middle": [], "last": "Crabb", "suffix": "" } ], "year": 2009, "venue": "Proceedings of International Workshop on Parsing Technologies (IWPT'09)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seddah D., Candito M.-H. and Crabb B. 2009. Cross- parser evaluation and tagset variation: a French tree- bank study. 
Proceedings of International Workshop on Parsing Technologies (IWPT'09).", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Subtree of the MWE part de march\u00e9 (market share): the MWN node indicates that it is a multiword noun; it has a flat internal structure N P N (noun - preposition - noun)", "num": null, "uris": null }, "TABREF2": { "type_str": "table", "content": "", "num": null, "html": null, "text": "Simple context-free application of the lexical resources on the development corpus: T is the MWE lexicon of the training corpus, L is the Lefff, D is the Dela. The given scores solely evaluate MWE segmentation and not tagging." }, "TABREF4": { "type_str": "table", "content": "
Table 3: MWE identification with CRF: base are the features corresponding to token properties and word n-grams. The differences between all systems are statistically significant with respect to McNemar's test (Gillick and Cox, 1989), except lex/all and all/coll; lex/coll is \"border-line\". The results of the systems based on the Stanford Parser and the Tree Substitution Parser (DP-TSG) are reported from (Green et al., 2011).
", "num": null, "html": null, "text": "" }, "TABREF5": { "type_str": "table", "content": "
Strat. | Feat. | Parser | F1    | LA    | UAS   | F1 (MWE)
-      | -     | BKY    | 80.61 | 92.91 | 82.99 | 71.1
pre    | -     | BKYc   | 75.47 | 91.10 | 76.74 | 0.0
pre    | endo  | BKYc   | 80.23 | 92.69 | 83.62 | 74.9
pre    | coll  | BKYc   | 80.32 | 92.73 | 83.77 | 75.5
pre    | lex   | BKYc   | 80.66 | 92.81 | 84.16 | 77.4
pre    | all   | BKYc   | 80.51 | 92.77 | 84.05 | 77.2
post   | endo  | BKY    | 80.87 | 92.94 | 83.49 | 72.9
post   | coll  | BKY    | 80.71 | 92.85 | 83.16 | 71.2
post   | lex   | BKY    | 81.08 | 92.98 | 83.98 | 74.5
post   | all   | BKY    | 81.03 | 92.96 | 83.97 | 74.3
pre    | gold  | BKYc   | 83.73 | 93.77 | 90.08 | 95.8
", "num": null, "html": null, "text": "compares the parsing systems, by showing the score differences between each of the tested system and the BKY parser." }, "TABREF6": { "type_str": "table", "content": "
      | \u2206F1          | \u2206UAS         | \u2206F1 (MWE)
Feat. | pre   | post  | pre   | post  | pre   | post
endo  | -0.38 | +0.26 | +0.63 | +0.50 | +3.8  | +1.8
coll  | -0.29 | +0.10 | +0.78 | +0.17 | +4.4  | +0.1
lex   | +0.05 | +0.47 | +1.17 | +0.99 | +6.3  | +3.4
", "num": null, "html": null, "text": "Parsing evaluation: pre indicates a MWE pregrouping strategy, whereas post is a reranking strategy with n = 50. The feature gold means that we have applied the parser on a gold MWE segmentation." }, "TABREF7": { "type_str": "table", "content": "", "num": null, "html": null, "text": "Comparison of the strategies with respect to BKY parser." }, "TABREF9": { "type_str": "table", "content": "
", "num": null, "html": null, "text": "" }, "TABREF11": { "type_str": "table", "content": "
", "num": null, "html": null, "text": "Evaluation by category with respect to BKY parser. The BKY column indicates the F 1 of BKY parser." } } } }