{ "paper_id": "A97-1013", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:14:09.961834Z" }, "title": "Developing a hybrid NP parser", "authors": [ { "first": "Atro", "middle": [], "last": "Voutilainen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki", "location": { "postBox": "P.O. Box 4", "postCode": "FIN-00014", "country": "Finland" } }, "email": "" }, { "first": "Llufs", "middle": [], "last": "Padr6", "suffix": "", "affiliation": { "laboratory": "Dept. Llenguatges i Sistemes InformS, tics Universitat Polit~cnica de Catalunya C/Grail Capit~ s/n", "institution": "", "location": { "postCode": "08034", "settlement": "Barcelona", "region": "Catalonia" } }, "email": "padro@lsi@upc.es" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe the use of energy function optimisation in very shallow syntactic parsing. The approach can use linguistic rules and corpus-based statistics, so the strengths of both linguistic and statistical approaches to NLP can be combined in a single framework. The rules are contextual constraints for resolving syntactic ambiguities expressed as alternative tags, and the statistical language model consists of corpus-based n-grams of syntactic tags. The success of the hybrid syntactic disambiguator is evaluated against a held-out benchmark corpus. Also the contributions of the linguistic and statistical language models to the hybrid model are estimated.", "pdf_parse": { "paper_id": "A97-1013", "_pdf_hash": "", "abstract": [ { "text": "We describe the use of energy function optimisation in very shallow syntactic parsing. The approach can use linguistic rules and corpus-based statistics, so the strengths of both linguistic and statistical approaches to NLP can be combined in a single framework. The rules are contextual constraints for resolving syntactic ambiguities expressed as alternative tags, and the statistical language model consists of corpus-based n-grams of syntactic tags. The success of the hybrid syntactic disambiguator is evaluated against a held-out benchmark corpus. Also the contributions of the linguistic and statistical language models to the hybrid model are estimated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The language models used by natural language analyzers are traditionally based on two approaches. In the linguistic approach, the model is based on hand-crafted rules derived from the linguist's general and/or corpus-based knowledge about the object language. In the data-driven approach, the model is automatically generated from annotated text corpora, and the model can be represented e.g. as n-grams (Garside et al., 1987) , local rules (Hindle, 1989) or neural nets (Schmid, 1994) .", "cite_spans": [ { "start": 404, "end": 426, "text": "(Garside et al., 1987)", "ref_id": null }, { "start": 441, "end": 455, "text": "(Hindle, 1989)", "ref_id": "BIBREF6" }, { "start": 471, "end": 485, "text": "(Schmid, 1994)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most hybrid approaches combine statistical information with automatically extracted rule-based information (Brill, 1995; Daelemans et al., 1996) . Relatively little attention has been paid to models where the statistical approach is combined with a truly linguistic model (i.e. one generated by a linguist). 
This paper reports one such approach: syntactic rules written by a linguist are combined with statistical information using the relaxation labelling algorithm.", "cite_spans": [ { "start": 107, "end": 120, "text": "(Brill, 1995;", "ref_id": "BIBREF0" }, { "start": 121, "end": 144, "text": "Daelemans et al., 1996)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our application is very shallow parsing: identification of verbs, premodifiers, nominal and adverbial heads, and certain kinds of postmodifiers. We call this parser a noun phrase parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The input is English text morphologically tagged with a rule-based tagger called EngCG (Voutilainen et al., 1992; Karlsson et al., 1995) . Syntactic wordtags are added as alternatives (e.g. each adjective gets a premodifier tag, postmodifier tag and a nominal head tag as alternatives). The system should remove contextually illegitimate tags and leave intact each word's most appropriate tag. In other words, the syntactic language model is applied by a disambiguator.", "cite_spans": [ { "start": 87, "end": 113, "text": "(Voutilainen et al., 1992;", "ref_id": "BIBREF14" }, { "start": 114, "end": 136, "text": "Karlsson et al., 1995)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The parser has a recall of 100% if all words retain the correct morphological and syntactic reading; the system's precision is 100% if the output contains no illegitimate morphological or syntactic readings. In practice, some correct readings are discarded, and some ambiguities remain unresolved (i.e. some words retain two or more alternative analyses).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The system can use linguistic rules and corpusbased statistics. Notable about the system is that minimal human effort was needed for creating its language models (the linguistic consisting of syntactic disambiguation rules based on the Constraint Grammar framework (Karlsson, 1990; Karlsson et al., 1995) ; the corpus-based consisting of bigrams and trigrams):", "cite_spans": [ { "start": 265, "end": 281, "text": "(Karlsson, 1990;", "ref_id": "BIBREF7" }, { "start": 282, "end": 304, "text": "Karlsson et al., 1995)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Only one day was spent on writing the 107 syntactic disambiguation rules used by the linguistic parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "No human annotators were needed for annotating the training corpus (218,000 words of journalese) used by the data-driven learning modules of this system: the training corpus was annotated by (i) tagging it with the EngCG morphological tagger, (ii) making the tagged text syntactically ambiguous by adding the alternative syntactic tags to the words, and (iii) resolving most of these syntactic ambiguities by applying the parser with the 107 disambiguation rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The system was tested against a fresh sample of five texts (6,500 words). 
The system's recall and precision were measured by comparing its output to a manually disambiguated version of the text. To increase the objectivity of the evaluation, system outputs and the benchmark corpus are made publicly accessible (see Section 6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The relative contributions of the linguistic and statistical components are also evaluated. The linguistic rules seldom discard the correct tag, i.e. they have a very high recall, but their problem is remaining ambiguity. The problems of the statistical components are the opposite: their recall is considerably lower, but more (if not all) ambiguities are resolved. When these components are used in a balanced way, the system's overall recall is 97.2% (that is, 97.2% of all words get the correct analysis) and its precision is 96.1% (that is, of the readings returned by the system, 96.1% are correct).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The system architecture is presented in Figure 1. The structure of the paper is the following. First, we describe our general framework, the relaxation labelling algorithm. Then we proceed to the application by outlining the grammatical representation used in our shallow syntax. After this, the disambiguation rules and their development are described. Next in turn is a description of how the data-driven language model was generated. The evaluation of the system is then presented: first the preparation of the benchmark corpus is described, then the results of the tests are given. The paper ends with some concluding remarks.", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 48, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Since we are dealing with a set of constraints and want to find a solution which optimally satisfies them all, we can use a standard Constraint Satisfaction algorithm to solve that problem. Constraint Satisfaction Problems are naturally modelled as Consistent Labeling Problems (Larrosa and Meseguer, 1995). An algorithm that solves CLPs is Relaxation Labelling.", "cite_spans": [ { "start": 277, "end": 305, "text": "(Larrosa and Meseguer, 1995)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "The Relaxation Labelling Algorithm", "sec_num": "2" }, { "text": "It has been applied to part-of-speech tagging (Padró, 1996), showing that it can yield as good results as an HMM tagger when using the same information. In addition, it can deal with any kind of constraints, so the model can be improved by adding any other available constraints, whether statistical, hand-written or automatically extracted (Màrquez and Rodríguez, 1995; Samuelsson et al., 1996).", "cite_spans": [ { "start": 46, "end": 59, "text": "(Padró, 1996)", "ref_id": "BIBREF19" }, { "start": 339, "end": 368, "text": "(Màrquez and Rodríguez, 1995;", "ref_id": "BIBREF18" }, { "start": 369, "end": 393, "text": "Samuelsson et al., 1996)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "The Relaxation Labelling Algorithm", "sec_num": "2" }, { "text": "Relaxation labelling is a generic name for a family of iterative algorithms which perform function optimisation, based on local information.
See (Torras, 1989) for a summary.", "cite_spans": [ { "start": 145, "end": 158, "text": "(Torras, 1989", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "The Relaxation Labelling Algorithm", "sec_num": "2" }, { "text": "Given a set of variables, a set of possible labels for each variable, and a set of compatibility constraints between those labels, the algorithm finds a combination of weights for the labels that maximises \"global consistency\" (see below). Let $CS$ be a set of constraints between the labels of the variables. Each constraint $C \in CS$ states a \"compatibility value\" $C_r$ for a combination of variable-label pairs. Any number of variables may be involved in a constraint.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relaxation Labelling Algorithm", "sec_num": "2" }, { "text": "Let $V = \{v_1, v_2, \ldots, v_n\}$ be the set of variables, and let $t^i = \{t^i_1, \ldots, t^i_{l_i}\}$ be the set of possible labels for variable $v_i$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relaxation Labelling Algorithm", "sec_num": "2" }, { "text": "The aim of the algorithm is to find a weighted labelling 1 such that \"global consistency\" is maximised. Maximising \"global consistency\" is defined as maximising $\sum_j p^i_j \times S_{ij}$, $\forall v_i$, where $p^i_j$ is the weight for label $j$ in variable $v_i$ and $S_{ij}$ the support received by the same combination. The support for a variable-label pair expresses how compatible that pair is with the labels of neighbouring variables, according to the constraint set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relaxation Labelling Algorithm", "sec_num": "2" }, { "text": "1 A weighted labelling is a weight assignment for each label of each variable such that the weights for the labels of the same variable add up to one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relaxation Labelling Algorithm", "sec_num": "2" }, { "text": "The support is defined as the sum of the influence of every constraint on a label:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relaxation Labelling Algorithm", "sec_num": "2" }, { "text": "$S_{ij} = \sum_{r \in R_{ij}} Inf(r)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relaxation Labelling Algorithm", "sec_num": "2" }, { "text": "where $R_{ij}$ is the set of constraints on label $j$ for variable $i$, i.e. the constraints formed by any combination of variable-label pairs that includes the pair $(v_i, t_j)$. Briefly, what the algorithm does is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relaxation Labelling Algorithm", "sec_num": "2" }, { "text": "1. Start with a random weight assignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relaxation Labelling Algorithm", "sec_num": "2" }, { "text": "2. Compute the support value for each label of each variable (how compatible it is with the current weights for the labels of the other variables).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relaxation Labelling Algorithm", "sec_num": "2" }, { "text": "3. Increase the weights of the labels more compatible with the context (support greater than 0) and decrease those of the less compatible labels (support less than 0) 3, using the updating function: $p^i_j(m+1) = \frac{p^i_j(m) \times (1 + S_{ij})}{\sum_{k=1}^{l_i} p^i_k(m) \times (1 + S_{ik})}$ where $-1 \leq S_{ij} \leq +1$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relaxation Labelling Algorithm", "sec_num": "2" }, { "text": "4. If a stopping/convergence criterion 4 is satisfied, stop; otherwise go to step 2.
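To make the procedure concrete, here is a minimal sketch of the update loop in Python. The names and the convergence test are our own, and the support computation is passed in as a black box, so this illustrates the updating function above rather than the authors' implementation:

def relax(weights, support, max_iters=100, eps=1e-4):
    # weights[i][j]: current weight p_ij of label j for variable i
    # (the weights of each variable's labels sum to one).
    # support(i, j, weights): S_ij in [-1, +1], the summed influence of the
    # constraints on label j of variable i under the current weights.
    for _ in range(max_iters):
        new_weights = []
        for i, labels in enumerate(weights):
            s = [support(i, j, weights) for j in range(len(labels))]
            # Step 3: scale each weight by (1 + S_ij) and renormalise, so
            # positively supported labels gain weight and negatively
            # supported ones lose it.
            raw = [p * (1.0 + s_ij) for p, s_ij in zip(labels, s)]
            z = sum(raw) or 1.0  # guard for the degenerate all-(-1) case
            new_weights.append([r / z for r in raw])
        # Step 4: stop when there are no more changes (the usual criterion).
        delta = max(abs(a - b) for old, new in zip(weights, new_weights)
                    for a, b in zip(old, new))
        weights = new_weights
        if delta < eps:
            break
    return weights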
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relaxation Labelling Algorithm", "sec_num": "2" }, { "text": "3 Negative values for support indicate incompatibility.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relaxation Labelling Algorithm", "sec_num": "2" }, { "text": "4 The usual criterion is to stop when there are no more changes, although more sophisticated heuristic procedures are also used to stop relaxation processes (Eklundh and Rosenfeld, 1978; Richards et al., 1981).", "cite_spans": [ { "start": 156, "end": 185, "text": "(Eklundh and Rosenfeld, 1978;", "ref_id": "BIBREF3" }, { "start": 186, "end": 209, "text": "Richards et al., 1981)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "The Relaxation Labelling Algorithm", "sec_num": "2" }, { "text": "The input of our parser is morphologically analysed and disambiguated text enriched with alternative syntactic tags, e.g.:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammatical representation", "sec_num": "3" }, { "text": "\"<moved>\"
  \"move\" V PAST VFIN @V
\"<away>\"
  \"away\" ADV ADVL @>A @AH
\"<from>\"
  \"from\" PREP @DUMMY
\"<traditional>\"
  \"traditional\" A ABS @>N @N< @NH
\"<jazz>\"
  \"jazz\" <-Indef> N NOM SG @>N @NH
\"<practice>\"
  \"practice\" N NOM SG @>N @NH
  \"practice\" V PRES -SG3 VFIN @V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammatical representation", "sec_num": "3" }, { "text": "Every indented line represents a morphological reading; the sample shows that some morphological ambiguities are not resolved by the rule-based morphological disambiguator, known as the EngCG tagger (Voutilainen et al., 1992; Karlsson et al., 1995).", "cite_spans": [ { "start": 199, "end": 225, "text": "(Voutilainen et al., 1992;", "ref_id": "BIBREF14" }, { "start": 226, "end": 248, "text": "Karlsson et al., 1995)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Grammatical representation", "sec_num": "3" }, { "text": "Our syntactic tags start with the \"@\" sign. A word is syntactically ambiguous if it has more than one syntactic tag (e.g. practice above has three alternative syntactic tags). Syntactic tags are added to the morphological analysis with a simple lookup module, sketched below.
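Such a lookup module can be approximated with a table from word class to candidate syntactic tags. The following Python fragment is a partial reconstruction based on the tag descriptions in this section; the table and names are illustrative assumptions, not the actual module:

CANDIDATE_TAGS = {
    'A':    ['@>N', '@N<', '@NH'],  # adjectives, cf. 'traditional' above
    'N':    ['@>N', '@NH'],         # nouns, cf. 'jazz' and 'practice'
    'ADV':  ['@>A', '@AH'],         # adverbs, cf. 'away'
    'V':    ['@V'],                 # verbs and auxiliaries
    'PREP': ['@DUMMY'],             # prepositional attachment is not addressed
}

def add_syntactic_tags(reading):
    # reading: (base form, morphological tags), e.g. ('practice', ['N', 'NOM', 'SG'])
    word_class = reading[1][0]
    # Word classes missing from this partial table would need entries of their own.
    return CANDIDATE_TAGS.get(word_class, [])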
The syntactic parser's main task is disambiguating (rather than adding new information to the input sentence): contextually illegitimate alternatives should be discarded, while legitimate tags should be retained (note that morphological ambiguities may also be resolved as a side effect).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammatical representation", "sec_num": "3" }, { "text": "Next we describe the syntactic tags:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammatical representation", "sec_num": "3" }, { "text": "\u2022 @>N represents premodifiers and determiners.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammatical representation", "sec_num": "3" }, { "text": "\u2022 @N< represents a restricted range of postmodifiers and the determiner \"enough\" following its nominal head.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammatical representation", "sec_num": "3" }, { "text": "\u2022 @NH represents nominal heads (nouns, adjectives, pronouns, numerals, ING-forms and non-finite ED-forms).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammatical representation", "sec_num": "3" }, { "text": "\u2022 @>A represents those adverbs that premodify (intensify) adjectives (including adjectival ING-forms and non-finite ED-forms), adverbs and various kinds of quantifiers (certain determiners, pronouns and numerals).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammatical representation", "sec_num": "3" }, { "text": "\u2022 @AH represents adverbs that function as the head of an adverbial phrase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammatical representation", "sec_num": "3" }, { "text": "\u2022 @A< represents the postmodifying adverb \"enough\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammatical representation", "sec_num": "3" }, { "text": "\u2022 @V represents verbs and auxiliaries (incl. the infinitive marker \"to\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammatical representation", "sec_num": "3" }, { "text": "\u2022 @>CC represents words introducing a coordination (\"either\", \"neither\", \"both\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammatical representation", "sec_num": "3" }, { "text": "\u2022 @CC represents coordinating conjunctions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammatical representation", "sec_num": "3" }, { "text": "\u2022 @CS represents subordinating conjunctions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammatical representation", "sec_num": "3" }, { "text": "\u2022 @DUMMY represents all prepositions, i.e. the parser does not address the attachment of prepositional phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammatical representation", "sec_num": "3" }, { "text": "The rules follow the Constraint Grammar formalism, and they were applied using the recent parser-compiler CG-2 (Tapanainen, 1996). The parser reads a sentence at a time and discards those ambiguity-forming readings that are disallowed by a constraint.", "cite_spans": [ { "start": 110, "end": 128, "text": "(Tapanainen, 1996)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Rule formalism", "sec_num": "4.1" }, { "text": "Next we describe some basic features of the rule formalism.
The rule", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule formalism", "sec_num": "4.1" }, { "text": "REMOVE (@>N) (*1C <<< OR (@V) OR (@CS) BARRIER (@NH));", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule formalism", "sec_num": "4.1" }, { "text": "removes the premodifier tag @>N from an ambiguous reading if somewhere to the right (*1) there is an unambiguous (C) occurrence of a member of the set <<< (sentence boundary symbols) or the verb tag @V or the subordinating conjunction tag @CS, and there are no intervening tags for nominal heads (@NH).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule formalism", "sec_num": "4.1" }, { "text": "This is a partial rule about coordination:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule formalism", "sec_num": "4.1" }, { "text": "REMOVE (@>N) (NOT 0 (DET) OR (NUM) OR (A)) (1C (CC)) (2C (DET));", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule formalism", "sec_num": "4.1" }, { "text": "It removes the premodifier tag if all three context-conditions are satisfied:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule formalism", "sec_num": "4.1" }, { "text": "\u2022 the word to be disambiguated (0) is not a determiner, numeral or adjective,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule formalism", "sec_num": "4.1" }, { "text": "\u2022 the first word to the right (1) is an unambiguous coordinating conjunction, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule formalism", "sec_num": "4.1" }, { "text": "\u2022 the second word to the right is an unambiguous determiner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule formalism", "sec_num": "4.1" }, { "text": "In addition to REMOVing, SELECTing a reading is also possible: when all context-conditions are satisfied, all readings but the one the rule was expressly about are discarded. The rules can refer to words and tags directly or by means of predefined sets. They can refer not only to fixed context positions; reference to contextual patterns is also possible. The rules never discard a last reading, so every word retains at least one analysis. On the other hand, an ambiguity remains unresolved if there are no rules for that particular type of ambiguity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule formalism", "sec_num": "4.1" }, { "text": "A day was spent on writing 107 constraints; about 15,000 words of the parser's output were proofread during the process. The routine was the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar development", "sec_num": "4.2" }, { "text": "1. The current grammar (containing e.g. 2 rules) is applied to the ambiguous input in a 'trace' mode in which the parser also indicates which rule discarded which analysis,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar development", "sec_num": "4.2" }, { "text": "2. The grammarian observes remaining ambiguities and proposes new rules for disambiguating them, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar development", "sec_num": "4.2" }, { "text": "3.
He also tries to identify misanalyses (cases where the correct tag is discarded) and, using the trace information, corrects the faulty rule.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar development", "sec_num": "4.2" }, { "text": "This routine is useful if the development time is very restricted, and only the most common ambiguity types have to be resolved with reasonable success. However, if the grammar should be of a very high quality (extremely few mispredictions, a high degree of ambiguity resolution), a large test corpus, formally similar to the input except for the manually added extra information about the correct analysis, should be used. This kind of test corpus would enable the automatic identification of mispredictions as well as the counting of various performance statistics for the rules. However, manually disambiguating a test corpus of a few hundred thousand words would probably require a human effort of at least a month.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar development", "sec_num": "4.2" }, { "text": "The following is genuine output of the linguistic (CG-2) parser using the 107 syntactic disambiguation rules. The traces starting with \"S:\" indicate the line on which the applied rule is in the grammar file. One syntactic (and morphological) ambiguity remains unresolved: until remains ambiguous due to preposition and subordinating conjunction readings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample output", "sec_num": "4.3" }, { "text": "\"<Aachen>\" S:46
  \"aachen\" <*> N NOM SG @NH
\"<remained>\"
  \"remain\" V PAST VFIN @V
\"<a>\"
  \"a\" DET CENTRAL ART SG @>N
\"<free>\" S:316, 49
  \"free\" A ABS @>N
\"<imperial>\" S:49, 57
  \"imperial\" A ABS @>N
\"<city>\" S:46
  \"city\" N NOM SG @NH
\"<until>\"
  \"until\" PREP @DUMMY
  \"until\" <**CLB> CS @CS
\"<occupied>\" S:116, 345, 46
  \"occupy\" PCP2 @V
\"<by>\"
  \"by\" PREP @DUMMY
\"<France>\" S:46
  \"france\" <*> N NOM SG @NH
\"<in>\"
  \"in\" PREP @DUMMY
\"<1794>\" S:121, 49
  \"1794\" <1900> NUM CARD @NH
\"<$.>\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample output", "sec_num": "4.3" }, { "text": "To solve shallow parsing with the relaxation labelling algorithm we model each word in the sentence as a variable, and each of its possible readings as a label for that variable (see the sketch below). We start with a uniform weight distribution. We will use the algorithm to select the right syntactic tag for every word. Each iteration will increase the weight for the tag which is currently most compatible with the context and decrease the weights for the others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "Since constraints are used to decide how compatible a tag is with its context, they have to assess the compatibility of a combination of readings.
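Concretely, the resulting Consistent Labeling Problem can be set up as below; this is a minimal sketch under data-layout assumptions of our own, not the authors' structures:

def build_problem(sentence):
    # sentence: list of (wordform, alternative syntactic tags) pairs, e.g.
    # [('practice', ['@>N', '@NH', '@V']), ...]
    labels = [tags for _, tags in sentence]                       # one variable per word
    weights = [[1.0 / len(tags)] * len(tags) for tags in labels]  # uniform start
    return labels, weights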
We adapt the CG constraints described above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "The REMOVE constraints express total incompatibility 5 and SELECT constraints express total compatibility (actually, they express incompatibility of all other possibilities).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "5 We model compatibility values using mutual information (Cover and Thomas, 1991), which enables us to use negative numbers to state incompatibility. See (Padró, 1996) for a performance comparison between M.I. and other measures when applying relaxation labelling to NLP.", "cite_spans": [ { "start": 58, "end": 82, "text": "(Cover and Thomas, 1991)", "ref_id": "BIBREF2" }, { "start": 155, "end": 168, "text": "(Padró, 1996)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "The compatibility value for these should be at least as strong as the strongest value for a statistically obtained constraint (see below). This produces a value of about ±10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "But because we want the linguistic part of the model to be more important than the statistical part, and because a given label will receive the influence of about two bigrams and three trigrams 6, a single linguistic constraint might have to override five statistical constraints. So we will make the compatibility values six times stronger, that is, ±60.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "Since in our implementation of the CG parser (Tapanainen, 1996) constraints tend to be applied in a certain order (e.g. SELECT constraints are usually applied before REMOVE constraints), we adjust the compatibility values to get a similar effect: if the value for SELECT constraints is +60, the value for REMOVE constraints will be lower in absolute value (i.e. -50). With this we ensure that two contradictory constraints (if there are any) do not cancel each other out. The SELECT constraint will win, as if it had been applied first.", "cite_spans": [ { "start": 45, "end": 63, "text": "(Tapanainen, 1996)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "This enables using any Constraint Grammar with this algorithm, although we are applying it more flexibly: we do not decide whether a constraint is applied or not. It is always applied with an influence (perhaps zero) that depends on the weights of the labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "If the algorithm should apply the constraints more strictly, we can introduce an influence threshold under which a constraint does not have enough influence, i.e. is not applied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "We can add more information to our model in the form of statistically derived constraints. Here we use bigrams and trigrams as constraints.
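The sketch below shows one way such compatibility values can be estimated, as (pointwise) mutual information over adjacent syntactic tags; the routine is our illustration of the arithmetic (smoothing of rare events omitted), not the authors' code:

import math
from collections import Counter

def bigram_compatibilities(tag_sequences):
    # tag_sequences: one list of syntactic tags per sentence of the
    # (partly ambiguous) training corpus.
    unigrams, bigrams = Counter(), Counter()
    for seq in tag_sequences:
        unigrams.update(seq)
        bigrams.update(zip(seq, seq[1:]))
    n_uni = sum(unigrams.values())
    n_bi = sum(bigrams.values())
    compat = {}
    for (a, b), count in bigrams.items():
        p_ab = count / n_bi
        p_a, p_b = unigrams[a] / n_uni, unigrams[b] / n_uni
        # log p(a,b)/(p(a)p(b)): positive = compatible, negative = incompatible,
        # e.g. a negative value for ('@V', '@>N') as in the first example below.
        compat[(a, b)] = math.log(p_ab / (p_a * p_b))
    return compat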
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "The 218,000-word corpus of journalese from which these constraints were extracted was analysed using the following modules:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "\u2022 the EngCG morphological tagger", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "\u2022 the module for introducing syntactic ambiguities", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "\u2022 the NP disambiguator using the 107 rules written in a day", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "No human effort was spent on creating this training corpus. The training corpus is partly ambiguous, so the bi/trigram information acquired will be slightly noisy, but accurate enough to provide an almost supervised statistical model. For instance, the following constraints have been statistically extracted from bi/trigram occurrences in the training corpus:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "-0.415371 (@V) (1 (@>N));
4.28089 (@>A) (-1 (@>A)) (1 (@AH));", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "6 The algorithm tends to select one label per variable, so there is always a bi/trigram which is applied more significantly than the others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "The compatibility value is the mutual information, computed from the probabilities estimated from a training corpus. We do not need to assign the compatibility values here, since we can estimate them from the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "The compatibility values assigned to the hand-written constraints express the strength of these constraints compared to the statistical ones. Modifying those values means changing the relative weights of the linguistic and statistical parts of the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid language model", "sec_num": "5" }, { "text": "For evaluating the systems, five roughly equal-sized benchmark corpora not used in the development of our parsers and taggers were prepared. The texts, totalling 6,500 words, were copied from the Gutenberg e-text archive, and they represent present-day American English. One text is from an article about AIDS; another concerns brainwashing techniques; the third describes guerrilla warfare tactics; the fourth addresses the assassination of J. F. Kennedy; the last is an extract from a speech by Noam Chomsky.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preparation of the benchmark corpus", "sec_num": "6" }, { "text": "The texts were first analysed by a recent version of the morphological analyser and rule-based disambiguator EngCG, then the syntactic ambiguities were added with a simple lookup module. The ambiguous text was then manually disambiguated. The disambiguated texts were also proofread afterwards. Usually, this practice resulted in one analysis per word. However, there were two types of exception:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preparation of the benchmark corpus", "sec_num": "6" }, { "text": "1.
The input did not contain the desired alternative (due to a morphological disambiguation error). In these cases, no reading was marked as correct. Two such words were found in the corpora; they detract from the performance figures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preparation of the benchmark corpus", "sec_num": "6" }, { "text": "2. The input contained more than one analysis, all of which seemed equally legitimate, even when semantic and textual criteria were consulted. In these cases, all the equal alternatives were marked as correct. The benchmark corpus contains 18 words (mainly ING-forms and non-finite ED-forms) with two correct syntactic analyses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preparation of the benchmark corpus", "sec_num": "6" }, { "text": "The number of multiple analyses could probably be made even smaller by specifying the grammatical representation (usage principles of the syntactic tags) in more detail, in particular by incorporating some analysis conventions for certain apparent borderline cases (for a discussion of specifying a parser's linguistic task, see (Voutilainen and Järvinen, 1995)).", "cite_spans": [ { "start": 327, "end": 359, "text": "(Voutilainen and Järvinen, 1995)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Preparation of the benchmark corpus", "sec_num": "6" }, { "text": "To improve the objectivity of the evaluation, the benchmark corpus (as well as the parser outputs) has been made available from the following URLs: http://www.ling.helsinki.fi/~avoutila/anlp97.html and http://www-lsi.upc.es/~lluisp/anlp97.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preparation of the benchmark corpus", "sec_num": "6" }, { "text": "Experiments and results", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "7" }, { "text": "We tested linguistic, statistical and hybrid language models, using the CG-2 parser (Tapanainen, 1996) and the relaxation labelling algorithm described in Section 2.", "cite_spans": [ { "start": 84, "end": 102, "text": "(Tapanainen, 1996)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "7" }, { "text": "The statistical models were obtained from a training corpus of 218,000 words of journalese, syntactically annotated using the linguistic parser (see above).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "7" }, { "text": "Although the linguistic CG-2 parser does not disambiguate completely, it seems to have an almost perfect recall (cf. Table 1 below), and the noise introduced by the remaining ambiguity is assumed to be sufficiently lower than the signal, following the idea used in (Yarowsky, 1992).", "cite_spans": [ { "start": 265, "end": 281, "text": "(Yarowsky, 1992)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "7" }, { "text": "The collected statistics were bigram and trigram occurrences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "7" }, { "text": "The algorithms and models were tested against a hand-disambiguated benchmark corpus of over 6,500 words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "7" }, { "text": "We measure the performance of the different models in terms of recall and precision. Recall is the percentage of words whose correct tag is among the tags proposed by the system.
Precision is the percentage of tags proposed by the system that are correct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "7" }, { "text": "       CG-2 parser       Rel. Labelling
       prec. - recall    prec. - recall
C      90.8% - 99.7%     93.3% - 98.4%

Table 1: Results obtained with the linguistic model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "7" }, { "text": "Precision and recall results (computed on all words except punctuation marks, which are unambiguous) are given in Tables 1, 2 and 3. Models are coded as follows: B stands for bigrams, T for trigrams and C for hand-written constraints. All combinations of information types are tested. Since the CG-2 parser handles only Constraint Grammars, we cannot test this algorithm with statistical models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "7" }, { "text": "These results suggest the following conclusions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "7" }, { "text": "\u2022 Using the same language model (107 rules), the relaxation algorithm disambiguates more than the CG-2 parser. This is due to the weighted rule application, and results in more misanalyses but less remaining ambiguity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "7" }, { "text": "\u2022 The statistical models are clearly worse than the linguistic one. This could be due to the noise in the training corpus, but it is more likely caused by the difficulty of the task: we are dealing here with shallow syntactic parsing, which is probably more difficult to capture in a statistical model than e.g. POS tagging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "7" }, { "text": "\u2022 The hybrid models produce less ambiguous results than the other models. The number of errors is much lower than was the case with the statistical models, and somewhat higher than was the case with the linguistic model. The gain in precision seems to be enough to compensate for the loss in recall 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "7" }, { "text": "\u2022 There does not seem to be much difference between the BC and TC hybrid models. The reason is probably that the job is mainly done by the linguistic part of the model (which has a higher relative weight) and that the statistical part only helps to disambiguate cases where the linguistic model doesn't make a prediction.
The BTC hybrid model is slightly better than the other two.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "7" }, { "text": "\u2022 The small difference between the hybrid models suggests that some reasonable statistics provide enough disambiguation, and that very sophisticated information is not needed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "7" }, { "text": "7 This obviously depends on the flexibility of one's requirements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and results", "sec_num": "7" }, { "text": "In this paper we have presented a method for combining hand-crafted linguistic rules with statistical information, and we applied it to a shallow parsing task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "Results show that adding statistical information increases the disambiguation ratio, yielding a higher precision. The price is a decrease in recall. Nevertheless, the risk can be controlled, since more or less statistical information can be used depending on the precision/recall tradeoff one wants to achieve.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "We also used this technique to build a shallow parser with minimal human effort:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "\u2022 107 disambiguation rules were written in a day.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "\u2022 These rules were used to analyse a training corpus, with a very high recall and a reasonable precision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "\u2022 This slightly ambiguous training corpus was used for collecting bigram and trigram occurrences. The noise introduced by the remaining ambiguity is assumed not to distort the resulting statistics too much.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "\u2022 The hand-written constraints and the statistics were combined using a relaxation algorithm to analyse the test corpus, raising the precision to 96.1% and lowering the recall only to 97.2%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "Finally, a reservation must be made: what we have not investigated in this paper is how much of the extra work done with the statistical module could have been done equally well or even better by spending e.g. another day writing a further collection of heuristic rules. As suggested e.g.
by Tapanainen and Voutilainen (1994) and Chanod and Tapanainen (1995), hand-coded heuristics may be a worthwhile addition to 'strictly' grammar-based rules.", "cite_spans": [ { "start": 292, "end": 325, "text": "Tapanainen and Voutilainen (1994)", "ref_id": "BIBREF12" }, { "start": 330, "end": 358, "text": "Chanod and Tapanainen (1995)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" } ], "back_matter": [ { "text": "We wish to thank Timo Järvinen, Pasi Tapanainen and two ANLP'97 referees for useful comments on earlier versions of this paper. The first author benefited from the collaboration of Juha Heikkilä in the development of the linguistic description used by the EngCG morphological tagger; the two-level compiler for morphological analysis in EngCG was written by Kimmo Koskenniemi; the recent version of the Constraint Grammar parser (CG-2) was written by Pasi Tapanainen. The Constraint Grammar framework was originally proposed by Fred Karlsson.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Unsupervised Learning of Disambiguation Rules for Part-of-speech Tagging", "authors": [ { "first": "E", "middle": [], "last": "Brill", "suffix": "" } ], "year": 1995, "venue": "Proceedings of 3rd Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Brill. 1995. Unsupervised Learning of Disambiguation Rules for Part-of-speech Tagging. In Proceedings of 3rd Workshop on Very Large Corpora, Massachusetts.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Tagging French: comparing a statistical and a constraint-based method", "authors": [ { "first": "J.-P", "middle": [], "last": "Chanod", "suffix": "" }, { "first": "P", "middle": [], "last": "Tapanainen", "suffix": "" } ], "year": 1995, "venue": "Proc. EACL'95. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.-P. Chanod and P. Tapanainen 1995. Tagging French: comparing a statistical and a constraint-based method. In Proc. EACL'95. ACL, Dublin.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Elements of information theory", "authors": [ { "first": "T", "middle": [ "M" ], "last": "Cover", "suffix": "" }, { "first": "J", "middle": [ "A" ], "last": "Thomas", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T.M. Cover and J.A. Thomas (Editors) 1991. Elements of information theory. John Wiley & Sons.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Convergence Properties of Relaxation Labelling", "authors": [ { "first": "J", "middle": [], "last": "Eklundh", "suffix": "" }, { "first": "A", "middle": [], "last": "Rosenfeld", "suffix": "" } ], "year": 1978, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Eklundh and A. Rosenfeld. 1978. Convergence Properties of Relaxation Labelling. Technical Report no. 701. Computer Science Center.
University of Maryland.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "MTB: A Memory-Based Part-of-Speech Tagger Generator", "authors": [ { "first": "W", "middle": [], "last": "Daelemans", "suffix": "" }, { "first": "J", "middle": [], "last": "Zavrel", "suffix": "" }, { "first": "P", "middle": [], "last": "Berck", "suffix": "" }, { "first": "S", "middle": [], "last": "Gillis", "suffix": "" } ], "year": 1996, "venue": "Proceedings of 4th Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Daelemans, J. Zavrel, P. Berck and S. Gillis. 1996. MTB: A Memory-Based Part-of-Speech Tagger Generator. In Proceedings of 4th Workshop on Very Large Corpora. Copenhagen, Denmark.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The Computational Analysis of English", "authors": [ { "first": "R", "middle": [], "last": "Garside", "suffix": "" }, { "first": "G", "middle": [], "last": "Leech", "suffix": "" }, { "first": "G", "middle": [], "last": "Sampson", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Garside, G. Leech and G. Sampson (Editors) 1987. The Computational Analysis of English. London and New York: Longman.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Acquiring disambiguation rules from text", "authors": [ { "first": "D", "middle": [], "last": "Hindle", "suffix": "" } ], "year": 1989, "venue": "Proc. ACL'89", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Hindle. 1989. Acquiring disambiguation rules from text. In Proc. ACL'89.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Constraint Grammar as a Framework for Parsing Running Text", "authors": [ { "first": "F", "middle": [], "last": "Karlsson", "suffix": "" } ], "year": 1990, "venue": "Papers presented to the 13th International Conference on Computational Linguistics", "volume": "3", "issue": "", "pages": "168--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Karlsson 1990. Constraint Grammar as a Framework for Parsing Running Text. In H. Karlgren (ed.), Papers presented to the 13th International Conference on Computational Linguistics, Vol. 3. Helsinki. 168-173.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text", "authors": [ { "first": "F", "middle": [], "last": "Karlsson", "suffix": "" }, { "first": "A", "middle": [], "last": "Voutilainen", "suffix": "" }, { "first": "J", "middle": [], "last": "Heikkilä", "suffix": "" }, { "first": "A", "middle": [], "last": "Anttila", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Karlsson, A. Voutilainen, J. Heikkilä and A. Anttila. (Editors) 1995. Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text.
Mouton de Gruyter, Berlin and New York.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Inducing Constraint Grammars", "authors": [ { "first": "C", "middle": [], "last": "Samuelsson", "suffix": "" }, { "first": "P", "middle": [], "last": "Tapanainen", "suffix": "" }, { "first": "A", "middle": [], "last": "Voutilainen", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 3rd International Colloquium on Grammatical Inference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Samuelsson, P. Tapanainen and A. Voutilainen. 1996. Inducing Constraint Grammars. In Proceedings of the 3rd International Colloquium on Grammatical Inference.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Part-of-speech tagging with neural networks", "authors": [ { "first": "H", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 1994, "venue": "Proceedings of 15th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Schmid 1994. Part-of-speech tagging with neural networks. In Proceedings of 15th International Conference on Computational Linguistics, Kyoto, Japan.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The Constraint Grammar Parser CG-2", "authors": [ { "first": "P", "middle": [], "last": "Tapanainen", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Tapanainen 1996. The Constraint Grammar Parser CG-2. Department of General Linguistics, University of Helsinki.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Tagging accurately - Don't guess if you know", "authors": [ { "first": "P", "middle": [], "last": "Tapanainen", "suffix": "" }, { "first": "A", "middle": [], "last": "Voutilainen", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 4th Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Tapanainen and A. Voutilainen 1994. Tagging accurately - Don't guess if you know. In Proceedings of the 4th Conference on Applied Natural Language Processing, ACL. Stuttgart.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Relaxation and Neural Learning: Points of Convergence and Divergence", "authors": [ { "first": "C", "middle": [], "last": "Torras", "suffix": "" } ], "year": 1989, "venue": "Journal of Parallel and Distributed Computing", "volume": "6", "issue": "", "pages": "217--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Torras. 1989. Relaxation and Neural Learning: Points of Convergence and Divergence. Journal of Parallel and Distributed Computing, 6:217-244.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Constraint Grammar of English. A Performance-Oriented Introduction", "authors": [ { "first": "A", "middle": [], "last": "Voutilainen", "suffix": "" }, { "first": "J", "middle": [], "last": "Heikkilä", "suffix": "" }, { "first": "A", "middle": [], "last": "Anttila", "suffix": "" } ], "year": 1992, "venue": "Publications", "volume": "21", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Voutilainen, J. Heikkilä and A. Anttila 1992. Constraint Grammar of English. A Performance-Oriented Introduction.
Publications 21, Department of General Linguistics, University of Helsinki.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Specifying a shallow grammatical representation for parsing purposes", "authors": [ { "first": "A", "middle": [], "last": "Voutilainen", "suffix": "" }, { "first": "T", "middle": [], "last": "Järvinen", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 7th meeting of the European Association for Computational Linguistics", "volume": "", "issue": "", "pages": "210--214", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Voutilainen and T. Järvinen. 1995. Specifying a shallow grammatical representation for parsing purposes. In Proceedings of the 7th meeting of the European Association for Computational Linguistics. 210-214.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Word-sense disambiguation using statistical models of Roget's categories trained on large corpora", "authors": [ { "first": "D", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1992, "venue": "Proceedings of 14th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Yarowsky. 1992. Word-sense disambiguation using statistical models of Roget's categories trained on large corpora. In Proceedings of 14th International Conference on Computational Linguistics. Nantes, France.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "An Optimization-based Heuristic for Maximal Constraint Satisfaction", "authors": [ { "first": "J", "middle": [], "last": "Larrosa", "suffix": "" }, { "first": "P", "middle": [], "last": "Meseguer", "suffix": "" } ], "year": 1995, "venue": "Proceedings of International Conference on Principles and Practice of Constraint Programming", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Larrosa and P. Meseguer. 1995. An Optimization-based Heuristic for Maximal Constraint Satisfaction. In Proceedings of International Conference on Principles and Practice of Constraint Programming.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Towards Learning a Constraint Grammar from Annotated Corpora Using Decision Trees. ESPRIT BRA-7315 Acquilex II", "authors": [ { "first": "L", "middle": [], "last": "Màrquez", "suffix": "" }, { "first": "H", "middle": [], "last": "Rodríguez", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Màrquez and H. Rodríguez. 1995. Towards Learning a Constraint Grammar from Annotated Corpora Using Decision Trees. ESPRIT BRA-7315 Acquilex II, Working Paper.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "POS Tagging Using Relaxation Labelling", "authors": [ { "first": "L", "middle": [], "last": "Padró", "suffix": "" } ], "year": 1996, "venue": "Proceedings of 16th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Padró. 1996. POS Tagging Using Relaxation Labelling.
In Proceedings of 16th International Conference on Computational Linguistics, Copenhagen, Denmark.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "On the accuracy of pixel relaxation labelling", "authors": [ { "first": "J", "middle": [], "last": "Richards", "suffix": "" }, { "first": "D", "middle": [], "last": "Landgrebe", "suffix": "" }, { "first": "P", "middle": [], "last": "Swain", "suffix": "" } ], "year": 1981, "venue": "IEEE Transactions on Systems, Man and Cybernetics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Richards, D. Landgrebe and P. Swain. 1981. On the accuracy of pixel relaxation labelling. In IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-11.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Parser architecture.", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "$Inf(r) = C_r \times p^{r_1}_{k_1}(m) \times \ldots \times p^{r_d}_{k_d}(m)$ is the product of the current weights 2 for the labels appearing in the constraint except $(v_i, t_j)$ (representing how applicable the constraint is in the current context), multiplied by $C_r$, which is the constraint compatibility value (stating how compatible the pair is with the context).", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "2 $p^r_k(m)$ is the weight assigned to label $k$ of variable $r$ at time $m$.", "uris": null, "type_str": "figure", "num": null }, "TABREF2": { "content": "
Model   Rel. Labelling
        prec. - recall
B       87.4% - 88.0%
T       87.6% - 88.4%
BT      88.1% - 88.8%
", "num": null, "text": "Results obtained with statistical models.", "html": null, "type_str": "table" }, "TABREF3": { "content": "
Model   Rel. Labelling
        prec. - recall
BC      96.0% - 97.0%
TC      95.9% - 97.0%
BTC     96.1% - 97.2%
", "num": null, "text": "Results obtained with hybrid models.", "html": null, "type_str": "table" } } } }