{ "paper_id": "E99-1016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:37:33.146967Z" }, "title": "Cascaded Markov Models", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit/it des Saarlandes", "location": { "postCode": "D-66041", "settlement": "Computerlinguistik, Saarbriicken", "country": "Germany" } }, "email": "thorsten@coli@de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a new approach to partial parsing of context-free structures. The approach is based on Markov Models. Each layer of the resulting structure is represented by its own Markov Model, and output of a lower layer is passed as input to the next higher layer. An empirical evaluation of the method yields very good results for NP/PP chunking of German newspaper texts.", "pdf_parse": { "paper_id": "E99-1016", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a new approach to partial parsing of context-free structures. The approach is based on Markov Models. Each layer of the resulting structure is represented by its own Markov Model, and output of a lower layer is passed as input to the next higher layer. An empirical evaluation of the method yields very good results for NP/PP chunking of German newspaper texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Partial parsing, often referred to as chunking, is used as a pre-processing step before deep analysis or as shallow processing for applications like information retrieval, messsage extraction and text summarization. Chunking concentrates on constructs that can be recognized with a high degree of certainty. For several applications, this type of information with high accuracy is more valuable than deep analysis with lower accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We will present a new approach to partial parsing that uses Markov Models. The presented models are extensions of the part-of-speech tagging technique and are capable of emitting structure. They utilize context-free grammar rules and add left-to-right transitional context information. This type of model is used to facilitate the syntactic annotation of the NEGRA corpus of German newspaper texts (Skut et al., 1997) .", "cite_spans": [ { "start": 398, "end": 417, "text": "(Skut et al., 1997)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Part-of-speech tagging is the assignment of syntactic categories (tags) to words that occur in the processed text. Among others, this task is efficiently solved with Markov Models. States of a Markov Model represent syntactic categories (or tuples of syntactic categories), and outputs represent words and punctuation (Church, 1988; DeRose, 1988, and others) . This technique of statistical part-of-speech tagging operates very suc-cessfully, and usually accuracy rates between 96 and 97% are reported for new, unseen text. Brants et al. (1997) showed that the technique of statistical tagging can be shifted to the next level of syntactic processing and is capable of assigning grammatical functions. These are functions like subject, direct object, head, etc. They mark the function of a child node within its parent phrase. Figure 1 shows an example sentence and its structure. 
The terminal sequence is complemented by tags (Stuttgart-Tiibingen-Tagset, Thielen and Schiller, 1995) . Non-terminal nodes are labeled with phrase categories, edges are labeled with grammatical functions (NEGRA tagset) .", "cite_spans": [ { "start": 318, "end": 332, "text": "(Church, 1988;", "ref_id": null }, { "start": 333, "end": 358, "text": "DeRose, 1988, and others)", "ref_id": null }, { "start": 524, "end": 544, "text": "Brants et al. (1997)", "ref_id": "BIBREF4" }, { "start": 927, "end": 983, "text": "(Stuttgart-Tiibingen-Tagset, Thielen and Schiller, 1995)", "ref_id": null }, { "start": 1086, "end": 1100, "text": "(NEGRA tagset)", "ref_id": null } ], "ref_spans": [ { "start": 827, "end": 835, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we will show that Markov Models are not restricted to the labeling task (i.e., the assignment of part-of-speech labels, phrase labels, or labels for grammatical functions), but are also capable of generating structural elements. We will use cascades of Markov Models. Starting with the part-of-speech layer, each layer of the resulting structure is represented by its own Markov Model. A lower layer passes its output as input to the next higher layer. The output of a layer can be ambiguous and it is complemented by a probability distribution for the alternatives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This type of parsing is inspired by finite state cascades which are presented by several authors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "CASS (Abney, 1991; Abney, 1996) is a partial parser that recognizes non-recursive basic phrases (chunks) with finite state transducers. Each transducer emits a single best analysis (a longest match) that serves as input for the transducer at the next higher level. CASS needs a special grammar for which rules are manually coded. Each layer creates a particular subset of phrase types. FASTUS (Appelt et al., 1993) is heavily based on pattern matching. Each pattern is associated with one or more trigger words. It uses a series of non-deterministic finite-state transducers to build chunks; the output of one transducer is passed 'A large amount of money and work was raised by the involved organizations' Figure 1 : Example sentence and annotation. The structure consists of terminal nodes (words and their parts-of-speech), non-terminal nodes (phrases) and edges (labeled with grammatical functions).", "cite_spans": [ { "start": 5, "end": 18, "text": "(Abney, 1991;", "ref_id": "BIBREF0" }, { "start": 19, "end": 31, "text": "Abney, 1996)", "ref_id": "BIBREF1" }, { "start": 393, "end": 414, "text": "(Appelt et al., 1993)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 707, "end": 715, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "as input to the next transducer. (Roche, 1994) uses the fix point of a finite-state transducer. The transducer is iteratively applied to its own output until it remains identical to the input. The method is successfully used for efficient processing with large grammars. (Cardie and Pierce, 1998) present an approach to chunking based on a mixture of finite state and context-free techniques. They use N P rules of a pruned treebank grammar. 
For processing, each point of a text is matched against the treebank rules and the longest match is chosen. Cascades of automata and transducers can also be found in speech processing, see e.g. (Pereira et al., 1994; Mohri, 1997) . Contrary to finite-state transducers, Cascaded Markov Models exploit probabilities when processing layers of a syntactic structure. They do not generate longest matches but most-probable sequences. Furthermore, a higher layer sees different alternatives and their probabilities for the same span. It can choose a lower ranked alternative if it fits better into the context of the higher layer. An additional advantage is that Cascaded Markov Models do not need a \"stratified\" grammar (i.e., each layer encodes a disjoint subset of phrases). Instead the system can be immediately trained on existing treebank data.", "cite_spans": [ { "start": 33, "end": 46, "text": "(Roche, 1994)", "ref_id": null }, { "start": 271, "end": 296, "text": "(Cardie and Pierce, 1998)", "ref_id": "BIBREF5" }, { "start": 636, "end": 658, "text": "(Pereira et al., 1994;", "ref_id": null }, { "start": 659, "end": 671, "text": "Mohri, 1997)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is structured as follows. Section 2 addresses the encoding of parsing processes as Markov Models. Section 3 presents Cascaded Markov Models. Section 4 reports on the evaluation of Cascaded Markov Models using treebank data. Finally, section 5 will give conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "When encoding a part-of-speech tagger as a Markov Model, states represent syntactic cate-gories 1 and outputs represent words. Contextual probabilities of tags are encoded as transition probabilities of tags, and lexical probabilities of the Markov Model are encoded as output probabilities of words in states.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding of Syntactical Information as Markov Models", "sec_num": "2" }, { "text": "We introduce a modification to this encoding. States additionally may represent non-terminal categories (phrases). These new states emit partial parse trees (cf. figure 2). This can be seen as collapsing a sequence of terminals into one nonterminal. Transitions into and out of the new states are performed in the same way as for words and parts-of-speech.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding of Syntactical Information as Markov Models", "sec_num": "2" }, { "text": "Transitional probabilities for this new type of Markov Models can be estimated from annotated data in a way very similar to estimating probabilities for a part-of-speech tagger. The only difference is that sequences of terminals may be replaced by one non-terminal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding of Syntactical Information as Markov Models", "sec_num": "2" }, { "text": "Lexical probabilities need a new estimation method. We use probabilities of context-free partim parse trees. Thus, the lexical probability of the state NP in Note that the last three probabilities are the same as for the part-of-speech model. 1Categories and states directly correspond in bigram models. 
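As a concrete, hedged illustration of the lexical probabilities just described (placeholder numbers; in practice they are relative frequencies read off the treebank), the output probability of a phrase state such as NP is the product of a context-free rule probability and the lexical probabilities of the covered words:

```python
# Sketch: the output probability of a phrase state is the probability of the
# context-free partial parse tree it emits.  The numbers are placeholders;
# in practice they are estimated from the treebank.

rule_prob = {('NP', ('ART', 'ADJA', 'NN')): 0.12}   # P(NP -> ART ADJA NN)
lex_prob = {('ART', 'ein'): 0.18,                   # P(ein | ART), etc.
            ('ADJA', 'enormer'): 0.001,
            ('NN', 'Posten'): 0.0004}

def phrase_output_prob(lhs, rhs, words):
    """P(lhs =>* words) for a flat tree lhs -> rhs whose tags emit `words`."""
    p = rule_prob.get((lhs, tuple(rhs)), 0.0)
    for tag, word in zip(rhs, words):
        p *= lex_prob.get((tag, word), 0.0)
    return p

# P(NP =>* 'ein enormer Posten'), as in the NP example above.
p_np = phrase_output_prob('NP', ('ART', 'ADJA', 'NN'), ('ein', 'enormer', 'Posten'))
```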
For higher order models, tuples of categories are combined to one state.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding of Syntactical Information as Markov Models", "sec_num": "2" }, { "text": "[Figure 2 (graphics not recoverable): lattice states and their output probabilities, e.g. P(an | APPR) and P(aufgebracht | VVPP), over the part-of-speech sequence ART ADJA NN NN KON NN APPR ART CARD ADJA NN.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding of Syntactical Information as Markov Models", "sec_num": "2" }, { "text": "Figure 2: Part of the Markov Model for layer 1 that is used to process the sentence of figure 1. Contrary to part-of-speech tagging, outputs of states may consist of structures with probabilities according to a stochastic context-free grammar.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Encoding of Syntactical Information as Markov Models", "sec_num": "2" }, { "text": "The basic idea of Cascaded Markov Models is to construct the parse tree layer by layer: first structures of depth one, then structures of depth two, and so forth. For each layer, a Markov Model determines the best set of phrases. These phrases are used as input for the next layer, which adds one more layer. Phrase hypotheses at each layer are generated according to stochastic context-free grammar rules (the outputs of the Markov Model) and subsequently filtered from left to right by Markov Models. Figure 3 gives an overview of the parsing model. Starting with part-of-speech tagging, new phrases are created at higher layers and filtered by Markov Models operating from left to right.", "cite_spans": [], "ref_spans": [ { "start": 503, "end": 511, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Cascaded Markov Models", "sec_num": "3" }, { "text": "The processing example in figure 3 only shows the best hypothesis at each layer. But there are alternative phrase hypotheses, and we need to determine the best one during the parsing process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Lattices", "sec_num": "3.1" }, { "text": "All rules of the generated context-free grammar whose right sides are compatible with part of the sequence are added to the search space. Figure 4 shows an example of the hypotheses at the first layer when processing the sentence of figure 1. Each bar represents one hypothesis. The position of the bar indicates the covered words. It is labeled with the type of the hypothetical phrase, an index in the upper left corner for later reference, and the negative logarithm of the probability that this phrase generates the terminal yield (i.e., the smaller the better; probabilities for part-of-speech tags are omitted for clarity). This part is very similar to the chart entries of a chart parser.", "cite_spans": [], "ref_spans": [ { "start": 142, "end": 150, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Tagging Lattices", "sec_num": "3.1" }, { "text": "All phrases that are newly introduced at this layer are marked with an asterisk (*). They are produced according to context-free rules, based on the elements passed from the next lower layer.
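The hypothesis generation step just described can be sketched as follows. This is an illustration under simplifying assumptions (names like elements, elem_prob and rules are mine, and only flat phrases of depth one are built), not the original implementation; each matching rule contributes one lattice edge scored with the negative log probability of the phrase deriving its terminal yield, as in figure 4.

```python
import math

# Sketch of phrase-hypothesis generation for one layer (illustrative only).
# elements  -- categories passed up from the layer below (tags or phrases)
# elem_prob -- elem_prob[i] = probability that elements[i] derives the words
#              it covers (for a tag: the lexical probability of its word)
# rules     -- maps a right-hand-side tuple to a list of (lhs, P(lhs -> rhs))

def generate_hypotheses(elements, elem_prob, rules):
    edges = []
    n = len(elements)
    for start in range(n):
        # Every element of the lower layer is also passed up unchanged.
        edges.append((start, start + 1, elements[start], -math.log(elem_prob[start])))
        for end in range(start + 1, n + 1):
            rhs = tuple(elements[start:end])
            for lhs, p_rule in rules.get(rhs, []):
                # P(lhs =>* covered words) = P(lhs -> rhs) * product of the
                # probabilities of the covered elements; stored as -log p.
                p = p_rule
                for i in range(start, end):
                    p *= elem_prob[i]
                edges.append((start, end, lhs, -math.log(p)))
    return edges
```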
The layer below layer 1 is the part-of-speech layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Lattices", "sec_num": "3.1" }, { "text": "The hypotheses form a lattice, with the word boundaries being states and the phrases being edges. Selecting the best hypothesis means to find the best path from node 0 to the last node (node 14 in the example). The best path can be efficiently found with the Viterbi algorithm (Viterbi, 1967) , which runs in time linear to the length of the word sequence. Having this view of finding the best hypothesis, processing of a layer is similar to word lattice processing in speech recognition (cf. Samuelsson, 1997) .", "cite_spans": [ { "start": 277, "end": 292, "text": "(Viterbi, 1967)", "ref_id": null }, { "start": 488, "end": 510, "text": "(cf. Samuelsson, 1997)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Tagging Lattices", "sec_num": "3.1" }, { "text": "Two types of probabilities are important when searching for the best path in a lattice. First, these are probabilities of the hypotheses (phrases) generating the underlying terminal nodes (words). They are calculated according to a stochastic context-free grammar and given in figure 4. The second type are context probabilities, i.e., the probability that some type of phrase follows or precedes another. The two types of probabilities coincide with lexical and contextual probabilities of a Markov Model, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Lattices", "sec_num": "3.1" }, { "text": "According to a trigram model (generated from a corpus), the path in figure 4 that is marked grey is the best path in the lattice. Its probability is composed of -P(PPICNP, VAFIN) P(PP =~* yon den 37 beteiligten Vereinen)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Lattices", "sec_num": "3.1" }, { "text": "\u2022 P(VVPP]VAFIN, PP)P(VVPP --+ aufgebracht)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Lattices", "sec_num": "3.1" }, { "text": "\u2022 P($1PP, VVPP).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Lattices", "sec_num": "3.1" }, { "text": "Start and end of the path are indicated by a dollar sign ($). This path is very close to the correct structure for layer 1. The CNP and PP are correctly recognized. Additionally, the best path correctly predicts that APPR, VAFIN and VVPP should not be attached in layer 1. The only error is the NP ein enormer Posten. Although this is on its own a perfect NP, it is not complete because the PP an Arbeit und Geld is missing. ART, ADJA and NN should be left unattached in this layer in order to be able to create the correct structure at higher layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Lattices", "sec_num": "3.1" }, { "text": "The presented Markov Models act as filters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Lattices", "sec_num": "3.1" }, { "text": "The probability of a connected structure is determined only based on a stochastic context-free grammar. The joint probabilities of unconnected partial structures are determined by additionally using Markov Models. 
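Anticipating the formalization in section 3.2, the filtering step can be sketched as a best-path search over these edges. The code below is my own simplified illustration (a first-order context model in negative log space, whereas the experiments use a trigram model): log_trans supplies the contextual log probabilities, and the edge costs are the negative log probabilities of the phrases generating their terminal yields.

```python
import math

# Sketch: select the best sequence of hypotheses in the lattice.
# An edge is (start, end, state, cost), where cost = -log P(state =>* words).
# log_trans(prev_state, state) returns the contextual log probability.
# Simplified to a bigram context model; names are illustrative.

def best_path(edges, n_words, log_trans, start_state='$', end_state='$'):
    # best[t] maps a state ending at position t to (total cost, backpointer).
    best = [dict() for _ in range(n_words + 1)]
    best[0][start_state] = (0.0, None)
    for start, end, state, cost in sorted(edges, key=lambda e: e[0]):
        for prev_state, (prev_cost, _) in best[start].items():
            total = prev_cost - log_trans(prev_state, state) + cost
            if state not in best[end] or total < best[end][state][0]:
                best[end][state] = (total, (start, prev_state))
    # Close the path with the transition into the end state, then trace back.
    last = min(best[n_words],
               key=lambda s: best[n_words][s][0] - log_trans(s, end_state))
    path, t, state = [], n_words, last
    while state != start_state:
        _, (prev_t, prev_state) = best[t][state]
        path.append((prev_t, t, state))
        t, state = prev_t, prev_state
    return list(reversed(path))
```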
While building the structure bottom up, parses that are unlikely according to the Markov Models are pruned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging Lattices", "sec_num": "3.1" }, { "text": "The standard Viterbi algorithm is modified in order to process Markov Models operating on lattices. In part-of-speech tagging, each hypothesis (a tag) spans exactly one word. Now, a hypothesis can span an arbitrary number of words, and the same span can be covered by an arbitrary number of alternative word or phrase hypotheses. In terms of a Markov Model, a state is allowed to emit a context-free partial parse tree, starting with the represented non-terminal symbol and yielding part of the sequence of words. This is in contrast to standard Markov Models, where states emit atomic symbols. Note that an edge in the lattice is represented by a state in the corresponding Markov Model. Figure 2 shows the part of the Markov Model that represents the best path in the lattice of figure 4.", "cite_spans": [], "ref_spans": [ { "start": 690, "end": 698, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "The Method", "sec_num": "3.2" }, { "text": "The equations of the Viterbi algorithm are adapted to process a language model operating on a lattice. Instead of the words, the gaps between the words are enumerated (see figure 4), and an edge between two states can span one or more words, such that an edge is represented by a triple $\\langle t, t', q \\rangle$, starting at time $t$, ending at time $t'$ and representing state $q$.", "cite_spans": [], "ref_spans": [ { "start": 172, "end": 181, "text": "figure 4)", "ref_id": null } ], "eq_spans": [], "section": "The Method", "sec_num": "3.2" }, { "text": "We introduce accumulators $\\Delta_{t,t'}(q)$ that collect the maximum probability of state $q$ covering words from position $t$ to $t'$. We use $\\delta_{i,j}(q)$ to denote the probability of the derivation emitted by state $q$ having a terminal yield that spans positions $i$ to $j$. These are needed here as part of the accumulators $\\Delta$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Method", "sec_num": "3.2" }, { "text": "$$\\Delta_{0,t}(q) = P(q \\mid q_s)\\,\\delta_{0,t}(q) \\quad (1)$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialization:", "sec_num": null }, { "text": "$$\\Delta_{t,t'}(q) = \\max_{\\langle t'', t, q' \\rangle \\in \\mathrm{Lattice}} \\Delta_{t'',t}(q')\\,P(q \\mid q')\\,\\delta_{t,t'}(q), \\quad 1 \\le t < t' \\le T \\quad (2)$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recursion:", "sec_num": null }, { "text": "$$\\max_{Q} P(Q, \\mathrm{Lattice}) = \\max_{\\langle t, T, q \\rangle \\in \\mathrm{Lattice}} \\Delta_{t,T}(q)\\,P(q_e \\mid q) \\quad (3)$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Termination:", "sec_num": null }, { "text": "Additionally, it is necessary to keep track of the elements in the lattice that maximized each $\\Delta_{t,t'}(q)$. When reaching time $T$, we get the best last element in the lattice: $\\langle t^m_1, T, q^m_1 \\rangle = \\mathrm{argmax}_{\\langle t, T, q \\rangle \\in \\mathrm{Lattice}} \\Delta_{t,T}(q)\\,P(q_e \\mid q)$ (4). Setting $t^m_0 = T$, we collect the arguments $\\langle t^m_{i+1}, t^m_i, q^m_{i+1} \\rangle = \\mathrm{argmax}_{\\langle t', t^m_i, q \\rangle \\in \\mathrm{Lattice}} \\Delta_{t',t^m_i}(q)\\,P(q^m_i \\mid q)$ (5) for $i \\ge 1$, until we reach $t^m_{i+1} = 0$. Now, $q^m_1, \\ldots, q^m_k$ is the best sequence of phrase hypotheses (read backwards).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Termination:", "sec_num": null }, { "text": "Figure 4: Phrase hypotheses according to a context-free grammar for the first layer. Hypotheses marked with an asterisk (*) are newly generated at this layer, the others are passed from the next lower layer (layer 0: part-of-speech tagging). The best path in the lattice is marked grey.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "The Method", "sec_num": "3.2" }, { "text": "The process can move on to layer 2 after the first layer is computed. The results of the first layer are taken as the base, and all context-free rules that apply to the base are retrieved. These again form a lattice, and we can calculate the best path for layer 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passing Ambiguity to the Next Layer", "sec_num": "3.3" }, { "text": "The Markov Model for layer 1 operates on the output of the Markov Model for part-of-speech tagging, the model for layer 2 operates on the output of layer 1, and so on. Hence the name of the processing model: Cascaded Markov Models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passing Ambiguity to the Next Layer", "sec_num": "3.3" }, { "text": "Very often, it is not sufficient to calculate just the best sequences of words/tags/phrases. This may result in an error leading to subsequent errors at higher layers. Therefore, we calculate not only the best sequence but several top-ranked sequences. The number of passed hypotheses depends on a pre-defined threshold $\\theta > 1$. We select all hypotheses with probabilities $P > P_{best}/\\theta$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passing Ambiguity to the Next Layer", "sec_num": "3.3" }, { "text": "These are passed to the next layer together with their probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passing Ambiguity to the Next Layer", "sec_num": "3.3" }, { "text": "Transitional parameters for Cascaded Markov Models are estimated separately for each layer. Output parameters are the same for all layers; they are taken from the stochastic context-free grammar that is read off the treebank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation", "sec_num": "3.4" }, { "text": "Training on annotated data is straightforward. First, we number the layers, starting with 0 for the part-of-speech layer. Subsequently, information for the different layers is collected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation", "sec_num": "3.4" }, { "text": "Each sentence in the corpus represents one training sequence for each layer. This sequence consists of the tags or phrases at that layer. If a span is not covered by a phrase at a particular layer, we take the elements of the highest layer below the actual layer.
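As a hedged illustration of this training-sequence extraction (my own sketch, using a simple nested-tuple tree encoding and a height-based layer assignment rather than the NEGRA exchange format):

```python
# Sketch: derive one training sequence per layer from an annotated sentence.
# A phrase is (category, [children]); a leaf is (tag, word).  This encoding
# and the height-based layer assignment are simplifications for illustration.

def node_height(node):
    label, children = node
    if isinstance(children, str):              # leaf: (tag, word)
        return 0
    return 1 + max(node_height(c) for c in children)

def layer_sequence(nodes, layer):
    """Categories visible at `layer`: emit a node if it fits at or below the
    layer; otherwise descend, so uncovered spans fall back to the highest
    elements below the requested layer."""
    sequence = []
    for node in nodes:
        label, children = node
        if isinstance(children, str) or node_height(node) <= layer:
            sequence.append(label)
        else:
            sequence.extend(layer_sequence(children, layer))
    return sequence

example = ('NP', [('ART', 'ein'), ('ADJA', 'enormer'), ('NN', 'Posten')])
# layer 0 -> ['ART', 'ADJA', 'NN'];  layer 1 and above -> ['NP']
training_sequences = {k: layer_sequence([example], k) for k in range(3)}
```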
Figure 5 shows the training sequences for layers 0 -4 generated from the sentence in figure 1. Each sentence gives rise to one training sequence for each layer. Contextual parameter estimation is done in analogy to models for part-of-speech tagging, and the same smoothing techniques can be applied. We use a linear interpolation of uni-, bi-, and trigram models.", "cite_spans": [], "ref_spans": [ { "start": 264, "end": 272, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Parameter Estimation", "sec_num": "3.4" }, { "text": "A stochastic context-free grammar is read off the corpus. The rules derived from the annotated sentence in figure 1 are also shown in figure 5 . The grammar is used to estimate output parameters for all Markov Models, i.e., they are the L.3er Sequence same for all layers. We could estimate probabilities for rules separately for each layer, but this would worsen the sparse data problem.", "cite_spans": [], "ref_spans": [ { "start": 134, "end": 143, "text": "figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Parameter Estimation", "sec_num": "3.4" }, { "text": "This section reports on results of experiments with Cascaded Markov Models. We evaluate chunking precision and recall, i.e., the recognition of kernel NPs and PPs. These exclude prenominal adverbs and postnominal PPs and relative clauses, but include all other prenominal modifiers, which can be fairly complex adjective phrases in German. Figure 6 shows an example of a complex N P and the output of the parsing process.", "cite_spans": [], "ref_spans": [ { "start": 340, "end": 348, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For our experiments, we use the NEGRA corpus (Skut et al., 1997) . It consists of German newspaper texts (Frankfurter Rundschau) that are annotated with predicate-argument structures. We extracted all structures for NPs, PPs, APs, AVPs (i.e., we mainly excluded sentences, VPs and coordinations). The version of the corpus used contains 17,000 sentences (300,000 tokens).", "cite_spans": [ { "start": 45, "end": 64, "text": "(Skut et al., 1997)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "The corpus was divided into training part (90%) and test part (10%). Experiments were repeated 10 times, results were averaged. Cross-evaluation was done in order to obtain more reliable performance estimates than by just one test run. Input of the process is a sequence of words (divided into sentences), output are part-of-speech tags and structures like the one indicated in figure 6. Figure 7 presents results of the chunking task using Cascaded Markov Models for different numbers of layers. 2 Percentages are slightly below those presented by (Skut and Brants, 1998) . But 2The figure indicates unlabeled recall and precision. Differences to labeled recall/precision are small, since the number of different non-terminal categories is very restricted. they started with correctly tagged data, so our task is harder since it includes the process of partof-speech tagging.", "cite_spans": [ { "start": 549, "end": 572, "text": "(Skut and Brants, 1998)", "ref_id": null } ], "ref_spans": [ { "start": 388, "end": 396, "text": "Figure 7", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Recall increases with the number of layers. It ranges from 54.0% for 1 layer to 84.8% for 9 layers. 
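The recall and precision figures reported in this section are computed over phrase spans. A minimal sketch of such a span-based (unlabeled) evaluation, my own illustration rather than the evaluation code used for the experiments:

```python
# Sketch: unlabeled precision and recall over phrase spans.
# Each recognized or annotated phrase is reduced to its (start, end) span.

def span_scores(gold_spans, predicted_spans):
    gold, pred = set(gold_spans), set(predicted_spans)
    correct = len(gold & pred)
    recall = correct / len(gold) if gold else 1.0
    precision = correct / len(pred) if pred else 1.0
    return recall, precision

# Toy example: two of three gold chunks found, plus one wrong span.
r, p = span_scores({(0, 3), (3, 7), (8, 12)}, {(0, 3), (3, 7), (9, 12)})
# r == p == 2/3
```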
This could be expected, because the number of layers determines the number of phrases that can be parsed by the model. The additional line for \"topline recall\" indicates the percentage of phrases that can be parsed by Cascaded Markov Models with the given number of layers. All nodes that belong to higher layers cannot be recognized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Precision slightly decreases with the number of layers. It ranges from 91.4% for 1 layer to 88.3% for 9 layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "The F-score is a weighted combination of recall R and precision P and is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "$$F = \\frac{(\\beta^2 + 1)\\,P\\,R}{\\beta^2 P + R} \\quad (6)$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "$\\beta$ is a parameter encoding the relative importance of recall and precision. Using an equal weight for both ($\\beta = 1$), the maximum F-score is reached for 7 layers (F = 86.5%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "The part-of-speech tagging accuracy slightly increases with the number of Markov Model layers (bottom line in figure 7). This can be explained by top-down decisions of Cascaded Markov Models. A model at a higher layer can select a tag with a lower probability if this increases the probability at that layer. Thereby some errors made at lower layers can be corrected. This leads to the increase of up to 0.3% in accuracy.", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 119, "text": "figure 7)", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Results for chunking Penn Treebank data were previously presented by several authors (Ramshaw and Marcus, 1995; Argamon et al., 1998; Veenstra, 1998; Cardie and Pierce, 1998). Those results are not directly comparable to ours because they processed a different language and generated only one layer of structure (the chunk boundaries), while our algorithm also generates the internal structure of chunks. But generally, Cascaded Markov Models can be reduced to generating just one layer and can be trained on Penn Treebank data.", "cite_spans": [ { "start": 85, "end": 111, "text": "(Ramshaw and Marcus, 1995;", "ref_id": null }, { "start": 112, "end": 133, "text": "Argamon et al., 1998;", "ref_id": "BIBREF3" }, { "start": 134, "end": 149, "text": "Veenstra, 1998;", "ref_id": null }, { "start": 150, "end": 173, "text": "Cardie and Pierce, 1998", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We have presented a new parsing model for shallow processing. The model parses by representing each layer of the resulting structure as a separate Markov Model. States represent categories of words and phrases; outputs consist of partial parse trees. Starting with the layer for part-of-speech tags, the output of lower layers is passed as input to higher layers. This type of model is restricted to a fixed maximum number of layers in the parsed structure, since the number of Markov Models is determined before parsing.
While the effects of these restrictions on the parsing of sentences and VPs are still to be investigated, we obtain excellent results for the chunking task, i.e., the recognition of kernel NPs and PPs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "It will be interesting to see in future work if Cascaded Markov Models can be extended to parsing sentences and VPs. The average number of layers per sentence in the NEGRA corpus is only 5; 99.9% of all sentences have 10 or less layers, thus a very limited number of Markov Models would be sufficient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "Cascaded Markov Models add left-to-right context-information to context-free parsing. This contextualization is orthogonal to another important trend in language processing: lexicalization. We expect that the combination of these techniques results in improved models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "We presented the generation of parameters from annotated corpora and used linear interpolation for smoothing. While we do not expect ira-provements by re-estimation on raw data, other smoothing methods may result in better accuracies, e.g. the maximum entropy framework. Yet, the high complexity of maximum entropy parameter estimation requires careful pre-selection of relevant linguistic features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "The presented Markov Models act as filters. The probability of the resulting structure is determined only based on a stochastic context-free grammar. While building the structure bottom up, parses that are unlikely according to the Markov Models are pruned. We think that a combined probability measure would improve the model. For this, a mathematically motivated combination needs to be determined.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" } ], "back_matter": [ { "text": "I would like to thank Hans Uszkoreit, Yves Schabes, Wojciech Skut, and Matthew Crocker for fruitful discussions and valuable comments on the work presented here. And I am grateful to Sabine Kramp for proof-reading this paper.This research was funded by the Deutsche Forschungsgemeinschaft in the Sonderforschungsbereich 378, Project C3 NEGRA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Parsing by chunks", "authors": [ { "first": "Steven", "middle": [], "last": "Abney", "suffix": "" } ], "year": 1991, "venue": "Principle-Based Parsing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Abney. 1991. Parsing by chunks. In Robert Berwick, Steven Abney, and Carol Tenny, editors, Principle-Based Parsing, Dor- drecht. Kluwer Academic Publishers.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Partial parsing via finitestate cascades", "authors": [ { "first": "Steven", "middle": [], "last": "Abney", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the ESSLLI Workshop on Robust Parsing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Abney. 1996. Partial parsing via finite- state cascades. 
In Proceedings of the ESSLLI Workshop on Robust Parsing, Prague, Czech Republic.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "FASTUS: a finite-state processor for information extraction from real-world text", "authors": [ { "first": "D", "middle": [], "last": "Appelt", "suffix": "" }, { "first": "J", "middle": [], "last": "Hobbs", "suffix": "" }, { "first": "J", "middle": [], "last": "Bear", "suffix": "" }, { "first": "D", "middle": [ "J" ], "last": "Israel", "suffix": "" }, { "first": "M", "middle": [], "last": "Tyson", "suffix": "" } ], "year": 1993, "venue": "Proceedings of IJCAI-93", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Appelt, J. Hobbs, J. Bear, D. J. Israel, and M. Tyson. 1993. FASTUS: a finite-state proces- sor for information extraction from real-world text. In Proceedings of IJCAI-93, Washington, DC.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A memory-based approach to learning shallow natural language patterns", "authors": [ { "first": "Shlomo", "middle": [], "last": "Argamon", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Yuval", "middle": [], "last": "Krymolowski", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 17th International Conference on Computational Linguistics COLING-ACL-98)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shlomo Argamon, Ido Dagan, and Yuval Kry- molowski. 1998. A memory-based approach to learning shallow natural language patterns. In Proceedings of the 17th International Confer- ence on Computational Linguistics COLING- ACL-98), Montreal, Canada.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Tagging grammatical functions", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Skut", "suffix": "" }, { "first": "Brigitte", "middle": [], "last": "Krenn", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing EMNLP-97", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Brants, Wojciech Skut, and Brigitte Krenn. 1997. Tagging grammatical functions. In Proceedings of the Conference on Empir- ical Methods in Natural Language Processing EMNLP-97, Providence, RI, USA.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Error", "authors": [ { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "David", "middle": [], "last": "Pierce", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claire Cardie and David Pierce. 1998. Error-", "links": null } }, "ref_entries": { "FIGREF0": { "text": "figure 2 is determined by P(NP ~ ART ADJA NN, ART ~ ein, ADJA --~ enormer, NN ~ Posten) = P(NP ~ ART ADJA NN) \u2022 P(ART ~ ein)-P(ADJA --+ enormer) \u2022 P(NN -+ Posten)", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Pbesf P(NP[$, $)P(NP ~* ein enormer Posten) \u2022 P(APPRI$, NP)P(APPR ~ an) \u2022 P(CNPINP, APPR)P(\u00a2NP ~* Arbeit und Geld) \u2022 P(VAFINIAPPR , CNP)P(VAFIN --+ wird) have with their music bridges built \"Kronos built bridges with their music\"", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "The combined, layered processing model. 
Starting with part-of-speech tagging (layer 0), possibly ambiguous output together with probabilities is passed to higher layers (only the best hypotheses are shown for clarity). At each layer, new phrases and grammatical functions are added.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "NN APPR NN KON NN VAFIN APPR ART CARD ADJA NN VVPPContext-free rules and their frequencies Training material generated from the sentence in figure 1. The sequences for layers 0 -4 are used to estimate transition probabilities for the corresponding Markov Models. The context-free rules are used to estimate the SCFG, which determines the output probabilities of the Markov Models.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF4": { "text": "NP/PP chunking results for the NEGI~A Corpus. The diagram shows recall and precision depending on the number of layers that are used for parsing. Layer 0 is used for part-of-speech tagging, for which tagging accuracies are given at the bottom line. Topline recall is the maximum recall possible for that number of layers.", "num": null, "uris": null, "type_str": "figure" } } } }