{ "paper_id": "C10-1012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:59:20.551183Z" }, "title": "Broad Coverage Multilingual Deep Sentence Generation with a Stochastic Multi-Level Realizer", "authors": [ { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fabra University", "location": {} }, "email": "" }, { "first": "Leo", "middle": [], "last": "Wanner", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fabra University", "location": {} }, "email": "" }, { "first": "Simon", "middle": [], "last": "Mille", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fabra University", "location": {} }, "email": "" }, { "first": "Alicia", "middle": [], "last": "Burga", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fabra University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Most of the known stochastic sentence generators use syntactically annotated corpora, performing the projection to the surface in one stage. However, in full-fledged text generation, sentence realization usually starts from semantic (predicate-argument) structures. To be able to deal with semantic structures, stochastic generators require semantically annotated, or, even better, multilevel annotated corpora. Only then can they deal with such crucial generation issues as sentence planning, linearization and morphologization. Multilevel annotated corpora are increasingly available for multiple languages. We take advantage of them and propose a multilingual deep stochastic sentence realizer that mirrors the state-ofthe-art research in semantic parsing. The realizer uses an SVM learning algorithm. For each pair of adjacent levels of annotation, a separate decoder is defined. So far, we evaluated the realizer for Chinese, English, German, and Spanish.", "pdf_parse": { "paper_id": "C10-1012", "_pdf_hash": "", "abstract": [ { "text": "Most of the known stochastic sentence generators use syntactically annotated corpora, performing the projection to the surface in one stage. However, in full-fledged text generation, sentence realization usually starts from semantic (predicate-argument) structures. To be able to deal with semantic structures, stochastic generators require semantically annotated, or, even better, multilevel annotated corpora. Only then can they deal with such crucial generation issues as sentence planning, linearization and morphologization. Multilevel annotated corpora are increasingly available for multiple languages. We take advantage of them and propose a multilingual deep stochastic sentence realizer that mirrors the state-ofthe-art research in semantic parsing. The realizer uses an SVM learning algorithm. For each pair of adjacent levels of annotation, a separate decoder is defined. So far, we evaluated the realizer for Chinese, English, German, and Spanish.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recent years saw a significant increase of interest in corpus-based natural language generation (NLG), and, in particular, in corpus-based (or stochastic) sentence realization, i.e., that part of NLG which deals with mapping of a formal (more or less abstract) sentence plan onto a chain of inflected words; cf., among others, (Langkilde and Knight, 1998; Oh and Rudnicky, 2000; Bangalore and Rambow, 2000; Wan et al., 2009) . 
The advantage of stochastic sentence realization over traditional rule-based realization is mainly threefold: (i) it is more robust; (ii) it usually has a significantly larger coverage; (iii) it is per se language- and domain-independent. Its disadvantage is that it requires at least syntactically annotated corpora of significant size (Bangalore et al., 2001) . Given the aspiration of NLG to start from numeric time series or conceptual or semantic structures, even syntactic annotation does not suffice: the corpora must also be at least semantically annotated. To date, deep stochastic sentence realization has been hampered by the lack of multiple-level annotated corpora. As a consequence, available stochastic sentence generators either take syntactic structures as input (and thus avoid the need for multiple-level annotation) (Bangalore and Rambow, 2000; Langkilde-Geary, 2002; Filippova and Strube, 2008) , or draw upon hybrid models that involve a symbolic submodule which derives the syntactic representation that is then used by the stochastic submodule (Knight and Hatzivassiloglou, 1995; Langkilde and Knight, 1998) .", "cite_spans": [ { "start": 327, "end": 355, "text": "(Langkilde and Knight, 1998;", "ref_id": "BIBREF15" }, { "start": 356, "end": 378, "text": "Oh and Rudnicky, 2000;", "ref_id": "BIBREF21" }, { "start": 379, "end": 406, "text": "Bangalore and Rambow, 2000;", "ref_id": "BIBREF0" }, { "start": 407, "end": 424, "text": "Wan et al., 2009)", "ref_id": "BIBREF24" }, { "start": 762, "end": 786, "text": "(Bangalore et al., 2001)", "ref_id": "BIBREF1" }, { "start": 1259, "end": 1287, "text": "(Bangalore and Rambow, 2000;", "ref_id": "BIBREF0" }, { "start": 1288, "end": 1310, "text": "Langkilde-Geary, 2002;", "ref_id": "BIBREF16" }, { "start": 1311, "end": 1338, "text": "Filippova and Strube, 2008)", "ref_id": "BIBREF9" }, { "start": 1489, "end": 1524, "text": "(Knight and Hatzivassiloglou, 1995;", "ref_id": "BIBREF14" }, { "start": 1525, "end": 1552, "text": "Langkilde and Knight, 1998)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The increasing availability of multilevel annotated corpora, such as the corpora of the shared task of the Conference on Computational Natural Language Learning (CoNLL), opens new perspectives with respect to deep stochastic sentence generation, although the fact that these corpora have not been annotated with the needs of generation in mind may require additional adjustments, as has, in fact, been the case in our work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present a Support Vector Machine (SVM)-based multilingual dependency-oriented stochastic deep sentence realizer that uses multilingual corpora of the CoNLL '09 shared task (Haji\u010d, 2009) for training. The sentences of these corpora are annotated with shallow semantic structures, dependency trees, and lemmata; for some of the languages involved, they also contain morphological feature annotations. The multilevel annotation allows us to take into account all levels of representation needed for linguistic generation and to model the projection between pairs of adjacent levels by separate decoders, which, in turn, facilitates the coverage of such critical generation tasks as sentence planning, linearization, and morphologization. 
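To make the division of labour concrete, the following minimal sketch shows how three such decoders could be chained at generation time (our own Python illustration; the class and function names are assumptions, not the realizer's actual API):

```python
# Minimal sketch of the three-stage decoding pipeline described above.
# StubDecoder and realize() are our own assumed names, not the realizer's API.

class StubDecoder:
    '''Stands in for a trained SVM decoder between two adjacent levels.'''
    def __init__(self, target_level):
        self.target_level = target_level

    def decode(self, structure):
        # A trained decoder would score candidate outputs and return the best one.
        return {'level': self.target_level, 'derived_from': structure}

def realize(semantic_graph):
    sem2synt = StubDecoder('dependency tree')     # Sem-Synt decoder
    synt2lin = StubDecoder('ordered lemmata')     # Synt-Linearization decoder
    lin2morph = StubDecoder('inflected forms')    # Linearization-Morph decoder
    return lin2morph.decode(synt2lin.decode(sem2synt.decode(semantic_graph)))

print(realize({'predicate': 'illustrate', 'A0': 'Panama'}))
```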
The presented realizer is, in principle, language-independent in that it is trainable on any multilevel annotated corpus. In this paper, we discuss its performance for Chinese, English, German, and Spanish.", "cite_spans": [ { "start": 189, "end": 202, "text": "(Haji\u010d, 2009)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of the paper is structured as follows. In Section 2, we discuss how the shallow semantic annotation in the CoNLL '09 shared task corpora should be completed in order to be suitable for generation. Section 3 presents the training setup of our realizer. Section 4 shows the individual stages of sentence realization: from the semantic structure to the syntactic structure, from the syntactic structure to the linearized structure and from the linearized structure to a chain of inflected word forms (if applicable for the language in question). Section 5 outlines the experimental setup for the evaluation of our realizer and discusses the results of this evaluation. In Section 6, finally, some conclusions with respect to the characteristics of our realizer and its place in the research landscape are drawn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The amount of material which comes into play makes it impossible to describe all stages in adequate detail. However, we hope that the overview provided in what follows still suffices to fully assess our proposal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The semantic annotation of sentences in CoNLL '09 shared task corpora follows the PropBank annotation guidelines (Palmer et al., 2005) . Problematic from the viewpoint of generation is that this annotation is not always a connected acyclic graph. As a consequence, in these cases no valid (connected) syntactic tree can be derived. The most frequent violations of the connectivity principle are unattached adjectival modifiers, determiners, adverbs, and coordinations; sometimes, the verb is not connected with its argument(s). Therefore, prior to starting the training procedure, the semantic annotation must be completed: non-connected adjectival modifiers must be annotated as predicates with their syntactic heads as arguments, determiners must be \"translated\" into quantifiers, detached verbal arguments must be connected with their head, etc.", "cite_spans": [ { "start": 113, "end": 134, "text": "(Palmer et al., 2005)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Completing the Semantic Annotation", "sec_num": "2" }, { "text": "Algorithm 1 displays the algorithm that completes the semantic annotations of the corpora. Each sentence x i of the corpus I, with i = 1, . . . , |I|, is annotated with its dependency tree y i and its shallow semantic graph s i . The algorithm traverses y i breadth-first and examines for each node n in y i whether n's corresponding node in s i is connected with the node corresponding to the parent of n. If not, the algorithm connects both by a directed labeled edge. The direction and the label of the edge are selected by consulting a lookup table in which default labels and the orientation of the edges between different node categories are specified. Figure 1 shows the semantic representation of a sample English sentence obtained after the application of Algorithm 1. 
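A minimal sketch of this completion pass may make it more concrete (our own Python illustration, not the authors' implementation; a shared node-id space between the two structures and a small default-label table stand in for the node correspondence and the lookup table):

```python
from collections import deque

# Breadth-first completion pass as described above (our own sketch).
# DEFAULT_EDGE is a stand-in for the lookup table of default labels and
# edge orientations between node categories.
DEFAULT_EDGE = {('VB', 'NN'): ('A1', 'head->dep')}

def complete_semantic_graph(sem_edges, dep_children, dep_root, category):
    connected = {frozenset((h, d)) for h, d, _ in sem_edges}
    queue = deque((dep_root, child) for child in dep_children.get(dep_root, ()))
    while queue:
        parent, node = queue.popleft()
        if frozenset((parent, node)) not in connected:
            # the semantic counterpart of this node is detached: add a default edge
            label, orientation = DEFAULT_EDGE.get(
                (category[parent], category[node]), ('A1', 'head->dep'))
            edge = (parent, node, label) if orientation == 'head->dep' else (node, parent, label)
            sem_edges.append(edge)
            connected.add(frozenset((parent, node)))
        queue.extend((node, child) for child in dep_children.get(node, ()))
    return sem_edges
```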
The solid edges are the edges available in the original annotation; the dashed edges have been introduced by the algorithm. The edge labels 'A0' and 'A1' stand for \"first argument\" and \"second argument\" (of the corresponding head), respectively, 'R-A0' for \"A0 realized as a relative clause\", and 'AM-MNR' for \"manner modifier\". As can be seen, 6 out of the total of 14 edges in the complete representation of this example have been added by Algorithm 1. We have not yet finished the formal evaluation of the principal changes necessary to adapt the PropBank annotation for generation, nor of the quality of our completion algorithm. However, the need for an annotation with generation in mind is obvious.", "cite_spans": [], "ref_spans": [ { "start": 657, "end": 665, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Completing the Semantic Annotation", "sec_num": "2" }, { "text": "Algorithm 1: Complete semantic graph //si is a semantic graph and yi a dependency tree // si = Ns i , Ls i , Es i , where Ns i is the set of nodes // Ls i the set of edge labels // Es i \u2286 Ns \u00d7 Ns \u00d7 Ls is the set of edges for i \u2190 1 to |I| // iteration over the training examples let ry \u2208 yi be the root node of the dependency tree // initialization of the queue nodeQueue \u2190 children (ry)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Completing the Semantic Annotation", "sec_num": "2" }, { "text": "Figure 2 shows the training setup of our realizer. For each level of annotation, an SVM feature extractor is defined, and for each pair of adjacent levels of annotation, an SVM decoder. The Sem-Synt decoder constructs from a semantic graph the corresponding dependency tree. The Synt-Linearization decoder derives from a dependency tree a chain of lemmata, i.e., determines the word order within the sentence. The Linearization-Morph decoder generates the inflected word form for each lemma in the chain. Both the feature extractors and the decoders are language-independent, which makes the realizer applicable to any language for which multilevel-annotated corpora are available.", "cite_spans": [], "ref_spans": [ { "start": 410, "end": 418, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Realizer Training Setup", "sec_num": "3" }, { "text": "To compute the score of the alternative realizations by each decoder, we apply MIRA (Margin Infused Relaxed Algorithm) to the features provided by the feature extractors. MIRA is one of the most successful large-margin training techniques for structured data (Crammer et al., 2006) . It has been used, e.g., for dependency parsing, semantic role labelling, chunking and tagging. Since we have feature sets similar (and of comparable size) to those for which MIRA has proven to work well, we assume that it will also perform well for sentence realization. Unfortunately, due to the lack of space, we cannot present here the instantiation of MIRA for all stages of our model. For illustration, Algorithm 2 outlines it for morphological realization. (Figure 1: Semantic representation of the sentence \"But Panama illustrates that their substitute is a system that produces an absurd gridlock.\" after completion.)", "cite_spans": [ { "start": 259, "end": 281, "text": "(Crammer et al., 2006)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Realizer Training Setup", "sec_num": "3" }, { "text": "The morphological realization uses the minimal string edit distance (Levenshtein, 1966) to map lemmata to word forms. 
As input to the MIRAclassifier, we use the lemmata of a sentence, its dependency tree and the already ordered sentence. The characters of the input strings are reversed since most of the changes occur at the end of the words and the string edit scripts work relatively to the beginning of the string. For example, to calculate the minimal string edit distance between the lemma go and the form goes, both are first reversed by the function compute-edit-dist and then the minimal string edit script between og and seog is computed. The resulting script is Ie0Is0. It translates into the operations 'insert e at the position 0 of the input string' and 'insert s at the position 0'.", "cite_spans": [ { "start": 66, "end": 85, "text": "(Levenshtein, 1966)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Completing the Semantic Annotation", "sec_num": "2" }, { "text": "Before MIRA starts, we compute all minimal edit distance scripts to be used as classes of MIRA. Only scripts that occur more often than twice are used. The number of the resulting edit scripts is language-dependent; e.g., we get about The training algorithms typically perform 6 iterations (epochs) over the training examples. For each training example, a minimal edit script is selected. If this script is different from the gold script, the features of the gold script are calculated and the weight vector of the SVM is adjusted according to the difference between the predicted vector and the gold feature vector. The classification task consists then in finding the classification script that maps the lemma to the correct word form. For this purpose, the classifier scores each of the minimal edit scripts according to the input, choosing the one with the highest score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Completing the Semantic Annotation", "sec_num": "2" }, { "text": "Sentence generation that starts from a given semantic structure as input consists in the application of the previously trained SVM decoders in sequence in order to realize the following sequence of mappings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Generation", "sec_num": "4" }, { "text": "SemStr \u2192 SyntStr \u2192 LinearStr \u2192 Surface", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Generation", "sec_num": "4" }, { "text": "Algorithm 3 shows the algorithm for semantic generation, i.e., the derivation of a dependency tree from a semantic structure. It is a beam search that creates a maximum spanning tree. In the first step, a seed tree consisting of one edge is built.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Generation", "sec_num": "4.1" }, { "text": "In each of the subsequent steps, this tree is extended by one node. 
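A greedy skeleton of this construction may help to fix the idea (our own simplification of Algorithm 3, in Python; score() is an assumed stand-in for the SVM scoring of candidate labeled edges):

```python
# Greedy skeleton of the tree construction described above (our own sketch;
# the real decoder keeps a beam of candidate trees instead of a single one).

def build_dependency_tree(nodes, labels, score):
    # seed tree: the single highest-scoring labeled edge
    candidates = [(s, t, l) for s in nodes for t in nodes if s != t for l in labels]
    seed = max(candidates, key=score)
    tree, attached = [seed], {seed[0], seed[1]}
    rest = set(nodes) - attached
    while rest:
        # extend the tree by the best-scoring edge that attaches exactly one new node
        extensions = [(s, t, l) for s in attached for t in rest for l in labels]
        extensions += [(s, t, l) for s in rest for t in attached for l in labels]
        s, t, l = max(extensions, key=score)
        tree.append((s, t, l))
        new_node = t if t in rest else s
        attached.add(new_node)
        rest.discard(new_node)
    return tree

tree = build_dependency_tree(['produce', 'system', 'gridlock'], ['SBJ', 'OBJ'],
                             lambda edge: hash(edge) % 100)  # dummy scores for illustration
```

The actual decoder assesses up to 1000 candidate edges of the 20 best partial trees at each step rather than committing to a single greedy choice.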
For the decision, which node Algorithm 2: Morphological realization training with MIRA // yi, li; yi is a dependency tree, li lemmatized sentence script-list \u2190 {} //initialize the script-list for i \u2190 1 to |I| // iteration over the training examples for l \u2190 1 to |li| do//// iteration over the lemmata of li lemma l \u2190 lower-case (li,l) //ensure that all lemmata start with a lower case letter script \u2190 compute-edit-dist-script(lemma l , form(li,l)) if script \u2208 script-list script-list \u2190 script-list \u222a { script } for k \u2190 1 to E // E = number of traininig epochs for i \u2190 1 to |I| // iteration over the training examples", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Generation", "sec_num": "4.1" }, { "text": "for l \u2190 1 to |li| do scriptp \u2190 predict-script(li,yi,l) scriptg \u2190 edit-dist-script(lemma l , form(li,l))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Generation", "sec_num": "4.1" }, { "text": "if scriptp = scriptg then // update the weight vector v and the vector w, which // averages over all collected weight vectors acc. // to diff. of the predicted and gold feature vector update w, v according to \u2206(\u03c6(scriptp), \u03c6(scriptg)) //with \u03c6(scriptp), \u03c6(scriptg) as feature vectors of //scriptp and scriptg, respectively is to be attached next and to which node, we consider the highest scoring options. This procedure works well since nodes that are close in the semantic structure are usually close in the syntactic tree as well. Therefore subtrees that contain those nodes are considered first. Unlike the traditional n-gram based stochastic realizers such as (Langkilde and Knight, 1998) , we use for the score calculation structured features composed of the following elements: (i) the lemmata, (ii) the distance between the starting node s and the target node t, (iii) the direction of the path (if the path has a direction), (iv) the sorted bag of in-going edges labels without repitition, (v) the path of edge labels between source and target node.", "cite_spans": [ { "start": 665, "end": 693, "text": "(Langkilde and Knight, 1998)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Generation", "sec_num": "4.1" }, { "text": "The composed structured features are: 1 labelw1+labelw2 13 PoS1+PoS2+PoS3 2 labelw1+lemma1 14 PoS1+PoS2+PoS3+dist 3 labelw1+lemma2 15 lemma1+lemma2+lemma3 4 labelw2+lemma1 16 lemma1+lemma2+lemma3+dist 5 labelw2+lemma2 17 lemma1+lemma3+head(w1,w2,w3) 6 PoS1+PoS2 18 lemma1+lemma3+head(w1,w2,w3)+dist 7", "cite_spans": [], "ref_spans": [ { "start": 38, "end": 266, "text": "1 labelw1+labelw2 13 PoS1+PoS2+PoS3 2 labelw1+lemma1 14 PoS1+PoS2+PoS3+dist 3 labelw1+lemma2 15 lemma1+lemma2+lemma3 4 labelw2+lemma1 16 lemma1+lemma2+lemma3+dist 5 labelw2+lemma2 17 lemma1+lemma3+head(w1,w2,w3) 6", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Semantic Generation", "sec_num": "4.1" }, { "text": "-label+dist(s, t)+dir -label+dist(s, t)+lemmas+dir -label+dist(s, t)+lemmat+dir -label+dist(s, t)+lemmas+lemmat+dir -label+dist(s, t)+bags+dir -label+dist(s, t)+bagt+dir -label+path(s, t)+dir # word-pairs(w1,w2) # n-grams", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Generation", "sec_num": "4.1" }, { "text": "PoS1+PoS2+head(w1,w2) 19 label1+label2+label3+head(w1,w2,w3) 8 labelw1+labelw2+PoS1+head(w1,w2) 20 label1+label2+label3+head(w1,w2,w3)+dist 9 labelw1+labelw2+PoS2+head(w1,w2) 21 
label1+label2+label3+lemma1+PoS2+head(w1,w2,w3) 10 labelw1+labelw2+PoS1+PoS2+head(w1,w2) 22 label1+label2+label3+lemma1+PoS2+head(w1,w2,w3)+dist 11 labelw1+labelw2+PoS1+#children2+head(w1,w2) 23 label1+label2+label3+lemma2+PoS1+head(w1,w2,w3) 12 labelw1+labelw2+PoS2+#children1+head(w1,w2) 24 label1+label2+label3+lemma2+PoS1+head (w1,w2,w3) Table 1 : Feature schemas used for linearization (label w is the label of the in-going edge to a word w in the dependency tree; lemma w is the lemma of w, and PoS w is the part-of-speech tag of w; head(w 1 ,w 2 , . . . ) is a function which is 1 if w 1 is the head, 2 if w 2 is the head, etc. and else 0; dist is the position within the constituent; contains-? is a boolean value which is true if the sentence contains a question mark and false otherwise; pos-head is the position of the head in the constituent)", "cite_spans": [], "ref_spans": [ { "start": 509, "end": 519, "text": "(w1,w2,w3)", "ref_id": "FIGREF1" }, { "start": 520, "end": 527, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Semantic Generation", "sec_num": "4.1" }, { "text": "Since we use unordered dependency trees as syntactic structures, our realizer has to find the optimal linear order for the lexemes of each dependency tree. Algorithm 4 shows our linearization algorithm. To order the dependency tree, we use a one classifier-approach for all languages-in contrast to, e.g., Filippova and Strube (2009) , who use a two-classifier approach for German. 1 The algorithm is again a beam search. It starts with an elementary list for each node of the dependency tree. Each elementary list is first extended by the children of the node in the list; then, the lists are extended stepwise by the children of the newly added nodes. If the number of lists during this procedure exceeds the threshold of 1000, the lists are sorted in accordance with their score, and the first 1000 are kept. The remaining lists are removed. Afterwards, the score of each list is adjusted according to a global score function which takes into account complex features such as the first word of a consitutent, last word, the head, and the edge label to the head (cf. Table 1 for the list of the features). Finally, the nodes of the depen-dency tree are ordered with respect to the highest ranked lists.", "cite_spans": [ { "start": 306, "end": 333, "text": "Filippova and Strube (2009)", "ref_id": "BIBREF10" }, { "start": 382, "end": 383, "text": "1", "ref_id": null } ], "ref_spans": [ { "start": 1069, "end": 1076, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Dependency Tree Linearization", "sec_num": "4.2" }, { "text": "Only in a very rare case, the threshold of the beam search is exceeded. Even with a rich feature set, the procedure is very fast. The linearization takes about 3 milliseconds in average per dependency tree on a computer with a 2.8 Ghz CPU.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Tree Linearization", "sec_num": "4.2" }, { "text": "The morphological realization algorithm selects the edit script in accordance with the highest score for each lemma of a sentence obtained during training (see Algorithm 2 above) and applies then the scripts to obtain the word forms; cf. Algorithm 5. 
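For the go/goes example above, applying the predicted script amounts to the following (our own Python illustration of the reversed-string script notation; only the insert operation is sketched, and single-character operands with single-digit positions are assumed):

```python
# Apply a reversed-string edit script such as 'Ie0Is0'
# ('insert e at position 0', 'insert s at position 0') to a lemma.
# Our own sketch of the mechanism described above, not the realizer's code.

def apply_edit_script(lemma, script):
    chars = list(lemma[::-1])            # scripts operate on the reversed lemma
    i = 0
    while i < len(script):
        op, ch, pos = script[i], script[i + 1], int(script[i + 2])
        if op == 'I':                    # insert character ch at position pos
            chars.insert(pos, ch)
        i += 3
    return ''.join(chars)[::-1]          # reverse back to obtain the word form

assert apply_edit_script('go', 'Ie0Is0') == 'goes'
```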
Table 2 lists the feature schemas used for morphological realization.", "cite_spans": [], "ref_spans": [ { "start": 251, "end": 258, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Morphological Realization", "sec_num": "4.3" }, { "text": "To evaluate the performance of our realizer, we carried out experiments on deep generation of Chinese, English, German and Spanish, starting from CoNLL '09 shared task corpora. The size of the test sets is listed in Table 3. 2", "cite_spans": [], "ref_spans": [ { "start": 216, "end": 226, "text": "Table 3. 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Algorithm 3: Semantic generation //si, y semantic graph and its dependency tree for i \u2190 1 to |I| // iteration over the training examples // build an initial tree for all n1 \u2208 si do trees \u2190 {} // initialize the constructed trees list for all n2 \u2208 si do if n1 \u2260 n2 then for all l \u2208 dependency-labels do trees = trees \u222a {(synt(n1),synt(n2),l)} trees \u2190 sort-trees-descending-to-score(trees) trees \u2190 look-forward(1000,sublist(trees,20)) //assess at most 1000 edges of the 20 best trees tree \u2190 get-best-tree-due-to-score(trees) (s,t,l) \u2190 first-added-edge(tree) // create the best tree best-tree \u2190 (s,t,l) // compute the nodes that still need to be attached rest \u2190 nodes(si) -{s, t} while rest \u2260 \u2205 do trees \u2190 look-forward(1000,best-tree,rest) tree \u2190 get-best-tree-due-to-score(trees) (s,t,l) \u2190 first-added-edge(tree) best-tree \u2190 best-tree \u222a { (s,t,l) } if (root(s,best-tree)) then rest \u2190 rest -{s} else rest \u2190 rest -{t}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "The performance of both the isolated stages and the realizer as a whole has been assessed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "In order to measure the correctness of the semantics-to-syntax mapping, we use the unlabeled and labeled attachment scores, as is commonly done in dependency parsing. The labeled attachment score (LAS) is the proportion of tokens that are assigned both the correct head and the correct edge label. The unlabeled attachment score (ULA) is the proportion of tokens that are assigned the correct head.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.1" }, { "text": "To assess the quality of linearization, we use three different evaluation metrics. 
The first metric is the per-phrase/per-clause accuracy (acc snt.), which facilitates the automatic evaluation of results:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.1" }, { "text": "acc = correct constituents / all constituents", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.1" }, { "text": "As a second evaluation metric, we use a metric related to the edit distance:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.1" }, { "text": "di = 1 \u2212 m / (total number of words)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.1" }, { "text": "(with m as the minimum number of deletions combined with insertions to obtain the correct order (Ringger et al., 2004) ).", "cite_spans": [ { "start": 96, "end": 118, "text": "(Ringger et al., 2004)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.1" }, { "text": "Algorithm 4: Dependency tree linearization //yi a dependency tree for i \u2190 1 to |I| // iteration over the training examples // iterate over all nodes of the dependency tree yi for n \u2190 1 to |yi| do subtreen \u2190 children(n) \u222a {n} ordered-listsn \u2190 {} // initialize for all m \u2208 subtreen do beam \u2190 {} for all l \u2208 ordered-lists do beam \u2190 beam \u222a { append(clone(l),m)} for all l \u2208 ordered-lists do score(l) \u2190 compute-score-for-word-list(l) sort-lists-descending-to-score(beam,score) if | beam | > beam-size then beam \u2190 sublist(0,1000,beam) ordered-listsn \u2190 beam scoreg(l) \u2190 score(l) + compute-global-score(l) sort-lists-descending-in-score(beam,scoreg)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.1" }, { "text": "Algorithm 5: Morphological realization // yi a dependency tree, and li an ordered list of lemmata for l \u2190 1 to |li| do scriptp \u2190 predict-script(li,yi,l) form l \u2190 apply-edit-dist-script(lemma l , scriptp)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.1" }, { "text": "To be able to compare our results with (He et al., 2009) and (Ringger et al., 2004) , we use the BLEU score as a third metric.", "cite_spans": [ { "start": 39, "end": 56, "text": "(He et al., 2009)", "ref_id": "BIBREF13" }, { "start": 61, "end": 83, "text": "(Ringger et al., 2004)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.1" }, { "text": "For the assessment of the quality of the word form generation, we use the accuracy score. The accuracy is the ratio between correctly generated word forms and the entire set of generated word forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.1" }, { "text": "For the evaluation of the sentence realizer as a whole, we use the BLEU metric. Table 4 displays the results obtained for the isolated stages of sentence realization and of the realization as a whole, with reference to a baseline and to some state-of-the-art works. 
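Spelled out, the two linearization metrics defined above amount to the following (our own reading, not the authors' evaluation code):

```python
# Per-constituent accuracy and the edit-distance-based score di, as defined above.

def constituent_accuracy(correct_constituents, all_constituents):
    return correct_constituents / all_constituents

def di_score(m, total_words):
    # m: minimal number of deletions combined with insertions
    #    needed to obtain the correct order
    return 1.0 - m / total_words

print(constituent_accuracy(61, 100), di_score(2, 16))
```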
The baseline is the deep sentence realization over all stages starting from the original semantic annotation in the CoNLL '09 shared task corpora.", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 87, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.1" }, { "text": "Note that our results are not fully comparable with (He et al., 2009; Filippova and Strube, 2009) and (Ringger et al., 2004) , respectively, since the data are different. Furthermore, Filippova and Strube (2009) consider only sentences that do not contain phrases that exceed 20,000 linearization options, which means that they filter out about 1% of the phrases. For Spanish, to the best of our knowledge, no linearization experiments have been carried out so far. Therefore, we cannot contrast our results with any reference work. (Table 3 gives the number of sentences in the test sets used in the experiments.)", "cite_spans": [ { "start": 53, "end": 70, "text": "(He et al., 2009;", "ref_id": "BIBREF13" }, { "start": 71, "end": 98, "text": "Filippova and Strube, 2009)", "ref_id": "BIBREF10" }, { "start": 103, "end": 125, "text": "(Ringger et al., 2004)", "ref_id": "BIBREF23" }, { "start": 185, "end": 212, "text": "Filippova and Strube (2009)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 213, "end": 220, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "5.2" }, { "text": "As far as morphologization is concerned, the performance achieved by our realizer for English is somewhat lower than in (Minnen et al., 2001) (97.8% vs. 99.8% accuracy). Note, however, that Minnen et al. describe a combined analyzer-generator, in which the generator is directly derived from the analyzer, which makes both approaches not directly comparable.", "cite_spans": [ { "start": 120, "end": 140, "text": "(Minnen et al., 2001", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5.2" }, { "text": "The overall performance of our SVM-based deep sentence generator ranges between 0.611 (for German) and 0.688 (for Chinese) BLEU. HALogen's (Langkilde-Geary, 2002) scores range between 0.514 and 0.924, depending on the completeness of the input. The figures are not directly comparable since HALogen takes syntactic structures as input. However, they give us an idea of where our generator is situated.", "cite_spans": [ { "start": 152, "end": 175, "text": "(Langkilde-Geary, 2002)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "Traditional linearization approaches are rule-based; cf., e.g., (Br\u00f6ker, 1998; Gerdes and Kahane, 2001; Duchier and Debusmann, 2001) , and (Bohnet, 2004) . More recently, statistical language models have been used to derive word order, cf. (Ringger et al., 2004; Wan et al., 2009) and (Filippova and Strube, 2009) . Because of its partially free word order, which is more difficult to handle than fixed word order, German has often been studied in the context of linearization. Filippova and Strube (2009) adapted their linearization model originally developed for German to English. They use two classifiers to determine the word order in a sentence. The first classifier uses a trigram LM to order words within constituents, and the second (which is a maximum entropy classifier) determines the order of constituents that depend on a finite verb. For English, we achieve a better performance with our SVM-based classifier. 
As mentioned above, for German, Filippova and Strube (2009) 's two classifier approach pays off because it allows them to handle non-projective structures for the Vorfeld within the field model. It is certainly appropriate to optimize the performance of the realizer for the languages covered in a specific application. However, our goal has been so far different: to offer an off-the-shelf languageindependent solution.", "cite_spans": [ { "start": 63, "end": 77, "text": "(Br\u00f6ker, 1998;", "ref_id": "BIBREF5" }, { "start": 78, "end": 102, "text": "Gerdes and Kahane, 2001;", "ref_id": "BIBREF11" }, { "start": 103, "end": 131, "text": "Duchier and Debusmann, 2001)", "ref_id": "BIBREF7" }, { "start": 138, "end": 152, "text": "(Bohnet, 2004)", "ref_id": "BIBREF4" }, { "start": 237, "end": 259, "text": "(Ringger et al., 2004;", "ref_id": "BIBREF23" }, { "start": 260, "end": 277, "text": "Wan et al., 2009)", "ref_id": "BIBREF24" }, { "start": 282, "end": 310, "text": "(Filippova and Strube, 2009)", "ref_id": "BIBREF10" }, { "start": 474, "end": 501, "text": "Filippova and Strube (2009)", "ref_id": "BIBREF10" }, { "start": 953, "end": 980, "text": "Filippova and Strube (2009)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "The linearization error analysis, first of all of German and Spanish, reveals that the annotation of coordinations in corpora of these languages as 'X \u2190 and/or/. . . \u2192 Y' is a source of errors. The \"linear\" annotation used in the PropBank ('X \u2192 and/or/. . . \u2192 Y') appears to facilitate higher quality linearization. A preprocessing stage for automatic conversion of the annotation of coordinations in the corpora would have certainly contributed to a higher quality. We refrained from doing this because we did not want to distort the figures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "The morphologization error analysis indicates a number of error sources that we will address in the process of the improvement of the model. Among those sources are: quotes at the beginning of a sentence, acronyms, specific cases of starting capital letters of proper nouns (for English and Spanish), etc. 
Reference rows of Table 4 (systems labeled Syntax-Topology): (He et al., 2009 ) reach 0.89 (di) and 0.887 (BLEU) for Chinese; (Filippova and Strube, 2009 ) reach 0.88/67 (di/acc) for English and 0.87/61 for German; a further comparison row refers to (Ringger et al., 2004) . Table 4 : Quality figures for the isolated stages of deep sentence realization and the complete process.", "cite_spans": [ { "start": 306, "end": 322, "text": "(He et al., 2009", "ref_id": "BIBREF13" }, { "start": 359, "end": 383, "text": "(He et al., 2009) (BLEU)", "ref_id": null }, { "start": 409, "end": 436, "text": "(Filippova and Strube, 2009", "ref_id": "BIBREF10" }, { "start": 482, "end": 504, "text": "(Ringger et al., 2004)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 505, "end": 512, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "A contrastive evaluation of the quality of our morphologization stage is hampered by the fact that thorough quantitative evaluations of traditional manually crafted morphological generators are difficult to find and that stochastic morphological generators are rare.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "As already repeatedly pointed out above, so far we have intentionally refrained from optimizing the individual realization stages for specific languages. Therefore, there is still quite a lot of room for improvement of our realizer when one concentrates on a selected set of languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "We presented an SVM-based stochastic deep multilingual sentence generator that is inspired by state-of-the-art research in semantic parsing. It uses similar techniques and relies on the same resources. This shows that there is a potential for stochastic sentence realization to catch up with the level of progress recently achieved in parsing technologies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "The generator exploits recently available multilevel-annotated corpora for training. While the availability of such corpora is a condition for deep sentence realization that starts, as is usually the case, from semantic (predicate-argument) structures, we discovered that current annotation schemata do not always favor generation, so that additional preprocessing is necessary. This is not surprising since stochastic generation is a very young field. An initiative of the generation community would be appropriate to influence future multilevel annotation campaigns or to feed back the enriched annotations to the \"official\" resources. 3 The most prominent features of our generator are that it is per se multilingual, it achieves an extremely broad coverage, and it starts from abstract semantic structures. The last feature allows us to cover a number of critical generation issues: sentence planning, linearization and morphological generation. The separation of the semantic, syntactic, linearization and morphological levels of annotation and their modular processing by separate SVM decoders also facilitates a subsequent integration of other generation tasks such as referring expression generation, ellipsis generation, and aggregation. 
As a matter of fact, this generator instantiates the Reference Architecture for Generation Systems (Mellish et al., 2006) for linguistic generation.", "cite_spans": [ { "start": 639, "end": 640, "text": "3", "ref_id": null }, { "start": 1347, "end": 1369, "text": "(Mellish et al., 2006)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "A more practical advantage of the presented deep stochastic sentence generator (as, in principle, of all stochastic generators) is that, if trained on a representative corpus, it is domainindependent. As rightly pointed out by Belz (2008) , traditional wide coverage realizers such as KPML (Bateman et al., 2005) , FUF/SURGE (Elhadad and Robin, 1996) and RealPro (Lavoie and Rambow, 1997) , which were also intended as off-the-shelf plug-in realizers still tend to require a considerable amount of work for integration and fine-tuning of the grammatical and lexical resources. Deep stochastic sentence realizers have the potential to become real off-the-shelf modules. Our realizer is freely available for download at http://www.recerca.upf.edu/taln.", "cite_spans": [ { "start": 227, "end": 238, "text": "Belz (2008)", "ref_id": "BIBREF3" }, { "start": 290, "end": 312, "text": "(Bateman et al., 2005)", "ref_id": "BIBREF2" }, { "start": 325, "end": 350, "text": "(Elhadad and Robin, 1996)", "ref_id": "BIBREF8" }, { "start": 363, "end": 388, "text": "(Lavoie and Rambow, 1997)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "We decided to test at this stage of our work a uniform technology for all languages, even if the idiosyncrasies of some languages may be handled better by specific solutions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As in(Langkilde-Geary, 2002) and(Ringger et al., 2004), we used Section 23 of the WSJ corpus as test set for English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We are currently working on a generation-oriented multilevel annotation of corpora for a number of languages. The corpora will be made available to the community.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Many thanks to the three anonymous reviewers for their very valuable comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Exploiting a Probabilistic Hierarchical Model for Generation", "authors": [ { "first": "S", "middle": [], "last": "Bangalore", "suffix": "" }, { "first": "O", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 2000, "venue": "Proceedings of COLING '00", "volume": "", "issue": "", "pages": "42--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bangalore, S. and O. Rambow. 2000. Exploiting a Probabilistic Hierarchical Model for Generation. 
In Proceedings of COLING '00, pages 42-48.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Impact of Quality and Quantity of Corpora on Stochastic Generation", "authors": [ { "first": "S", "middle": [], "last": "Bangalore", "suffix": "" }, { "first": "J", "middle": [], "last": "Chen", "suffix": "" }, { "first": "O", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the EMNLP Conference", "volume": "", "issue": "", "pages": "159--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bangalore, S., J. Chen, and O. Rambow. 2001. Impact of Quality and Quantity of Corpora on Stochastic Generation. In Proceedings of the EMNLP Confer- ence, pages 159-166.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Multilingual Resource Sharing Across Both Related and Unrelated Languages: An Implemented, Open-Source Framework for Practical Natural Language Generation", "authors": [ { "first": "J", "middle": [ "A" ], "last": "Bateman", "suffix": "" }, { "first": "I", "middle": [], "last": "Kruijff-Korbayov\u00e1", "suffix": "" }, { "first": "G.-J", "middle": [], "last": "Kruijff", "suffix": "" } ], "year": 2005, "venue": "Research on Language and Computation", "volume": "15", "issue": "", "pages": "1--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bateman, J.A., I. Kruijff-Korbayov\u00e1, and G.-J. Krui- jff. 2005. Multilingual Resource Sharing Across Both Related and Unrelated Languages: An Imple- mented, Open-Source Framework for Practical Nat- ural Language Generation. Research on Language and Computation, 15:1-29.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Automatic generation of weather forecast texts using comprehensive probabilistic generation-space models", "authors": [ { "first": "A", "middle": [], "last": "Belz", "suffix": "" } ], "year": 2008, "venue": "Natural Language Engineering", "volume": "14", "issue": "4", "pages": "431--455", "other_ids": {}, "num": null, "urls": [], "raw_text": "Belz, A. 2008. Automatic generation of weather forecast texts using comprehensive probabilistic generation-space models. Natural Language Engi- neering, 14(4):431-455.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A graph grammar approach to map between dependency trees and topological models", "authors": [ { "first": "B", "middle": [], "last": "Bohnet", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the IJCNLP", "volume": "", "issue": "", "pages": "636--645", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bohnet, B. 2004. A graph grammar approach to map between dependency trees and topological models. In Proceedings of the IJCNLP, pages 636-645.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Separating Surface Order and Syntactic Relations in a Dependency Grammar", "authors": [ { "first": "N", "middle": [], "last": "Br\u00f6ker", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the COLING/ACL '98", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Br\u00f6ker, N. 1998. Separating Surface Order and Syn- tactic Relations in a Dependency Grammar. 
In Pro- ceedings of the COLING/ACL '98.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Online Passive-Aggressive Algorithms", "authors": [ { "first": "K", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "O", "middle": [], "last": "Dekel", "suffix": "" }, { "first": "S", "middle": [], "last": "Shalev-Shwartz", "suffix": "" }, { "first": "Y", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2006, "venue": "Journal of Machine Learning Research", "volume": "7", "issue": "", "pages": "551--585", "other_ids": {}, "num": null, "urls": [], "raw_text": "Crammer, K., O. Dekel, S. Shalev-Shwartz, and Y. Singer. 2006. Online Passive-Aggressive Al- gorithms. Journal of Machine Learning Research, 7:551-585.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Topological dependency trees: A constraint-based account of linear precedence", "authors": [ { "first": "D", "middle": [], "last": "Duchier", "suffix": "" }, { "first": "R", "middle": [], "last": "Debusmann", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duchier, D. and R. Debusmann. 2001. Topological de- pendency trees: A constraint-based account of lin- ear precedence. In Proceedings of the ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "An overview of SURGE: A reusable comprehensive syntactic realization component", "authors": [ { "first": "M", "middle": [], "last": "Elhadad", "suffix": "" }, { "first": "J", "middle": [], "last": "Robin", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elhadad, M. and J. Robin. 1996. An overview of SURGE: A reusable comprehensive syntactic real- ization component. Technical Report TR 96-03, Department of Mathematics and Computer Science, Ben Gurion University.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Sentence fusion via dependency graph compression", "authors": [ { "first": "K", "middle": [], "last": "Filippova", "suffix": "" }, { "first": "M", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the EMNLP Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Filippova, K. and M. Strube. 2008. Sentence fusion via dependency graph compression. In Proceedings of the EMNLP Conference.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Tree linearization in English: Improving language model based approaches", "authors": [ { "first": "K", "middle": [], "last": "Filippova", "suffix": "" }, { "first": "M", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the NAACL '09 and HLT, Short Papers", "volume": "", "issue": "", "pages": "225--228", "other_ids": {}, "num": null, "urls": [], "raw_text": "Filippova, K. and M. Strube. 2009. Tree lineariza- tion in English: Improving language model based approaches. In Proceedings of the NAACL '09 and HLT, Short Papers, pages 225-228.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Word order in German: A formal dependency grammar using a topological hierarchy", "authors": [ { "first": "K", "middle": [], "last": "Gerdes", "suffix": "" }, { "first": "S", "middle": [], "last": "Kahane", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerdes, K. and S. 
Kahane. 2001. Word order in Ger- man: A formal dependency grammar using a topo- logical hierarchy. In Proceedings of the ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The CoNLL-2009 Shared Task: Syntactic and Semantic Dependencies in Multiple Languages", "authors": [ { "first": "J", "middle": [], "last": "Haji\u010d", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haji\u010d, J. et al. 2009. The CoNLL-2009 Shared Task: Syntactic and Semantic Dependencies in Multiple Languages. In Proceedings of the CoNLL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Dependency based chinese sentence realization", "authors": [ { "first": "W", "middle": [], "last": "He", "suffix": "" }, { "first": "H", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Y", "middle": [], "last": "Guo", "suffix": "" }, { "first": "T", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the ACL and of the IJCNLP of the AFNLP", "volume": "", "issue": "", "pages": "809--816", "other_ids": {}, "num": null, "urls": [], "raw_text": "He, W., H. Wang, Y. Guo, and T. Liu. 2009. De- pendency based chinese sentence realization. In Proceedings of the ACL and of the IJCNLP of the AFNLP, pages 809-816.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Two-level, many paths generation", "authors": [ { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "V", "middle": [], "last": "Hatzivassiloglou", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Knight, K. and V. Hatzivassiloglou. 1995. Two-level, many paths generation. In Proceedings of the ACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Generation that exploits corpus-based statistical knowledge", "authors": [ { "first": "I", "middle": [], "last": "Langkilde", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the COLING/ACL", "volume": "", "issue": "", "pages": "704--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Langkilde, I. and K. Knight. 1998. Generation that exploits corpus-based statistical knowledge. In Pro- ceedings of the COLING/ACL, pages 704-710.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "An empirical verification of coverage and correctness for a general-purpose sentence generator", "authors": [ { "first": "I", "middle": [], "last": "Langkilde-Geary", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Second INLG Conference", "volume": "", "issue": "", "pages": "17--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Langkilde-Geary, I. 2002. An empirical verification of coverage and correctness for a general-purpose sentence generator. In Proceedings of the Second INLG Conference, pages 17-28.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A fast and portable realizer for text generation systems", "authors": [ { "first": "B", "middle": [], "last": "Lavoie", "suffix": "" }, { "first": "O", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 5th Conference on ANLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lavoie, B. and O. Rambow. 1997. A fast and portable realizer for text generation systems. 
In Proceedings of the 5th Conference on ANLP.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics", "authors": [ { "first": "V", "middle": [ "I" ], "last": "Levenshtein", "suffix": "" } ], "year": 1966, "venue": "", "volume": "10", "issue": "", "pages": "707--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Levenshtein, V.I. 1966. Binary codes capable of cor- recting deletions, insertions, and reversals. Soviet Physics, 10:707-710.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A reference architecture for natural language generation systems", "authors": [ { "first": "C", "middle": [], "last": "Mellish", "suffix": "" }, { "first": "D", "middle": [], "last": "Scott", "suffix": "" }, { "first": "L", "middle": [], "last": "Cahill", "suffix": "" }, { "first": "D", "middle": [], "last": "Paiva", "suffix": "" }, { "first": "R", "middle": [], "last": "Evans", "suffix": "" }, { "first": "M", "middle": [], "last": "Reape", "suffix": "" } ], "year": 2006, "venue": "Natural Language Engineering", "volume": "12", "issue": "1", "pages": "1--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mellish, C., D. Scott, L. Cahill, D. Paiva, R. Evans, and M. Reape. 2006. A reference architecture for natu- ral language generation systems. Natural Language Engineering, 12(1):1-34.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Applied morphological processing for English", "authors": [ { "first": "G", "middle": [], "last": "Minnen", "suffix": "" }, { "first": "J", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "D", "middle": [], "last": "Pearce", "suffix": "" } ], "year": 2001, "venue": "Natural Language Engineering", "volume": "7", "issue": "3", "pages": "207--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minnen, G., J. Carroll, and D. Pearce. 2001. Ap- plied morphological processing for English. Nat- ural Language Engineering, 7(3):207-223.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Stochastic language generation for spoken dialogue systems", "authors": [ { "first": "A", "middle": [ "H" ], "last": "Oh", "suffix": "" }, { "first": "A", "middle": [ "I" ], "last": "Rudnicky", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the ANL/NAACL Workshop on Conversational Systems", "volume": "", "issue": "", "pages": "27--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oh, A.H. and A.I. Rudnicky. 2000. Stochastic lan- guage generation for spoken dialogue systems. In Proceedings of the ANL/NAACL Workshop on Con- versational Systems, pages 27-32.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "The proposition bank: An annotated corpus of semantic roles", "authors": [ { "first": "M", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "D", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "P", "middle": [], "last": "Kingsbury", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "1", "pages": "71--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "Palmer, M., D. Gildea, and P. Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. 
Computational Linguistics, 31(1):71-105.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Linguistically informed statistical models of constituent structure for ordering in sentence realization", "authors": [ { "first": "E", "middle": [], "last": "Ringger", "suffix": "" }, { "first": "M", "middle": [], "last": "Gamon", "suffix": "" }, { "first": "R", "middle": [ "C" ], "last": "Moore", "suffix": "" }, { "first": "D", "middle": [], "last": "Rojas", "suffix": "" }, { "first": "M", "middle": [], "last": "Smets", "suffix": "" }, { "first": "S", "middle": [], "last": "Corston-Oliver", "suffix": "" } ], "year": 2004, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "673--679", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ringger, E., M. Gamon, R.C. Moore, D. Rojas, M. Smets, and S. Corston-Oliver. 2004. Linguis- tically informed statistical models of constituent structure for ordering in sentence realization. In Proceedings of COLING, pages 673-679.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Improving Grammaticality in Statistical Sentence Generation: Introducing a Dependency Spanning Tree Algorithm with an Argument Satisfaction Model", "authors": [ { "first": "S", "middle": [], "last": "Wan", "suffix": "" }, { "first": "M", "middle": [], "last": "Dras", "suffix": "" }, { "first": "R", "middle": [], "last": "Dale", "suffix": "" }, { "first": "C", "middle": [], "last": "Paris", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the EACL '09", "volume": "", "issue": "", "pages": "852--860", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wan, S., M. Dras, Dale R., and C. Paris. 2009. Im- proving Grammaticality in Statistical Sentence Gen- eration: Introducing a Dependency Spanning Tree Algorithm with an Argument Satisfaction Model. In Proceedings of the EACL '09, pages 852-860.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "text": "Realizer training scenario setup 1500 scripts for English and 2500 for German.", "num": null, "type_str": "figure" }, "TABREF1": { "num": null, "content": "", "text": "+dist # global features for constituents 25 if |constituent| > 1 then label1st+label last +label last\u22121 +PoS f irst +PoS last +PoS head 26 if |constituent| > 2 then label1st+label 2d +label 3d +PoS last +PoS last\u22121 +PoS head +contains-? 27 if |constituent| > 2 then label1st+label 2d +label 3d +PoS last +PoS last\u22121 +lemma head +contains-? 28 if |constituent| > 3 then PoS1st+PoS 2d +PoS 3d +PoS 4th +PoS last +label head +contains-?+pos-head 29 if |constituent| > 3 then PoS last +PoS last\u22121 +PoS last\u22122 +PoS last\u22123 +PoS f irst +label head +contains-?+pos-head 30 PoS f irst +PoS last +lemma f irst +lemma last +lemma head +contains-?+pos-head", "html": null, "type_str": "table" }, "TABREF3": { "num": null, "content": "
: Feature schemas used for morphological realization
Chinese English German Spanish
2556 2400 2000 1725
", "text": "", "html": null, "type_str": "table" } } } }