|
{ |
|
"paper_id": "Y12-1033", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:45:36.419281Z" |
|
}, |
|
"title": "A Reranking Approach for Dependency Parsing with Variable-sized Subtree Features", |
|
"authors": [ |
|
{ |
|
"first": "Mo", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Kyoto University Yoshida-honmachi", |
|
"location": { |
|
"addrLine": "Sakyo-ku", |
|
"postCode": "606-8501", |
|
"settlement": "Kyoto", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "shen@nlp.ist.i.kyoto-u.ac.jp" |
|
}, |
|
{ |
|
"first": "Daisuke", |
|
"middle": [], |
|
"last": "Kawahara", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Kyoto University Yoshida-honmachi", |
|
"location": { |
|
"addrLine": "Sakyo-ku", |
|
"postCode": "606-8501", |
|
"settlement": "Kyoto", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Sadao", |
|
"middle": [], |
|
"last": "Kurohashi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Kyoto University Yoshida-honmachi", |
|
"location": { |
|
"addrLine": "Sakyo-ku", |
|
"postCode": "606-8501", |
|
"settlement": "Kyoto", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Employing higher-order subtree structures in graph-based dependency parsing has shown substantial improvement over the accuracy, however suffers from the inefficiency increasing with the order of subtrees. We present a new reranking approach for dependency parsing that can utilize complex subtree representation by applying efficient subtree selection heuristics. We demonstrate the effectiveness of the approach in experiments conducted on the Penn Treebank and the Chinese Treebank. Our system improves the baseline accuracy from 91.88% to 93.37% for English, and in the case of Chinese from 87.39% to 89.16%.", |
|
"pdf_parse": { |
|
"paper_id": "Y12-1033", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Employing higher-order subtree structures in graph-based dependency parsing has shown substantial improvement over the accuracy, however suffers from the inefficiency increasing with the order of subtrees. We present a new reranking approach for dependency parsing that can utilize complex subtree representation by applying efficient subtree selection heuristics. We demonstrate the effectiveness of the approach in experiments conducted on the Penn Treebank and the Chinese Treebank. Our system improves the baseline accuracy from 91.88% to 93.37% for English, and in the case of Chinese from 87.39% to 89.16%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In dependency parsing, graph-based models are prevalent for their state-of-the-art accuracy and efficiency, which are gained from their ability to combine exact inference and discriminative learning methods. The ability to perform efficient exact inference lies on the so-called factorization technique which breaks down a parse tree into smaller substructures to perform an efficient dynamic programming search. This treatment however restricts the representation of features to in a local context which can be, for example, single edges or adjacent edges. Such restriction prohibits the model from exploring large or complex structures for linguistic evidence, which can be considered as the major drawback of the graphbased approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Attempts have been made in developing more complex factorization techniques and corresponding decoding methods. Higher-order models that use grand-child, grand-sibling or trisibling factorization were proposed in (Koo and Collins, 2010) to explore more expressive features and have proven significant improvement on parsing accuracy. However, the power of higherorder models comes with the cost of expensive computation and sometimes it requires aggressive pruning in the pre-processing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 236, |
|
"text": "(Koo and Collins, 2010)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Another line of research that explores complex feature representations is parse reranking. In its general framework, a K-best list of parse tree candidates is first produced from the base parser; a reranker is then applied to pick up the best parse among these candidates. For constituent parsing, successful results has been reported in (Collins, 2000; Charniak and Johnson, 2005; Huang, 2008) . For dependency parsing, the efficient algorithms for produce K-best list for graph-based parsers have been proposed in (Huang and Chiang, 2005) for projective parsing and in (Hall, 2007) for nonprojective parsing; Improvements on dependency accuracy has been achieved in (Hall, 2007; Hayashi et al., 2011) . However, the feature sets in these studies explored a relatively small context, either by emulating the feature set in the constituent parse reranking, or by factorizing the search space. A desirable approach for the K-best list reranking is to encode features on subtrees extracted from the candidate parse with arbitrary orders and structures, as long as the extraction process is tractable. It is an open question how to design this subtree extraction process that is able to selects a set of subtrees which provides reliable and concrete linguistic evidence. Another related challenge is to design a proper back-off strategy for any structures extracted, since large subtree instances are always sparse in the training data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 338, |
|
"end": 353, |
|
"text": "(Collins, 2000;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 354, |
|
"end": 381, |
|
"text": "Charniak and Johnson, 2005;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 382, |
|
"end": 394, |
|
"text": "Huang, 2008)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 516, |
|
"end": 540, |
|
"text": "(Huang and Chiang, 2005)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 571, |
|
"end": 583, |
|
"text": "(Hall, 2007)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 668, |
|
"end": 680, |
|
"text": "(Hall, 2007;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 681, |
|
"end": 702, |
|
"text": "Hayashi et al., 2011)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In this paper, we explore a feature set that makes fully use of dependency grammar, can capture global information with less restriction in the structure and the size of the subtrees, and can be encoded efficiently. It exhaustively explores a candidate parse tree for features from the most simple to the most expressive while maintaining the efficiency in the sense that it does not add additional complexities over the K-best parsing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "We choose the K-best list reranking framework rather than the forest reranking in (Huang, 2008) because an explicit representation of parse trees is needed in order to compute the features for reranking. We implemented an edge-factored parser and a second-order sibling-factored parser which emulate models in the MSTParser described in (McDonald et al., 2005; McDonald and Pereira, 2006) as our base parsers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 95, |
|
"text": "(Huang, 2008)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 360, |
|
"text": "(McDonald et al., 2005;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 361, |
|
"end": 388, |
|
"text": "McDonald and Pereira, 2006)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In the rest part of this paper, we first give a brief description of the dependency parsing, then we describe the feature set for reranking, which is the major contribution of this paper. Finally, we present a set of experiment for the evaluation of our method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The task of dependency parsing is to find a tree structure for a sentence in which edges represent the head-modifier relationship between words: each word is linked to a unique \"head\" such that the link forms a semantic dependency while the main predicate of the sentence is linked to a dummy \"root\". An example of dependency parsing is illustrated in Figure 1 . A dependency tree is called projective if the links can be drawn on the linearly ordered words without any crossover. We will focus on projective trees throughout this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 352, |
|
"end": 360, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "We formally define the dependency parsing task. Give a sentence , the best parse tree is obtained by searching for the tree with highest score: Figure 1 . A dependency parse tree of the sentence \"the man there in coat saw John.\"", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 152, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u0303 ( ) ( ) ,", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Dependency Parsing", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "where ( ) is the search space of possible parse trees for , and is a parse tree in ( ) . A problem in solving equation 1is that the number of candidates in the search space grows exponentially with the length of the sentence which makes the searching infeasible. A common remedy for this problem is to factorize a parse tree into small subtrees, called factors, which are scored independently. The score of parse tree under a factorization is the summation of scores of factors:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "( ) \u2211 ( ) ,", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Dependency Parsing", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "where is a factor of . The search space can be therefore encoded in a compact form which allows dynamic programming algorithms to perform efficient exact inference. The score function for each factor is assigned as an inner product of a feature vector and a weight vector :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "( ) ( ) .", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Dependency Parsing", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The feature vector is defined on the factor which means it is only able to capture tree-structure information from a small context. This can be seen as the off-set for performing exact inference. The goal of training a parser is to learn a weight vector that assigns scores to effectively discriminate good parses from bad parses. We use the edge factorization and the sibling factorization models described in (McDonald et al., 2005; McDonald and Pereira, 2006) to construct our base parsers. We learn the weight vector by applying the averaged perceptron algorithm (Collins, 2002) for its efficiency and stable performance. An illustration for generic perceptron algorithm is shown in Pseudocode 1. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 411, |
|
"end": 434, |
|
"text": "(McDonald et al., 2005;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 435, |
|
"end": 462, |
|
"text": "McDonald and Pereira, 2006)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 567, |
|
"end": 582, |
|
"text": "(Collins, 2002)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing", |
|
"sec_num": "2." |
|
}, |
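
{

"text": "A minimal sketch of the generic averaged perceptron for structured prediction follows; the names data, decode and phi are our own illustrative assumptions, not the authors' implementation.\ndef averaged_perceptron(data, decode, phi, epochs=10):\n    # data: (sentence x, gold tree y) pairs; decode(x, w) returns the\n    # highest-scoring tree in Y(x); phi(x, y) maps a tree to a feature dict\n    w, w_sum, n = {}, {}, 0\n    for _ in range(epochs):\n        for x, y_gold in data:\n            y_hat = decode(x, w)  # exact inference under current weights\n            if y_hat != y_gold:  # perceptron update only on mistakes\n                for f, v in phi(x, y_gold).items():\n                    w[f] = w.get(f, 0.0) + v\n                for f, v in phi(x, y_hat).items():\n                    w[f] = w.get(f, 0.0) - v\n            n += 1\n            for f, v in w.items():  # accumulate weights for averaging\n                w_sum[f] = w_sum.get(f, 0.0) + v\n    return {f: v / n for f, v in w_sum.items()}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dependency Parsing",

"sec_num": "2."

},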
|
{ |
|
"text": "In this section, we describe our reranking approach and introduce the feature set consists of three different types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parse Reranking", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The task of reranking is similar with that of parsing instead of that the searching of parse tree is performed on a K-best list with selected parse candidates rather than the entire search space:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview of Parse Reranking", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u0303 ( ) ( )", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Overview of Parse Reranking", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The scoring function is defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview of Parse Reranking", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "( ) ( ) ( )", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Overview of Parse Reranking", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Where ( ) is the score of output by the base parser. We define the oracle parse to be the parse in the K-best list with highest accuracy compared with the gold-standard parse. The goal of reranking is to learn the weight vector so that the reranker can pick up the oracle parse as many times as possible. Note that in the reranking framework, the feature is defined on the entire parse tree which enables the encoding of global information. We learn the weight vector of the reranker also by the averaged perceptron algorithm shown in Pseudocode 1 with slight modification that only substitute the search space ( ) with the K-best output Kbest( ), and gold parse with oracle parse .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview of Parse Reranking", |
|
"sec_num": "3.1" |
|
}, |
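
{

"text": "The selection step in Equations (4) and (5) can be sketched as follows; this is a minimal illustration under the assumption that the base-parser score is added directly to the reranker's linear score, and the names kbest and phi_r are hypothetical.\ndef rerank(kbest, w_r, phi_r):\n    # kbest: list of (tree, base_score) pairs from the base parser\n    best_tree, best_score = None, float('-inf')\n    for y, base_score in kbest:\n        score = base_score + sum(w_r.get(f, 0.0) * v\n                                 for f, v in phi_r(y).items())\n        if score > best_score:\n            best_tree, best_score = y, score\n    return best_tree",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Overview of Parse Reranking",

"sec_num": "3.1"

},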
|
{ |
|
"text": "Benefit from the K-best list obtained in the parsing stage, we are able to perform discriminative learning in order to select a good parse among candidates in a shrunk search space, which allows utilization of global features. We define three types of features below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Sets for Reranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Trimmed subtree: For each node in a given parse tree, we check its dominated subtrees to see whether they are likely to appear in a good parse tree or not. To efficiently obtain these subtrees, we set a local window that bound a node from its left side, right side and bottom. We then extract the maximum subtree inside this window, means that we cut off those nodes that are too distant in sequential order or too deep in a tree.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Sets for Reranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The above subtree extraction often results in very large instances which are extremely sparse in the training data, therefore it is necessary to keep smaller subtrees as back-offs. In most cases, however, it is prohibitively expensive to enumerate all the smaller subtrees. Instead of enumeration, we design a back-off strategy that select subtrees by attempting to leave out nodes that are far away from the subtree's root and keeps those that are nearby. Precisely, after extracted the first subtree of a node, we vary the three boundaries (the left, the right and the bottom boundary respectively) from their original positions to positions that are closer to the root of the subtree, such that it tightens up the local window. For each possible combination of the variable boundaries, we extract the largest subtree from the new local window and add it to the set of the so called \"trimmed subtrees\" set of the node. This back-off strategy comes from our observation that nodes that are close to the root may provide more reliable information than those that are distant. As it is infeasible to enumerate all small subtrees as back-offs, throwing away the redundant nodes from the outer part of a large subtree is a reasonable choice. Figure 2 illustrates the construction of the \"trimmed subtrees\" set of the node \"saw\", for the sentence in Figure 1 . The initial boundary parameters are set large enough so the local window contains the entire parse tree 1 . #LEFT, #RIGHT and #BOTTOM represents the three boundary variables, which range from -6 to -1, from 3 to 1 and from 3 to 0 respectively. Context (\"s w\") { \u2026 } Figure 2 . Extraction of trimmed subtrees from the node \"saw\". \"#LEFT\", \"#RIGHT\" and \"#BOTTOM\" represents the three boundaries that can vary along possible positions on the corresponding axis. Contexts , and represnt three instances of possible combinations of boundary positions. , and are resulted subtrees that are elements in the trimmed subtrees set of the node \"saw\".", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1239, |
|
"end": 1247, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1346, |
|
"end": 1354, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1623, |
|
"end": 1631, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Sets for Reranking", |
|
"sec_num": "3.2" |
|
}, |
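
{

"text": "The extraction just described can be sketched as follows; the node representation (with index and children attributes) and the naive enumeration of all boundary combinations are our assumptions, and the anchor-node pruning discussed later is omitted.\ndef trimmed_subtrees(node, max_left, max_right, max_bottom):\n    # enumerate every combination of the three boundary positions and\n    # collect the maximal subtree inside each resulting local window\n    result = set()\n    for left in range(1, max_left + 1):\n        for right in range(1, max_right + 1):\n            for bottom in range(0, max_bottom + 1):\n                result.add(cut(node, node, left, right, bottom, 0))\n    return result\n\ndef cut(root, node, left, right, bottom, depth):\n    # keep a descendant only if it lies inside the window: at most `left`\n    # words to the left, `right` words to the right, `bottom` levels down\n    kids = tuple(cut(root, c, left, right, bottom, depth + 1)\n                 for c in node.children\n                 if depth + 1 <= bottom\n                 and -left <= c.index - root.index <= right)\n    return (node.index - root.index, kids)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Feature Sets for Reranking",

"sec_num": "3.2"

},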
|
{ |
|
"text": ", and represent three different combinations of boundary positions. Subtree , and are the extracted subtrees in the correspond context. They and other similarly extracted subtrees together consist in the set (\"s w\") , the trimmed subtrees set of the node \"saw\". We use this set in two ways. First, for each element in this set, we encode a series of features. Second, this set is kept for reuse in another type of feature, which we describe latter. We repeat this extraction process for all nodes in a parse tree and keep their trimmed subtrees set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 215, |
|
"text": "(\"s w\")", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Sets for Reranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In Figure 3 we show some of the extracted subtrees in the set (\"s w\") , among which the subtree (c) can be regard as a grand sibling factor and the subtree (d) is similar with a tri-sibling factor in (Koo and Collins, 2010) , but the siblings are located in both sides of the head node. The subtree (a) and subtree (b) are subtrees we extracted that cannot be represented in common factorization methods, which confirmed the ability of this feature set to capture a large variety of structures.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 69, |
|
"text": "(\"s w\")", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 223, |
|
"text": "(Koo and Collins, 2010)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 11, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Sets for Reranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "It should be noted that, while in a direct calculation there are 72 (6-by-3-by-4) possible combinations for boundary positions in the example in Figure 2 , this number can almost always be reduced in practice. In this example, when #LEFT reached the position at index -4, the entire left branch of the root node is in fact cut so no further movement for #LEFT is allowed. Moreover, after #BOTTOM moved to the position Figure 3 . Some of the extracted trimmed subtrees by the process described in Figure 2 . (c) is identical with a grand-sibling factor in a thirdorder parsing model and (d) is similar to a trisibling factor but siblings are on both sides of the head.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 153, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 418, |
|
"end": 426, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 504, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Sets for Reranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "at index 1, the sequential order distance between \"man\" and \"saw\" is updated and reduced to 1, which restricts #LEFT to only two possible positions, either to the left or to the right of the word \"man\". Therefore one can verify that the true number of combinations of boundary positions is actually 25. Briefly, for a node we are focusing on, we decompose the extracted subtree from the initial local window into three parts: the node itself, the sequence of its left descendants and the sequence of its right descendants. The two sequences of descendants are in a preordering of depth-first search, during which we mark \"anchor\" nodes as the next-possible cut-in positions for the left/right boundary variables. Furthermore, the list of anchor nodes will keep updating whenever the bottom boundary variable moved to a new position.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Sets for Reranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "As a result, we are able to minimize the number of boundary combinations to speed up the subtrees extraction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Sets for Reranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For each extracted subtree, we encode features as follow. A trimmed subtree feature is represented as an n-tuple: \u2329 \u2026 \u232a where is the root of the subtree, and are nodes in the subtree in preordering through a depth-first search from . For we encode its word form, Part-of-Speech tag, and the combination of them. For any non-root node, we encode its Part-of-Speech tag, a binary value indicating the branch direction from its head, and its depth from . We also encode features that omit the Part-of-Speech tags of the sequence", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Sets for Reranking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": ", so that only the structural preference of the subtree's root is retained. An example is shown below which illustrates a feature for the subtree in Figure 3(a) :", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 160, |
|
"text": "Figure 3(a)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "\u2026", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2329(s w ) ( ) ( ) ( ) ( )\u232a ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u2026", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where V, N and P are Part-of-Speech tags of corresponding nodes; we use simplified tags for illustration purpose. The preordering of nodes together with their branch direction and depth information guarantees that the mapping from a given subtree structure to its corresponding feature string is injective. Another example below shows a feature that omits all the Part-of-Speech tags except on the root of the subtree:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u2026", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2329(s w ) ( ) ( ) ( ) ( )\u232a", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u2026", |
|
"sec_num": null |
|
}, |
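
{

"text": "A trimmed subtree can be serialized into such a feature string roughly as follows; the node attributes (word, pos, head, index, depth) and the back-off flag are illustrative assumptions.\ndef encode(root, nodes, lexicalized=True, with_pos=True):\n    # nodes: the non-root nodes of the subtree in depth-first preorder\n    parts = [root.word + '/' + root.pos if lexicalized else root.pos]\n    for n in nodes:\n        pos = n.pos if with_pos else '*'  # back-off: omit the POS tag\n        direction = 'L' if n.index < n.head.index else 'R'\n        parts.append('(%s,%s,%d)' % (pos, direction, n.depth))\n    return ' '.join(parts)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "\u2026",

"sec_num": null

},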
|
{ |
|
"text": "Finally, we associate the list of features encoded for a subtree rooted on a node a with the corresponding element in the set ( ) . We make use of this set in the next type of features to avoid repeated computation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u2026", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Sibling subtree: The trimmed subtree features consider the preference of a node toward its dominated subtree-whether the subtree is likely to appear in a good parse. In the reranking framework, however, as we do not factorize a parse tree, we may suffer from a problem that the information we got among candidates are unbalanced. Typically, when computing the trimmed subtree features, a candidate parse with most nodes being leaves will provide little information except on the root node, while on another parse that has fewer leaves and more depth we can have a bunch of features that give more information. This defect makes the comparison between candidates be \"unfair\" and thus less reliable. Therefore, it is natural to raise the question the other way round-whether a node is a good head for a subtree. To answer this question, we consider a dynamic programming structure called complete span introduced in (Eisner, 1996) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 914, |
|
"end": 928, |
|
"text": "(Eisner, 1996)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u2026", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A complete span consists of a head node and all its descendants on one side, which can also be considered as a head node and sibling subtrees shown in Figure 4 . In our observation, a complete span functions as a relatively independent and complete semantic structure in the parse tree, we thus believe that it can provide sufficient information to decide the head of a subtree without looking at any larger context. Specifically, for each node in a candidate parse, its sibling subtree features is the collection of all 3-tuples:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 159, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "\u2026", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2329 ( ) ( )\u232a", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u2026", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where h represents the word form, the Part-of-Speech tag, or the combination of the word form and the Part-of-Speech tag of the head node of m; s is the nearest sibling node of m in-between h and m; and the expression ( ) represents the feature encoded on a trimmed subtree in the set ( ), such that the trimmed subtree is the one extracted within the local window . Here an important point is that we make use of trimmed subtrees extracted in the previous phase. As mentioned before, since we keep the history of trimmed subtree extraction, it eliminates the need to re-compute any subtree structures on the sibling nodes and hence is efficient to encode.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u2026", |
|
"sec_num": null |
|
}, |
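
{

"text": "Under our reading of the 3-tuple above, the sibling subtree features of a modifier m could be enumerated as follows; nearest_inner_sibling and the cache of trimmed-subtree feature strings are hypothetical names.\ndef sibling_features(m, cache):\n    # cache[node]: feature strings of the node's trimmed subtrees, kept\n    # from the extraction phase so that nothing is re-computed here\n    h = m.head\n    s = nearest_inner_sibling(m)  # hypothetical helper: sibling between h and m\n    feats = []\n    for h_repr in (h.word, h.pos, h.word + '/' + h.pos):\n        for ts in cache[s]:\n            for tm in cache[m]:\n                feats.append((h_repr, ts, tm))\n    return feats",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "\u2026",

"sec_num": null

},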
|
{ |
|
"text": "The way we define our sibling subtree features for reranking can also be seen as the natural extension of the sibling factorization in (McDonald and Pereira, 2006) from the word-based case to the subtree-based case, while the original sibling factor can be represented as a 3-tuple \u2329 \u232a using the same notation. Chain: A chain type feature encodes information for a subtree that each node has exactly one incoming edge and one outgoing edge, except on the two ends (hence a \"chain\"). We extract all these kind of subtrees from a parse tree in the candidates list with a parameter set to limit the number of edges in the subtree. This type of features emulates the common grandparentgrandchildren structure in dependency parsing, while we loosen the restriction on the order of the subtree. It functions as a complementary for other types of features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 163, |
|
"text": "(McDonald and Pereira, 2006)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u2026", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "From the parse tree of the sentence in Figure 1 , we extract all chains whose order is larger than 2, since otherwise features defined on edges have already been utilized in our base parsers which are edge-factored and sibling factored. We show these chain type subtrees in Figure 5 . For a consideration of efficiency, a proper value of the order limit should be set no larger than 5 according to our experience. Figure 5 . All chain type subtrees extracted from the gold-standard parse tree of the sentence \"the man there in coat saw John.\"", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 47, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 274, |
|
"end": 282, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 422, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "\u2026", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The information encoded from extracted subtrees includes word form, Part-of-Speech tag and relative position in the subtree for each node. When dealing with long subtrees, however, encoding lexical information suffers from data sparsity. We therefore encode lexical information only on one of the two ends of the subtree in each time, while for all nodes we encode their grammatical and positional information. Thus for the subtree (e) in Figure 5 , a feature can appear as:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 439, |
|
"end": 447, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "\u2026", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2329( s w ) ( ) ( ) ( )\u232a", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u2026", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A binary value, here we denote as \"left\" and \"right\", is used to indicate the direction of branch of a node from its head.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u2026", |
|
"sec_num": null |
|
}, |
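
{

"text": "Chain extraction itself is straightforward; the following is a minimal sketch, assuming each node stores its head and treating the order of a chain as its number of edges.\ndef chains(nodes, max_order=5, min_order=2):\n    # walk head-ward from every node, emitting each path whose number\n    # of edges falls within [min_order, max_order]\n    result = []\n    for start in nodes:\n        path, node = [start], start\n        while node.head is not None and len(path) - 1 < max_order:\n            node = node.head\n            path.append(node)\n            if len(path) - 1 >= min_order:\n                result.append(tuple(path))\n    return result",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "\u2026",

"sec_num": null

},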
|
{ |
|
"text": "We present our experimental results on two languages, English and Chinese. For English experiment, we use the Penn Treebank WSJ part. We convert the constituent structure in the Treebank into dependency structure with the tool Penn2Malt and the head-extraction rule identical with that in (Yamada and Matsumoto, 2003) . To align with previous work, we use the standard data division: section 02-21 for training, section 24 for development, and section 23 for testing. As our system assumes Part-of-Speech tags as input, we use MXPOST, a MaxEnt tagger (Ratnaparkhi, 1996) to automatically tag the test data. The tagger is trained on the same training data. For Chinese, we use the Chinese Treebank 5.0 with the following data division: files 1-270 and files 400-931 for training, files 271-300 for testing, and files 301-325 for development. We use Penn2Malt to convert the Treebank into dependency structure and the set of head-extraction rules for Chinese is identical with the one in (Zhang and Clark, 2008) . Moreover, for Chinese we use the gold standard Part-of-Speech tags in evaluation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 289, |
|
"end": 317, |
|
"text": "(Yamada and Matsumoto, 2003)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 551, |
|
"end": 570, |
|
"text": "(Ratnaparkhi, 1996)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 986, |
|
"end": 1009, |
|
"text": "(Zhang and Clark, 2008)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "We apply unlabeled attachment score (UAS) to measure the effectiveness of our method, which is the percentage of words that correctly identified their heads. For all experiments conducted, we use the parameters tuned in the development set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4." |
|
}, |
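
{

"text": "For reference, UAS can be computed as in the trivial sketch below, assuming one head index per word.\ndef uas(pred_heads, gold_heads):\n    # parallel lists of head indices, one entry per word\n    correct = sum(p == g for p, g in zip(pred_heads, gold_heads))\n    return 100.0 * correct / len(gold_heads)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation",

"sec_num": "4."

},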
|
{ |
|
"text": "We train two base parsers which are the reimplementation of the first-order and second-order parsers in the MSTParser (McDonald et al., 2005; McDonald and Pereira, 2006) with 10 iterations on English and Chinese training dataset. We use 30way cross-validation on the identical training dataset to provide training data for the rerankers. We use the following parameter setting for the feature sets throughout the experiments: for chaintype features, the maximum order of chains is set to 5; the left, right and bottom boundary for the 93.79 Table 1 . English UAS of previous work, our base parsers, and reranked results. \" + \": semisupervised parsers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 141, |
|
"text": "(McDonald et al., 2005;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 142, |
|
"end": 169, |
|
"text": "McDonald and Pereira, 2006)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 541, |
|
"end": 548, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "trimmed subtree features are 10, 10 and 5 respectively. For the main experiments we use K=50, the capacity of the list of parse tree candidates, in the training of the rerankers. Moreover, as it is not necessary to use identical value of K in the training and the test, we also conduct an experiment using miss-matching K values on Chinese dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "We show the experimental results for English in Table 1 . Each row in this table shows the UAS of the corresponding system. \"McDonald05\" and \"McDonald06\" stand for the first-order and second-order models in the MSTParser (McDonald et al., 2005; McDonald and Pereira, 2006) . \"Zhang11\" stands for the transition-based parser proposed in (Zhang and Nivre, 2011) . \"Koo10\" stands for the Model 1 in (Koo and Collins, 2010) which is a third-order model. \"Martins10\" stands for the turbo parser proposed in (Martins et al., 2010) . \"Order 1\" and \"Order 2\" are our reimplementation of MSTParser and are used as the base parsers for our reranking experiments. \"Order 1 reranked\" and \"Order 2 reranked\" are rerankers pipelined on the two base parsers. \"Koo08\", \"Chen09\" and \"Suzuki09\" are parsers using semisupervised methods (Koo et al., 2008; Chen et al., 2009; Suzuki et al., 2009) . In Table 2 we show the results for Chinese. \"Duan07\" and \"Yu08\" stands for the two probabilistic parsers in (Duan et al., 2007; Yu et al., 2008) . \"Chen09\" stands for the same system in Table 1 . Table 2 . Chinese UAS of previous work, our baseline parsers, and reranked results. \" + \": semi-supervised parsers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 221, |
|
"end": 244, |
|
"text": "(McDonald et al., 2005;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 245, |
|
"end": 272, |
|
"text": "McDonald and Pereira, 2006)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 336, |
|
"end": 359, |
|
"text": "(Zhang and Nivre, 2011)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 419, |
|
"text": "(Koo and Collins, 2010)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 502, |
|
"end": 524, |
|
"text": "(Martins et al., 2010)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 818, |
|
"end": 836, |
|
"text": "(Koo et al., 2008;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 837, |
|
"end": 855, |
|
"text": "Chen et al., 2009;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 856, |
|
"end": 876, |
|
"text": "Suzuki et al., 2009)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 987, |
|
"end": 1006, |
|
"text": "(Duan et al., 2007;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1007, |
|
"end": 1023, |
|
"text": "Yu et al., 2008)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 55, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 882, |
|
"end": 889, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1065, |
|
"end": 1072, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1075, |
|
"end": 1082, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "As we can see from the results, for English, the accuracy increased from 90.91% (\"Order 1\") to 92.50% (\"Order 1 reranked\") for the first-order parse reranker and from 91.88%(\"Order 2\") to 93.37%(\"Order 2 reranked\") for the second-order parse reranker. For Chinese, the accuracy increased from 85.44% to 87.63% for the first-order parse reranker, and for the second order case it increased from 87.39% to 89.16%. It shows that our reranking systems obtain the highest accuracy among supervised systems. For English, the reranker \"Order 2 reranked\" even slightly outperforms \"Martins10\", the turbo parser which to the best of our knowledge achieved the highest accuracy in Penn Treebank. Although our rerankers are beaten by the semi-supervised systems \"Suzuki09\" and \"Chen09\", but as our method is orthogonal with semi-supervising methods, it is possible to further improve the accuracy by combing these techniques.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We investigate the effects of the three feature types we proposed in this paper. We in turn activate each feature type and their combinations in the evaluation, while during the training we keep all types of feature due to the limitation of Table 3 . Influence of activated feature types on English test data. \"Ch\": chain-type features activated; \"Trim\": trimmed subtree features activated; \"Sib\": sibling subtree features activated.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 248, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "time. We conduct this experiment based on the system \"Order 2 reranked\" for English. The result is shown in Table 3 . The first row represents the system with all feature types activated; others are systems with corresponding feature sets activated in the evaluation phase. Here \"Ch\" stands for the chain-type feature set, \"Trim\" stands for the trimmed subtree feature set, and \"Sib\" stands for the sibling subtree feature set.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 115, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In Table 4 we investigate the influence of missmatched K values for the training and the evaluation. We traine a separate system for the Chinese dataset using \"Order 1\" with K=10 in the reranker's training and variant K values in the evaluation. The row \"Rerank\" shows that even for a small K used in the training, a better accuracy can be achieved with relatively larger K: the highest accuracy for this system is achieved when K=20 in the evaluation. We also show the oracle accuracies among the top-K candidates in the last row. In Table 5 we show the oracle accuracies among top-K candidates using the \"Order 2\" parser. The oracle accuracies can increase as much as absolutely 5.14% for English and absolutely 5.15% for Chinese compared with the 1-best accuracies. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 535, |
|
"end": 542, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "4.1" |
|
}, |
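
{

"text": "The oracle accuracies in Tables 4 and 5 correspond to selecting, for each sentence, the candidate with the highest UAS against the gold tree; a minimal sketch, reusing the hypothetical uas helper above.\ndef oracle_parse(kbest_heads, gold_heads):\n    # kbest_heads: one list of head indices per candidate parse\n    return max(kbest_heads, key=lambda heads: uas(heads, gold_heads))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Results",

"sec_num": "4.1"

},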
|
{ |
|
"text": "We show the training time and the parsing time of the base parser \"Order 2\" and the pipelined reranking system \"Order 2 reranked\" in Table 6 . Both systems run on a Xeon 2.4GHz CPU. We calculated the parsing time by running the systems on the first 100 sentences on the development data of the two languages. The reranking system takes twice the time than the base parser in the training. It is much slower than the base parser in parsing new sentences, which is mainly due to the time required for outputting the 50-best candidates list; this can be seen as an unavoidable trade-off to obtain high accuracy in the reranking framework.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 140, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Efficiency", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "McDonald (2005, 2006) proposed an edge-factored parser and a second-order parser that both trained by discriminative online learning methods. Huang (2005) proposed the efficient algorithm for produce K-best list for graph-based parsers, which add a factor of to the parsing complexity of the base parser. Sangati (2009) has shown that a discriminative parser is very effective at filtering out bad parses from a factorized search space which agreed with the conclusion in (Hall, 2007) that an edge-factored model can reach good oracle performance when generating relatively small Kbest list. Successful results have been reported for constituent parse reranking in (Collins, 2000; Charniak and Johnson, 2005; Huang, 2008) , in which feature sets defined on constituent parses have been proposed that are able to capture rich non-local information. These feature sets, however, cannot be directly applied to parse tree under dependency grammar. Attempts have been made to use similar feature sets in dependency parse reranking, which include the work in (Hall, 2007) that defined a feature set similar with the one in (Charniak and Johnson, 2005) . Hayashi in (Hayashi et al., 2011) presented a forest reranking model which applied third-order factorizations emulating Model 1 and Model 2 in (Koo and Collins, 2010) on the search space of the reranker.", |
|
"cite_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 15, |
|
"text": "(2005,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 16, |
|
"end": 21, |
|
"text": "2006)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 142, |
|
"end": 154, |
|
"text": "Huang (2005)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 319, |
|
"text": "Sangati (2009)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 472, |
|
"end": 484, |
|
"text": "(Hall, 2007)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 665, |
|
"end": 680, |
|
"text": "(Collins, 2000;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 681, |
|
"end": 708, |
|
"text": "Charniak and Johnson, 2005;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 709, |
|
"end": 721, |
|
"text": "Huang, 2008)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1053, |
|
"end": 1065, |
|
"text": "(Hall, 2007)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1117, |
|
"end": 1145, |
|
"text": "(Charniak and Johnson, 2005)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1159, |
|
"end": 1181, |
|
"text": "(Hayashi et al., 2011)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1291, |
|
"end": 1314, |
|
"text": "(Koo and Collins, 2010)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "We have proposed a novel feature set for dependency parse reranking that successfully extracts complex structures for collecting linguistic evidence, and efficient feature back-off strategy is proposed to relieve data sparsity. Through experiment we confirmed the effectiveness and efficiency of our method, and observed significant improvement over the base system as well as other known systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "To further improve the proposed method, we mention several possibilities for our future work. An advantage of the reranking framework we used is that it has no overlap with many of the semisupervised parsing methods, such as word clustering (Koo et al., 2008) and subtree features integration using auto-parsed data (Chen et al., 2009) . We are interested in the performance of our system when combining with these methods. Another interesting approach is to incorporate information from large-scale structured data, such as case frame (Kawahara and Kurohashi, 2006) , which provides lexical predicate-argument selection preference and is an effective way to help to overcome data sparse problem in discriminative learning. While the relatively complex data structure in the case frame prohibits its incorporation in any existing factorization methods, it can be well utilized in the reranking framework with the proposed feature set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 259, |
|
"text": "(Koo et al., 2008)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 316, |
|
"end": 335, |
|
"text": "(Chen et al., 2009)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 536, |
|
"end": 566, |
|
"text": "(Kawahara and Kurohashi, 2006)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6." |
|
}, |
|
|
{ |
|
"text": "In practice we use smaller local window with fixed size.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Coarse-to-fine Nbest Parsing and MaxEnt Discriminative Reranking", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Charniak and M. Johnson. 2005. Coarse-to-fine N- best Parsing and MaxEnt Discriminative Reranking. In Proceedings of the 43rd ACL.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Discriminative Reranking for Natural Language Parsing", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Collins. 2000. Discriminative Reranking for Natural Language Parsing. In Proceedings of the ICML.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 7th EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Collins. 2002. Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms. In Proceedings of the 7th EMNLP, pages 1-8.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Improving Dependency Parsing with Subtrees from Auto-Parsed Data", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Kazama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Uchimoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Torisawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of EMNLP2009", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "570--579", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. Chen, J. Kazama, K. Uchimoto and K. Torisawa. 2009. Improving Dependency Parsing with Subtrees from Auto-Parsed Data, In Proceedings of EMNLP2009, pages 570-579.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Probabilistic Models for Action-based Chinese Dependency Parsing", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Duan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of ECML/ECPPKDD", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "X. Duan, J. Zhao, and B. Xu. 2007. Probabilistic Models for Action-based Chinese Dependency Parsing. In Proceedings of ECML/ECPPKDD.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Three New Probabilistic Models for Dependency Parsing: An Exploration", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the 16th COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "340--345", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Eisner. 1996. Three New Probabilistic Models for Dependency Parsing: An Exploration. In Proceedings of the 16th COLING, pages 340-345.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "K-best Spanning Tree Parsing", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Hall. 2007. K-best Spanning Tree Parsing. In Proceedings of ACL 2007.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Third-order Variational Reranking on Packed-Shared Dependency Forests", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Hayashi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Watanabe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Asahara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of EMNLP 2011", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1479--1488", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Hayashi, T. Watanabe, M. Asahara and Y. Matsumoto. 2011. Third-order Variational Reranking on Packed-Shared Dependency Forests. In Proceedings of EMNLP 2011, pages 1479-1488.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Better K-best Parsing", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the IWPT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "53--64", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Huang and D. Chiang. 2005. Better K-best Parsing. In Proceedings of the IWPT, pages 53-64.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Forest reranking: Discriminative Parsing with Non-local Features", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 46th ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "586--594", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Huang. 2008. Forest reranking: Discriminative Parsing with Non-local Features. In Proceedings of the 46th ACL, pages 586-594.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Case Frame Compilation from the Web Using High performance Computing", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kawahara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Kurohashi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Kawahara and S. Kurohashi. 2006. Case Frame Compilation from the Web Using High performance Computing. In Proceedings of the 5th International Conference on Language Resources and Evaluation.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Simple Semi-supervised Dependency Parsing", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Koo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Carreras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 46th ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "595--603", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Koo, X. Carreras, and M. Collins. 2008. Simple Semi-supervised Dependency Parsing. In Proceedings of the 46th ACL, pages 595-603.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Efficient Third-order Dependency Parsers", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Koo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--11", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Koo and M. Collins. 2010. Efficient Third-order Dependency Parsers. In Proceedings of the 48th ACL, pages 1-11.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Turbo Parsers: Dependency Parsing by Approximate Variational Inference", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"F T" |
|
], |
|
"last": "Martins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Xing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of EMNLP 2010", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "34--44", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. F. T. Martins, N. A. Smith, and E. P. Xing. 2010. Turbo Parsers: Dependency Parsing by Approximate Variational Inference. In Proceedings of EMNLP 2010, pages 34-44.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Online Large-Margin Training of Dependency Parsers", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "91--98", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. McDonald, K. Crammer, and F. Pereira. 2005. Online Large-Margin Training of Dependency Parsers. In Proceedings of the 43rd ACL, pages 91- 98.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Online Learning of Approximate Dependency Parsing Algorithms", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 11th EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "81--88", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. McDonald and F. Pereira. 2006. Online Learning of Approximate Dependency Parsing Algorithms. In Proceedings of the 11th EACL, pages 81-88.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A Maximum Entropy Model for Part-Of-Speech Tagging", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ratnaparkhi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the 1st EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "133--142", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Ratnaparkhi. 1996. A Maximum Entropy Model for Part-Of-Speech Tagging. In Proceedings of the 1st EMNLP, pages 133-142.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A Generative Re-ranking Model for Dependency Parsing", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Sangati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Zuidema", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Bod", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 11th IWPT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "238--241", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. Sangati, W. Zuidema, and R. Bod. 2009. A Generative Re-ranking Model for Dependency Parsing. In Proceedings of the 11th IWPT, pages 238-241.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "An Empirical Study of Semi-supervised Structured Conditional Models for Dependency Parsing", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Suzuki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Isozaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Carreras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of EMNLP 2009", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "551--560", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Suzuki, H. Isozaki, X. Carreras, and M. Collins. 2009. An Empirical Study of Semi-supervised Structured Conditional Models for Dependency Parsing. In Proceedings of EMNLP 2009, pages 551-560.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Statistical Dependency Analysis with Support Vector Machines", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Yamada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the IWPT 2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "195--206", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Yamada and Y. Matsumoto. 2003. Statistical Dependency Analysis with Support Vector Machines. In Proceedings of the IWPT 2003, pages 195-206.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Chinese Dependency Parsing with Large Scale Automatically Constructed Case Structures", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kawahara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Kurohashi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of Coling", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1049--1056", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Yu, D. Kawahara, and S. Kurohashi. 2008. Chinese Dependency Parsing with Large Scale Automatically Constructed Case Structures. In Proceedings of Coling 2008, pages 1049-1056.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A Tale of Two Parsers: Investigating and Combining Graph-based and Transition-based Dependency Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of EMNLP 2008", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "562--571", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Zhang and S. Clark. 2008. A Tale of Two Parsers: Investigating and Combining Graph-based and Transition-based Dependency Parsing. In Proceedings of EMNLP 2008, pages 562-571.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Transition-based Dependency Parsing with Rich Non-local Features", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of ACL 2011", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "188--193", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Zhang and J. Nivre. 2011. Transition-based Dependency Parsing with Rich Non-local Features. In Proceedings of ACL 2011, page 188-193.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF3": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "A complete span for the clause \"transfer money from the new funds to other investment funds\" where we omitted some of the details. This structure functions as a relatively independent and complete component in the entire parse tree. Features are encoded over the tuples: <transfer, -,s 2 >, <transfer, s 2 ,s 1 >, <transfer, s 1 ,s 0 >, <transfer, s 0 ,->." |
|
} |
|
} |
|
} |
|
} |