{ "paper_id": "P14-1003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:06:48.082437Z" }, "title": "Text-level Discourse Dependency Parsing", "authors": [ { "first": "Sujian", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "MOE", "institution": "Peking University", "location": { "country": "China" } }, "email": "lisujian@pku.edu.cn" }, { "first": "Liang", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "MOE", "institution": "Peking University", "location": { "country": "China" } }, "email": "" }, { "first": "Ziqiang", "middle": [], "last": "Cao", "suffix": "", "affiliation": { "laboratory": "MOE", "institution": "Peking University", "location": { "country": "China" } }, "email": "ziqiangyeah@pku.edu.cn" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong Polytechnic University", "location": { "settlement": "HongKong" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Previous researches on Text-level discourse parsing mainly made use of constituency structure to parse the whole document into one discourse tree. In this paper, we present the limitations of constituency based discourse parsing and first propose to use dependency structure to directly represent the relations between elementary discourse units (EDUs). The state-of-the-art dependency parsing techniques, the Eisner algorithm and maximum spanning tree (MST) algorithm, are adopted to parse an optimal discourse dependency tree based on the arcfactored model and the large-margin learning techniques. Experiments show that our discourse dependency parsers achieve a competitive performance on text-level discourse parsing.", "pdf_parse": { "paper_id": "P14-1003", "_pdf_hash": "", "abstract": [ { "text": "Previous researches on Text-level discourse parsing mainly made use of constituency structure to parse the whole document into one discourse tree. In this paper, we present the limitations of constituency based discourse parsing and first propose to use dependency structure to directly represent the relations between elementary discourse units (EDUs). The state-of-the-art dependency parsing techniques, the Eisner algorithm and maximum spanning tree (MST) algorithm, are adopted to parse an optimal discourse dependency tree based on the arcfactored model and the large-margin learning techniques. Experiments show that our discourse dependency parsers achieve a competitive performance on text-level discourse parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "It is widely agreed that no units of the text can be understood in isolation, but in relation to their context. Researches in discourse parsing aim to acquire such relations in text, which is fundamental to many natural language processing applications such as question answering, automatic summarization and so on.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One important issue behind discourse parsing is the representation of discourse structure. Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) , one of the most influential discourse theories, posits a hierarchical generative tree representation, as illustrated in Figure 1 . The leaves of a tree correspond to contiguous text spans called Elementary Discourse Units (EDUs) 1 . 
The adjacent EDUs are combined into 1 EDU segmentation is a relatively trivial step in discourse parsing. Since our work focus here is not EDU segmentation but discourse parsing. We assume EDUs are already known.", "cite_spans": [ { "start": 125, "end": 150, "text": "(Mann and Thompson, 1988)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 273, "end": 281, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "the larger text spans by rhetorical relations (e.g., Contrast and Elaboration) and the larger text spans continue to be combined until the whole text constitutes a parse tree. The text spans linked by rhetorical relations are annotated as either nucleus or satellite depending on how salient they are for interpretation. It is attractive and challenging to parse the whole text into one tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Since such a hierarchical discourse tree is analogous to a constituency based syntactic tree except that the constituents in the discourse trees are text spans, previous researches have explored different constituency based syntactic parsing techniques (eg. CKY and chart parsing) and various features (eg. length, position et al.) for discourse parsing (Soricut and Marcu, 2003; Joty et al., 2012; Reitter, 2003; LeThanh et al., 2004; Baldridge and Lascarides, 2005; Subba and Di Eugenio, 2009; Sagae, 2009; Hernault et al., 2010b; Feng and Hirst, 2012) . However, the existing approaches suffer from at least one of the following three problems. First, it is difficult to design a set of production rules as in syntactic parsing, since there are no determinate generative rules for the interior text spans. Second, the different levels of discourse units (e.g. EDUs or larger text spans) occurring in the generative process are better represented with different features, and thus a uniform framework for discourse analysis is hard to develop. Third, to reduce the time complexity of the state-of-the-art constituency based parsing techniques, the approximate parsing approaches are prone to trap in local maximum.", "cite_spans": [ { "start": 302, "end": 331, "text": "(eg. length, position et al.)", "ref_id": null }, { "start": 354, "end": 379, "text": "(Soricut and Marcu, 2003;", "ref_id": "BIBREF22" }, { "start": 380, "end": 398, "text": "Joty et al., 2012;", "ref_id": null }, { "start": 399, "end": 413, "text": "Reitter, 2003;", "ref_id": "BIBREF20" }, { "start": 414, "end": 435, "text": "LeThanh et al., 2004;", "ref_id": "BIBREF11" }, { "start": 436, "end": 467, "text": "Baldridge and Lascarides, 2005;", "ref_id": "BIBREF0" }, { "start": 468, "end": 495, "text": "Subba and Di Eugenio, 2009;", "ref_id": "BIBREF23" }, { "start": 496, "end": 508, "text": "Sagae, 2009;", "ref_id": "BIBREF21" }, { "start": 509, "end": 532, "text": "Hernault et al., 2010b;", "ref_id": "BIBREF9" }, { "start": 533, "end": 554, "text": "Feng and Hirst, 2012)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose to adopt the dependency structure in discourse representation to overcome the limitations mentioned above. Here is the basic idea: the discourse structure consists of EDUs which are linked by the binary, asymmetrical relations called dependency relations. 
A dependency relation holds between a subordinate EDU called the dependent, and another EDU on which it depends called the head, as illustrated in Figure 2 . Each EDU has one head. So, the dependency structure can be seen as a set of headdependent links, which are labeled by functional relations. Now, we can analyze the relations between EDUs directly, without worrying about any interior text spans. Since dependency trees contain much fewer nodes and on average they are simpler than constituency based trees, the current dependency parsers can have a relatively low computational complexity. Moreover, concerning linearization, it is well known that dependency structures can deal with non-projective relations, while constituency-based models need the addition of complex mechanisms like transformations, movements and so on. In our work, we adopt the graph based dependency parsing techniques learned from large sets of annotated dependency trees. The Eisner (1996) algorithm and maximum spanning tree (MST) algorithm are used respectively to parse the optimal projective and non-projective dependency trees with the large-margin learning technique (Crammer and Singer, 2003) . To the best of our knowledge, we are the first to apply the dependency structure and introduce the dependency parsing techniques into discourse analysis.", "cite_spans": [ { "start": 1241, "end": 1254, "text": "Eisner (1996)", "ref_id": "BIBREF6" }, { "start": 1438, "end": 1464, "text": "(Crammer and Singer, 2003)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 429, "end": 437, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows. Section 2 formally defines discourse dependency structure and introduces how to build a discourse dependency treebank from the existing RST corpus. Section 3 presents the discourse parsing approach based on the Eisner and MST algorithms. Section 4 elaborates on the large-margin learning technique as well as the features we use. Section 5 discusses the experimental results. Section 6 introduces the related work and Section 7 concludes the paper. e 2 e 3 e 1 e 2 e 3 e 1 e 2 e 3 e 1 e 2 e 3 e 1 e 2 e 3 e 1 e 2 e 3 e 1 e 2 e 3 e 1 e 2 e 3 1' 2' 3' 4' 5' 6' 7' 8' 9' e 1 e 2 e 3 e 0 e 0 e 0 e 0 e 0 e 0 e 0 e 0 e 0 Figure 2: Discourse Dependency Tree Structures (e 1 ,e 2 and e 3 denote three EDUS, and the directed arcs denote one dependency relations. The artificial e 0 is also displayed here. )", "cite_spans": [], "ref_spans": [ { "start": 497, "end": 623, "text": "e 2 e 3 e 1 e 2 e 3 e 1 e 2 e 3 e 1 e 2 e 3 e 1 e 2 e 3 e 1 e 2 e 3 e 1 e 2 e 3 e 1 e 2 e 3 1' 2' 3' 4'", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Discourse Dependency Structure and Tree Bank", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Similar to the syntactic dependency structure defined by McDonald (2005a McDonald ( , 2005b , we insert an artificial EDU e 0 in the beginning for each document and label the dependency relation linking from e 0 as ROOT. This treatment will sim-plify both formal definitions and computational implementations. Normally, we assume that each EDU should have one and only one head except for e 0 . A labeled directed arc is used to represent the dependency relation from one head to its dependent. 
Then, the discourse dependency structure can be formalized as a labeled directed graph, where nodes correspond to EDUs and labeled arcs correspond to labeled dependency relations.", "cite_spans": [ { "start": 57, "end": 72, "text": "McDonald (2005a", "ref_id": "BIBREF15" }, { "start": 73, "end": 91, "text": "McDonald ( , 2005b", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Dependency Structure", "sec_num": "2.1" }, { "text": "We assume that the text 2 T is composed of n+1 EDUs including the artificial e 0 . That is T=e 0 e 1 e 2 \u2026 e n . Let R={r 1 ,r 2 , \u2026 ,r m } denote a finite set of functional relations that hold between two EDUs. Then a discourse dependency graph can be denoted by G=<V, A>, where V denotes a set of nodes and A denotes a set of labeled directed arcs, such that for the text T=e 0 e 1 e 2 \u2026 e n and the label set R the following holds:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Dependency Structure", "sec_num": "2.1" }, { "text": "(1) V = { e 0 , e 1 , e 2 , \u2026 e n } (2) A \u2286 V \u00d7 R \u00d7 V, where <e i , r, e j > \u2208 A represents an arc from the head e i to the dependent e j labeled with the relation r. (3) If <e i , r, e j > \u2208 A then <e k , r', e j > \u2209 A for all k \u2260 i (4) If <e i , r, e j > \u2208 A then <e i , r', e j > \u2209 A for all r' \u2260 r", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Dependency Structure", "sec_num": "2.1" }, { "text": "The third condition assures that each EDU has one and only one head and the fourth tells that only one kind of dependency relation holds between two EDUs. According to the definition, we illustrate all the 9 possible unlabeled dependency trees for a text containing three EDUs in Figure 2 . The dependency trees 1' to 7' are projective while 8' and 9' are non-projective with crossing arcs.", "cite_spans": [], "ref_spans": [ { "start": 280, "end": 288, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Discourse Dependency Structure", "sec_num": "2.1" }, { "text": "To automatically conduct discourse dependency parsing, constructing a discourse dependency treebank is fundamental. It is costly to manually construct such a treebank from scratch. Fortunately, the RST Discourse Treebank (RST-DT) (Carlson et al., 2001) is an available resource that can help.", "cite_spans": [ { "start": 226, "end": 247, "text": "(Carlson et al., 2001", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Our Discourse Dependency Treebank", "sec_num": "2.2" }, { "text": "An RST tree constitutes a hierarchical structure for one document through rhetorical relations. A total of 110 fine-grained relations (e.g. Elaboration-part-whole and List) were used for tagging RST-DT. They can be categorized into 18 classes (e.g. Elaboration and Joint). All these relations can be hypotactic (\"mononuclear\") or paratactic (\"multi-nuclear\"). A hypotactic relation holds between a nucleus span and an adjacent satellite span, while a paratactic relation connects two or more equally important adjacent nucleus spans. For convenience of computation, we convert the n-ary (n>2) RST trees 3 to binary trees by adding a new node for the latter n-1 nodes and assume each relation is connected to only one nucleus 4 . This departure from the original theory is not such a major step as it may appear, since any nucleus is known to contribute to the essential meaning. 
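To make the formal definition in Section 2.1 concrete, the well-formedness conditions (1)-(4) can be checked with a few lines of code. The following is a minimal Python sketch for illustration only (it is not code accompanying the paper, and the function name and the example arcs are hypothetical): a discourse dependency structure is stored as a set of <head, relation, dependent> triples over the EDU indices 0..n, with e 0 as the artificial root.

```python
# Illustrative sketch only (not the authors' code): a discourse dependency
# structure over EDUs e_0 .. e_n, stored as (head, relation, dependent) arcs.

def is_well_formed(n, arcs):
    """Check conditions (1)-(4) of Section 2.1: arc indices stay inside
    V x R x V, e_0 never receives a head, and every EDU e_1..e_n has exactly
    one incoming arc (hence a unique head and a unique relation)."""
    head_of = {}                       # dependent index -> (head, relation)
    for head, rel, dep in arcs:
        if not (0 <= head <= n and 1 <= dep <= n):
            return False               # outside V, or e_0 used as a dependent
        if dep in head_of:
            return False               # a second head or relation: violates (3)/(4)
        head_of[dep] = (head, rel)
    # Note: (1)-(4) alone do not enforce acyclicity; the trees of Figure 2
    # additionally require that every EDU be reachable from e_0.
    return len(head_of) == n           # each of e_1..e_n has exactly one head

# One possible tree over three EDUs (relation labels chosen for illustration):
# e_0 -ROOT-> e_1, e_1 -Elaboration-> e_2, e_1 -Elaboration-> e_3
arcs = [(0, "ROOT", 1), (1, "Elaboration", 2), (1, "Elaboration", 3)]
print(is_well_formed(3, arcs))         # True
```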
Now, each RST tree can be seen as a headed constituency based binary tree where the nuclei are heads and the children of each node are linearly ordered. Given three EDUs 5 , Figure 1 shows the 8 possible headed constituency based trees where the superscript * denotes the heads (nuclei). We use dependency trees to simulate the headed constituency based trees.", "cite_spans": [], "ref_spans": [ { "start": 1057, "end": 1065, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Our Discourse Dependency Treebank", "sec_num": "2.2" }, { "text": "Contrasting Figure 1 with Figure 2 , we use dependency tree 1' to simulate binary trees 1 and 8, and dependency trees 2'-7' to simulate binary trees 2-7 correspondingly. The rhetorical relations in RST trees are kept as the functional relations which link the two EDUs in dependency trees. With this kind of conversion, we can get our discourse dependency treebank. It is worth noting that the non-projective trees like 8' and 9' do not exist in our dependency treebank, though they are eligible according to the definition of discourse dependency graph.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 1", "ref_id": null }, { "start": 26, "end": 34, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Our Discourse Dependency Treebank", "sec_num": "2.2" }, { "text": "As stated above, T=e 0 e 1 \u2026e n represents an input text (document) where e i denotes the i th EDU of T. We use V to denote all the EDU nodes and V\u00d7R\u00d7V -0 (V -0 =V-{e 0 }) to denote all the possible discourse dependency arcs. The goal of discourse dependency parsing is to parse an optimal spanning tree from V\u00d7R\u00d7V -0 . Here we follow the arc-factored method and define the score of a dependency tree as the sum of the scores of all the arcs in the tree. Thus, the optimal dependency tree for T is a spanning tree with the highest score, obtained through the function DT(T,w), where \u03bb(e i ,r,e j ) denotes the score of the arc <e i ,r,e j >, which is calculated according to its feature representation f(e i ,r,e j ) and a weight vector w:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "3.1" }, { "text": "DT(T,w) = argmax G T \u2286 V\u00d7R\u00d7V -0 s(T,G T ), where s(T,G T ) = \u2211 <e i ,r,e j > \u2208 G T \u03bb(e i ,r,e j ) = \u2211 <e i ,r,e j > \u2208 G T w \u00b7 f(e i ,r,e j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "3.1" }, { "text": "Next, two basic problems need to be solved: how to find the dependency tree with the highest score for T given all the arc scores (i.e. a parsing problem), and how to learn and compute the scores of arcs according to a set of arc features (i.e. a learning problem).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "3.1" }, { "text": "The rest of this section addresses the first problem. Given the text T, we first reduce the multi-digraph composed of all possible arcs to a digraph. The digraph keeps, between any two nodes, only the arc which satisfies \u03bb(e i ,e j ) = max r \u03bb(e i ,r,e j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "3.1" }, { "text": ". Thus, we can proceed with a reduction from labeled parsing to unlabeled parsing. Next, two algorithms, i.e. 
the Eisner algorithm and MST algorithm, are presented to parse the projective and non-projective unlabeled dependency trees respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ")", "sec_num": null }, { "text": "It is well known that projective dependency parsing can be handled with the Eisner algorithm (1996) which is based on the bottom-up dynamic programming techniques with the time complexity of O(n 3 ). The basic idea of the Eisner algorithm is to parse the left and right dependents of an EDU independently and combine them at a later stage. This reduces the overhead of indexing heads. Only two binary variables, i.e. c and d, are required to specify whether the heads occur leftmost or rightmost and whether an item is complete.", "cite_spans": [ { "start": 76, "end": 99, "text": "Eisner algorithm (1996)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Eisner Algorithm", "sec_num": "3.2" }, { "text": "Input: Text T=e 0 e 1 \u2026 e n ; Arc scores \uf06c(e i ,e j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Eisner(T, \uf06c)", "sec_num": null }, { "text": "1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Eisner(T, \uf06c)", "sec_num": null }, { "text": "Instantiate E[i, i, d, c]=0.0 for all i, d, c 2 For m := 1 to n 3 For i := 1 to n 4 j = i + m 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Eisner(T, \uf06c)", "sec_num": null }, { "text": "if j> n then break; 6 # Create subgraphs with c=0 by adding arcs 7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Eisner(T, \uf06c)", "sec_num": null }, { "text": "E[i, j, 0, 0]=max i\uf0a3q\uf0a3j (E[i,q,1,1]+E[q+1,j,0,1]+\uf06c(e j ,e i )) 8 E[i, j, 1, 0]=max i\uf0a3q\uf0a3j (E[i,q,1,1]+E[q+1,j,0,1]+\uf06c(e i ,e j )) 9 # Add corresponding left/right subgraphs 10 E[i, j, 0, 1]=max i\uf0a3q\uf0a3j (E[i,q,0,1]+E[q,j,0,0] 11 E[i, j, 1, 1]=max i\uf0a3q\uf0a3j (E[i,q,1,0]+E[q,j,1,1])", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Eisner(T, \uf06c)", "sec_num": null }, { "text": "Figure 3: Eisner Algorithm Figure 3 shows the pseudo-code of the Eisner algorithm. A dynamic programming table E[i,j,d,c] is used to represent the highest scored subtree spanning e i to e j . d indicates whether e i is the head (d=1) or e j is head (d=0). c indicates whether the subtree will not take any more dependents (c=1) or it needs to be completed (c=0). The algorithm begins by initializing all lengthone subtrees to a score of 0.0. In the inner loop, the first two steps (Lines 7 and 8) are to construct the new dependency arcs by taking the maximum over all the internal indices (i\uf0a3q\uf0a3j) in the span, and calculating the value of merging the two subtrees and adding one new arc. The last two steps (Lines 10 and 11) attempt to achieve an optimal left/right subtree in the span by adding the corresponding left/right subtree to the arcs that have been added previously. This algorithm considers all the possible subtrees. We can then get the optimal dependency tree with the score E[0,n,1,1] .", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 35, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Eisner(T, \uf06c)", "sec_num": null }, { "text": "As the bottom-up Eisner Algorithm must maintain the nested structural constraint, it cannot parse the non-projective dependency trees like 8' and 9' in Figure 2 . However, the non-projective dependency does exist in real discourse. 
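Before turning to the non-projective case, the chart recurrences of Figure 3 can be made concrete with the short Python sketch below. It is an illustration under our own assumptions, not the authors' implementation: it takes a precomputed unlabeled score matrix score[i][j] for the arc from head e i to dependent e j (i.e. the arc scores after the labeled-to-unlabeled reduction described in Section 3.1) and returns only the score of the best projective tree; backpointers would have to be stored to recover the tree itself.

```python
def eisner_best_score(score, n):
    """Minimal sketch of the unlabeled Eisner recurrences (illustration only).
    score[i][j]: score of the arc from head e_i to dependent e_j, 0 <= i, j <= n."""
    NEG = float("-inf")
    # E[i][j][d][c]: best score of a span e_i..e_j; d=1 means e_i is the head,
    # d=0 means e_j is the head; c=1 means complete, c=0 means the arc between
    # e_i and e_j has just been added and the span still needs to be completed.
    E = [[[[NEG, NEG], [NEG, NEG]] for _ in range(n + 1)] for _ in range(n + 1)]
    for i in range(n + 1):
        for d in range(2):
            for c in range(2):
                E[i][i][d][c] = 0.0
    for m in range(1, n + 1):                   # span length
        for i in range(n + 1 - m):
            j = i + m
            # incomplete spans: merge two complete halves and add one new arc
            best = max(E[i][q][1][1] + E[q + 1][j][0][1] for q in range(i, j))
            E[i][j][0][0] = best + score[j][i]  # arc e_j -> e_i
            E[i][j][1][0] = best + score[i][j]  # arc e_i -> e_j
            # complete spans: extend an incomplete span with a complete subtree
            E[i][j][0][1] = max(E[i][q][0][1] + E[q][j][0][0] for q in range(i, j))
            E[i][j][1][1] = max(E[i][q][1][0] + E[q][j][1][1] for q in range(i + 1, j + 1))
    return E[0][n][1][1]                        # best projective tree headed by e_0
```

As noted above, such a nested chart cannot produce crossing arcs, which do, however, arise in real text, as the following example illustrates.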
For example, the earlier text mainly talks about the topic A with mentioning the topic B, while the latter text gives a supplementary explanation for the topic B. This example can constitute a nonprojective tree and its pictorial diagram is exhibited in Figure 4 . Following the work of McDonald (2005b), we formalize discourse dependency parsing as searching for a maximum spanning tree (MST) in a directed graph. Chu and Liu (1965) and Edmonds (1967) independently proposed the virtually identical algorithm named the Chu-Liu/Edmonds algorithm, for finding MSTs on directed graphs (McDonald et al. 2005b) . Figure 5 shows the details of the Chu-Liu/Edmonds algorithm for discourse parsing. Each node in the graph greedily selects the incoming arc with the highest score. If one tree results, the algorithm ends. Otherwise, there must exist a cycle. The algorithm contracts the identified cycle into a single node and recalculates the scores of the arcs which go in and out of the cycle. Next, the algorithm recursively call itself on the contracted graph. Finally, those arcs which go in or out of one cycle will recover themselves to connect with the original nodes in V. Like McDonald et al. (2005b) , we adopt an efficient implementation of the Chu-Liu/Edmonds algorithm that is proposed by Tarjan (1997) with O(n 2 ) time complexity.", "cite_spans": [ { "start": 647, "end": 665, "text": "Chu and Liu (1965)", "ref_id": "BIBREF3" }, { "start": 670, "end": 684, "text": "Edmonds (1967)", "ref_id": null }, { "start": 815, "end": 838, "text": "(McDonald et al. 2005b)", "ref_id": "BIBREF16" }, { "start": 1412, "end": 1435, "text": "McDonald et al. (2005b)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 152, "end": 160, "text": "Figure 2", "ref_id": null }, { "start": 486, "end": 494, "text": "Figure 4", "ref_id": "FIGREF2" }, { "start": 841, "end": 849, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Maximum Spanning Tree Algorithm", "sec_num": "3.3" }, { "text": "Input: Text T=e 0 e 1 \u2026 e n ; Arc scores \uf06c(e i ,e j ) 1 A' = {| e i = argmax \uf06c(e i ,e j ); 1\uf0a3j\uf0a3|V|}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": "2 G' = (V, A') 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": "If G' has no cycles, then return G' 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": "Find an arc set A C that is a cycle in G' 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": " = contract(G, A C , \uf06c) 6 G = (V, A)=Chu-Liu-Edmonds(G C , \uf06c) 7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": "For the arc where ep(e i ,e C )=e j : 8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": "A=A\uf0c8A C \uf0c8{, } 9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": "For the arc where ep(e C ,e i )=e j : 10", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": "A=A\uf0c8{}-{} 11", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": "V = V 12 
Return G Contract(G=(V,A), A C , \uf06c)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": "1 Let G C be the subgraph of G excluding nodes in C 2 Add a node e C to G C denoting the cycle C 3 For e j \uf0ceV-C :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": "\uf024e i \uf0ceC \uf0ceA 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": "Add arc to G C with ep(e C ,e j )= \uf06c(e i ,e j ) 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": "\uf06c(e C ,e j ) = \uf06c(ep(e C ,e j ),e j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": "6 For e i \uf0ceV-C: \uf024e j \uf0ceC (e i ,e j )\uf0ceA 7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": "Add arc to G C with ep(e i ,e C )= = [\uf06c(e i ,e j )-\uf06c(a(e i ),e j )] 8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": "\uf06c(e i ,e C ) =\uf06c(e i ,e j )-\uf06c(a(e i ),e j )+score(C) 9 Return ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chu-Liu-Edmonds(G, \uf06c)", "sec_num": null }, { "text": "In Section 3, we assume that the arc scores are available. In fact, the score of each arc is calculated as a linear combination of feature weights. Thus, we need to determine the features for arc representation first. With referring to McDonald et al. (2005a; 2005b) , we use the Margin Infused Relaxed Algorithm (MIRA) to learn the feature weights based on a training set of documents", "cite_spans": [ { "start": 236, "end": 259, "text": "McDonald et al. 
(2005a;", "ref_id": "BIBREF15" }, { "start": 260, "end": 266, "text": "2005b)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "4" }, { "text": "annotated with dependency structures \uf028 \uf029 \uf07b \uf07d 1 , N i i T \uf03d i y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "4" }, { "text": "where y i denotes the correct dependency tree for the text T i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": "4" }, { "text": "Following (Feng and Hirst, 2012; Lin et al., 2009; Hernault et al., 2010b) , we explore the following 6 feature types combined with relations to represent each labeled arc .", "cite_spans": [ { "start": 10, "end": 32, "text": "(Feng and Hirst, 2012;", "ref_id": "BIBREF7" }, { "start": 33, "end": 50, "text": "Lin et al., 2009;", "ref_id": "BIBREF12" }, { "start": 51, "end": 74, "text": "Hernault et al., 2010b)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.1" }, { "text": "(1) WORD: The first one word, the last one word, and the first bigrams in each EDU, the pair of the two first words and the pair of the two last words in the two EDUs are extracted as features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.1" }, { "text": "(2) POS: The first one and two POS tags in each EDU, and the pair of the two first POS tags in the two EDUs are extracted as features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.1" }, { "text": "(3) Position: These features concern whether the two EDUs are included in the same sentence, and the positions where the two EDUs are located in one sentence, one paragraph, or one document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.1" }, { "text": "(4) Length: The length of each EDU. (5) Syntactic: POS tags of the dominating nodes as defined in Soricut and Marcu (2003) are extracted as features. We use the syntactic trees from the Penn Treebank to find the dominating nodes,. (6) Semantic similarity: We compute the semantic relatedness between the two EDUs based on WordNet. The word pairs are extracted from (e i , e j ) and their similarity is calculated. Then, we can get a weighted complete bipartite graph where words are deemed as nodes and similarity as weights. From this bipartite graph, we get the maximum weighted matching and use the averaged weight of the matches as the similarity between e i and e j . In particular, we use path_similarity, wup_similarity, res_similarity, jcn_similarity and lin_similarity provided by the nltk.wordnet.similarity (Bird et. al., 2009) package for calculating word similarity.", "cite_spans": [ { "start": 98, "end": 122, "text": "Soricut and Marcu (2003)", "ref_id": "BIBREF22" }, { "start": 818, "end": 838, "text": "(Bird et. al., 2009)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.1" }, { "text": "As for relations, we experiment two sets of relation labels from RST-DT. One is composed of 19 coarse-grained relations and the other 111 fine-grained relations 6 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.1" }, { "text": "Margin Infused Relaxed Algorithm (MIRA) is an online algorithm for multiclass classification and is extended by Taskar et al. 
(2003) to cope with structured classification.", "cite_spans": [ { "start": 112, "end": 132, "text": "Taskar et al. (2003)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "MIRA based Learning", "sec_num": "4.2" }, { "text": "MIRA Input: a training set \uf028 \uf029 \uf07b \uf07d 1 , N i i T \uf03d i y 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA based Learning", "sec_num": "4.2" }, { "text": "w 0 = 0; v = 0; j = 0 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA based Learning", "sec_num": "4.2" }, { "text": "For iter := 1 to K 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA based Learning", "sec_num": "4.2" }, { "text": "For i := 1 to N 4 update w according to \uf028 \uf029", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA based Learning", "sec_num": "4.2" }, { "text": ", i T i y : 1 min jj \uf02b \uf02d ww s.t. ( , ) ( , ') ( , ')", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA based Learning", "sec_num": "4.2" }, { "text": "where ' ( , )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA based Learning", "sec_num": "4.2" }, { "text": "i i i i i i j ii s T s T L DT T \uf02d\uf0b3 \uf03d y y y y yw 5 v = v + w j ; 6 j = j+1 7 w = v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MIRA based Learning", "sec_num": "4.2" }, { "text": "Figure 6: MIRA based Learning Figure 6 gives the pseudo-code of the MIRA algorithm (McDonld et al., 2005b) . This algorithm is designed to update the parameters w using a single training instance \uf028 \uf029 , i T i y in each iteration. On each update, MIRA attempts to keep the norm of the change to the weight vector as small as possible, which is subject to constructing the correct dependency tree under consideration with a margin at least as large as the loss of the incorrect dependency trees. We define the loss of a discourse dependency tree ' i y (denoted by ( , ') ii L yy ) as the number of the EDUs that have incorrect heads. Since there are exponentially many possible incorrect dependency trees and thus exponentially many margin constraints, here we relax the optimization and stay with a single best dependency tree ' ( , )", "cite_spans": [ { "start": 83, "end": 106, "text": "(McDonld et al., 2005b)", "ref_id": null }, { "start": 561, "end": 567, "text": "( , ')", "ref_id": null }, { "start": 825, "end": 826, "text": "'", "ref_id": null } ], "ref_spans": [ { "start": 30, "end": 38, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "/(K*N)", "sec_num": null }, { "text": "j ii DT T \uf03d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "/(K*N)", "sec_num": null }, { "text": "yw which is parsed under the weight vector w j . In this algorithm, the successive updated values of w are accumulated and averaged to avoid overfitting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "/(K*N)", "sec_num": null }, { "text": "We test our methods experimentally using the discourse dependency treebank which is built as in Section 2. The training part of the corpus is composed of 342 documents and contains 18,765 EDUs, while the test part consists of 38 documents and 2,346 EDUs. The number of EDUs in each document ranges between 2 and 304. Two sets of relations are adopted. One is composed of 19 relations and Table 1 shows the number of each relation in the training and test corpus. The other is composed of 111 relations. 
Due to space limitation, Table 2 only lists the 10 highestdistributed relations with regard to their frequency in the training corpus.", "cite_spans": [], "ref_spans": [ { "start": 388, "end": 395, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 528, "end": 535, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Preparation", "sec_num": "5.1" }, { "text": "The following experiments are conducted: (1) to measure the parsing performance with different relation sets and different feature types; (2) to compare our parsing methods with the state-ofthe-art discourse parsing methods. Based on the MIRA leaning algorithm, the Eisner algorithm and MST algorithm are used to parse the test documents respectively. Referring to the evaluation of syntactic dependency parsing, we use unlabeled accuracy to calculate the ratio of EDUs that correctly identify their heads, labeled accuracy the ratio of EDUs that have both correct heads and correct relations. Table 3 and Table 4 show the performance on two relation sets. The numbers (1-6) represent the corresponding feature types described in Section 4.1.", "cite_spans": [], "ref_spans": [ { "start": 594, "end": 614, "text": "Table 3 and Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Preparation", "sec_num": "5.1" }, { "text": "From Table 3 and Table 4 , we can see that the addition of more feature types, except the 6 th feature type (semantic similarity), can promote the performance of relation labeling, whether using the coarse-grained 19 relations and the finegrained 111 relations. As expected, the first and second types of features (WORD and POS) are the ones which play an important role in building and labeling the discourse dependency trees. These two types of features attain similar performance on two relation sets. The Eisner algorithm can achieve unlabeled accuracy around 0.36 and labeled accuracy around 0.26, while MST algorithm achieves unlabeled accuracy around 0.20 and labeled accuracy around 0.14.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 24, "text": "Table 3 and Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Relations", "sec_num": null }, { "text": "The third feature type (Position) is also very helpful to discourse parsing. With the addition of this feature type, both unlabeled accuracy and labeled accuracy exhibit a marked increase. Especially, when applying MST algorithm on discourse parsing, unlabeled accuracy rises from around 0.20 to around 0.73. This result is consistent with Hernault's work (2010b) whose experiments have exhibited the usefulness of those position-related features. The other two types of features which are related to length and syntactic parsing, only promote the performance slightly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relations", "sec_num": null }, { "text": "As we employed the MIRA learning algorithm, it is possible to identify which specific features are useful, by looking at the weights learned to each feature using the training data. Table 5 selects 10 features with the highest weights in absolute value for the parser which uses the coarsegrained relations, while Table 6 selects the top 10 features for the parser using the fine-grained relations. Each row denotes one feature: the left part before the symbol \"&\" is from one of the 6 feature types and the right part denotes a specific relation. From Table 5 and Table 6 , we can see that some features are reasonable. 
For example, the sixth feature in Table 5 represents that the dependency relation is preferred to be labeled Explanation when \"because\" is the first word of the dependent EDU. From these two tables, we also observe that most of the heavily weighted features are usually related to those highly distributed relations. When using the coarse-grained relations, the popular relations (e.g. Elaboration, Attribution and Joint) are always preferred to be labeled. When using the fine-grained relations, the large relations including List and Elaboration-object-attribute-e are given precedence in labeling. This phenomenon is mainly caused by the sparseness of the training corpus and the imbalance of relations. To solve this problem, augmenting the training corpus is necessary. Unlike previous discourse parsing approaches, our methods naturally combine tree building and relation labeling into a uniform framework. This means that relations play a role in building the dependency tree structure. From Table 3 and Table 4 , we can see that fine-grained relations are more helpful for building unlabeled discourse trees than the coarse-grained relations. The best unlabeled accuracy using 111 relations is 0.7506, better than the best performance (0.7447) using 19 relations. We can also see that the labeled accuracy using the fine-grained relations reaches 0.4309, only 0.06 lower than the best labeled accuracy (0.4915) using the coarse-grained relations.", "cite_spans": [], "ref_spans": [ { "start": 182, "end": 189, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 314, "end": 321, "text": "Table 6", "ref_id": "TABREF8" }, { "start": 553, "end": 572, "text": "Table 5 and Table 6", "ref_id": "TABREF7" }, { "start": 655, "end": 662, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 1637, "end": 1657, "text": "Table 3 and Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Relations", "sec_num": null }, { "text": "In addition, comparing the MST algorithm with the Eisner algorithm, Table 3 and Table 4 show that their performances are not significantly different from each other. But we think that the MST algorithm has more potential in discourse dependency parsing, because our converted discourse dependency treebank contains only projective trees and thus somewhat prevents the MST algorithm from exhibiting its advantage in parsing non-projective trees. In fact, we observe that some non-projective dependencies produced by the MST algorithm are even more reasonable than those in the dependency treebank. Thus, it is important to build a manually labeled discourse dependency treebank, which will be our future work.", "cite_spans": [], "ref_spans": [ { "start": 68, "end": 87, "text": "Table 3 and Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Feature description Weight", "sec_num": null }, { "text": "The state-of-the-art discourse parsing methods normally produce constituency based discourse trees. 
To comprehensively evaluate the performance of a labeled constituency tree, the blank tree structure ('S'), the tree structure with nuclearity indication ('N'), and the tree structure with rhetorical relation indication but no nuclearity indication ('R') are evaluated respectively using the F measure (Marcu 2000) .", "cite_spans": [ { "start": 406, "end": 418, "text": "(Marcu 2000)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with Other Systems", "sec_num": "5.3" }, { "text": "To compare our discourse parsers with others, we adopt MIRA and Eisner algorithm to conduct discourse parsing with all the 6 types of features and then convert the produced projective dependency trees to constituency based trees through their correspondence as stated in Section 2. Our parsers using two relation sets are named Our-coarse and Our-fine respectively. The inputted EDUs of our parsers are from the standard segmentation of RST-DT. Other text-level discourse parsing methods include: (1) Percepcoarse: we replace MIRA with the averaged perceptron learning algorithm and the other settings are the same with Our-coarse; (2) HILDAmanual and HILDA-seg are from Hernault (2010b)'s work, and their inputted EDUs are from RST-DT and their own EDU segmenter respectively; (3) LeThanh indicates the results given by LeThanh el al. (2004) , which built a multi-level rule based parser and used 14 rela-tions evaluated on 21 documents from RST-DT; (4) Marcu denotes the results given by Marcu(2000) 's decision-tree based parser which used 15 relations evaluated on unspecified documents. Table 7 shows the performance comparison for all the parsers mentioned above. Human denotes the manual agreement between two human annotators. From this table, we can see that both our parsers perform better than all the other parsers as a whole, though our parsers are not developed directly for constituency based trees. Our parsers do not exhibit obvious advantage than HILDA-manual on labeling the blank tree structure, because our parsers and HILDAmanual all perform over 94% of Human and this performance level somewhat reaches a bottleneck to promote more. However, our parsers outperform the other parsers on both nuclearity and relation labeling. Our-coarse achieves 94.2% and 91.8% of the human F-scores, on labeling nuclearity and relation respectively, while Ourfine achieves 95.2% and 87.6%. We can also see that the averaged perceptron learning algorithm, though simple, can achieve a comparable performance, better than HILDA-manual. The parsers HILDA-seg, LeThanh and Marcu use their own automatic EDU segmenters and exhibit a relatively low performance. This means that EDU segmentation is important to a practical discourse parser and worth further investigation. To further compare the performance of relation labeling, we follow Hernault el al. (2010a) and use Macro-averaged F-score (MAFS) to evaluate each relation. Due to space limitation, we do not list the F scores for each relation. Macro-averaged F-score is not influenced by the number of instances that are contained in each relation. Weight-averaged F-score (WAFS) weights the performance of each relation by the number of its existing instances. Table 8 compares our parser Our-coarse with other parsers HILDA-manual, Feng (Feng and Hirst, 2012) and Baseline. Feng (Feng and Hirst, 2012) can be seen as a strengthened version of HILDA which adopts more features and conducts feature selection. Baseline always picks the most frequent relation (i.e. 
Elaboration). From the results, we find that Our-coarse consistently provides superior performance for most relations over other parsers, and therefore results in higher MAFS and WAFS.", "cite_spans": [ { "start": 836, "end": 842, "text": "(2004)", "ref_id": null }, { "start": 990, "end": 1001, "text": "Marcu(2000)", "ref_id": "BIBREF14" }, { "start": 2341, "end": 2364, "text": "Hernault el al. (2010a)", "ref_id": null }, { "start": 2778, "end": 2819, "text": "HILDA-manual, Feng (Feng and Hirst, 2012)", "ref_id": null } ], "ref_spans": [ { "start": 1092, "end": 1099, "text": "Table 7", "ref_id": "TABREF10" }, { "start": 2720, "end": 2727, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Comparison with Other Systems", "sec_num": "5.3" }, { "text": "So far, the existing discourse parsing techniques are mainly based on two well-known treebanks. One is the Penn Discourse TreeBank (PDTB) (Prasad et al., 2007) and the other is RST-DT.", "cite_spans": [ { "start": 138, "end": 159, "text": "(Prasad et al., 2007)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "PDTB adopts the predicate-arguments representation by taking an implicit/explicit connective as a predication of two adjacent sentences (arguments). Then the discourse relation between each pair of sentences is annotated independently to characterize its predication. A majority of researches regard discourse parsing as a classification task and mainly focus on exploiting various linguistic features and classifiers when using PDTB (Wellner et al., 2006; Pitler et al., 2009; Wang et al., 2010) . However, the predicatearguments annotation scheme itself has such a limitation that one can only obtain the local discourse relations without knowing the rich context.", "cite_spans": [ { "start": 434, "end": 456, "text": "(Wellner et al., 2006;", "ref_id": "BIBREF28" }, { "start": 457, "end": 477, "text": "Pitler et al., 2009;", "ref_id": "BIBREF17" }, { "start": 478, "end": 496, "text": "Wang et al., 2010)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In contrast, RST and its treebank enable people to derive a complete representation of the whole discourse. Researches have begun to investigate how to construct a RST tree for the given text. Since the RST tree is similar to the constituency based syntactic tree except that the constituent nodes are different, the syntactic parsing techniques have been borrowed for discourse parsing (Soricut and Marcu, 2003; Baldridge and Lascarides, 2005; Sagae, 2009; Hernault et al., 2010b; Feng and Hirst, 2012) . Soricut and Marcu (2003) use a standard bottomup chart parsing algorithm to determine the discourse structure of sentences. Baldridge and Lascarides (2005) model the process of discourse parsing with the probabilistic head driven parsing techniques. Sagae (2009) apply a transition based constituent parsing approach to construct a RST tree for a document. Hernault et al. (2010b) develop a greedy bottom-up tree building strategy for discourse parsing. The two adjacent text spans with the closest relations are combined in each iteration. As the extension of Hernault's work, Feng and Hirst (2012) further explore various features aiming to achieve better performance. However, as analyzed in Section 1, there exist three limitations with the constituency based discourse representation and parsing. 
We innovatively adopt the dependency structure, which can be benefited from the existing RST-DT, to represent the discourse. To the best of our knowledge, this work is the first to apply dependency structure and dependency parsing techniques in discourse analysis.", "cite_spans": [ { "start": 387, "end": 412, "text": "(Soricut and Marcu, 2003;", "ref_id": "BIBREF22" }, { "start": 413, "end": 444, "text": "Baldridge and Lascarides, 2005;", "ref_id": "BIBREF0" }, { "start": 445, "end": 457, "text": "Sagae, 2009;", "ref_id": "BIBREF21" }, { "start": 458, "end": 481, "text": "Hernault et al., 2010b;", "ref_id": "BIBREF9" }, { "start": 482, "end": 503, "text": "Feng and Hirst, 2012)", "ref_id": "BIBREF7" }, { "start": 506, "end": 530, "text": "Soricut and Marcu (2003)", "ref_id": "BIBREF22" }, { "start": 630, "end": 661, "text": "Baldridge and Lascarides (2005)", "ref_id": "BIBREF0" }, { "start": 756, "end": 768, "text": "Sagae (2009)", "ref_id": "BIBREF21" }, { "start": 863, "end": 886, "text": "Hernault et al. (2010b)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In this paper, we present the benefits and feasibility of applying dependency structure in textlevel discourse parsing. Through the correspondence between constituency-based trees and dependency trees, we build a discourse dependency treebank by converting the existing RST-DT. Based on dependency structure, we are able to directly analyze the relations between the EDUs without worrying about the additional interior text spans, and apply the existing state-of-the-art dependency parsing techniques which have a relatively low time complexity. In our work, we use the graph based dependency parsing techniques learned from the annotated dependency trees. The Eisner algorithm and the MST algorithm are applied to parse the optimal projective and non-projective dependency trees respectively based on the arc-factored model. To calculate the score for each arc, six types of features are explored to represent the arcs and the feature weights are learned based on the MIRA learning technique. Experimental results exhibit the effectiveness of the proposed approaches. In the future, we will focus on non-projective discourse dependency parsing and explore more effective features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "The two terms \"text\" and \"document\" are used interchangeably and represent the same meaning.3 According to our statistics, there are totally 381 n-ary relations in RST-DT.4 We set the first nucleus as the only nucleus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We can easily get all possible headed binary trees for one more complex text containing more than three EDUs, by extending the 8 possible situations for three EDUs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "19 relations include the original 18 relation in RST-DT plus one artificial ROOT relation. The 111 relations also include the ROOT relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": " Program (No: 2011BAH10B04-03). 
We also thank the three anonymous reviewers for their helpful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Probabilistic Head-driven Parsing for Discourse Structure", "authors": [ { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Lascarides", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Ninth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "96--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Baldridge and Alex Lascarides. 2005. Probabil- istic Head-driven Parsing for Discourse Structure. In Proceedings of the Ninth Conference on Com- putational Natural Language Learning, pages 96- 103.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Natural Language Processing with Python -Analyzing Text with the Natural Language Toolkit", "authors": [ { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Ewan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python -Ana- lyzing Text with the Natural Language Toolkit. O'Reilly.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Building a Discourse-tagged Corpus in the Framework of Rhetorical Structure Theory", "authors": [ { "first": "Lynn", "middle": [], "last": "Carlson", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Mary", "middle": [ "E" ], "last": "Okurowski", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Second SIGdial Workshop on Dis", "volume": "16", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lynn Carlson, Daniel Marcu, and Mary E. Okurowski. 2001. Building a Discourse-tagged Corpus in the Framework of Rhetorical Structure Theory. Pro- ceedings of the Second SIGdial Workshop on Dis- course and Dialogue-Volume 16, pages 1-10.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "On the Shortest Arborescence of a Directed Graph, Science Sinica", "authors": [ { "first": "Yoeng-Jin", "middle": [], "last": "Chu", "suffix": "" }, { "first": "Tseng-Hong", "middle": [], "last": "Liu", "suffix": "" } ], "year": 1965, "venue": "", "volume": "", "issue": "", "pages": "1396--1400", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoeng-Jin Chu and Tseng-Hong Liu. 1965. On the Shortest Arborescence of a Directed Graph, Sci- ence Sinica, v.14, pp.1396-1400.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Ultraconservative Online Algorithms for Multiclass Problems", "authors": [ { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koby Crammer and Yoram Singer. 2003. Ultracon- servative Online Algorithms for Multiclass Prob- lems. JMLR.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Optimum Branchings, J. 
Research of the National Bureau of Standards", "authors": [], "year": 1967, "venue": "", "volume": "71", "issue": "", "pages": "233--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jack Edmonds. 1967. Optimum Branchings, J. Re- search of the National Bureau of Standards, 71B, pp.233-240.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Three New Probabilistic Models for Dependency Parsing: An Exploration", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 1996, "venue": "Proc. COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Eisner. 1996. Three New Probabilistic Models for Dependency Parsing: An Exploration. In Proc. COLING.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Text-level Discourse Parsing with Rich Linguistic Features", "authors": [ { "first": "Vanessa", "middle": [], "last": "Wei Feng", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vanessa Wei Feng and Graeme Hirst. Text-level Dis- course Parsing with Rich Linguistic Features, Pro- ceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 60-68, Jeju, Republic of Korea, 8-14 July 2012.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A Semi-supervised Approach to Improve Classification of Infrequent Discourse Relations Using Feature Vector Extension", "authors": [ { "first": "Hugo", "middle": [], "last": "Hernault", "suffix": "" }, { "first": "Danushka", "middle": [], "last": "Bollegala", "suffix": "" }, { "first": "Mitsuru", "middle": [], "last": "Ishizuka", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "399--409", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hugo Hernault, Danushka Bollegala, and Mitsuru Ishizuka. 2010a. A Semi-supervised Approach to Improve Classification of Infrequent Discourse Re- lations Using Feature Vector Extension. In Pro- ceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 399-409, Cambridge, MA, October. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "HILDA: A Discourse Parser Using Support Vector Machine Classification", "authors": [ { "first": "Hugo", "middle": [], "last": "Hernault", "suffix": "" }, { "first": "Helmut", "middle": [], "last": "Prendinger", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Duverle", "suffix": "" }, { "first": "Mitsuru", "middle": [], "last": "Ishizuka", "suffix": "" } ], "year": 2010, "venue": "Dialogue and Discourse", "volume": "1", "issue": "3", "pages": "1--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hugo Hernault, Helmut Prendinger, David A. duVerle, and Mitsuru Ishizuka. 2010b. HILDA: A Discourse Parser Using Support Vector Machine Classifica- tion. 
Dialogue and Discourse, 1(3):1-33.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A Novel Discriminative Framework for Sentencelevel Discourse Analysis", "authors": [ { "first": "Shafiq", "middle": [], "last": "Joty", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Carenini", "suffix": "" }, { "first": "Raymond", "middle": [ "T" ], "last": "Ng", "suffix": "" } ], "year": null, "venue": "EMNLP-CoNLL '12 Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shafiq Joty, Giuseppe Carenini and Raymond T. Ng. A Novel Discriminative Framework for Sentence- level Discourse Analysis. EMNLP-CoNLL '12 Proceedings of the 2012 Joint Conference on Em- pirical Methods in Natural Language Processing and Computational Natural Language Learning Stroudsburg, PA, USA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Generating Discourse Structures for Written Texts", "authors": [ { "first": "Huong", "middle": [], "last": "Lethanh", "suffix": "" }, { "first": "Geetha", "middle": [], "last": "Abeysinghe", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Huyck", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "329--335", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huong LeThanh, Geetha Abeysinghe, and Christian Huyck. 2004. Generating Discourse Structures for Written Texts. In Proceedings of the 20th Interna- tional Conference on Computational Linguistics, pages 329-335.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Recognizing Implicit Discourse Relations in the Penn Discourse Treebank", "authors": [ { "first": "Ziheng", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Method in Natural Language Processing", "volume": "1", "issue": "", "pages": "343--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing Implicit Discourse Relations in the Penn Discourse Treebank. In Proceedings of the 2009 Conference on Empirical Method in Natural Language Processing, Vol. 1, EMNLP'09, pages 343-351.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Rhetorical Structure Theory: Toward a Functional Theory of Text Organization", "authors": [ { "first": "William", "middle": [], "last": "Mann", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "Thompson", "suffix": "" } ], "year": 1988, "venue": "Text", "volume": "8", "issue": "3", "pages": "243--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Mann and Sandra Thompson. 1988. Rhetori- cal Structure Theory: Toward a Functional Theory of Text Organization. Text, 8(3):243-281.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The Theory and Practice of Discourse Parsing and Summarization", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Marcu. 2000. The Theory and Practice of Dis- course Parsing and Summarization. 
MIT Press, Cambridge, MA, USA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Online Large-Margin Training of Dependency Parsers", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2005, "venue": "43rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pe- reira. 2005a. Online Large-Margin Training of De- pendency Parsers, 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005) .", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Non-projective Dependency Parsing using Spanning Tree Algorithms", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Kiril", "middle": [], "last": "Ribarov", "suffix": "" } ], "year": 2005, "venue": "Proceedings of HLT/EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005b. Non-projective Dependency Parsing using Spanning Tree Algorithms, Proceed- ings of HLT/EMNLP 2005.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Automatic Sense Prediction for Implicit Discourse Relations in Text", "authors": [ { "first": "Emily", "middle": [], "last": "Pitler", "suffix": "" }, { "first": "Annie", "middle": [], "last": "Louis", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "" } ], "year": 2009, "venue": "Proc. of the 47th ACL", "volume": "", "issue": "", "pages": "683--691", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic Sense Prediction for Implicit Discourse Relations in Text, In Proc. of the 47th ACL. pages 683-691.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The Penn Discourse Treebank", "authors": [ { "first": "Rashmi", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Eleni", "middle": [], "last": "Miltsakaki", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Dinesh", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Aravind", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Livio", "middle": [], "last": "Robaldo", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Webber", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rashmi Prasad, Eleni Miltsakaki, Nikhil Dinesh, Alan Lee, Aravind Joshi, Livio Robaldo, and Bonnie Webber. 2007. The Penn Discourse Treebank 2.0", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Annotation Manual. The PDTB Research Group", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annotation Manual. The PDTB Research Group, December.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Simple Signals for Complex Rhetorics: On Rhetorical Analysis with Richfeature Support Vector Models. 
LDV Forum", "authors": [ { "first": "David", "middle": [], "last": "Reitter", "suffix": "" } ], "year": 2003, "venue": "", "volume": "18", "issue": "", "pages": "38--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Reitter. 2003. Simple Signals for Complex Rhetorics: On Rhetorical Analysis with Rich- feature Support Vector Models. LDV Forum, 18(1/2):38-52.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Analysis of discourse structure with syntactic dependencies and data-driven shiftreduce parsing", "authors": [ { "first": "Kenji", "middle": [], "last": "Sagae", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 11th International Conference on Parsing Technologies", "volume": "", "issue": "", "pages": "81--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Sagae. 2009. Analysis of discourse structure with syntactic dependencies and data-driven shift- reduce parsing. In Proceedings of the 11th Interna- tional Conference on Parsing Technologies, pages 81-84.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Sentence level discourse parsing using syntactic and lexical information", "authors": [ { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", "volume": "1", "issue": "", "pages": "149--156", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radu Soricut and Daniel Marcu. 2003. Sentence level discourse parsing using syntactic and lexical in- formation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Lan- guage Technology, Volume 1, pages 149-156.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "An effective discourse parser that uses rich linguistic information", "authors": [ { "first": "Rajen", "middle": [], "last": "Subba", "suffix": "" }, { "first": "Barbara", "middle": [ "Di" ], "last": "Eugenio", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "566--574", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rajen Subba and Barbara Di Eugenio. 2009. An effec- tive discourse parser that uses rich linguistic in- formation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 566-574.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Finding Optimum Branchings, Networks", "authors": [ { "first": "", "middle": [], "last": "Robert Endre Tarjan", "suffix": "" } ], "year": 1977, "venue": "", "volume": "", "issue": "", "pages": "25--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Endre Tarjan, 1977. Finding Optimum Branchings, Networks, v.7, pp.25-35.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Max-margin Markov Networks", "authors": [ { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Guestrin", "suffix": "" }, { "first": "Daphne", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2003, "venue": "Proc. 
NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Taskar, Carlos Guestrin and Daphne Koller. 2003. Max-margin Markov Networks. In Proc. NIPS.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "D-LTAG: Extending Lexicalized TAG to Discourse", "authors": [ { "first": "Bonnie", "middle": [], "last": "Webber", "suffix": "" } ], "year": 2004, "venue": "Cognitive Science", "volume": "28", "issue": "5", "pages": "751--779", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonnie Webber. 2004. D-LTAG: Extending Lexical- ized TAG to Discourse. Cognitive Science, 28(5):751-779.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Kernel based Discourse Relation Recognition with Temporal Ordering Information", "authors": [ { "first": "Jian", "middle": [], "last": "Wen Ting Wang", "suffix": "" }, { "first": "Chew", "middle": [], "last": "Su", "suffix": "" }, { "first": "", "middle": [], "last": "Lim Tan", "suffix": "" } ], "year": 2010, "venue": "Proc. of ACL'10", "volume": "", "issue": "", "pages": "710--719", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wen Ting Wang, Jian Su and Chew Lim Tan. 2010. Kernel based Discourse Relation Recognition with Temporal Ordering Information, In Proc. of ACL'10. pages 710-719.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Classification of Discourse Coherence Relations: an Exploratory Study Using Multiple Knowledge Sources", "authors": [ { "first": "Ben", "middle": [], "last": "Wellner", "suffix": "" }, { "first": "James", "middle": [], "last": "Pustejovsky", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Havasi", "suffix": "" } ], "year": 2006, "venue": "Proc.of the 7th SIGDIAL Workshop on Discourse and Dialogue", "volume": "", "issue": "", "pages": "117--125", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Wellner, James Pustejovsky, Catherine Havasi, Anna Rumshisky and Roser Sauri. 2006. Classifi- cation of Discourse Coherence Relations: an Ex- ploratory Study Using Multiple Knowledge Sources. In Proc.of the 7th SIGDIAL Workshop on Discourse and Dialogue. pages 117-125.", "links": null } }, "ref_entries": { "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "Pictorial Diagram of Non-projective Trees" }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "Chu-Liu/Edmonds MST Algorithm" }, "TABREF1": { "content": "
Relation     | Train | Test  | Relation   | Train | Test
Elaboration  | 6879  | 796   | Temporal   | 426   | 73
Attribution  | 2641  | 343   | ROOT       | 342   | 38
Joint        | 1711  | 212   | Compari.   | 273   | 29
Same-unit    | 1230  | 127   | Condition  | 258   | 48
Contrast     | 944   | 146   | Manner.    | 191   | 27
Explanation  | 849   | 110   | Summary    | 188   | 32
Background   | 786   | 111   | Topic-Cha. | 187   | 13
Cause        | 785   | 82    | Textual    | 147   | 9
Evaluation   | 502   | 80    | TopicCom.  | 126   | 24
Enablement   | 500   | 46    | Total      | 18765 | 2346
", "type_str": "table", "html": null, "num": null, "text": "" }, "TABREF2": { "content": "
Relations                      | Train | Test
Elaboration-additional         | 2912  | 312
Attribution                    | 2474  | 329
Elaboration-object-attribute-e | 2274  | 250
List                           | 1690  | 206
Same-unit                      | 1230  | 127
Elaboration-additional-e       | 747   | 69
Circumstance                   | 545   | 80
Explanation-argumentative      | 524   | 70
Purpose                        | 430   | 43
Contrast                       | 358   | 64
", "type_str": "table", "html": null, "num": null, "text": "Coarse-grained Relation Distribution" }, "TABREF3": { "content": "
: 10 Highest Distributed Fine-grained Relations
5.2 Feature Influence on Two Relation Sets
So far, research on discourse parsing has avoided adopting overly fine-grained relations, and relation sets containing around 20 labels are widely used. In our experiments, we observe that adopting a fine-grained relation set can even be helpful for building the discourse trees. Here, we conduct experiments on two relation sets that contain 19 and 111 labels respectively, and at the same time we test the effect of each feature type on discourse parsing.
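The unlabeled and labeled accuracies reported in the tables that follow are the usual dependency-parsing measures: an EDU counts as correct under the unlabeled measure when its head EDU is recovered, and under the labeled measure when both the head and the relation label are correct. A minimal sketch of how such figures can be computed is given below; the tree encoding and function name are illustrative assumptions, not the evaluation code used here.

```python
def attachment_accuracy(gold_trees, pred_trees):
    """gold_trees / pred_trees: one tree per document; each tree is a list of
    (head_index, relation) pairs, one per EDU, where the head of a top-level
    EDU is the artificial ROOT (index 0)."""
    total = unlabeled = labeled = 0
    for gold, pred in zip(gold_trees, pred_trees):
        for (g_head, g_rel), (p_head, p_rel) in zip(gold, pred):
            total += 1
            if g_head == p_head:
                unlabeled += 1          # head EDU recovered
                if g_rel == p_rel:
                    labeled += 1        # head and relation both correct
    return unlabeled / total, labeled / total

# Toy document with three EDUs: all heads correct, one relation label wrong.
gold = [[(0, "Root"), (1, "Elaboration"), (1, "Attribution")]]
pred = [[(0, "Root"), (1, "Elaboration"), (1, "Contrast")]]
print(attachment_accuracy(gold, pred))  # -> (1.0, 0.666...)
```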
Method | Features     | Unlabeled Acc. | Labeled Acc.
Eisner | 1+2          | 0.3602         | 0.2651
Eisner | 1+2+3        | 0.7310         | 0.4855
Eisner | 1+2+3+4      | 0.7370         | 0.4868
Eisner | 1+2+3+4+5    | 0.7447         | 0.4957
Eisner | 1+2+3+4+5+6  | 0.7455         | 0.4983
MST    | 1+2          | 0.1957         | 0.1479
MST    | 1+2+3        | 0.7246         | 0.4783
MST    | 1+2+3+4      | 0.7280         | 0.4795
MST    | 1+2+3+4+5    | 0.7340         | 0.4915
MST    | 1+2+3+4+5+6  | 0.7331         | 0.4851
", "type_str": "table", "html": null, "num": null, "text": "" }, "TABREF4": { "content": "
Method | Feature types | Unlabeled Acc. | Labeled Acc.
Eisner | 1+2           | 0.3743         | 0.2421
Eisner | 1+2+3         | 0.7451         | 0.4079
Eisner | 1+2+3+4       | 0.7472         | 0.4041
Eisner | 1+2+3+4+5     | 0.7506         | 0.4254
Eisner | 1+2+3+4+5+6   | 0.7485         | 0.4288
MST    | 1+2           | 0.2080         | 0.1300
MST    | 1+2+3         | 0.7366         | 0.4054
MST    | 1+2+3+4       | 0.7468         | 0.4071
MST    | 1+2+3+4+5     | 0.7494         | 0.4288
MST    | 1+2+3+4+5+6   | 0.7460         | 0.4309
", "type_str": "table", "html": null, "num": null, "text": "Performance Using Coarse-grained Relations." }, "TABREF5": { "content": "", "type_str": "table", "html": null, "num": null, "text": "Performance Using Fine-grained Relations." }, "TABREF7": { "content": "
#  | Feature                                                                            | Weight
1  | Last two words in dependent EDU are "appeals court" & List                         | 0.576
2  | First two words in head EDU are "I 'd" & Attribution                               | 0.385
3  | First two words in dependent EDU are "that the" & Elaboration-object-attribute-e   | 0.348
4  | First POS in head EDU is "DT" & List                                               | -0.323
5  | Last word in dependent EDU is "in" & List                                          | -0.286
6  | First word in dependent EDU is "racked" & Elaboration-object-attribute-e           | 0.445
7  | First two word pairs are <"In an", "But even"> & List                              | -0.252
8  | Dependent EDU has a dominating node tagged "CD" & Elaboration-object-attribute-e   | -0.244
9  | First two words in dependent EDU are "patents disputes" & Purpose                  | 0.231
10 | First word in dependent EDU is "to" & Purpose                                      | 0.230
", "type_str": "table", "html": null, "num": null, "text": "Top 10 Feature Weights for Coarsegrained Relation Labeling(Eisner Algorithm)" }, "TABREF8": { "content": "
: Top 10 Feature Weights for Coarse-grained Relation Labeling (Eisner Algorithm)
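The weights above are easiest to read in terms of the arc-factored model: each feature is an indicator that fires for a candidate head-dependent EDU pair conjoined with a relation label, the score of an arc is the sum of the weights of its firing features, and the Eisner and MST decoders then search for the highest-scoring tree. The sketch below only illustrates this scoring scheme; the feature templates, names, and toy EDUs are simplified assumptions, not the exact feature set used in the experiments.

```python
def arc_features(head_edu, dep_edu, relation):
    """Sparse indicator features for a candidate arc (head EDU -> dependent EDU);
    every template is conjoined with the relation label, as in the table above."""
    return [
        "DEP_LAST_TWO_WORDS=%s & %s" % (" ".join(dep_edu[-2:]), relation),
        "HEAD_FIRST_TWO_WORDS=%s & %s" % (" ".join(head_edu[:2]), relation),
        "DEP_FIRST_WORD=%s & %s" % (dep_edu[0], relation),
    ]

def arc_score(weights, head_edu, dep_edu, relation):
    # Arc-factored score: sum of the learned weights of the arc's features.
    return sum(weights.get(f, 0.0) for f in arc_features(head_edu, dep_edu, relation))

# Toy example reusing one weight from the table above (all other features get 0).
weights = {"DEP_FIRST_WORD=to & Purpose": 0.230}
head_edu = ["the", "company", "filed", "a", "suit"]
dep_edu = ["to", "protect", "its", "patents"]
print(arc_score(weights, head_edu, dep_edu, "Purpose"))  # -> 0.23
```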
", "type_str": "table", "html": null, "num": null, "text": "" }, "TABREF10": { "content": "
Method        | MAFS  | WAFS  | Acc (%)
Our-coarse    | 0.454 | 0.643 | 66.84
Percep-coarse | 0.438 | 0.633 | 65.37
Feng          | 0.440 | 0.607 | 65.30
HILDA-manual  | 0.428 | 0.604 | 64.18
Baseline      | -     | -     | 35.82

Table 8: Relation Labeling Performance
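Assuming MAFS and WAFS in Table 8 denote the macro-averaged and the frequency-weighted-averaged F-scores over relation labels, as commonly reported for RST relation labeling, the sketch below shows one way such figures can be computed. It is an illustration under that assumption, not the evaluation script behind these results.

```python
from collections import Counter

def f_scores(gold_labels, pred_labels):
    """Per-label F1 averaged uniformly (MAFS) and by gold label frequency (WAFS)."""
    labels = set(gold_labels)
    gold_counts = Counter(gold_labels)
    per_label_f1 = {}
    for lab in labels:
        tp = sum(1 for g, p in zip(gold_labels, pred_labels) if g == lab and p == lab)
        fp = sum(1 for g, p in zip(gold_labels, pred_labels) if g != lab and p == lab)
        fn = sum(1 for g, p in zip(gold_labels, pred_labels) if g == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        per_label_f1[lab] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    mafs = sum(per_label_f1.values()) / len(labels)
    wafs = sum(per_label_f1[l] * gold_counts[l] for l in labels) / len(gold_labels)
    return mafs, wafs

# Toy example: one Elaboration instance is mislabeled as Attribution.
gold = ["Elaboration", "Attribution", "Elaboration", "Contrast"]
pred = ["Elaboration", "Attribution", "Attribution", "Contrast"]
print(f_scores(gold, pred))  # -> (0.777..., 0.75)
```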
", "type_str": "table", "html": null, "num": null, "text": "Full Parser Evaluation" } } } }