{ "paper_id": "P14-1002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:06:28.502643Z" }, "title": "Representation Learning for Text-level Discourse Parsing", "authors": [ { "first": "Yangfeng", "middle": [], "last": "Ji", "suffix": "", "affiliation": { "laboratory": "", "institution": "Georgia Institute of Technology", "location": {} }, "email": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "", "affiliation": {}, "email": "jacobe@gatech.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Text-level discourse parsing is notoriously difficult, as distinctions between discourse relations require subtle semantic judgments that are not easily captured using standard features. In this paper, we present a representation learning approach, in which we transform surface features into a latent space that facilitates RST discourse parsing. By combining the machinery of large-margin transition-based structured prediction with representation learning, our method jointly learns to parse discourse while at the same time learning a discourse-driven projection of surface features. The resulting shift-reduce discourse parser obtains substantial improvements over the previous state-of-the-art in predicting relations and nuclearity on the RST Treebank.", "pdf_parse": { "paper_id": "P14-1002", "_pdf_hash": "", "abstract": [ { "text": "Text-level discourse parsing is notoriously difficult, as distinctions between discourse relations require subtle semantic judgments that are not easily captured using standard features. In this paper, we present a representation learning approach, in which we transform surface features into a latent space that facilitates RST discourse parsing. By combining the machinery of large-margin transition-based structured prediction with representation learning, our method jointly learns to parse discourse while at the same time learning a discourse-driven projection of surface features. The resulting shift-reduce discourse parser obtains substantial improvements over the previous state-of-the-art in predicting relations and nuclearity on the RST Treebank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Discourse structure describes the high-level organization of text or speech. It is central to a number of high-impact applications, such as text summarization (Louis et al., 2010) , sentiment analysis (Voll and Taboada, 2007; Somasundaran et al., 2009) , question answering (Ferrucci et al., 2010) , and automatic evaluation of student writing (Miltsakaki and Kukich, 2004; Burstein et al., 2013) . Hierarchical discourse representations such as Rhetorical Structure Theory (RST) are particularly useful because of the computational applicability of tree-shaped discourse structures (Taboada and Mann, 2006) , as shown in Figure 1. 
Unfortunately, the performance of discourse parsing is still relatively weak: the state-of-the-art F-measure for text-level relation detection in the RST Treebank is only slightly above 55% (Joty et al., 2013).", "cite_spans": [ { "start": 159, "end": 179, "text": "(Louis et al., 2010)", "ref_id": "BIBREF23" }, { "start": 201, "end": 225, "text": "(Voll and Taboada, 2007;", "ref_id": "BIBREF51" }, { "start": 226, "end": 252, "text": "Somasundaran et al., 2009)", "ref_id": "BIBREF42" }, { "start": 274, "end": 297, "text": "(Ferrucci et al., 2010)", "ref_id": "BIBREF12" }, { "start": 344, "end": 373, "text": "(Miltsakaki and Kukich, 2004;", "ref_id": "BIBREF30" }, { "start": 374, "end": 396, "text": "Burstein et al., 2013)", "ref_id": "BIBREF3" }, { "start": 583, "end": 607, "text": "(Taboada and Mann, 2006)", "ref_id": "BIBREF46" }, { "start": 622, "end": 631, "text": "Figure 1.", "ref_id": null }, { "text": "(Joty et al., 2013)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[Figure 1: An example of RST discourse structure, relating the elementary discourse units \"The projections are in the neighborhood of 50 cents a share to 75 cents,\", \"compared with a restated $1.65 a share a year earlier,\", and \"when profit was $107.8 million on sales of $435.5 million.\" via the COMPARISON and CIRCUMSTANCE relations.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While recent work has introduced increasingly powerful features (Feng and Hirst, 2012) and inference techniques (Joty et al., 2013), discourse relations remain hard to detect, due in part to a long tail of \"alternative lexicalizations\" that can be used to realize each relation (Prasad et al., 2010). Surface and syntactic features are not capable of capturing what are fundamentally semantic distinctions, particularly in the face of relatively small annotated training sets.", "cite_spans": [ { "start": 282, "end": 304, "text": "(Feng and Hirst, 2012)", "ref_id": "BIBREF11" }, { "start": 330, "end": 349, "text": "(Joty et al., 2013)", "ref_id": "BIBREF17" }, { "start": 497, "end": 518, "text": "(Prasad et al., 2010)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present a representation learning approach to discourse parsing. The core idea of our work is to learn a transformation from a bag-of-words surface representation into a latent space in which discourse relations are easily identifiable. The latent representation for each discourse unit can be viewed as a discriminatively-trained vector-space representation of its meaning. Alternatively, our approach can be seen as a nonlinear learning algorithm for incremental structure prediction, which overcomes feature sparsity through effective parameter tying. We consider several alternative methods for transforming the original features, corresponding to different ideas of the meaning and role of the latent representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our method is implemented as a shift-reduce discourse parser (Marcu, 1999; Sagae, 2009). Learning is performed as large-margin transition-based structured prediction (Taskar et al., 2003), while at the same time jointly learning to project the surface representation into latent space. 
The resulting system strongly outperforms the prior state-of-the-art at labeled F-measure, obtaining raw improvements of roughly 6% on relation labels and 2.5% on nuclearity. In addition, we show that the latent representation coheres well with the characterization of discourse connectives in the Penn Discourse Treebank (Prasad et al., 2008).", "cite_spans": [ { "start": 61, "end": 74, "text": "(Marcu, 1999;", "ref_id": "BIBREF26" }, { "start": 75, "end": 87, "text": "Sagae, 2009)", "ref_id": "BIBREF37" }, { "start": 165, "end": 186, "text": "(Taskar et al., 2003)", "ref_id": "BIBREF47" }, { "start": 608, "end": 629, "text": "(Prasad et al., 2008)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The core idea of this paper is to project lexical features into a latent space that facilitates discourse parsing. In this way, we can capture the meaning of each discourse unit, without suffering from the very high dimensionality of a lexical representation. While such feature learning approaches have proven to increase robustness for parsing, POS tagging, and NER (Miller et al., 2004; Koo et al., 2008; Turian et al., 2010), they would seem to have an especially promising role for discourse, where training data is relatively sparse and ambiguity is considerable. Prasad et al. (2010) show that there is a long tail of alternative lexicalizations for discourse relations in the Penn Discourse Treebank, posing obvious challenges for approaches based on directly matching lexical features observed in the training data.", "cite_spans": [ { "start": 368, "end": 389, "text": "(Miller et al., 2004;", "ref_id": "BIBREF29" }, { "start": 390, "end": 407, "text": "Koo et al., 2008;", "ref_id": "BIBREF19" }, { "start": 408, "end": 428, "text": "Turian et al., 2010)", "ref_id": "BIBREF48" }, { "start": 571, "end": 591, "text": "Prasad et al. (2010)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "Based on this observation, our goal is to learn a function that transforms lexical features into a much lower-dimensional latent representation, while simultaneously learning to predict discourse structure based on this latent representation. In this paper, we consider a simple transformation function, linear projection. Thus, we name the approach DPLP: Discourse Parsing from Linear Projection. We apply transition-based (incremental) structured prediction to obtain a discourse parse, training a predictor to make the correct incremental moves to match the annotations of training data in the RST Treebank. This supervision signal is then used to learn both the weights and the projection matrix in a large-margin framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "We construct RST Trees using shift-reduce parsing, as first proposed by Marcu (1999). At each point in the parsing process, we maintain a stack and a queue; initially the stack is empty and the first elementary discourse unit (EDU) in the document is at the front of the queue. [Footnote 1: We do not address segmentation of text into elementary discourse units in this paper. Standard classification-based approaches can achieve a segmentation F-measure of 94% (Hernault et al., 2010); a more complex reranking model does slightly better, at 95% F-measure with automatically-generated parse trees, and 96.6% with gold annotated trees (Xuan Bach et al., 2012). Human agreement reaches 98% F-measure.] The parser can then choose either to shift the front of the queue onto the top of the stack, or to reduce the top two elements on the stack in a discourse relation. The reduction operation must choose both the type of relation and which element will be the nucleus. So, overall there are multiple reduce operations with specific relation types and nucleus positions. Shift-reduce parsing can be learned as a classification task, where the classifier uses features of the elements in the stack and queue to decide which move to take. Previous work has employed decision trees (Marcu, 1999) and the averaged perceptron (Collins and Roark, 2004; Sagae, 2009) for this purpose. Instead, we employ a large-margin classifier, because we can compute derivatives of the margin-based objective function with respect to both the classifier weights and the projection matrix.", "cite_spans": [ { "start": 72, "end": 84, "text": "Marcu (1999)", "ref_id": "BIBREF26" }, { "start": 400, "end": 423, "text": "(Hernault et al., 2010)", "ref_id": "BIBREF15" }, { "start": 574, "end": 598, "text": "(Xuan Bach et al., 2012)", "ref_id": "BIBREF52" }, { "start": 631, "end": 644, "text": "(Marcu, 1999)", "ref_id": "BIBREF26" }, { "start": 673, "end": 698, "text": "(Collins and Roark, 2004;", "ref_id": "BIBREF5" }, { "start": 699, "end": 711, "text": "Sagae, 2009)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Shift-reduce discourse parsing", "sec_num": "2.1" }, { "text": "Table 1: Summary of mathematical notation.
V : Vocabulary for surface features
V : Size of V
K : Dimension of latent space
w_m : Classification weights for class m
C : Total number of classes, which correspond to possible shift-reduce operations
A : Parameter of the representation function (also the projection matrix in the linear representation function)
v_i : Word count vector of discourse unit i
v : Vertical concatenation of word count vectors for the three discourse units currently being considered by the parser
\u03bb : Regularization for classification weights
\u03c4 : Regularization for projection matrix
\u03be_i : Slack variable for sample i
\u03b7_{i,m} : Dual variable for sample i and class m
\u03b1_t : Learning rate at iteration t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation Explanation", "sec_num": null },
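{ "text": "To make the transition system concrete, here is a minimal sketch of the parser state and its two move types; the class, method, and label names are illustrative, not the actual DPLP implementation.

```python
from collections import deque

class ParserState:
    # Stack/queue state for shift-reduce RST parsing.
    def __init__(self, edus):
        self.stack = []            # partially built RST subtrees
        self.queue = deque(edus)   # EDUs not yet consumed

    def shift(self):
        # Move the front EDU of the queue onto the stack.
        self.stack.append(self.queue.popleft())

    def reduce(self, relation, nucleus):
        # Merge the top two stack elements into one span labeled with
        # `relation`; `nucleus` ("left" or "right") marks the nuclear child.
        right = self.stack.pop()
        left = self.stack.pop()
        self.stack.append(("span", relation, nucleus, left, right))

    def finished(self):
        # Parsing ends with an empty queue and a single tree on the stack.
        return not self.queue and len(self.stack) == 1
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shift-reduce discourse parsing", "sec_num": "2.1" },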
{ "text": "More formally, we denote the surface feature vocabulary V, and represent each EDU as the numeric vector v \u2208 N^V, where V = |V| and the n-th element of v is the count of the n-th surface feature in this EDU (see Table 1 for a summary of notation). During shift-reduce parsing, we consider features of three EDUs: the top two elements on the stack (v_1 and v_2), and the front of the queue (v_3). [Footnote 2: After applying a reduce operation, the stack will include a span that contains multiple EDUs. We follow the strong compositionality criterion of Marcu (1996) and consider only the nuclear EDU of the span. Later work may explore the composition of features between the nucleus and satellite.] The vertical concatenation of these vectors is denoted v = [v_1; v_2; v_3].", "cite_spans": [ { "start": 76, "end": 88, "text": "Marcu (1996)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 212, "end": 219, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Discourse parsing with projected features", "sec_num": "2.2" }, { "text": "In general, we can formulate the decision function for the multi-class shift-reduce classifier as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse parsing with projected features", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{m} = \\arg\\max_{m \\in \\{1, \\ldots, C\\}} w_m^\\top f(v; A)", "eq_num": "(1)" } ], "section": "Discourse parsing with projected features", "sec_num": "2.2" }, { "text": "where w_m is the weight vector for the m-th class and f(v; A) is the representation function parametrized by A. The score for class m (in our case, the value of taking the m-th shift-reduce operation) is computed by the inner product w_m^\\top f(v; A). The specific shift-reduce operation is chosen by maximizing the decision value in Equation 1. The representation function f(v; A) can be defined in any form; for example, it could be a nonlinear function defined by a neural network model parametrized by A. We focus on the linear projection,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse parsing with projected features", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f(v; A) = Av,", "eq_num": "(2)" } ], "section": "Discourse parsing with projected features", "sec_num": "2.2" }, { "text": "where A \u2208 R^{K \u00d7 3V} projects the surface representation v of the three EDUs into a latent space of size K \u226a V. Note that by setting \\tilde{w}_m = w_m A, the decision scoring function can be rewritten as \\tilde{w}_m v, which is linear in the original surface features. Therefore, the expressiveness of DPLP is identical to a linear separator in the original feature space. However, the learning problem is considerably different. If there are C total classes (possible shift-reduce operations), then a linear classifier must learn 3VC parameters, while DPLP must learn (3V + C)K parameters, which will be smaller under the assumption that K < C \u226a V. This can be seen as a form of parameter tying on the linear weights w_m, which allows statistical strength to be shared across training instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse parsing with projected features", "sec_num": "2.2" },
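{ "text": "A sketch of the decision rule in Equations 1-2, using numpy; W stacks the class weights row-wise, A is the learned projection, and all names are illustrative.

```python
import numpy as np

def choose_operation(W, A, v1, v2, v3):
    # Score every shift-reduce operation and return the argmax (Equation 1).
    #   W  : (C, K) array, one weight row per operation
    #   A  : (K, 3V) projection matrix, so f(v; A) = A @ v (Equation 2)
    #   v1, v2, v3 : (V,) word-count vectors for the three EDUs
    v = np.concatenate([v1, v2, v3])  # (3V,)
    z = A @ v                         # latent representation f(v; A), (K,)
    return int(np.argmax(W @ z))      # index of the best operation
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse parsing with projected features", "sec_num": "2.2" },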
{ "text": "We will consider special cases of A that reduce the parameter space still further.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse parsing with projected features", "sec_num": "2.2" }, { "text": "We consider three different constructions for the projection matrix A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "\u2022 General form: In the general case, we place no special constraint on the form of A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f(v; A) = A \\begin{bmatrix} v_1 \\\\ v_2 \\\\ v_3 \\end{bmatrix}", "eq_num": "(3)" } ], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "This form is shown in Figure 2 (a).", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 30, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "\u2022 Concatenation form: In the concatenation form, we choose a block structure for A, in which a single projection matrix B is applied to each EDU:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f(v; A) = \\begin{bmatrix} B & 0 & 0 \\\\ 0 & B & 0 \\\\ 0 & 0 & B \\end{bmatrix} \\begin{bmatrix} v_1 \\\\ v_2 \\\\ v_3 \\end{bmatrix}", "eq_num": "(4)" } ], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "In this form, we transform the representation of each EDU separately, but do not attempt to represent interrelationships between the EDUs in the latent space. The number of parameters in A is \\frac{1}{3}KV. Then, the total number of parameters, including the decision weights {w_m}, in this form is (\\frac{V}{3} + C)K.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "\u2022 Difference form: In the difference form, we explicitly represent the differences between adjacent EDUs, by constructing A as a block difference matrix,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f(v; A) = \\begin{bmatrix} C & -C & 0 \\\\ C & 0 & -C \\\\ 0 & 0 & 0 \\end{bmatrix} \\begin{bmatrix} v_1 \\\\ v_2 \\\\ v_3 \\end{bmatrix},", "eq_num": "(5)" } ], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "The result of this projection is that the latent representation has the form [C(v_1 - v_2); C(v_1 - v_3)], representing the difference between the top two EDUs on the stack, and between the top EDU on the stack and the first EDU in the queue. This is intended to capture semantic similarity, so that reductions between related EDUs will be preferred. Similarly, the total number of parameters to estimate in this form is (V + 2C)\\frac{K}{3}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Special forms of the projection matrix", "sec_num": "2.3" },
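{ "text": "The block structures in Equations 4 and 5 are easy to assemble from a single submatrix; below is a minimal numpy sketch, assuming the latent dimension K is divisible by 3 (the function names are illustrative).

```python
import numpy as np

def concatenation_form(B):
    # Equation 4: block-diagonal A built from B, where B has shape (K/3, V).
    Z = np.zeros_like(B)
    return np.block([[B, Z, Z],
                     [Z, B, Z],
                     [Z, Z, B]])

def difference_form(C):
    # Equation 5: A projects the differences (v1 - v2) and (v1 - v3);
    # the last block row is zero padding, as written in Equation 5.
    Z = np.zeros_like(C)
    return np.block([[C, -C, Z],
                     [C, Z, -C],
                     [Z, Z, Z]])
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Special forms of the projection matrix", "sec_num": "2.3" },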
{ "text": "3 Large-Margin Learning Framework", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "[Figure 2: The three representation functions: (a) general form, (b) concatenation form, (c) difference form. In each, v_1 and v_2 come from the stack and v_3 from the queue; the input is projected by A before classification against the weights W.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "We apply a large-margin structure prediction approach to train the model. There are two sets of parameters to be learned: the classification weights {w_m}, and the projection matrix A. As we will see, it is possible to learn {w_m} using standard support vector machine (SVM) training (holding A fixed), and then make a simple gradient-based update to A (holding {w_m} fixed). By interleaving these two operations, we arrive at a saddle point of the objective function. Specifically, we formulate the following constrained optimization problem,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\min_{\\{w_{1:C}, \\xi_{1:l}, A\\}} \\; \\frac{\\lambda}{2} \\sum_{m=1}^{C} \\|w_m\\|_2^2 + \\sum_{i=1}^{l} \\xi_i + \\frac{\\tau}{2} \\|A\\|_F^2 \\quad \\text{s.t.} \\; (w_{y_i} - w_m)^\\top f(v_i; A) \\ge 1 - \\delta_{y_i = m} - \\xi_i, \\; \\forall i, m", "eq_num": "(6)" } ], "section": "Special forms of the projection matrix", "sec_num": "2.3" },
{ "text": "where m \u2208 {1, . . . , C} is the index of the shift-reduce decision taken by the classifier (e.g., SHIFT, REDUCE-CONTRAST-RIGHT, etc.), i \u2208 {1, . . . , l} is the index of the training sample, and w_m is the vector of classification weights for class m. The slack variables \u03be_i permit the margin constraint to be violated in exchange for a penalty, and the delta function \u03b4_{y_i = m} is unity if y_i = m, and zero otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "As is standard in the multi-class linear SVM (Crammer and Singer, 2001), we can solve the problem defined in Equation 6 via Lagrangian optimization:", "cite_spans": [ { "start": 45, "end": 71, "text": "(Crammer and Singer, 2001)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(\\{w_{1:C}, \\xi_{1:l}, A, \\eta_{1:l,1:C}\\}) = \\frac{\\lambda}{2} \\sum_{m=1}^{C} \\|w_m\\|_2^2 + \\sum_{i=1}^{l} \\xi_i + \\frac{\\tau}{2} \\|A\\|_F^2 + \\sum_{i,m} \\eta_{i,m} \\left[ (w_m - w_{y_i})^\\top f(v_i; A) + 1 - \\delta_{y_i = m} - \\xi_i \\right] \\quad \\text{s.t.} \\; \\eta_{i,m} \\ge 0 \\; \\forall i, m", "eq_num": "(7)" } ], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "Then, to optimize L, we need to find a saddle point, which would be the minimum for the variables {w_{1:C}, \u03be_{1:l}} and the projection matrix A, and the maximum for the dual variables {\u03b7_{1:l,1:C}}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "If A is fixed, then the optimization problem is equivalent to a standard multi-class SVM, in the transformed feature space f(v_i; A). We can obtain the weights {w_{1:C}} and dual variables {\u03b7_{1:l,1:C}} from a standard dual-form SVM solver. We then update A, recompute {w_{1:C}} and {\u03b7_{1:l,1:C}}, and iterate until convergence. This iterative procedure is similar to the latent variable structural SVM (Yu and Joachims, 2009), although the specific details of our learning algorithm are different.", "cite_spans": [ { "start": 401, "end": 424, "text": "(Yu and Joachims, 2009)", "ref_id": "BIBREF53" } ], "ref_spans": [], "eq_spans": [], "section": "Special forms of the projection matrix", "sec_num": "2.3" }, { "text": "We update A while holding fixed the weights and dual variables. 
The derivative of L with respect to A is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Projection Matrix A", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\frac{\\partial L}{\\partial A} = \\tau A + \\sum_{i,m} \\eta_{i,m} (w_m - w_{y_i})^\\top \\frac{\\partial f(v_i; A)}{\\partial A} = \\tau A + \\sum_{i,m} \\eta_{i,m} (w_m - w_{y_i}) v_i^\\top", "eq_num": "(8)" } ], "section": "Learning Projection Matrix A", "sec_num": "3.1" }, { "text": "Setting \u2202L/\u2202A = 0, we have the closed-form solution,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Projection Matrix A", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A = -\\frac{1}{\\tau} \\sum_{i,m} \\eta_{i,m} (w_m - w_{y_i}) v_i^\\top = \\frac{1}{\\tau} \\sum_{i} \\Big( w_{y_i} - \\sum_m \\eta_{i,m} w_m \\Big) v_i^\\top,", "eq_num": "(9)" } ], "section": "Learning Projection Matrix A", "sec_num": "3.1" }, { "text": "because the dual variables for each instance must sum to one, \u2211_m \u03b7_{i,m} = 1. Note that for a given i, the matrix (w_{y_i} - \u2211_m \u03b7_{i,m} w_m) v_i^\u22a4 is of (at most) rank 1. Therefore, the solution for A can be viewed as a linear combination of a sequence of rank-1 matrices, where each rank-1 matrix is defined by the distributional representation v_i and the difference between the weight of the true label, w_{y_i}, and the \"expected\" weight \u2211_m \u03b7_{i,m} w_m.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Projection Matrix A", "sec_num": "3.1" }, { "text": "One property of the dual variables is that f(v_i; A) is a support vector only if the dual variable \u03b7_{i,y_i} < 1. Since the dual variables for each instance are guaranteed to sum to one, we have w_{y_i} - \u2211_m \u03b7_{i,m} w_m = 0 if \u03b7_{i,y_i} = 1. In other words, the contribution from non-support vectors to the projection matrix A is 0. Then, we can further simplify the updating equation as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Projection Matrix A", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A = \\frac{1}{\\tau} \\sum_{v_i \\in \\mathrm{SV}} \\Big( w_{y_i} - \\sum_m \\eta_{i,m} w_m \\Big) v_i^\\top", "eq_num": "(10)" } ], "section": "Learning Projection Matrix A", "sec_num": "3.1" }, { "text": "This is computationally advantageous since many instances are not support vectors, and it shows that the discriminatively-trained projection matrix only incorporates information from each instance to the extent that the correct classification receives low confidence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Projection Matrix A", "sec_num": "3.1" },
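{ "text": "A numpy sketch of the support-vector update in Equation 10; eta holds the dual variables, sv indexes the support vectors, and all names are illustrative.

```python
import numpy as np

def update_A(W, eta, y, V_feats, sv, tau):
    # Closed-form solution for A (Equation 10).
    #   W       : (C, K) class weights
    #   eta     : (l, C) dual variables
    #   y       : (l,) gold operation indices
    #   V_feats : (l, 3V) concatenated word-count vectors
    #   sv      : indices of the support vectors
    A = np.zeros((W.shape[1], V_feats.shape[1]))
    for i in sv:
        # rank-1 term: (w_{y_i} - sum_m eta_{i,m} w_m) v_i^T
        diff = W[y[i]] - eta[i] @ W        # (K,)
        A += np.outer(diff, V_feats[i])    # (K, 3V)
    return A / tau
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Projection Matrix A", "sec_num": "3.1" },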
{ "text": "Algorithm 1: Mini-batch learning algorithm.
Input: training set D, regularization parameters \u03bb and \u03c4, number of iterations T, initialization matrix A_0, and threshold \u03b5
for t = 1, . . . , T do
    Randomly choose a subset of training samples D_t from D
    Train the SVM with A_{t-1} to obtain {w_m^{(t)}} and {\u03b7_{i,m}^{(t)}}
    Update A_t using Equation 11 with \u03b1_t = 1/t
    if ||A_t - A_{t-1}||_F / ||A_2 - A_1||_F < \u03b5 then
        return
    end if
end for
Re-train the SVM with D and the final A
Output: projection matrix A, SVM classifier with weights w", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Projection Matrix A", "sec_num": "3.1" }, { "text": "Solving the quadratic program defined by the dual form of the SVM is time-consuming, especially on large-scale datasets. But if we focus on learning the projection matrix A, we can speed up learning by sampling only a small proportion of the training data to compute an approximate optimum for {w_{1:C}, \u03b7_{1:l,1:C}} before each update of A. This idea is similar to mini-batch learning, which has been used in large-scale SVM problems (Nelakanti et al., 2013) and deep learning models (Le et al., 2011).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gradient-based Learning for A", "sec_num": "3.2" }, { "text": "Specifically, in iteration t, the algorithm randomly chooses a subset of training samples D_t to train the model. We cannot make a closed-form update to A based on this small sample, but we can take an approximate gradient step,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gradient-based Learning for A", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A_t = (1 - \\alpha_t \\tau) A_{t-1} + \\alpha_t \\sum_{v_i \\in \\mathrm{SV}(D_t)} \\Big( w_{y_i}^{(t)} - \\sum_m \\eta_{i,m}^{(t)} w_m^{(t)} \\Big) v_i^\\top,", "eq_num": "(11)" } ], "section": "Gradient-based Learning for A", "sec_num": "3.2" }, { "text": "where \u03b1_t is a learning rate; in iteration t, we choose \u03b1_t = 1/t. After convergence, we obtain the weights w by applying the SVM over the entire dataset, using the final A. The algorithm is summarized in Algorithm 1, and more implementation details are given in Section 4. While mini-batch learning requires more iterations, the SVM training is much faster in each batch, and the overall algorithm is several times faster than using the entire training set for each update.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gradient-based Learning for A", "sec_num": "3.2" },
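{ "text": "The alternating procedure of Algorithm 1 and Equation 11 can be sketched as follows; train_svm stands in for a dual-form SVM solver that returns both weights and dual variables, and it and the other names are assumptions for illustration rather than the DPLP API.

```python
import numpy as np

def minibatch_learn(samples, train_svm, tau, T, A0, eps, batch_size=500):
    # Alternate SVM training on mini-batches with gradient steps on A.
    #   samples   : list of (v, y) pairs; v is a (3V,) feature vector
    #   train_svm : hypothetical solver returning (W, eta) on projected data
    A, first_step = A0, None
    for t in range(1, T + 1):
        idx = np.random.choice(len(samples), size=min(batch_size, len(samples)),
                               replace=False)
        V = np.stack([samples[i][0] for i in idx])
        y = np.array([samples[i][1] for i in idx])
        W, eta = train_svm(V @ A.T, y)   # SVM in the current latent space
        alpha = 1.0 / t                  # learning rate from Algorithm 1
        # Equation 11; terms for non-support vectors vanish automatically.
        step = sum(np.outer(W[y[j]] - eta[j] @ W, V[j]) for j in range(len(idx)))
        A_new = (1.0 - alpha * tau) * A + alpha * step
        if first_step is None:
            first_step = np.linalg.norm(A_new - A) + 1e-12
        if np.linalg.norm(A_new - A) / first_step < eps:
            return A_new
        A = A_new
    return A
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gradient-based Learning for A", "sec_num": "3.2" },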
{ "text": "The learning algorithm is applied in a shift-reduce parser, where the training data consists of the (unique) list of shift and reduce operations required to produce the gold RST parses. On test data, we choose parsing operations in an online fashion: at each step, the parsing algorithm changes the status of the stack and the queue according to the selected transition, then creates the next sample with the updated status.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation", "sec_num": "4" }, { "text": "There are three free parameters in our approach: the latent dimension K, and the regularization parameters \u03bb and \u03c4. We consider the values K \u2208 {30, 60, 90, 150}, \u03bb \u2208 {1, 10, 50, 100} and \u03c4 \u2208 {1.0, 0.1, 0.01, 0.001}, and search over this space using a development set of thirty documents randomly selected from within the RST Treebank training data. We initialize each element of A_0 to a uniform random value in the range [0, 1]. For mini-batch learning, we fixed the batch size to 500 training samples (shift-reduce operations) in each iteration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameters and Initialization", "sec_num": "4.1" }, { "text": "As described thus far, our model considers only the projected representation of each EDU in its parsing decisions. But prior work has shown that other, structural features can provide useful information (Joty et al., 2013). We therefore augment our classifier with a set of simple feature templates. These templates are applied to individual EDUs, as well as to pairs of EDUs: (1) the two EDUs on top of the stack, and (2) the EDU on top of the stack and the EDU in front of the queue. The features are shown in Table 2. In computing these features, all tokens are downcased, and numerical features are not binned. The dependency structure and POS tags are obtained from MaltParser (Nivre et al., 2007).", "cite_spans": [ { "start": 203, "end": 222, "text": "(Joty et al., 2013)", "ref_id": "BIBREF17" }, { "start": 682, "end": 702, "text": "(Nivre et al., 2007)", "ref_id": "BIBREF34" } ], "ref_spans": [ { "start": 510, "end": 517, "text": "Table 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Additional features", "sec_num": "4.2" }, { "text": "We evaluate DPLP on the RST Discourse Treebank (Carlson et al., 2001), comparing against state-of-the-art results. We also investigate the information encoded by the projection matrix.", "cite_spans": [ { "start": 47, "end": 69, "text": "(Carlson et al., 2001)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Dataset: The RST Discourse Treebank (RST-DT) consists of 385 documents, with 347 for training and 38 for testing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "Table 2: Feature templates, with examples.
Words at beginning and end of the EDU : BEGIN-WORD-STACK1 = but; BEGIN-WORD-STACK1-QUEUE1 = but, the
POS tag at beginning and end of the EDU : BEGIN-TAG-STACK1 = CC; BEGIN-TAG-STACK1-QUEUE1 = CC, DT
Head word set from each EDU, including words whose parent in the dependency graph is ROOT or is not within the EDU (Sagae, 2009) : HEAD-WORDS-STACK2 = working
Length of EDU in tokens : LEN-STACK1-STACK2 = 7, 8
Distance between EDUs : DIST-STACK1-QUEUE1 = 2
Distance from the EDU to the beginning of the document : DIST-FROM-START-QUEUE1 = 3
Distance from the EDU to the end of the document : DIST-FROM-END-STACK1 = 1
Whether two EDUs are in the same sentence : SAME-SENT-STACK1-QUEUE1 = True", "cite_spans": [ { "start": 314, "end": 327, "text": "(Sagae, 2009)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Examples", "sec_num": null }, { "text": "Fixed projection matrix baselines: Instead of learning from data, a simple way to obtain a projection matrix is to use matrix factorization. Recent work has demonstrated the effectiveness of non-negative matrix factorization (NMF) for measuring distributional similarity (Dinu and Lapata, 2010; Van de Cruys and Apidianaki, 2011). We can construct B_nmf in the concatenation form of the projection matrix by applying NMF to the EDU-feature matrix, M \u2248 WH. As a result, W describes each EDU with a K-dimensional vector, and H describes each word with a K-dimensional vector. We can then construct B_nmf by taking the pseudo-inverse of H, which then projects from word-count vectors into the latent space.", "cite_spans": [ { "start": 593, "end": 616, "text": "(Dinu and Lapata, 2010;", "ref_id": "BIBREF10" }, { "start": 617, "end": 651, "text": "Van de Cruys and Apidianaki, 2011)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" },
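{ "text": "A sketch of this baseline with scikit-learn and numpy; M is the EDU-by-feature count matrix, and the function name is illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_projection(M, K):
    # Factor the EDU-feature matrix M ~ WH, then invert H so that
    # word-count vectors can be mapped into the K-dimensional latent space.
    model = NMF(n_components=K, init="nndsvd", random_state=0)
    W = model.fit_transform(M)      # (num_edus, K): EDU representations
    H = model.components_           # (K, V): word representations
    return np.linalg.pinv(H).T      # B_nmf, (K, V): applied as B_nmf @ v
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" },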
{ "text": "Another way to construct B is to use neural word embeddings (Collobert and Weston, 2008). In this case, we can view the product Bv as a composition of the word embeddings, using the simple additive composition model proposed by Mitchell and Lapata (2010). We used the word embeddings from Collobert and Weston (2008) with dimension {25, 50, 100}. Grid search over held-out training data was used to select the optimum latent dimension for both the NMF and word embedding baselines. Note that the size K of the resulting projection matrix is three times the size of the embedding (or NMF representation) due to the concatenation construction.", "cite_spans": [ { "start": 1087, "end": 1115, "text": "(Collobert and Weston, 2008)", "ref_id": "BIBREF6" }, { "start": 1256, "end": 1282, "text": "Mitchell and Lapata (2010)", "ref_id": "BIBREF31" }, { "start": 1318, "end": 1345, "text": "Collobert and Weston (2008)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "We also consider the special case where A = I, i.e., no projection at all.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "Competitive systems: We compare our approach with HILDA (Hernault et al., 2010) and TSP (Joty et al., 2013). Joty et al. (2013) proposed two different approaches to combining sentence-level parsing models: sliding windows (TSP SW) and 1 sentence-1 subtree (TSP 1-1). In the comparison, we report the results of both approaches. All results are based on the same gold-standard EDU segmentation. We cannot compare with the results of Feng and Hirst (2012), because they do not evaluate on the overall discourse structure, but rather treat each relation as an individual classification problem.", "cite_spans": [ { "start": 55, "end": 78, "text": "(Hernault et al., 2010)", "ref_id": "BIBREF15" }, { "start": 87, "end": 106, "text": "(Joty et al., 2013)", "ref_id": "BIBREF17" }, { "start": 109, "end": 127, "text": "Joty et al. (2013)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "Metrics: To evaluate parsing performance, we use the three standard measures: unlabeled (i.e., hierarchical spans) and labeled (i.e., nuclearity and relation) F-score, as defined by Black et al. (1991). The application of this approach to RST parsing is described by Marcu (2000b). 3 To compare with previous work on RST-DT, we use the 18 coarse-grained relations defined in (Carlson et al., 2001).", "cite_spans": [ { "start": 208, "end": 227, "text": "Black et al. (1991)", "ref_id": "BIBREF2" }, { "start": 294, "end": 307, "text": "Marcu (2000b)", "ref_id": "BIBREF28" }, { "start": 310, "end": 311, "text": "3", "ref_id": null }, { "start": 404, "end": 425, "text": "(Carlson et al., 2001", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" },
{ "text": "[Table 3: RST parsing results. Columns: Matrix Form, +Features, K, Span, Nuclearity, Relation. Prior work: 1. HILDA (Hernault et al., 2010): 83.0 / 68.4 / 54.8; 2. TSP 1-1 (Joty et al., 2013): 82.47 / 68.43 / 55.73; 3. TSP SW (Joty et al., 2013): 82... (the remaining rows, including the baselines and DPLP variants referenced as lines 4-11 below, were not recovered in extraction).]", "cite_spans": [ { "start": 69, "end": 92, "text": "(Hernault et al., 2010)", "ref_id": "BIBREF15" }, { "start": 119, "end": 138, "text": "(Joty et al., 2013)", "ref_id": "BIBREF17" }, { "start": 167, "end": 186, "text": "(Joty et al., 2013)", "ref_id": "BIBREF17" }, { "start": 190, "end": 209, "text": "(Joty et al., 2013;", "ref_id": "BIBREF17" }, { "start": 210, "end": 232, "text": "Hernault et al., 2010)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "Table 3 presents RST parsing results for DPLP and some alternative systems. All versions of DPLP outperform the prior state-of-the-art on nuclearity and relation detection. This includes relatively simple systems whose features are simply a projection of the word count vectors for each EDU (lines 7 and 8). The addition of the features from Table 2 improves performance further, leading to absolute F-score improvements of around 2.5% in nuclearity and 6% in relation prediction (lines 9 and 10).", "cite_spans": [], "ref_spans": [ { "start": 235, "end": 242, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 577, "end": 584, "text": "Table 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "On span detection, DPLP performs slightly worse than the prior state-of-the-art. These systems employ richer syntactic and contextual features, which might be especially helpful for span identification. As shown by line 4 of the results table, the basic features from Table 2 provide most of the predictive power for spans; however, these features are inadequate at the more semantically-oriented tasks of nuclearity and relation prediction, which benefit substantially from the projected features. Since correctly identifying spans is a precondition for nuclearity and relation prediction, we might obtain still better results by combining features from HILDA and TSP with the representation learning approach described here.", "cite_spans": [], "ref_spans": [ { "start": 268, "end": 275, "text": "Table 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "5.2" }, { "text": "Lines 5 and 6 show that discriminative learning of the projection matrix is crucial, as fixed projections obtained from NMF or neural word embeddings perform substantially worse. Line 7 shows that the original bag-of-words representation together with the basic features provides some benefit for discourse parsing, but still falls short of the results from DPLP. From lines 8 and 9, we see that the concatenation construction is superior to the difference construction, but the comparison between lines 10 and 11 is inconclusive on the merits of the general form of A. This suggests that using the projection matrix to model interrelationships between EDUs does not substantially improve performance, and the simpler concatenation construction may be preferred. Figure 3 shows how performance changes for different latent dimensions K. At each value of K, we employ grid search over a development set to identify the optimal regularizers \u03bb and \u03c4. For the concatenation construction, performance is not overly sensitive to K. For the general form of A, performance decreases with large K. 
Recall from Section 2.3 that this construction has nine times as many parameters as the concatenation form; with large values of K, it is likely to overfit.", "cite_spans": [], "ref_spans": [ { "start": 759, "end": 767, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "5.2" }, { "text": "Why does projection of the surface features improve discourse parsing? To answer this question, we examine what information the projection matrix is learning to encode. We take the projection matrix from the concatenation construction with K = 60 as a case study. Recalling the definition in Equation 4, the projection matrix A is composed of three identical submatrices B \u2208 R^{20\u00d7V}. The columns of the B matrix can be viewed as 20-dimensional descriptors of the words in the vocabulary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of Projection Matrix", "sec_num": "5.3" }, { "text": "For the purpose of visualization, we further reduce the dimension of the latent representation from 20 to 2 using t-SNE (van der Maaten and Hinton, 2008). One further simplification for visualization is that we consider only the top 1000 most frequent unigrams in the RST-DT training set. For comparison, we also apply t-SNE to the projection matrix B_nmf recovered from non-negative matrix factorization.", "cite_spans": [ { "start": 140, "end": 164, "text": "(van der Maaten and Hinton, 2008)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Analysis of Projection Matrix", "sec_num": "5.3" }, { "text": "[Figure 3: The performance of our parser over different latent dimensions K, with panels for (a) span, (b) nuclearity, and (c) relation, comparing concatenation DPLP, general DPLP, TSP 1-1 (Joty et al., 2013), and HILDA (Hernault et al., 2010). Results for DPLP include the additional features from Table 2.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of Projection Matrix", "sec_num": "5.3" },
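{ "text": "A sketch of this visualization step with scikit-learn, treating each column of B as a 20-dimensional word descriptor; the variable names are illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_words_2d(B, top_word_ids):
    # Reduce the per-word columns of B (20 x V) to 2-D points for plotting.
    X = B[:, top_word_ids].T          # (n_words, 20): one row per word
    return TSNE(n_components=2).fit_transform(X)
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of Projection Matrix", "sec_num": "5.3" },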
{ "text": "Figure 4 highlights words that are related to discourse analysis. Among the top 1000 words, we highlight the words from five major discourse connective categories provided in Appendix B of the PDTB annotation manual (Prasad et al., 2008): CONJUNCTION, CONTRAST, PRECEDENCE, RESULT, and SUCCESSION. In addition, we also highlight two verb categories from the top 1000 words: modal verbs and reporting verbs, with their inflections (Krestel et al., 2008).", "cite_spans": [ { "start": 1049, "end": 1070, "text": "(Prasad et al., 2008)", "ref_id": "BIBREF35" }, { "start": 1267, "end": 1289, "text": "(Krestel et al., 2008)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 836, "end": 844, "text": "Figure 4", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Analysis of Projection Matrix", "sec_num": "5.3" }, { "text": "From the figure, it is clear that DPLP has learned a projection matrix that successfully groups several major discourse-related word classes: particularly modal and reporting verbs; it has also grouped succession and precedence connectives with some success. In contrast, while NMF does obtain compact clusters of words, these clusters appear to be completely unrelated to the discourse function of the words that they include. This demonstrates the value of using discriminative training to obtain the transformed representation of the discourse units.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of Projection Matrix", "sec_num": "5.3" }, { "text": "Early work on document-level discourse parsing applied hand-crafted rules and heuristics to build trees in the framework of Rhetorical Structure Theory (Sumita et al., 1992; Corston-Oliver, 1998; Marcu, 2000a). An early data-driven approach was offered by Schilder (2002), who used distributional techniques to rate the topicality of each discourse unit, and then chose among underspecified discourse structures by placing more topical sentences near the root. Learning-based approaches were first applied to identify within-sentence discourse relations (Soricut and Marcu, 2003), and only later to cross-sentence relations at the document level (Baldridge and Lascarides, 2005). Of particular relevance to our inference technique are incremental discourse parsing approaches, such as shift-reduce (Sagae, 2009) and A* (Muller et al., 2012). Prior learning-based work has largely focused on lexical, syntactic, and structural features, but the close relationship between discourse structure and semantics (Forbes-Riley et al., 2006) suggests that shallow feature sets may struggle to capture the long tail of alternative lexicalizations that can be used to realize discourse relations (Prasad et al., 2010; Marcu and Echihabi, 2002). 
Only Subba and Di Eugenio (2009) incorporate rich compositional semantics into discourse parsing, but due to the ambiguity of their semantic parser, they must manually select the correct semantic parse from a forest of possibilities.", "cite_spans": [ { "start": 152, "end": 173, "text": "(Sumita et al., 1992;", "ref_id": "BIBREF45" }, { "start": 174, "end": 195, "text": "Corston-Oliver, 1998;", "ref_id": "BIBREF8" }, { "start": 196, "end": 208, "text": "Marcu, 2000a", "ref_id": "BIBREF27" }, { "start": 257, "end": 272, "text": "Schilder (2002)", "ref_id": "BIBREF38" }, { "start": 557, "end": 582, "text": "(Soricut and Marcu, 2003)", "ref_id": "BIBREF43" }, { "start": 650, "end": 682, "text": "(Baldridge and Lascarides, 2005)", "ref_id": "BIBREF0" }, { "start": 803, "end": 816, "text": "(Sagae, 2009)", "ref_id": "BIBREF37" }, { "start": 824, "end": 845, "text": "(Muller et al., 2012)", "ref_id": "BIBREF32" }, { "start": 1011, "end": 1038, "text": "(Forbes-Riley et al., 2006)", "ref_id": "BIBREF13" }, { "start": 1191, "end": 1212, "text": "(Prasad et al., 2010;", "ref_id": "BIBREF36" }, { "start": 1213, "end": 1238, "text": "Marcu and Echihabi, 2002)", "ref_id": "BIBREF24" }, { "start": 1246, "end": 1273, "text": "Subba and Di Eugenio (2009)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Recent work has succeeded in pushing the state-of-the-art in RST parsing by innovating on several fronts. Feng and Hirst (2012) explore rich linguistic features, including lexical semantics and discourse production rules suggested by Lin et al. (2009) in the context of the Penn Discourse Treebank (Prasad et al., 2008). Muller et al. (2012) show that A* decoding can outperform both greedy and graph-based decoding algorithms. Joty et al. (2013) achieve the best prior results on RST relation detection by (i) jointly performing relation detection and classification, (ii) performing bottom-up rather than greedy decoding, and (iii) distinguishing between intra-sentence and inter-sentence relations. Our approach is largely orthogonal to this prior work: we focus on transforming the lexical representation of discourse units into a latent space to facilitate learning. As shown in Figure 4 (a), this projection succeeds at grouping words with similar discourse functions. We might expect to obtain further improvements by augmenting this representation learning approach with rich syntactic features (particularly for span identification), more accurate decoding, and special treatment of intra-sentence relations; this is a direction for future research. Discriminative learning of latent features for discourse processing can be viewed as a form of representation learning (Bengio et al., 2013). Also called deep learning, such approaches have recently been applied in a number of NLP tasks (Collobert et al., 2011; Socher et al., 2012). Of particular relevance are applications to the detection of semantic or discourse relations, such as paraphrase, by comparing sentences in an induced latent space (Socher et al., 2011; Guo and Diab, 2012; Ji and Eisenstein, 2013). In this work, we show how discourse structure annotations can function as a supervision signal to discriminatively learn a transformation from lexical features to a latent space that is well-suited for discourse parsing. Unlike much of the prior work on representation learning, we induce a simple linear transformation. 
Extending our approach to incorporate a non-linear activation function is a natural topic for future research.", "cite_spans": [ { "start": 244, "end": 261, "text": "Lin et al. (2009)", "ref_id": "BIBREF22" }, { "start": 308, "end": 329, "text": "(Prasad et al., 2008)", "ref_id": "BIBREF35" }, { "start": 332, "end": 352, "text": "Muller et al. (2012)", "ref_id": "BIBREF32" }, { "start": 439, "end": 457, "text": "Joty et al. (2013)", "ref_id": "BIBREF17" }, { "start": 1391, "end": 1412, "text": "(Bengio et al., 2013)", "ref_id": "BIBREF1" }, { "start": 1510, "end": 1534, "text": "(Collobert et al., 2011;", "ref_id": "BIBREF7" }, { "start": 1535, "end": 1555, "text": "Socher et al., 2012)", "ref_id": "BIBREF40" }, { "start": 1722, "end": 1743, "text": "(Socher et al., 2011;", "ref_id": "BIBREF39" }, { "start": 1744, "end": 1763, "text": "Guo and Diab, 2012;", "ref_id": "BIBREF14" }, { "start": 1764, "end": 1788, "text": "Ji and Eisenstein, 2013)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 897, "end": 905, "text": "Figure 4", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "We have presented a framework to perform discourse parsing while jointly learning to project to a low-dimensional representation of the discourse units. Using the vector-space representation of EDUs, our shift-reduce parsing system substantially outperforms existing systems on nuclearity detection and discourse relation identification. By adding some additional surface features, we obtain further improvements. The low-dimensional representation also captures basic intuitions about discourse connectives and verbs, as shown in Figure 4(a).", "cite_spans": [], "ref_spans": [ { "start": 531, "end": 542, "text": "Figure 4(a)", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Deep learning approaches typically apply a non-linear transformation such as the sigmoid function (Bengio et al., 2013). We have conducted a few unsuccessful experiments with the \"hard tanh\" function proposed by Collobert and Weston (2008), but a more complete exploration of non-linear transformations must wait for future work. Another direction would be more sophisticated composition of the surface features within each elementary discourse unit, such as the hierarchical convolutional neural network (Kalchbrenner and Blunsom, 2013) or the recursive tensor network (Socher et al., 2013). It seems likely that a better accounting for syntax could improve the latent representations that our method induces.", "cite_spans": [ { "start": 98, "end": 119, "text": "(Bengio et al., 2013)", "ref_id": "BIBREF1" }, { "start": 213, "end": 240, "text": "Collobert and Weston (2008)", "ref_id": "BIBREF6" }, { "start": 572, "end": 593, "text": "(Socher et al., 2013)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We implemented the evaluation metrics ourselves. Together with the DPLP system, all code is published at https://github.com/jiyfeng/DPLP", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the reviewers for their helpful feedback, particularly for the connection to multitask learning. We also want to thank Kenji Sagae and Vanessa Wei Feng for helpful discussions via email. 
This research was supported by Google Faculty Research Awards to the second author.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Probabilistic head-driven parsing for discourse structure", "authors": [ { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Lascarides", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Ninth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "96--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Baldridge and Alex Lascarides. 2005. Probabilistic head-driven parsing for discourse structure. In Proceedings of the Ninth Conference on Computational Natural Language Learning, pages 96-103.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Representation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" } ], "year": 2013, "venue": "", "volume": "35", "issue": "", "pages": "1798--1828", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars", "authors": [ { "first": "Ezra", "middle": [], "last": "Black", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Abney", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Gdaniec", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Harrison", "suffix": "" }, { "first": "Don", "middle": [], "last": "Hindle", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Ingria", "suffix": "" }, { "first": "Fred", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "Judith", "middle": [], "last": "Klavans", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Liberman", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Tomek", "middle": [], "last": "Strzalkowski", "suffix": "" } ], "year": 1991, "venue": "Speech and Natural Language: Proceedings of a Workshop Held at", "volume": "", "issue": "", "pages": "306--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ezra Black, Steve Abney, Dan Flickinger, Claudia Gdaniec, Ralph Grishman, Phil Harrison, Don Hindle, Robert Ingria, Fred Jelinek, Judith Klavans, Mark Liberman, Mitchell Marcus, Salim Roukos, Beatrice Santorini, and Tomek Strzalkowski. 1991. A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars. 
In Speech and Natural Language: Proceedings of a Workshop Held at Pacific Grove, California, February 19-22, 1991, pages 306-311.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Holistic discourse coherence annotation for noisy essay writing", "authors": [ { "first": "Jill", "middle": [], "last": "Burstein", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Chodorow", "suffix": "" } ], "year": 2013, "venue": "Dialogue & Discourse", "volume": "4", "issue": "2", "pages": "34--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jill Burstein, Joel Tetreault, and Martin Chodorow. 2013. Holistic discourse coherence annotation for noisy essay writing. Dialogue & Discourse, 4(2):34-52.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Building a Discourse-tagged Corpus in the Framework of Rhetorical Structure Theory", "authors": [ { "first": "Lynn", "middle": [], "last": "Carlson", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Mary", "middle": [ "Ellen" ], "last": "Okurowski", "suffix": "" } ], "year": 2001, "venue": "Proceedings of Second SIGdial Workshop on Discourse and Dialogue", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2001. Building a Discourse-tagged Corpus in the Framework of Rhetorical Structure Theory. In Proceedings of Second SIGdial Workshop on Discourse and Dialogue.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Incremental parsing with the perceptron algorithm", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ACL, page 111. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of ACL, page 111. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning", "authors": [ { "first": "R", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "J", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Collobert and J. Weston. 2008. A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning. In ICML.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Natural Language Processing (Almost) from Scratch", "authors": [ { "first": "R", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "J", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "M", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "K", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "P", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011.
Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12:2493-2537.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Beyond string matching and cue phrases: Improving efficiency and coverage in discourse analysis", "authors": [ { "first": "Simon", "middle": [], "last": "Corston-Oliver", "suffix": "" } ], "year": 1998, "venue": "The AAAI Spring Symposium on Intelligent Text Summarization", "volume": "", "issue": "", "pages": "9--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon Corston-Oliver. 1998. Beyond string matching and cue phrases: Improving efficiency and coverage in discourse analysis. In The AAAI Spring Symposium on Intelligent Text Summarization, pages 9-15.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "On the Algorithmic Implementation of Multiclass Kernel-based Vector Machines", "authors": [ { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2001, "venue": "Journal of Machine Learning Research", "volume": "2", "issue": "", "pages": "265--292", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koby Crammer and Yoram Singer. 2001. On the Algorithmic Implementation of Multiclass Kernel-based Vector Machines. Journal of Machine Learning Research, 2:265-292.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Measuring Distributional Similarity in Context", "authors": [ { "first": "Georgiana", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2010, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1162--1172", "other_ids": {}, "num": null, "urls": [], "raw_text": "Georgiana Dinu and Mirella Lapata. 2010. Measuring Distributional Similarity in Context. In EMNLP, pages 1162-1172.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Text-level Discourse Parsing with Rich Linguistic Features", "authors": [ { "first": "Vanessa", "middle": [ "Wei" ], "last": "Feng", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2012, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vanessa Wei Feng and Graeme Hirst. 2012. Text-level Discourse Parsing with Rich Linguistic Features. In Proceedings of ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Building Watson: An overview of the DeepQA project", "authors": [ { "first": "David", "middle": [], "last": "Ferrucci", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Chu-Carroll", "suffix": "" }, { "first": "James", "middle": [], "last": "Fan", "suffix": "" }, { "first": "David", "middle": [], "last": "Gondek", "suffix": "" }, { "first": "Aditya", "middle": [ "A" ], "last": "Kalyanpur", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lally", "suffix": "" }, { "first": "William", "middle": [], "last": "Murdock", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Nyberg", "suffix": "" }, { "first": "John", "middle": [], "last": "Prager", "suffix": "" } ], "year": 2010, "venue": "AI Magazine", "volume": "31", "issue": "3", "pages": "59--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. 2010.
Building Watson: An overview of the DeepQA project. AI Magazine, 31(3):59-79.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Computing discourse semantics: The predicate-argument semantics of discourse connectives in D-LTAG", "authors": [ { "first": "Katherine", "middle": [], "last": "Forbes-Riley", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Webber", "suffix": "" }, { "first": "Aravind", "middle": [], "last": "Joshi", "suffix": "" } ], "year": 2006, "venue": "Journal of Semantics", "volume": "23", "issue": "1", "pages": "55--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katherine Forbes-Riley, Bonnie Webber, and Aravind Joshi. 2006. Computing discourse semantics: The predicate-argument semantics of discourse connectives in D-LTAG. Journal of Semantics, 23(1):55-106.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Modeling Sentences in the Latent Space", "authors": [ { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" } ], "year": 2012, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "864--872", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiwei Guo and Mona Diab. 2012. Modeling Sentences in the Latent Space. In Proceedings of ACL, pages 864-872, Jeju Island, Korea, July. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "HILDA: A Discourse Parser Using Support Vector Machine Classification", "authors": [ { "first": "Hugo", "middle": [], "last": "Hernault", "suffix": "" }, { "first": "Helmut", "middle": [], "last": "Prendinger", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "duVerle", "suffix": "" }, { "first": "Mitsuru", "middle": [], "last": "Ishizuka", "suffix": "" } ], "year": 2010, "venue": "Dialogue and Discourse", "volume": "1", "issue": "3", "pages": "1--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hugo Hernault, Helmut Prendinger, David A. duVerle, and Mitsuru Ishizuka. 2010. HILDA: A Discourse Parser Using Support Vector Machine Classification. Dialogue and Discourse, 1(3):1-33.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Discriminative Improvements to Distributional Sentence Similarity", "authors": [ { "first": "Yangfeng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "", "issue": "", "pages": "891--896", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yangfeng Ji and Jacob Eisenstein. 2013. Discriminative Improvements to Distributional Sentence Similarity. In EMNLP, pages 891-896, Seattle, Washington, USA, October. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Combining Intra- and Multi-sentential Rhetorical Parsing for Document-level Discourse Analysis", "authors": [ { "first": "Shafiq", "middle": [], "last": "Joty", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Carenini", "suffix": "" }, { "first": "Raymond", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Yashar", "middle": [], "last": "Mehdad", "suffix": "" } ], "year": 2013, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shafiq Joty, Giuseppe Carenini, Raymond Ng, and Yashar Mehdad. 2013. Combining Intra- and Multi-sentential Rhetorical Parsing for Document-level Discourse Analysis.
In Proceedings of ACL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Recurrent convolutional neural networks for discourse compositionality", "authors": [ { "first": "Nal", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality", "volume": "", "issue": "", "pages": "119--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent convolutional neural networks for discourse compo- sitionality. In Proceedings of the Workshop on Con- tinuous Vector Space Models and their Composition- ality, pages 119-126, Sofia, Bulgaria, August. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Simple Semi-supervised Dependency Parsing", "authors": [ { "first": "Terry", "middle": [], "last": "Koo", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2008, "venue": "Association for Computational Linguistics", "volume": "", "issue": "", "pages": "595--603", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple Semi-supervised Dependency Pars- ing. In Proceedings of ACL-HLT, pages 595-603, Columbus, Ohio, June. Association for Computa- tional Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Minding the Source: Automatic Tagging of Reported Speech in Newspaper Articles", "authors": [ { "first": "Ralf", "middle": [], "last": "Krestel", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "Bergler", "suffix": "" }, { "first": "Ren\u00e9", "middle": [], "last": "Witte", "suffix": "" } ], "year": 2008, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralf Krestel, Sabine Bergler, and Ren\u00e9 Witte. 2008. Minding the Source: Automatic Tagging of Re- ported Speech in Newspaper Articles. In LREC, Marrakech, Morocco, May. European Language Re- sources Association (ELRA).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "On Optimization Methods for Deep Learning", "authors": [ { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Jiquan", "middle": [], "last": "Le", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Ngiam", "suffix": "" }, { "first": "Abhik", "middle": [], "last": "Coates", "suffix": "" }, { "first": "Bobby", "middle": [], "last": "Lahiri", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Prochnow", "suffix": "" }, { "first": "", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2011, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quoc V. Le, Jiquan Ngiam, Adam Coates, Abhik Lahiri, Bobby Prochnow, and Andrew Y. Ng. 2011. On Optimization Methods for Deep Learning. 
In ICML.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Recognizing Implicit Discourse Relations in the Penn Discourse Treebank", "authors": [ { "first": "Ziheng", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2009, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing Implicit Discourse Relations in the Penn Discourse Treebank. In EMNLP.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Discourse indicators for content selection in summarization", "authors": [ { "first": "Annie", "middle": [], "last": "Louis", "suffix": "" }, { "first": "Aravind", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue", "volume": "", "issue": "", "pages": "147--156", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annie Louis, Aravind Joshi, and Ani Nenkova. 2010. Discourse indicators for content selection in summa- rization. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Di- alogue, pages 147-156. Association for Computa- tional Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "An Unsupervised Approach to Recognizing Discourse Relations", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Abdessamad", "middle": [], "last": "Echihabi", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "368--375", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Marcu and Abdessamad Echihabi. 2002. An Unsupervised Approach to Recognizing Discourse Relations. In Proceedings of ACL, pages 368-375, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Building Up Rhetorical Structure Trees", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 1996, "venue": "Proceedings of AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Marcu. 1996. Building Up Rhetorical Structure Trees. In Proceedings of AAAI.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A Decision-Based Approach to Rhetorical Parsing", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 1999, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "365--372", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Marcu. 1999. A Decision-Based Approach to Rhetorical Parsing. In Proceedings of ACL, pages 365-372, College Park, Maryland, USA, June. As- sociation for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "The Rhetorical Parsing of Unrestricted Texts: A Surface-based Approach", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics", "volume": "26", "issue": "", "pages": "395--448", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Marcu. 2000a. The Rhetorical Parsing of Un- restricted Texts: A Surface-based Approach. 
Computational Linguistics, 26:395-448.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "The Theory and Practice of Discourse Parsing and Summarization", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Marcu. 2000b. The Theory and Practice of Discourse Parsing and Summarization. MIT Press.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Name Tagging with Word Clusters and Discriminative Training", "authors": [ { "first": "Scott", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Jethran", "middle": [], "last": "Guinness", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Zamanian", "suffix": "" } ], "year": 2004, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "337--342", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott Miller, Jethran Guinness, and Alex Zamanian. 2004. Name Tagging with Word Clusters and Discriminative Training. In Daniel Marcu, Susan Dumais, and Salim Roukos, editors, HLT-NAACL, pages 337-342, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Evaluation of text coherence for electronic essay scoring systems", "authors": [ { "first": "Eleni", "middle": [], "last": "Miltsakaki", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Kukich", "suffix": "" } ], "year": 2004, "venue": "Natural Language Engineering", "volume": "10", "issue": "1", "pages": "25--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eleni Miltsakaki and Karen Kukich. 2004. Evaluation of text coherence for electronic essay scoring systems. Natural Language Engineering, 10(1):25-55.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Composition in distributional models of semantics", "authors": [ { "first": "Jeff", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2010, "venue": "Cognitive Science", "volume": "34", "issue": "8", "pages": "1388--1429", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388-1429.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Constrained Decoding for Text-Level Discourse Parsing", "authors": [ { "first": "Philippe", "middle": [], "last": "Muller", "suffix": "" }, { "first": "Stergos", "middle": [], "last": "Afantenos", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Denis", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Asher", "suffix": "" } ], "year": 2012, "venue": "The COLING 2012 Organizing Committee", "volume": "", "issue": "", "pages": "1883--1900", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philippe Muller, Stergos Afantenos, Pascal Denis, and Nicholas Asher. 2012. Constrained Decoding for Text-Level Discourse Parsing. In Coling, pages 1883-1900, Mumbai, India, December.
The COLING 2012 Organizing Committee.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Structured Penalties for Log-Linear Language Models", "authors": [ { "first": "Anil", "middle": [ "Kumar" ], "last": "Nelakanti", "suffix": "" }, { "first": "Cedric", "middle": [], "last": "Archambeau", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Mairal", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Bach", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Bouchard", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "", "issue": "", "pages": "233--243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anil Kumar Nelakanti, Cedric Archambeau, Julien Mairal, Francis Bach, and Guillaume Bouchard. 2013. Structured Penalties for Log-Linear Language Models. In EMNLP, pages 233-243, Seattle, Washington, USA, October. Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "MaltParser: A language-independent system for data-driven dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" }, { "first": "Atanas", "middle": [], "last": "Chanev", "suffix": "" }, { "first": "Gülsen", "middle": [], "last": "Eryigit", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "Kübler", "suffix": "" }, { "first": "Svetoslav", "middle": [], "last": "Marinov", "suffix": "" }, { "first": "Erwin", "middle": [], "last": "Marsi", "suffix": "" } ], "year": 2007, "venue": "Natural Language Engineering", "volume": "13", "issue": "2", "pages": "95--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, Gülsen Eryigit, Sandra Kübler, Svetoslav Marinov, and Erwin Marsi. 2007. MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(2):95-135.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "The Penn Discourse TreeBank 2.0", "authors": [ { "first": "Rashmi", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Dinesh", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Eleni", "middle": [], "last": "Miltsakaki", "suffix": "" }, { "first": "Livio", "middle": [], "last": "Robaldo", "suffix": "" }, { "first": "Aravind", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Webber", "suffix": "" } ], "year": 2008, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse TreeBank 2.0.
In LREC.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Realization of discourse relations by other means: alternative lexicalizations", "authors": [ { "first": "Rashmi", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Aravind", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Webber", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters", "volume": "", "issue": "", "pages": "1023--1031", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rashmi Prasad, Aravind Joshi, and Bonnie Webber. 2010. Realization of discourse relations by other means: alternative lexicalizations. In Proceedings of the 23rd International Conference on Computa- tional Linguistics: Posters, pages 1023-1031. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Analysis of Discourse Structure with Syntactic Dependencies and Data-Driven Shift-Reduce Parsing", "authors": [ { "first": "Kenji", "middle": [], "last": "Sagae", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 11th International Conference on Parsing Technologies (IWPT)", "volume": "", "issue": "", "pages": "81--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Sagae. 2009. Analysis of Discourse Structure with Syntactic Dependencies and Data-Driven Shift- Reduce Parsing. In Proceedings of the 11th Interna- tional Conference on Parsing Technologies (IWPT), pages 81-84, Paris, France, October. Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Robust discourse parsing via discourse markers, topicality and position", "authors": [ { "first": "Frank", "middle": [], "last": "Schilder", "suffix": "" } ], "year": 2002, "venue": "Natural Language Engineering", "volume": "8", "issue": "3", "pages": "235--255", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frank Schilder. 2002. Robust discourse parsing via discourse markers, topicality and position. Natural Language Engineering, 8(3):235-255.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Eric", "middle": [ "H" ], "last": "Huang", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011. Dynamic Pooling and Unfolding Recursive Autoen- coders for Paraphrase Detection. 
In NIPS.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Semantic Compositionality Through Recursive Matrix-Vector Spaces", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Brody", "middle": [], "last": "Huval", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2012, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Brody Huval, Christopher D. Man- ning, and Andrew Y. Ng. 2012. Semantic Composi- tionality Through Recursive Matrix-Vector Spaces. In EMNLP.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Y", "middle": [], "last": "Jean", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Y", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep mod- els for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Supervised and unsupervised methods in employing discourse relations for improving opinion polarity classification", "authors": [ { "first": "Swapna", "middle": [], "last": "Somasundaran", "suffix": "" }, { "first": "Galileo", "middle": [], "last": "Namata", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Lise", "middle": [], "last": "Getoor", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Swapna Somasundaran, Galileo Namata, Janyce Wiebe, and Lise Getoor. 2009. Supervised and unsupervised methods in employing discourse rela- tions for improving opinion polarity classification. In Proceedings of EMNLP.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Sentence Level Discourse Parsing using Syntactic and Lexical Information", "authors": [ { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radu Soricut and Daniel Marcu. 2003. Sentence Level Discourse Parsing using Syntactic and Lexical Infor- mation. 
In NAACL.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "An effective Discourse Parser that uses Rich Linguistic Information", "authors": [ { "first": "Rajen", "middle": [], "last": "Subba", "suffix": "" }, { "first": "Barbara", "middle": [ "Di" ], "last": "Eugenio", "suffix": "" } ], "year": 2009, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "566--574", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rajen Subba and Barbara Di Eugenio. 2009. An effec- tive Discourse Parser that uses Rich Linguistic In- formation. In NAACL-HLT, pages 566-574, Boul- der, Colorado, June. Association for Computational Linguistics.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "A discourse structure analyzer for Japanese text", "authors": [ { "first": "K", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "K", "middle": [], "last": "Ono", "suffix": "" }, { "first": "T", "middle": [], "last": "Chino", "suffix": "" }, { "first": "T", "middle": [], "last": "Ukita", "suffix": "" }, { "first": "S", "middle": [], "last": "Amano", "suffix": "" } ], "year": 1992, "venue": "Proceedings International Conference on Fifth Generation Computer Systems", "volume": "", "issue": "", "pages": "1133--1140", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Sumita, K. Ono, T. Chino, T. Ukita, and S. Amano. 1992. A discourse structure analyzer for Japanese text. In Proceedings International Conference on Fifth Generation Computer Systems, pages 1133- 1140.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Applications of rhetorical structure theory. Discourse studies", "authors": [ { "first": "Maite", "middle": [], "last": "Taboada", "suffix": "" }, { "first": "C", "middle": [], "last": "William", "suffix": "" }, { "first": "", "middle": [], "last": "Mann", "suffix": "" } ], "year": 2006, "venue": "", "volume": "8", "issue": "", "pages": "567--588", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maite Taboada and William C Mann. 2006. Applica- tions of rhetorical structure theory. Discourse stud- ies, 8(4):567-588.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Max-margin markov networks", "authors": [ { "first": "Benjamin", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Guestrin", "suffix": "" }, { "first": "Daphne", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2003, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Taskar, Carlos Guestrin, and Daphne Koller. 2003. Max-margin markov networks. In NIPS.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Word Representation: A Simple and General Method for Semi-Supervised Learning", "authors": [ { "first": "Joseph", "middle": [], "last": "Turian", "suffix": "" }, { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "384--394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word Representation: A Simple and General Method for Semi-Supervised Learning. 
In Proceedings of ACL, pages 384-394.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Latent Semantic Word Sense Induction and Disambiguation", "authors": [ { "first": "Tim", "middle": [], "last": "Van de Cruys", "suffix": "" }, { "first": "Marianna", "middle": [], "last": "Apidianaki", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1476--1485", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim Van de Cruys and Marianna Apidianaki. 2011. Latent Semantic Word Sense Induction and Disambiguation. In Proceedings of ACL, pages 1476-1485, Portland, Oregon, USA, June. Association for Computational Linguistics.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Visualizing Data using t-SNE", "authors": [ { "first": "Laurens", "middle": [], "last": "van der Maaten", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2008, "venue": "Journal of Machine Learning Research", "volume": "9", "issue": "", "pages": "2579--2605", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing Data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, November.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Not all words are created equal: Extracting semantic orientation as a function of adjective relevance", "authors": [ { "first": "Kimberly", "middle": [], "last": "Voll", "suffix": "" }, { "first": "Maite", "middle": [], "last": "Taboada", "suffix": "" } ], "year": 2007, "venue": "Proceedings of Australian Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kimberly Voll and Maite Taboada. 2007. Not all words are created equal: Extracting semantic orientation as a function of adjective relevance. In Proceedings of Australian Conference on Artificial Intelligence.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "A Reranking Model for Discourse Segmentation using Subtree Features", "authors": [ { "first": "Ngo Xuan", "middle": [], "last": "Bach", "suffix": "" }, { "first": "Nguyen Le", "middle": [], "last": "Minh", "suffix": "" }, { "first": "Akira", "middle": [], "last": "Shimazu", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue", "volume": "", "issue": "", "pages": "160--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ngo Xuan Bach, Nguyen Le Minh, and Akira Shimazu. 2012. A Reranking Model for Discourse Segmentation using Subtree Features. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 160-168.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Learning structural SVMs with latent variables", "authors": [ { "first": "Chun-Nam John", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 26th Annual International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1169--1176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chun-Nam John Yu and Thorsten Joachims. 2009. Learning structural SVMs with latent variables. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1169-1176.
ACM.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "Decision problem with different representation functions", "num": null }, "FIGREF4": { "uris": null, "type_str": "figure", "text": "Latent representation of words from non-negative matrix factorization with K = 20.", "num": null }, "FIGREF5": { "uris": null, "type_str": "figure", "text": "t-SNE Visualization on latent representations of words.", "num": null }, "TABREF0": { "html": null, "content": "
: Additional features for RST parsing
ing and 38 for testing in the standard split. As we focus on relational discourse parsing, we follow prior work (Feng and Hirst, 2012; Joty et al., 2013), and use gold EDU segmentations. The strongest automated RST segmentation methods currently attain 95% accuracy (Xuan Bach et al., 2012).
after down-casing. No other preprocessing is performed. In total, there are 16250 unique unigrams in V.
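To make this preprocessing concrete, the following is a minimal sketch in plain Python of building the down-cased unigram vocabulary V and the bag-of-words count vector for a single EDU; this is not the DPLP release, and the whitespace tokenizer and the example EDUs are illustrative assumptions.

```python
from collections import Counter

def build_vocab(edus):
    # Down-case every token; no stemming or other preprocessing,
    # matching the setup described above.
    counts = Counter(tok.lower() for edu in edus for tok in edu.split())
    return {word: i for i, word in enumerate(sorted(counts))}

def bow_vector(edu, vocab):
    # Bag-of-words count vector for one EDU over the vocabulary V.
    vec = [0] * len(vocab)
    for tok in edu.split():
        idx = vocab.get(tok.lower())
        if idx is not None:
            vec[idx] += 1
    return vec

# Hypothetical EDUs for illustration; in the paper's data, |V| = 16250.
edus = ["The projections are in the neighborhood of 50 cents a share",
        "when profit was $107.8 million"]
vocab = build_vocab(edus)
v = bow_vector(edus[0], vocab)
```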
", "type_str": "table", "num": null, "text": "" }, "TABREF2": { "html": null, "content": "", "type_str": "table", "num": null, "text": "Parsing results of different models on the RST-DT test set. The results of TSP and HILDA are reprinted from prior work" } } } }