{ "paper_id": "D16-1035", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:37:48.332511Z" }, "title": "Discourse Parsing with Attention-based Hierarchical Neural Networks", "authors": [ { "first": "Qi", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "qi.li@pku.edu.cn" }, { "first": "Tianshi", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "RST-style document-level discourse parsing remains a difficult task and efficient deep learning models on this task have rarely been presented. In this paper, we propose an attention-based hierarchical neural network model for discourse parsing. We also incorporate tensor-based transformation function to model complicated feature interactions. Experimental results show that our approach obtains comparable performance to the contemporary state-of-the-art systems with little manual feature engineering.", "pdf_parse": { "paper_id": "D16-1035", "_pdf_hash": "", "abstract": [ { "text": "RST-style document-level discourse parsing remains a difficult task and efficient deep learning models on this task have rarely been presented. In this paper, we propose an attention-based hierarchical neural network model for discourse parsing. We also incorporate tensor-based transformation function to model complicated feature interactions. Experimental results show that our approach obtains comparable performance to the contemporary state-of-the-art systems with little manual feature engineering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A document is formed by a series of coherent text units. Document-level discourse parsing is a task to identify the relations between the text units and to determine the structure of the whole document the text units form. Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) is one of the most influential discourse theories. According to RST, the discourse structure of a document can be represented by a Discourse Tree (DT). Each leaf of a DT denotes a text unit referred to as an Elementary Discourse Unit (EDU) and an inner node of a DT represents a text span which is constituted by several adjacent EDUs. DTs can be utilized by many NLP tasks including automatic document summarization (Louis et al., 2010; Marcu, 2000) , question-answering (Verberne et al., 2007) and sentiment analysis (Somasundaran, 2010) etc.", "cite_spans": [ { "start": 257, "end": 282, "text": "(Mann and Thompson, 1988)", "ref_id": "BIBREF16" }, { "start": 700, "end": 720, "text": "(Louis et al., 2010;", "ref_id": "BIBREF15" }, { "start": 721, "end": 733, "text": "Marcu, 2000)", "ref_id": "BIBREF18" }, { "start": 755, "end": 778, "text": "(Verberne et al., 2007)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Much work has been devoted to the task of RSTstyle discourse parsing and most state-of-the-art ap-proaches heavily rely on manual feature engineering (Joty et al., 2013; Feng and Hirst, 2014; Ji and Eisenstein, 2014) . 
While neural network models have been increasingly focused on for their ability to automatically extract efficient features which reduces the burden of feature engineering, there is little neural network based work for RST-style discourse parsing except the work of Li et al. (2014a) . Li et al. (2014a) propose a recursive neural network model to compute the representation for each text span based on the representations of its subtrees. However, vanilla recursive neural networks suffer from gradient vanishing for long sequences and the normal transformation function they use is weak at modeling complicated interactions which has been stated by Socher et al. (2013) . As many documents contain more than a hundred EDUs which form quite a long sequence, those weaknesses may lead to inferior results on this task.", "cite_spans": [ { "start": 150, "end": 169, "text": "(Joty et al., 2013;", "ref_id": "BIBREF9" }, { "start": 170, "end": 191, "text": "Feng and Hirst, 2014;", "ref_id": "BIBREF2" }, { "start": 192, "end": 216, "text": "Ji and Eisenstein, 2014)", "ref_id": "BIBREF8" }, { "start": 485, "end": 502, "text": "Li et al. (2014a)", "ref_id": "BIBREF13" }, { "start": 505, "end": 522, "text": "Li et al. (2014a)", "ref_id": "BIBREF13" }, { "start": 870, "end": 890, "text": "Socher et al. (2013)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose to use a hierarchical bidirectional Long Short-Term Memory (bi-LSTM) network to learn representations of text spans. Comparing with vanilla recursive/recurrent neural networks, LSTM-based networks can store information for a long period of time and don't suffer from gradient vanishing problem. We apply a hierarchical bi-LSTM network because the way words form an EDU and EDUs form a text span is different and thus they should be modeled separately and hierarchically. On top of that, we apply attention mechanism to attend over all EDUs to pick up prominent semantic information of a text span. 
Besides, we use tensor-based transformation function to model complicated feature interactions and thus it can produce combinatorial features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We summarize contributions of our work as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose to use a hierarchical bidirectional LSTM network to learn the compositional semantic representations of text spans, which naturally matches and models the intrinsic hierarchical structure of text spans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We extend our hierarchical bi-LSTM network with attention mechanism to allow the network to focus on the parts of input containing prominent semantic information for the compositional representations of text spans and thus alleviate the problem caused by the limited memory of LSTM for long text spans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We adopt a tensor-based transformation function to allow explicit feature interactions and apply tensor factorization to reduce the parameters and computations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We use two level caches to intensively accelerate our probabilistic CKY-like parsing process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows: Section 2 gives the details of our parsing model. Section 3 describes our parsing algorithm. Section 4 gives our training criterion. Section 5 reports the experimental results of our approach. Section 6 introduces the related work. Conclusions are given in section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given two successive text spans, our parsing model evaluates the probability to combine them into a larger span, identifies which one is the nucleus and determines what is the relation between them. As with the work of Ji and Eisenstein (2014), we set three classifiers which share the same features as input to deal with those problems. The whole parsing model is shown in Figure 1 . Three classifiers are on the top. The semantic representations of the two given text spans which come from the output of attention-based hierarchical bi-LSTM network with tensor-based transformation function is the main part of input to the classifiers. Additionally, following the previous practice of Li et al. (2014a) , a small set of handcrafted features is introduced to enhance the model. ", "cite_spans": [ { "start": 688, "end": 705, "text": "Li et al. (2014a)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 374, "end": 382, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Parsing Model", "sec_num": "2" }, { "text": "Long Short-Term Memory (LSTM) networks have been successfully applied to a wide range of NLP tasks for the ability to handle long-term dependencies and to mitigate the curse of gradient vanishing (Hochreiter and Schmidhuber, 1997; Bahdanau et al., 2014; Rockt\u00e4schel et al., 2015; Hermann et al., 2015) . A basic LSTM can be described as follows. A sequence {x 1 , x 2 , ..., x n } is given as input. 
At each time-step, the LSTM computation unit takes in one token x t as input and it keeps some information in a cell state C t and gives an output h t . They are calculated in this way:", "cite_spans": [ { "start": 196, "end": 230, "text": "(Hochreiter and Schmidhuber, 1997;", "ref_id": "BIBREF7" }, { "start": 231, "end": 253, "text": "Bahdanau et al., 2014;", "ref_id": "BIBREF0" }, { "start": 254, "end": 279, "text": "Rockt\u00e4schel et al., 2015;", "ref_id": null }, { "start": 280, "end": 301, "text": "Hermann et al., 2015)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Bi-LSTM Network for Text Span Representations", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "i t = \u03c3(W i [h t\u22121 ; x t ] + b i ) (1) f t = \u03c3(W f [h t\u22121 ; x t ] + b f )", "eq_num": "(2)" } ], "section": "Hierarchical Bi-LSTM Network for Text Span Representations", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C t = tanh(W C [h t\u22121 ; x t ] + b C )", "eq_num": "(3)" } ], "section": "Hierarchical Bi-LSTM Network for Text Span Representations", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C t = f t C t\u22121 + i t C t (4) o t = \u03c3(W o [h t\u22121 ; x t ] + b o ) (5) h t = o t tanh(C t )", "eq_num": "(6)" } ], "section": "Hierarchical Bi-LSTM Network for Text Span Representations", "sec_num": "2.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Bi-LSTM Network for Text Span Representations", "sec_num": "2.1" }, { "text": "W i , b i , W f , b f , W c , b C , W o , b", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Bi-LSTM Network for Text Span Representations", "sec_num": "2.1" }, { "text": "o are LSTM parameters, denotes element-wise product and \u03c3 denotes sigmoid function. The output at the last token, i.e., h n is taken as the representation of the whole sequence. Since an EDU is a sequence of words, we derive the representation of an EDU from the sequence constituted by concatenation of word embeddings and the POS tag embeddings of the words as Figure 2 shows. Previous work on discourse parsing tends to extract some features from the beginning and end of text units partly because discourse clues such as discourse markers(e.g., because, though) are often situated at the beginning or end of text units (Feng and Hirst, 2014; Ji and Eisenstein, 2014; Li et al., 2014a; Li et al., 2014b; Heilman and Sagae, 2015) . Considering the last few tokens of a sequence normally have more influence on the representation of the whole sequence learnt with LSTM because they get through less times of forget gate from the LSTM computation unit, to effectively capture the information from both beginning and end of an EDU, we use bidirectional LSTM to learn the representation of an EDU. In other words, one LSTM takes the word sequence in forward order as input, the other takes the word sequence in reversed order as input. 
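To make Eqs. (1)-(6) and the bidirectional reading of an EDU concrete, here is a minimal numpy sketch; the hidden size, the random parameters, and the stacking of the four gate matrices into a single weight matrix are illustrative assumptions rather than the paper's actual configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # One step of Eqs. (1)-(6); the gate matrices W_i, W_f, W_o, W_C are
    # stacked into a single W acting on [h_{t-1}; x_t] for brevity.
    d = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x_t]) + b
    i, f, o = sigmoid(z[:d]), sigmoid(z[d:2*d]), sigmoid(z[2*d:3*d])
    c_tilde = np.tanh(z[3*d:])
    c_t = f * c_prev + i * c_tilde            # Eq. (4): gated memory update
    h_t = o * np.tanh(c_t)                    # Eq. (6): gated output
    return h_t, c_t

def read_sequence(xs, W, b, d, reverse=False):
    # Run the LSTM over the tokens and return the output at the last position,
    # which stands for the whole sequence.
    h, c = np.zeros(d), np.zeros(d)
    for x_t in (reversed(xs) if reverse else xs):
        h, c = lstm_step(x_t, h, c, W, b)
    return h

rng = np.random.default_rng(0)
tokens = [rng.normal(size=60) for _ in range(5)]   # 5 tokens: 50-dim word + 10-dim POS
d = 32                                             # hidden size (an assumption)
W_fwd = rng.normal(scale=0.1, size=(4*d, d + 60)); b_fwd = np.zeros(4*d)
W_bwd = rng.normal(scale=0.1, size=(4*d, d + 60)); b_bwd = np.zeros(4*d)

h_forward = read_sequence(tokens, W_fwd, b_fwd, d)
h_backward = read_sequence(tokens, W_bwd, b_bwd, d, reverse=True)
edu_repr = np.concatenate([h_forward, h_backward]) # bidirectional EDU representation
```

The span-level bi-LSTM of Figure 1 applies the same recurrence, with EDU representations in place of token embeddings.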
The representation of a sequence is the concatenation of the two vector representations calculated by the two LSTMs.", "cite_spans": [ { "start": 624, "end": 646, "text": "(Feng and Hirst, 2014;", "ref_id": "BIBREF2" }, { "start": 647, "end": 671, "text": "Ji and Eisenstein, 2014;", "ref_id": "BIBREF8" }, { "start": 672, "end": 689, "text": "Li et al., 2014a;", "ref_id": "BIBREF13" }, { "start": 690, "end": 707, "text": "Li et al., 2014b;", "ref_id": "BIBREF14" }, { "start": 708, "end": 732, "text": "Heilman and Sagae, 2015)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 363, "end": 372, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Hierarchical Bi-LSTM Network for Text Span Representations", "sec_num": "2.1" }, { "text": "Since a text span is a sequence of EDUs, its meaning can be computed from the meanings of the EDUs. So we use another bi-LSTM to derive the compositional semantic representation of a text span from the EDUs it contains. The two bi-LSTM networks form a hierarchical structure as Figure 1 shows.", "cite_spans": [], "ref_spans": [ { "start": 278, "end": 286, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Hierarchical Bi-LSTM Network for Text Span Representations", "sec_num": "2.1" }, { "text": "The representation of a sequence computed by bi-LSTMs is always a vector with fixed dimension despite the length of the sequence. Thus when dealing with a text span with hundreds of EDUs, bi-LSTM may not be enough to capture the whole semantic information with its limited output vector dimension. Attention mechanism can attend over the output at every EDU with global context and pick up prominent semantic information and drop the subordinate information for the compositional representation of the span, so we employ attention mechanism to alleviate the problem caused by the limited memory of LSTM networks. The attention mechanism is inspired by the work of Rockt\u00e4schel et al. (2015) . Our attention-based bi-LSTM network is shown in Figure 3 .", "cite_spans": [ { "start": 664, "end": 689, "text": "Rockt\u00e4schel et al. (2015)", "ref_id": null } ], "ref_spans": [ { "start": 740, "end": 748, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Attention", "sec_num": "2.2" }, { "text": "We combine the last outputs of the span level bi-LSTM to be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention", "sec_num": "2.2" }, { "text": "h s = [ \u2212 \u2192 h en , \u2190 \u2212 h e 1 ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention", "sec_num": "2.2" }, { "text": "We also combine the outputs of the two LSTM at every EDU of the span:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention", "sec_num": "2.2" }, { "text": "h t = [ \u2212 \u2192 h t , \u2190 \u2212 h t ] and thus get a matrix H = [h 1 ; h 2 ; ...; h n ] T . 
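As a small illustration of how these quantities are assembled before the attention step, the sketch below builds h_s and H from mocked forward and backward span-level outputs; the dimensions and the random vectors are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, half = 6, 32                      # 6 EDUs, 32-dim outputs per direction (assumed)

# Outputs of the span-level bi-LSTM at every EDU, one list per direction.
h_fwd = [rng.normal(size=half) for _ in range(n)]   # forward outputs h_1 .. h_n
h_bwd = [rng.normal(size=half) for _ in range(n)]   # backward outputs h_1 .. h_n

# h_s concatenates the two final outputs: forward at the last EDU,
# backward at the first EDU.
h_s = np.concatenate([h_fwd[-1], h_bwd[0]])          # shape (d,), with d = 2*half

# H stacks the concatenated output at every EDU as columns, giving a d x n matrix.
H = np.stack([np.concatenate([f, b]) for f, b in zip(h_fwd, h_bwd)], axis=1)
assert H.shape == (2 * half, n) and h_s.shape == (2 * half,)
```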
Taking H \u2208 R d\u00d7n and h s \u2208 R d as inputs,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention", "sec_num": "2.2" }, { "text": "we get a vector \u03b1 \u2208 R n standing for weights of EDUs to the text span and use it to get a weighted representation of the span r \u2208 R d :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M = tanh(W y H + W l h s \u2297 e n )", "eq_num": "(7)" } ], "section": "Attention", "sec_num": "2.2" }, { "text": "\u03b1 = sof tmax(w T \u03b1 M ) (8) r = H\u03b1 (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention", "sec_num": "2.2" }, { "text": "where \u2297 denotes Cartesian product , M \u2208 R k\u00d7n , e n is a n dimensional vector of all 1s and we use the Cartesian product W l h s \u2297 e n to repeat the result of W l h s n times in column to form a matrix and W y \u2208", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention", "sec_num": "2.2" }, { "text": "R k\u00d7d , W l , \u2208 R k\u00d7d , w \u03b1 \u2208 R k are parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention", "sec_num": "2.2" }, { "text": "We synthesize the information of r and h s to get the final representation of the span:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention", "sec_num": "2.2" }, { "text": "w h = \u03c3(W hr r + W hh h s ) (10) h = w h h s + (1 \u2212 w h ) r (11) where W hr , W hh \u2208 R d\u00d7d are parameters, w h \u2208 R d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention", "sec_num": "2.2" }, { "text": "is a computed vector representing the element-wise weight of h s and the element-wise weighted summation h \u2208 R d is the final representation of the text span computed by the attention-based bidirectional LSTM network. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention", "sec_num": "2.2" }, { "text": "We concatenate the representations of the two given spans: h = [h s1 , h s2 ] and feed h into a full connection hidden layer to obtain a higher level representation v which is the input to the three classifiers:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifiers", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "v = Relu(W h [h s1 , h s2 ] + b h )", "eq_num": "(12)" } ], "section": "Classifiers", "sec_num": "2.3" }, { "text": "For each classifier, we firstly transform v \u2208 R l into a hidden layer:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifiers", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "v sp = Relu(W hs v + b hs ) (13) v nu = Relu(W hn v + b hn ) (14) v rel = Relu(W hr v + b hr )", "eq_num": "(15)" } ], "section": "Classifiers", "sec_num": "2.3" }, { "text": "where W hs , W hn , W hr \u2208 R h\u00d7l are transformation matrices and b hs , b hn , b hr \u2208 R h are bias vectors. 
Then we feed these vectors into the respective output layer:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifiers", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y sp = \u03c3(w s v sp + b s )", "eq_num": "(16)" } ], "section": "Classifiers", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y nu = sof tmax(W n v nu + b n )", "eq_num": "(17)" } ], "section": "Classifiers", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y rel = sof tmax(W r v rel + b r )", "eq_num": "(18)" } ], "section": "Classifiers", "sec_num": "2.3" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifiers", "sec_num": "2.3" }, { "text": "w s \u2208 R h , b s \u2208 R, W n \u2208 R 3\u00d7h , W n \u2208 R 3\u00d7h , b n \u2208 R 3 , W r \u2208 R nr\u00d7h , b", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifiers", "sec_num": "2.3" }, { "text": "n \u2208 R nr are parameters and n r is the number of different discourse relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifiers", "sec_num": "2.3" }, { "text": "The first classifier is a binary classifier which outputs the probability the two spans should be combined. The second classifier is a multiclass classifier which identifies the nucleus to be span 1, span 2 or both. The third classifier is also a multiclass classifier which determines the relation between the two spans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifiers", "sec_num": "2.3" }, { "text": "Tensor-based transformation function has been successfully utilized in many tasks to allow complicated interaction between features (Sutskever et al., 2009; Socher et al., 2013; Pei et al., 2014) . 
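Before turning to the tensor term, the plain classification pipeline of Section 2.3 (Eqs. 12-18) can be sketched as follows; all dimensions, the random parameters, and the omitted (zero) biases are placeholder assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(2)
d, l, hdim, n_rel = 64, 128, 64, 18        # placeholder sizes; n_rel relation classes

h_s1, h_s2 = rng.normal(size=d), rng.normal(size=d)   # the two span representations
x = np.concatenate([h_s1, h_s2])

# Eq. (12): shared hidden representation v (plain transform, no tensor term yet).
W_h = rng.normal(scale=0.05, size=(l, 2 * d))
v = relu(W_h @ x)

# Eqs. (13)-(15): a private hidden layer for each of the three classifiers.
W_hs, W_hn, W_hr = (rng.normal(scale=0.05, size=(hdim, l)) for _ in range(3))
v_sp = relu(W_hs @ v)                      # structure head
v_nu = relu(W_hn @ v)                      # nuclearity head
v_rel = relu(W_hr @ v)                     # relation head

# Eq. (16): probability that the two spans should be merged into one.
p_merge = sigmoid(rng.normal(scale=0.05, size=hdim) @ v_sp)

# Eq. (17): nuclearity distribution over span1 / span2 / both.
p_nuc = softmax(rng.normal(scale=0.05, size=(3, hdim)) @ v_nu)

# Eq. (18): distribution over the n_rel coarse-grained discourse relations.
p_rel = softmax(rng.normal(scale=0.05, size=(n_rel, hdim)) @ v_rel)
```

Eq. (22) below then replaces the plain transform in the first step with its tensor-augmented counterpart.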
Based on the intuition that allowing complicated interaction between the features of the two spans may help to identify how they are related, we adopt tensor-based transformation function to strengthen our model.", "cite_spans": [ { "start": 132, "end": 156, "text": "(Sutskever et al., 2009;", "ref_id": "BIBREF25" }, { "start": 157, "end": 177, "text": "Socher et al., 2013;", "ref_id": "BIBREF23" }, { "start": 178, "end": 195, "text": "Pei et al., 2014)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Tensor-based Transformation", "sec_num": "2.4" }, { "text": "A tensor-based transformation function on x \u2208 R d 1 is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tensor-based Transformation", "sec_num": "2.4" }, { "text": "y = W x + x T T [1:d 2 ] x + b (19) y i = j W ij x j + j,k T [i] j,k x j x k + b i (20)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tensor-based Transformation", "sec_num": "2.4" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tensor-based Transformation", "sec_num": "2.4" }, { "text": "y \u2208 R d 2 is the output vector, y i \u2208 R is the ith element of y, W \u2208 R d 2 \u00d7d 1 is the transformation matrix, T [1:d 2 ] \u2208 R d 1 \u00d7d 1 \u00d7d 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tensor-based Transformation", "sec_num": "2.4" }, { "text": "is a 3rd-order transformation tensor. A normal transformation function in neural network models only has the first term W x with the bias term. It means for normal transformation function each unit of the output vector is the weighted summation of the input vector and this only allows additive interaction between the units of the input vector. With the tensor multiplication term, each unit of the output vector is augmented with the weighted summation of the multiplication of the input vector units and thus we incorporate multiplicative interaction between the units of the input vector. Inevitably, the incorporation of tensor leads to side effects which include the increase in parameter number and computational complexity. To remedy this, we adopt tensor factorization in the same way as Pei et al. (2014): we use two low rank matrices to approximate each tensor slice T [i] ", "cite_spans": [ { "start": 880, "end": 883, "text": "[i]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Tensor-based Transformation", "sec_num": "2.4" }, { "text": "\u2208 R d 1 \u00d7d 1 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tensor-based Transformation", "sec_num": "2.4" }, { "text": "We apply the factorized tensor-based transformation function to the combined text span representation h = [h s1 , h s2 ] to make the features of the two spans explicitly interact with each other:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tensor-based Transformation", "sec_num": "2.4" }, { "text": "v = Relu(W h [h s1 , h s2 ] + [h s1 , h s2 ] T P [1:d] h Q [1:d] h [h s1 , h s2 ] + b h ) (22)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tensor-based Transformation", "sec_num": "2.4" }, { "text": "Comparing with Eq. 
12, the transformation function is added with a tensor term.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tensor-based Transformation", "sec_num": "2.4" }, { "text": "Most previously proposed state-of-the-art systems heavily rely on handcrafted features (Hernault et al., 2010; Feng and Hirst, 2014; Joty et al., 2013; Ji and Eisenstein, 2014; Heilman and Sagae, 2015) . Li et al. (2014a) show that some basic features are still necessary to get a satisfactory result for their recursive deep model. Following their practice, we adopt minimal basic features which are utilized by most systems to further strengthen our model. We list these features in Table 1 . We apply the factorized tensor-based transformation function to Word/POS features to allow more complicated interaction between them.", "cite_spans": [ { "start": 87, "end": 110, "text": "(Hernault et al., 2010;", "ref_id": "BIBREF5" }, { "start": 111, "end": 132, "text": "Feng and Hirst, 2014;", "ref_id": "BIBREF2" }, { "start": 133, "end": 151, "text": "Joty et al., 2013;", "ref_id": "BIBREF9" }, { "start": 152, "end": 176, "text": "Ji and Eisenstein, 2014;", "ref_id": "BIBREF8" }, { "start": 177, "end": 201, "text": "Heilman and Sagae, 2015)", "ref_id": "BIBREF3" }, { "start": 204, "end": 221, "text": "Li et al. (2014a)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 485, "end": 492, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Handcrafted Features", "sec_num": "2.5" }, { "text": "In this section, we describe our parsing algorithm which utilizes the parsing model to produce the global optimal DT for a segmented document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing Algorithm", "sec_num": "3" }, { "text": "We adopt a probabilistic CKY-like bottom-up algorithm which is also adopted in (Joty et al., 2013; Li et al., 2014a) to produce a DT for a document. This parsing algorithm is a dynamic programming algorithm and produces the global optimal DT with our parsing model. Given a text span which is constituted by [e i , e i+1 , ..., e j ] and the possible subtrees of [e i , e i+1 , ..., e k ] and [e k+1 , e k+2 , ..., e j ] for all k \u2208 {i, i+1, ..., j\u22121} with their probabilities, we choose k and combine the corresponding subtrees to form a combined DT with the following recurrence formula:", "cite_spans": [ { "start": 79, "end": 98, "text": "(Joty et al., 2013;", "ref_id": "BIBREF9" }, { "start": 99, "end": 116, "text": "Li et al., 2014a)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Probabilistic CKY-like Algorithm", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "k = arg max k {P sp (i, k, j)P i,k P k+1,j }", "eq_num": "(23)" } ], "section": "Probabilistic CKY-like Algorithm", "sec_num": "3.1" }, { "text": "where P i,k and P k+1,j are the probabilities of the most probable subtrees of [e i , e i+1 , ..., e k ] and [e k+1 , e k+2 , ..., e j ] respectively, P sp (i, k, j) is the probability which is predicted by our parsing model to combine those two subtrees to form a DT. 
The probability of the most probable DT of [e i , e i+1 , ..., e j ] is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic CKY-like Algorithm", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P i,j = max k {P sp (i, k, j)P i,k P k+1,j }", "eq_num": "(24)" } ], "section": "Probabilistic CKY-like Algorithm", "sec_num": "3.1" }, { "text": "Computational complexity of the original probabilistic CKY-like algorithm is O(n 3 ) where n is the number of EDUs of the document. But in this work, given each pair of text spans, we compute the representations of them with hierarchical bi-LSTM network at the expense of an additional O(n) computations. So the computational complexity of our parser becomes O(n 4 ) and it is unacceptable for long documents. However, most computations are duplicated, so we use two level caches to drastically accelerate parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing Acceleration", "sec_num": "3.2" }, { "text": "Firstly, we cache the outputs of the EDU level bi-LSTM which are the semantic representations of EDUs. As for the forward span level LSTM, after we get the semantic representation of a span, we cache it too and use it to compute the representation of an extended span. For example, after we get the representation of span constituted by [e 1 , e 2 , e 3 ], we take it with semantic representation of e 4 to compute the representation of the span constituted by [e 1 , e 2 , e 3 , e 4 ] in one LSTM computation step. For the backward span level LSTM, we do it the same way just in reversed order. Thus we decrease the computational complexity of computing the semantic representations for all possible span pairs which is the most time-consuming part of the original parsing process from O(n 4 ) to O(n 2 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing Acceleration", "sec_num": "3.2" }, { "text": "Secondly, it can be seen that before we apply Relu to the tensor-based transformation function, many calculations from the two spans which include a large part of tensor multiplication are independent. The multiplication between the elements of the representations of the two spans caused by the tensors and the element-wise non-linear activation function Relu terminate the independence between them. So we can further cache the independent calculation results before Relu operation for each span. Thus we decrease the computational complexity of a large part of tensor-based transformation from O(n 3 ) to Word/POS Features One-hot representation of the first two words and of the last word of each span. One-hot representation of POS tags of the first two words and of the last word of each span.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing Acceleration", "sec_num": "3.2" }, { "text": "Number of EDUs of each span. Number of words of each span. Predicted relations of the two subtrees' roots. Whether each span is included in one sentence. Whether both spans are included in one sentence. O(n 2 ) which is the second time-consuming part of the original parsing process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shallow Features", "sec_num": null }, { "text": "The remaining O(n 3 ) computations include a little part of tensor-based transformation computations, Relu operation and the computations from the three classifiers. 
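To make the recursion of Eqs. (23)-(24) and the role of the first-level cache concrete, here is a compact numpy sketch; the span representations and the merge probability P_sp are mocked with placeholder functions, since only the chart structure and the memoization are being illustrated.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8                                   # number of EDUs in a toy document

span_repr_cache = {}                    # first-level cache: one entry per span (i, j)

def span_repr(i, j):
    # Placeholder for the bi-LSTM span representation. Extending a cached
    # representation by one EDU stands in for the incremental LSTM step
    # described above, so each span is computed at most once.
    if (i, j) not in span_repr_cache:
        base = span_repr_cache.get((i, j - 1), np.zeros(4))
        span_repr_cache[(i, j)] = 0.5 * base + rng.normal(size=4)
    return span_repr_cache[(i, j)]

def p_split(i, k, j):
    # Placeholder for P_sp(i, k, j), the model probability of merging
    # span [e_i..e_k] with span [e_{k+1}..e_j]; here a mock score in (0, 1).
    s = float(span_repr(i, k) @ span_repr(k + 1, j))
    return 1.0 / (1.0 + np.exp(-s))

# P[i][j] = probability of the best discourse tree over EDUs i..j (Eq. 24);
# back[i][j] = the split point k that achieves it (Eq. 23).
P = [[1.0] * n for _ in range(n)]
back = [[None] * n for _ in range(n)]
for length in range(2, n + 1):                 # widen spans bottom-up
    for i in range(0, n - length + 1):
        j = i + length - 1
        scores = [(p_split(i, k, j) * P[i][k] * P[k + 1][j], k) for k in range(i, j)]
        P[i][j], back[i][j] = max(scores)

print(P[0][n - 1], back[0][n - 1])             # best tree probability and top split
```

Each span representation is computed once and reused by every split that touches it, so only the merge-scoring work inside the chart loop remains cubic.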
These computations take up only a little part of the original parsing model computations and thus we greatly accelerate our parsing process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shallow Features", "sec_num": null }, { "text": "We use Max-Margin criterion for our model training. We try to learn a function that maps: X \u2192 Y , where X is the set of documents and Y is the set of possible DTs. We define the loss function for predicting a DT\u0177 i given the correct DT y i as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Max-Margin Training", "sec_num": "4" }, { "text": "(y i ,\u0177 i ) = r\u2208\u0177 i \u03ba1{r \u2208 y i } (25)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Max-Margin Training", "sec_num": "4" }, { "text": "where r is a span specified with nucleus and relation in the predicted DT, \u03ba is a hyperparameter referred to as discount parameter and 1 is indicator function. We expect the probability of the correct DT to be a larger up to a margin to other possible DTs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Max-Margin Training", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P rob(x, y i ) \u2265 P rob(x i ,\u0177 i ) + (y i ,\u0177 i )", "eq_num": "(26)" } ], "section": "Max-Margin Training", "sec_num": "4" }, { "text": "The objective function for m training examples is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Max-Margin Training", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "J(\u03b8) = 1 m m i=1 l i (\u03b8), where", "eq_num": "(27)" } ], "section": "Max-Margin Training", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "l i (\u03b8) = max y i (P rob(x i ,\u0177 i ) + (y i ,\u0177 i )) \u2212P rob(x i , y i )", "eq_num": "(28)" } ], "section": "Max-Margin Training", "sec_num": "4" }, { "text": "where \u03b8 denotes all the parameters including our neural network parameters and all embeddings. The probabilities of the correct DTs increase and the probabilities of the most probable incorrect DTs decrease during training. We adopt Adadelta (Zeiler, 2012) with mini-batch to minimize the objective function and set the initial learning rate to be 0.012.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Max-Margin Training", "sec_num": "4" }, { "text": "We evaluate our model on RST Discourse Treebank 1 (RST-DT) (Carlson et al., 2003) . It is partitioned into a set of 347 documents for training and a set of 38 documents for test. Non-binary relations are converted into a cascade of right-branching binary relations. The standard metrics of RST-style discourse parsing evaluation include blank tree structure referred to as span (S), tree structure with nuclearity (N) indication and tree structure with rhetorical relation (R) indication. Following other RSTstyle discourse parsing systems, we evaluate the relation metric in 18 coarse-grained relation classes. 
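Looking back at the training criterion of Section 4, the loss-augmented objective of Eqs. (25)-(28) for a single document can be sketched as follows; trees are abstracted as sets of labelled spans, the scores and the discount κ are placeholder values, and the indicator of Eq. (25) is read as penalising predicted spans that are absent from the gold tree.

```python
# A tree is abstracted as a set of (left, right, nucleus, relation) tuples.
kappa = 1.0                                          # discount hyperparameter (placeholder)

def margin_loss(gold_spans, pred_spans, kappa=kappa):
    # Eq. (25): pay kappa for every predicted span that is wrong w.r.t. the gold tree.
    return kappa * sum(1 for r in pred_spans if r not in gold_spans)

def document_loss(score_gold, candidates, gold_spans, kappa=kappa):
    # Eq. (28): loss-augmented score of the best candidate tree minus the gold score;
    # candidates is an iterable of (score, spans) pairs for possible trees.
    augmented = max(score + margin_loss(gold_spans, spans, kappa)
                    for score, spans in candidates)
    return augmented - score_gold

# Toy example with two candidate trees over three EDUs.
gold = {(0, 2, 1, 4), (0, 1, 0, 7)}                  # gold labelled spans
cands = [(0.30, {(0, 2, 1, 4), (0, 1, 0, 7)}),       # the correct tree
         (0.28, {(0, 2, 2, 3), (1, 2, 0, 7)})]       # an incorrect tree
print(document_loss(score_gold=0.30, candidates=cands, gold_spans=gold))
```

Averaging this quantity over the training documents gives the objective J(θ) of Eq. (27), which is minimized with Adadelta as described in Section 4.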
Since our work focus does not include EDU segmentation, we evaluate our system with gold-standard EDU segmentation and we apply the same setting on this to other discourse parsing systems for fair comparison.", "cite_spans": [ { "start": 59, "end": 81, "text": "(Carlson et al., 2003)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "The dimension of word embeddings is set to be 50 and the dimension of POS embeddings is set to be 10. We pre-trained the word embeddings with GloVe (Pennington et al., 2014) on English Gigaword 2 and we fine-tune them during training. Considering some words are pretrained by GloVe but don't appear in the RST-DT training set, we want to use their embeddings if they appear in test set. Following Kiros et al. (2015) , we expand our vocabulary with those words using a matrix W \u2208 R 50\u00d750 that maps word embeddings from the pre-trained word embedding space to the fine-tuned word embedding space. The objective function for training the matrix W is as follows:", "cite_spans": [ { "start": 148, "end": 173, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF20" }, { "start": 397, "end": 416, "text": "Kiros et al. (2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "min W,b ||V tuned \u2212 V pretrained W \u2212 b|| 2 2 (29)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "where V tuned , V pretrained \u2208 R |V |\u00d750 contain finetuned and pre-trained embeddings of words appearing in training set respectively, |V | is the size of RST-DT training set vocabulary and b is the bias term also to be trained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "We lemmatize all the words appeared and represent all numbers with a special token. We use Stanford CoreNLP toolkit to preprocess the text including lemmatization, POS tagging etc. We use Theano library (Bergstra et al., 2010) to implement our parsing model. We randomly initialize all parameters within (-0.012, 0.012) except word embeddings. We adopt dropout strategy (Hinton et al., 2012) to avoid overfitting and we set the dropout rate to be 0.3.", "cite_spans": [ { "start": 203, "end": 226, "text": "(Bergstra et al., 2010)", "ref_id": null }, { "start": 370, "end": 391, "text": "(Hinton et al., 2012)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "To show the effectiveness of the components incorporated into our model, we firstly test the performance of the basic hierarchical bidirectional LSTM network without attention mechanism (ATT), tensor-based transformation (TE) and handcrafted features (HF). Then we add them successively. The results are shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 313, "end": 320, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5.2" }, { "text": "The performance is improved by adding each component to our basic model and that shows the effectiveness of attention mechanism and tensor-based transformation function. Even without handcrafted features, the performance is still competitive. 
It indicates that the semantic representations of text spans produced by our attention-based hierarchical bi-LSTM network are effective and the handcrafted features are complementary to semantic representations produced by the network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5.2" }, { "text": "We also experiment without mapping the OOV word embeddings and use the same embedding for all OOV words. The result is shown in 3. Without mapping the OOV word embeddings the performance decreases slightly, which demonstrates that the relation between pre-trained embedding space and the fine-tuned embedding space can be learnt and it is beneficial to train a matrix to transform OOV word embeddings from the pre-trained embedding space to the fine-tuned embedding space. We compare our system with other state-of-the-art systems including (Joty et al., 2013; Ji and Eisenstein, 2014; Feng and Hirst, 2014; Li et al., 2014a; Li et al., 2014b; Heilman and Sagae, 2015) . Systems proposed by Joty et al. 2013, Heilman (2015) and Feng and Hirst (2014) are all based on variants of CRFs. Ji and Eisenstein (2014) use a projection matrix acting on one-hot representations of features to learn representations of text spans and build Support Vector Machine (SVM) classifier on them. Li et al. (2014b) adopt dependency parsing methods to deal with this task. These systems are all based on handcrafted features. Li et al. (2014a) adopt a recursive deep model and use some basic handcrafted features to improve their performances which has been stated before. Table 4 shows the performance for our system and those systems. Our system achieves the best result in span and relatively lower performance in nucleus and relation identification comparing with the corresponding best results but still better than System S N R Joty et al. 201382.7 68.4 55.7 Ji and Eisenstein (2014) 82.1 71.1 61.6 Feng and Hirst 201485.7 71.0 58.2 Li et al. (2014a) 84.0 70.8 58.6 Li et al. (2014b) 83.4 73.8 57.8 Heilman and Sagae (2015) most systems. No system achieves the best result on all three metrics. To further show the effectiveness of the deep learning model itself without handcrafted features, we compare the performance between our model and the model proposed by Li et al. (2014a) without handcrafted features and the results are shown in Table 5 . It shows our overall performance outperforms the model proposed by Li et al. (2014a) which illustrates our model is effective. Table 6 shows an example of the weights (W) of EDUs (see Eq. 8) derived from our attention model. For span1 the main semantic meaning is expressed in EDU32 under the condition described in EDU31. Besides, it is EDU32 that explicitly manifests the contrast relation between the two spans. 
As can be seen, our attention model assigns less weight to EDU30 and focuses more on EDU32 which is reasonable according to our analysis above.", "cite_spans": [ { "start": 541, "end": 560, "text": "(Joty et al., 2013;", "ref_id": "BIBREF9" }, { "start": 561, "end": 585, "text": "Ji and Eisenstein, 2014;", "ref_id": "BIBREF8" }, { "start": 586, "end": 607, "text": "Feng and Hirst, 2014;", "ref_id": "BIBREF2" }, { "start": 608, "end": 625, "text": "Li et al., 2014a;", "ref_id": "BIBREF13" }, { "start": 626, "end": 643, "text": "Li et al., 2014b;", "ref_id": "BIBREF14" }, { "start": 644, "end": 668, "text": "Heilman and Sagae, 2015)", "ref_id": "BIBREF3" }, { "start": 709, "end": 723, "text": "Heilman (2015)", "ref_id": "BIBREF3" }, { "start": 728, "end": 749, "text": "Feng and Hirst (2014)", "ref_id": "BIBREF2" }, { "start": 978, "end": 995, "text": "Li et al. (2014b)", "ref_id": "BIBREF14" }, { "start": 1106, "end": 1123, "text": "Li et al. (2014a)", "ref_id": "BIBREF13" }, { "start": 1545, "end": 1569, "text": "Ji and Eisenstein (2014)", "ref_id": "BIBREF8" }, { "start": 1619, "end": 1636, "text": "Li et al. (2014a)", "ref_id": "BIBREF13" }, { "start": 1652, "end": 1669, "text": "Li et al. (2014b)", "ref_id": "BIBREF14" }, { "start": 1685, "end": 1709, "text": "Heilman and Sagae (2015)", "ref_id": "BIBREF3" }, { "start": 1950, "end": 1967, "text": "Li et al. (2014a)", "ref_id": "BIBREF13" }, { "start": 2103, "end": 2120, "text": "Li et al. (2014a)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 1253, "end": 1260, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 2026, "end": 2033, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 2163, "end": 2170, "text": "Table 6", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5.2" }, { "text": "Two most prevalent discourse parsing treebanks are RST Discourse Treebank (RST-DT) (Carlson et al., 2003) and Penn Discourse TreeBank (PDTB) (Prasad et al., 2008) . We evaluate our system on RST-DT which is annotated in the framework of Rhetorical Structure Theory (Mann and Thompson, 1988) . It consists of 385 Wall Street Journal articles and is partitioned into a set of 347 documents for training and a set of 38 documents for test. 110 fine-grained and 18 coarse-grained relations are defined on RST-DT. Parsing algorithms published on RST-DT can mainly be categorized as shift-reduce parsers and probabilistic CKY-like parsers. Shiftreduce parsers are widely used for their efficiency and effectiveness and probabilistic CKY-like parsers lead to the global optimal result for the parsing models. State-of-the-art systems belonging to shiftreduce parsers include (Heilman and Sagae, 2015; Ji and Eisenstein, 2014) . Those belonging to probabilistic CKY-like parsers include (Joty et al., 2013; Li et al., 2014a) . Besides, Feng and Hirst (2014) adopt a greedy bottom-up approach as their parsing algorithm. Lexical, syntactic, structural and semantic features are extracted in these systems. SVM and variants of Conditional Random Fields (CRFs) are mostly used in these models. Li et al. (2014b) distinctively propose to use dependency structure to represent the relations between EDUs. Recursive deep model proposed by Li et al. 
(2014a) has been the only proposed deep learning model on RST-DT.", "cite_spans": [ { "start": 83, "end": 105, "text": "(Carlson et al., 2003)", "ref_id": "BIBREF1" }, { "start": 141, "end": 162, "text": "(Prasad et al., 2008)", "ref_id": "BIBREF21" }, { "start": 265, "end": 290, "text": "(Mann and Thompson, 1988)", "ref_id": "BIBREF16" }, { "start": 868, "end": 893, "text": "(Heilman and Sagae, 2015;", "ref_id": "BIBREF3" }, { "start": 894, "end": 918, "text": "Ji and Eisenstein, 2014)", "ref_id": "BIBREF8" }, { "start": 979, "end": 998, "text": "(Joty et al., 2013;", "ref_id": "BIBREF9" }, { "start": 999, "end": 1016, "text": "Li et al., 2014a)", "ref_id": "BIBREF13" }, { "start": 1283, "end": 1300, "text": "Li et al. (2014b)", "ref_id": "BIBREF14" }, { "start": 1425, "end": 1442, "text": "Li et al. (2014a)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Incorporating attention mechanism into RNN (e.g., LSTM, GRU) has been shown to learn better representation by attending over the output vectors and picking up important information from relevant positions of a sequence and this approach has been utilized in many tasks including neural machine translation (Kalchbrenner and Blunsom, 2013; Bahdanau et al., 2014; Hermann et al., 2015 ), text entailment recognition (Rockt\u00e4schel et al., 2015) etc. Some work also uses tensor-based transformation function to make stronger interaction between features and learn combinatorial features and they get performance boost in their tasks (Sutskever et al., 2009; Socher et al., 2013; Pei et al., 2014) .", "cite_spans": [ { "start": 306, "end": 338, "text": "(Kalchbrenner and Blunsom, 2013;", "ref_id": "BIBREF11" }, { "start": 339, "end": 361, "text": "Bahdanau et al., 2014;", "ref_id": "BIBREF0" }, { "start": 362, "end": 382, "text": "Hermann et al., 2015", "ref_id": "BIBREF4" }, { "start": 414, "end": 440, "text": "(Rockt\u00e4schel et al., 2015)", "ref_id": null }, { "start": 628, "end": 652, "text": "(Sutskever et al., 2009;", "ref_id": "BIBREF25" }, { "start": 653, "end": 673, "text": "Socher et al., 2013;", "ref_id": "BIBREF23" }, { "start": 674, "end": 691, "text": "Pei et al., 2014)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In this paper, we propose an attention-based hierarchical neural network for discourse parsing. Our attention-based hierarchical bi-LSTM network produces effective compositional semantic representations of text spans. We adopt tensor-based transformation function to allow complicated interaction between features. Our two level caches accelerate parsing process significantly and thus make it practical. Our proposed system achieves comparable results to state-of-the-art systems. 
We will try extending attention mechanism to obtain the representation of a text span by referring to another text span at minimal additional cost.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "T[i] \u21d2 P[i] Q[i] (21)whereP [i] \u2208 R d 1 \u00d7r , Q [i] \u2208 R r\u00d7d 1 and r d 1 .In this way, we drastically reduce parameter number and computational complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://catalog.ldc.upenn.edu/LDC2002T07 2 https://catalog.ldc.upenn.edu/LDC2011T07", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the reviewers for their instructive feedback. We also thank Jiwei ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate. CoRR, abs/1409", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Python for Scientific Computing Conference (SciPy)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. James Bergstra, Olivier Breuleux, Fr\u00e9d\u00e9ric Bastien, Pas- cal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Ben- gio. 2010. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June. Oral Presenta- tion.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Building a discourse-tagged corpus in the framework of rhetorical structure theory", "authors": [ { "first": "Lynn", "middle": [], "last": "Carlson", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Mary", "middle": [ "Ellen" ], "last": "Okurowski", "suffix": "" } ], "year": 2003, "venue": "Current and new directions in discourse and dialogue", "volume": "", "issue": "", "pages": "85--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2003. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Current and new directions in discourse and dialogue, pages 85-112. Springer.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A lineartime bottom-up discourse parser with constraints and post-editing", "authors": [ { "first": "Vanessa", "middle": [], "last": "Wei Feng", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2014, "venue": "ACL (1)", "volume": "", "issue": "", "pages": "511--521", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vanessa Wei Feng and Graeme Hirst. 2014. A linear- time bottom-up discourse parser with constraints and post-editing. In ACL (1), pages 511-521.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Fast rhetorical structure theory discourse parsing. 
CoRR", "authors": [ { "first": "Michael", "middle": [], "last": "Heilman", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Sagae", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Heilman and Kenji Sagae. 2015. Fast rhetorical structure theory discourse parsing. CoRR, abs/1505.02425.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Teaching machines to read and comprehend", "authors": [ { "first": "Karl", "middle": [], "last": "Moritz Hermann", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Tom\u00e1 S Kocisk\u00fd", "suffix": "" }, { "first": "Lasse", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Will", "middle": [], "last": "Espeholt", "suffix": "" }, { "first": "Mustafa", "middle": [], "last": "Kay", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Suleyman", "suffix": "" }, { "first": "", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Moritz Hermann, Tom\u00e1 s Kocisk\u00fd, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. CoRR, abs/1506.03340.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Hilda: a discourse parser using support vector machine classification", "authors": [ { "first": "Hugo", "middle": [], "last": "Hernault", "suffix": "" }, { "first": "Helmut", "middle": [], "last": "Prendinger", "suffix": "" }, { "first": "A", "middle": [], "last": "David", "suffix": "" }, { "first": "Mitsuru", "middle": [], "last": "Duverle", "suffix": "" }, { "first": "", "middle": [], "last": "Ishizuka", "suffix": "" } ], "year": 2010, "venue": "Dialogue and Discourse", "volume": "1", "issue": "3", "pages": "1--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hugo Hernault, Helmut Prendinger, David A DuVerle, and Mitsuru Ishizuka. 2010. Hilda: a discourse parser using support vector machine classification. Dialogue and Discourse, 1(3):1-33.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Improving neural networks by preventing co-adaptation of feature detectors", "authors": [ { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Im- proving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. 
Neural computation, 9(8):1735- 1780.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Representation learning for text-level discourse parsing", "authors": [ { "first": "Yangfeng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" } ], "year": 2014, "venue": "ACL (1)", "volume": "", "issue": "", "pages": "13--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In ACL (1), pages 13-24.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Combining intra-and multisentential rhetorical parsing for document-level discourse analysis", "authors": [ { "first": "R", "middle": [], "last": "Shafiq", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Joty", "suffix": "" }, { "first": "Raymond", "middle": [ "T" ], "last": "Carenini", "suffix": "" }, { "first": "Yashar", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Mehdad", "suffix": "" } ], "year": 2013, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shafiq R. Joty, Giuseppe Carenini, Raymond T. Ng, and Yashar Mehdad. 2013. Combining intra-and multi- sentential rhetorical parsing for document-level dis- course analysis. In ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Speech and language processing, chapter 14", "authors": [ { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "H", "middle": [], "last": "James", "suffix": "" }, { "first": "", "middle": [], "last": "Martin", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Jurafsky and James H Martin. 2008. Speech and language processing, chapter 14. In Prentice Hall.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Recurrent continuous translation models", "authors": [ { "first": "Nal", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. CoRR", "authors": [ { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Yukun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Richard", "middle": [ "S" ], "last": "Zemel", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. 
CoRR, abs/1506.06726.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Recursive deep models for discourse parsing", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Rumeng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "2061--2069", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li, Rumeng Li, and Eduard H Hovy. 2014a. Re- cursive deep models for discourse parsing. In EMNLP, pages 2061-2069.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Text-level discourse dependency parsing", "authors": [ { "first": "Sujian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ziqiang", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" } ], "year": 2014, "venue": "ACL (1)", "volume": "", "issue": "", "pages": "25--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sujian Li, Liang Wang, Ziqiang Cao, and Wenjie Li. 2014b. Text-level discourse dependency parsing. In ACL (1), pages 25-35.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Discourse indicators for content selection in summarization", "authors": [ { "first": "Annie", "middle": [], "last": "Louis", "suffix": "" }, { "first": "Aravind", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue", "volume": "", "issue": "", "pages": "147--156", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annie Louis, Aravind Joshi, and Ani Nenkova. 2010. Discourse indicators for content selection in summa- rization. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dia- logue, pages 147-156. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse", "authors": [ { "first": "C", "middle": [], "last": "William", "suffix": "" }, { "first": "Sandra", "middle": [ "A" ], "last": "Mann", "suffix": "" }, { "first": "", "middle": [], "last": "Thompson", "suffix": "" } ], "year": 1988, "venue": "", "volume": "8", "issue": "", "pages": "243--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional the- ory of text organization. Text-Interdisciplinary Jour- nal for the Study of Discourse, 8(3):243-281.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The stanford corenlp natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mc-Closky", "suffix": "" } ], "year": 2014, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. 
Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David Mc- Closky. 2014. The stanford corenlp natural language processing toolkit. In ACL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The theory and practice of discourse parsing and summarization", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Marcu. 2000. The theory and practice of dis- course parsing and summarization. MIT press.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Maxmargin tensor neural network for chinese word segmentation", "authors": [ { "first": "Wenzhe", "middle": [], "last": "Pei", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2014, "venue": "ACL (1)", "volume": "", "issue": "", "pages": "293--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Max- margin tensor neural network for chinese word seg- mentation. In ACL (1), pages 293-303.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In EMNLP.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The penn discourse treebank 2.0", "authors": [ { "first": "Rashmi", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Dinesh", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Eleni", "middle": [], "last": "Miltsakaki", "suffix": "" }, { "first": "Livio", "middle": [], "last": "Robaldo", "suffix": "" }, { "first": "K", "middle": [], "last": "Aravind", "suffix": "" }, { "first": "Bonnie", "middle": [ "L" ], "last": "Joshi", "suffix": "" }, { "first": "", "middle": [], "last": "Webber", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind K Joshi, and Bonnie L Webber. 2008. The penn discourse treebank 2.0. In LREC. Citeseer.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Karl Moritz Hermann, Tom\u00e1 s Kocisk\u00fd, and Phil Blunsom. 2015. Reasoning about entailment with neural attention", "authors": [ { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim Rockt\u00e4schel, Edward Grefenstette, Karl Moritz Her- mann, Tom\u00e1 s Kocisk\u00fd, and Phil Blunsom. 2015. Rea- soning about entailment with neural attention. 
CoRR, abs/1509.06664.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Y", "middle": [], "last": "Jean", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Y", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the conference on empirical methods in natural language processing", "volume": "1631", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical meth- ods in natural language processing (EMNLP), volume 1631, page 1642. Citeseer.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Discourse-level relations for Opinion Analysis", "authors": [ { "first": "", "middle": [], "last": "Swapna Somasundaran", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Swapna Somasundaran. 2010. Discourse-level relations for Opinion Analysis. Ph.D. thesis, University of Pitts- burgh.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Modelling relational data using bayesian clustered tensor factorization", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Joshua", "middle": [ "B" ], "last": "Tenenbaum", "suffix": "" } ], "year": 2009, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. 2009. Modelling relational data using bayesian clustered tensor factorization. In NIPS.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Evaluating discourse-based answer extraction for why-question answering", "authors": [ { "first": "Suzan", "middle": [], "last": "Verberne", "suffix": "" }, { "first": "Lou", "middle": [], "last": "Boves", "suffix": "" }, { "first": "Nelleke", "middle": [], "last": "Oostdijk", "suffix": "" }, { "first": "Peter-Arno", "middle": [], "last": "Coppen", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "735--736", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suzan Verberne, Lou Boves, Nelleke Oostdijk, and Peter- Arno Coppen. 2007. Evaluating discourse-based an- swer extraction for why-question answering. In Pro- ceedings of the 30th annual international ACM SIGIR conference on Research and development in informa- tion retrieval, pages 735-736. 
ACM.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Adadelta: An adaptive learning rate method", "authors": [ { "first": "D", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "", "middle": [], "last": "Zeiler", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew D. Zeiler. 2012. Adadelta: An adaptive learn- ing rate method. CoRR, abs/1212.5701.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "Schematic structure of our parsing model.", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "Bi-LSTM for computing the compositional semantic representation of an EDU.", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "Attention-based bi-LSTM for computing the compositional semantic representation of a text span.", "num": null }, "TABREF0": { "html": null, "type_str": "table", "content": "", "text": "Handcrafted features used in our parsing model.", "num": null }, "TABREF1": { "html": null, "type_str": "table", "content": "
Setting S N R
Basic 82.7 69.7 55.6
Basic+ATT 83.6* 70.2* 56.0*
Basic+ATT+TE 84.2* 70.4 56.3*
Basic+ATT+TE+HF 85.8* 71.1* 58.9*
", "text": "", "num": null }, "TABREF2": { "html": null, "type_str": "table", "content": "
our system on RST-DT. 'Basic' denotes the basic hierarchical bidirectional LSTM network; '+ATT' denotes adding the attention mechanism; '+TE' denotes adopting the tensor-based transformation; '+HF' denotes adding handcrafted features. * indicates statistical significance in a t-test compared to the result in the line above (p < 0.05).
System Setting S N R
Without OOV mapping 85.1 70.7 58.2
Full version 85.8* 71.1* 58.9*
", "text": "Performance comparison for different settings of", "num": null }, "TABREF3": { "html": null, "type_str": "table", "content": "", "text": "Performance comparison for whether to map OOV embeddings.", "num": null }, "TABREF5": { "html": null, "type_str": "table", "content": "
System S N R
Li et al. (2014a) (no feature) 82.4 69.2 56.8
Ours (no feature) 84.2 70.4 56.3
", "text": "Performance comparison with other state-of-the-art systems on RST-DT.", "num": null }, "TABREF6": { "html": null, "type_str": "table", "content": "", "text": "Performance comparison with the deep learning model proposed in Li et al. (2014a) without handcrafted features.", "num": null }, "TABREF8": { "html": null, "type_str": "table", "content": "
", "text": "An example of the weights derived from our attention model. The relation between span1 and span2 is Contrast.", "num": null } } } }