{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:54:35.980875Z"
},
"title": "Unsupervised Discourse Constituency Parsing Using Viterbi EM",
"authors": [
{
"first": "Noriki",
"middle": [],
"last": "Nishida",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": "nishida@nlab.ci.i.u-tokyo.ac.jp"
},
{
"first": "Hideki",
"middle": [],
"last": "Nakayama",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": "nakayama@nlab.ci.i.u-tokyo.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we introduce an unsupervised discourse constituency parsing algorithm. We use Viterbi EM with a margin-based criterion to train a span-based discourse parser in an unsupervised manner. We also propose initialization methods for Viterbi training of discourse constituents based on our prior knowledge of text structures. Experimental results demonstrate that our unsupervised parser achieves comparable or even superior performance to fully supervised parsers. We also investigate discourse constituents that are learned by our method.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we introduce an unsupervised discourse constituency parsing algorithm. We use Viterbi EM with a margin-based criterion to train a span-based discourse parser in an unsupervised manner. We also propose initialization methods for Viterbi training of discourse constituents based on our prior knowledge of text structures. Experimental results demonstrate that our unsupervised parser achieves comparable or even superior performance to fully supervised parsers. We also investigate discourse constituents that are learned by our method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural language text is generally coherent (Halliday and Hasan, 1976) and can be analyzed as discourse structures, which formally describe how text is coherently organized. In discourse structure, linguistic units (e.g., clauses, sentences, or larger textual spans) are connected together semantically and pragmatically, and no unit is independent nor isolated. Discourse parsing aims to uncover discourse structures automatically for given text and has been proven to be useful in various NLP applications, such as document summarization (Marcu, 2000; Louis et al., 2010; Yoshida et al., 2014) , sentiment analysis (Polanyi and Van den Berg, 2011; Bhatia et al., 2015) , and automated essay scoring (Miltsakaki and Kukich, 2004) .",
"cite_spans": [
{
"start": 44,
"end": 70,
"text": "(Halliday and Hasan, 1976)",
"ref_id": null
},
{
"start": 540,
"end": 553,
"text": "(Marcu, 2000;",
"ref_id": "BIBREF37"
},
{
"start": 554,
"end": 573,
"text": "Louis et al., 2010;",
"ref_id": "BIBREF33"
},
{
"start": 574,
"end": 595,
"text": "Yoshida et al., 2014)",
"ref_id": "BIBREF58"
},
{
"start": 617,
"end": 649,
"text": "(Polanyi and Van den Berg, 2011;",
"ref_id": "BIBREF48"
},
{
"start": 650,
"end": 670,
"text": "Bhatia et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 701,
"end": 730,
"text": "(Miltsakaki and Kukich, 2004)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite the promising progress achieved in recent decades (Carlson et al., 2001; Hernault et al., 2010; Ji and Eisenstein, 2014; Feng and Hirst, 2014; Li et al., 2014; Joty et al., 2015; Morey et al., 2017) , discourse parsing still remains a significant challenge. The difficulty is due in part to shortage and low reliability of hand-annotated discourse structures. To develop a better-generalized parser, existing algorithms require a larger amounts of training data. However, manually annotating discourse structures is expensive, time-consuming, and sometimes highly ambiguous (Marcu et al., 1999) .",
"cite_spans": [
{
"start": 58,
"end": 80,
"text": "(Carlson et al., 2001;",
"ref_id": "BIBREF7"
},
{
"start": 81,
"end": 103,
"text": "Hernault et al., 2010;",
"ref_id": "BIBREF18"
},
{
"start": 104,
"end": 128,
"text": "Ji and Eisenstein, 2014;",
"ref_id": "BIBREF19"
},
{
"start": 129,
"end": 150,
"text": "Feng and Hirst, 2014;",
"ref_id": "BIBREF12"
},
{
"start": 151,
"end": 167,
"text": "Li et al., 2014;",
"ref_id": "BIBREF32"
},
{
"start": 168,
"end": 186,
"text": "Joty et al., 2015;",
"ref_id": "BIBREF22"
},
{
"start": 187,
"end": 206,
"text": "Morey et al., 2017)",
"ref_id": "BIBREF42"
},
{
"start": 582,
"end": 602,
"text": "(Marcu et al., 1999)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One possible solution to these problems is grammar induction (or unsupervised syntactic parsing) algorithms for discourse parsing. However, existing studies on unsupervised parsing mainly focus on sentence structures, such as phrase structures (Lari and Young, 1990; Klein and Manning, 2002; Golland et al., 2012; Jin et al., 2018) or dependency structures (Klein and Manning, 2004; Berg-Kirkpatrick et al., 2010; Naseem et al., 2010; Jiang et al., 2016) , though text-level structural regularities can also exist beyond the scope of a single sentence. For instance, in order to convey information to readers as intended, a writer should arrange utterances in a coherent order.",
"cite_spans": [
{
"start": 244,
"end": 266,
"text": "(Lari and Young, 1990;",
"ref_id": "BIBREF31"
},
{
"start": 267,
"end": 291,
"text": "Klein and Manning, 2002;",
"ref_id": "BIBREF28"
},
{
"start": 292,
"end": 313,
"text": "Golland et al., 2012;",
"ref_id": "BIBREF16"
},
{
"start": 314,
"end": 331,
"text": "Jin et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 357,
"end": 382,
"text": "(Klein and Manning, 2004;",
"ref_id": "BIBREF29"
},
{
"start": 383,
"end": 413,
"text": "Berg-Kirkpatrick et al., 2010;",
"ref_id": "BIBREF3"
},
{
"start": 414,
"end": 434,
"text": "Naseem et al., 2010;",
"ref_id": "BIBREF44"
},
{
"start": 435,
"end": 454,
"text": "Jiang et al., 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We tackle these problems by introducing unsupervised discourse parsing, which induces discourse structures for given text without relying on human-annotated discourse structures. Based on Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) , which is one of the most widely accepted theories of discourse structure, we assume that coherent text can be represented as tree structures, such as the one in Figure 1 . The leaf nodes correspond to non-overlapping clauselevel text spans called elementary discourse units (EDUs) . Consecutive text spans are combined to each other recursively in a bottom-up manner to form larger text spans (represented by internal nodes) up to a global document span. These text spans are called discourse constituents. The internal nodes are labeled with both nuclearity statuses (e.g., Nucleus-Satellite or NS) and rhetorical Figure 1 : An example of RST-based discourse constituent structure we assume in this paper. Leaf nodes x i correspond to non-overlapping clause-level text segments, and internal nodes consists of three complementary elements: discourse constituents x i:j , discourse nuclearities (e.g., NS), and discourse relations (e.g., ELABORATION).",
"cite_spans": [
{
"start": 222,
"end": 247,
"text": "(Mann and Thompson, 1988)",
"ref_id": "BIBREF34"
},
{
"start": 524,
"end": 530,
"text": "(EDUs)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 411,
"end": 419,
"text": "Figure 1",
"ref_id": null
},
{
"start": 865,
"end": 873,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "relations (e.g., ELABORATION, CONTRAST) that hold between connected text spans.",
"cite_spans": [
{
"start": 17,
"end": 39,
"text": "ELABORATION, CONTRAST)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we especially focus on unsupervised induction of an unlabeled discourse constituent structure (i.e., a set of unlabeled discourse constituent spans) given a sequence of EDUs, which corresponds to the first tree-building step in conventional RST parsing. Such constituent structures provide hierarchical information of input text, which is useful in downstream tasks (Louis et al., 2010) . For instance, a constituent structure [X [Y Z]] indicates that text span Y is preferentially combined with Z (rather than X) to form a constituent span, and then the text span [Y Z] is connected with X. In other words, this structure implies that [X Y] is a distituent span and requires Z to become a constituent span. Our challenge is to find such discourse-level constituentness from EDU sequences.",
"cite_spans": [
{
"start": 381,
"end": 401,
"text": "(Louis et al., 2010)",
"ref_id": "BIBREF33"
},
{
"start": 445,
"end": 451,
"text": "[Y Z]]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The core hypothesis of this paper is that discourse tree structures and syntactic tree structures share the same (or similar) constituent properties at a metalevel, and thus, learning algorithms developed for grammar inductions are transferable to unsupervised discourse constituency parsing by proper modifications. Actually, RST structures can be formulated in a similar way as phrase structures in the Penn Treebank, though there are a few differences: The leaf nodes are not words but EDUs (e.g., clauses), and the internal nodes do not contain phrase labels but hold nuclearity statuses and rhetorical relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The expectation-maximization (EM) algorithm (Klein and Manning, 2004) has been the dominating unsupervised learning algorithm for grammar induction. Based on our hypothesis and this fact, we develop a span-based discourse parser (in an unsupervised manner) by using Viterbi EM (or ''hard'' EM) (Neal and Hinton, 1998; Spitkovsky et al., 2010; DeNero and Klein, 2008; Choi and Cardie, 2007; Goldwater and Johnson, 2005 ) with a margin-based criterion (Stern et al., 2017; Gaddy et al., 2018) . 1 Unlike the classic EM algorithm using inside-outside re-estimation (Baker, 1979) , Viterbi EM allows us to avoid explicitly counting discourse constituent patterns, which are generally too sparse to estimate reliable scores of text spans.",
"cite_spans": [
{
"start": 44,
"end": 69,
"text": "(Klein and Manning, 2004)",
"ref_id": "BIBREF29"
},
{
"start": 294,
"end": 317,
"text": "(Neal and Hinton, 1998;",
"ref_id": "BIBREF45"
},
{
"start": 318,
"end": 342,
"text": "Spitkovsky et al., 2010;",
"ref_id": null
},
{
"start": 343,
"end": 366,
"text": "DeNero and Klein, 2008;",
"ref_id": "BIBREF11"
},
{
"start": 367,
"end": 389,
"text": "Choi and Cardie, 2007;",
"ref_id": "BIBREF10"
},
{
"start": 390,
"end": 417,
"text": "Goldwater and Johnson, 2005",
"ref_id": "BIBREF15"
},
{
"start": 450,
"end": 470,
"text": "(Stern et al., 2017;",
"ref_id": "BIBREF56"
},
{
"start": 471,
"end": 490,
"text": "Gaddy et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 562,
"end": 575,
"text": "(Baker, 1979)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The other technical contribution is to present effective initialization methods for Viterbi training of discourse constituents. We introduce initial-tree sampling methods based on our prior knowledge of document structures. We show that proper initialization is crucial in this task, as observed in grammar induction (Klein and Manning, 2004; Gimpel and Smith, 2012) .",
"cite_spans": [
{
"start": 317,
"end": 342,
"text": "(Klein and Manning, 2004;",
"ref_id": "BIBREF29"
},
{
"start": 343,
"end": 366,
"text": "Gimpel and Smith, 2012)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the RST Discourse Treebank (RST-DT) (Carlson et al., 2001) , we compared our parse trees with manually annotated ones. We observed that our method achieves a Micro F 1 score of 68.6% (84.6%) in the (corrected) RST-PARSEVAL (Marcu, 2000; Morey et al., 2018) , which is comparable with or even superior to fully supervised parsers. We also investigated the discourse constituents that can or cannot be learned well by our method.",
"cite_spans": [
{
"start": 39,
"end": 61,
"text": "(Carlson et al., 2001)",
"ref_id": "BIBREF7"
},
{
"start": 226,
"end": 239,
"text": "(Marcu, 2000;",
"ref_id": "BIBREF37"
},
{
"start": 240,
"end": 259,
"text": "Morey et al., 2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows: Section 2 introduces the related work. Section 3 gives the details of our parsing model and training algorithm. Section 4 describes the experimental setting and Section 5 discusses the experimental results. Conclusions are given in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The earliest studies that use EM in unsupervised parsing are Lari and Young (1990) and Carroll and Charniak (1992) , which attempted to induce probabilistic context-free grammars (PCFG) and probabilistic dependency grammars using the classic inside-outside algorithm (Baker, 1979) . Klein and Manning (2001b, 2002) perform a weakened version of constituent tests (Radford, 1988) by the Constituent-Context Model (CCM), which, unlike a PCFG, describes whether a contiguous text span (such as DT JJ NN) is a constituent or a distituent. The CCM uses EM to learn constituenthood over part-of-speech (POS) tags and for the first time outperformed the strong right-branching baseline in unsupervised constituency parsing. Klein and Manning (2004) proposed the Dependency Model with Valence (DMV), which is a head automata model (Alshawi, 1996) for unsupervised dependency parsing over POS tags and also relies on EM. These two models have been extended in various works for further improvements (Berg-Kirkpatrick et al., 2010; Naseem et al., 2010; Golland et al., 2012; Jiang et al., 2016) .",
"cite_spans": [
{
"start": 61,
"end": 82,
"text": "Lari and Young (1990)",
"ref_id": "BIBREF31"
},
{
"start": 87,
"end": 114,
"text": "Carroll and Charniak (1992)",
"ref_id": "BIBREF8"
},
{
"start": 267,
"end": 280,
"text": "(Baker, 1979)",
"ref_id": "BIBREF2"
},
{
"start": 283,
"end": 292,
"text": "Klein and",
"ref_id": "BIBREF25"
},
{
"start": 293,
"end": 314,
"text": "Manning (2001b, 2002)",
"ref_id": null
},
{
"start": 363,
"end": 378,
"text": "(Radford, 1988)",
"ref_id": "BIBREF49"
},
{
"start": 717,
"end": 741,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF29"
},
{
"start": 823,
"end": 838,
"text": "(Alshawi, 1996)",
"ref_id": null
},
{
"start": 990,
"end": 1021,
"text": "(Berg-Kirkpatrick et al., 2010;",
"ref_id": "BIBREF3"
},
{
"start": 1022,
"end": 1042,
"text": "Naseem et al., 2010;",
"ref_id": "BIBREF44"
},
{
"start": 1043,
"end": 1064,
"text": "Golland et al., 2012;",
"ref_id": "BIBREF16"
},
{
"start": 1065,
"end": 1084,
"text": "Jiang et al., 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In general, these methods use the inside-outside (dynamic programming) re-estimation (Baker, 1979) in the E step. However, Spitkovsky et al. (2010) showed that Viterbi training (Brown et al., 1993) , which uses only the best-scoring tree to count the grammatical patterns, is not only computationally more efficient but also empirically more accurate in longer sentences. These properties are, thus, suitable for ''document-level'' grammar induction, where the document length (i.e., the number of EDUs) tends to be long. 2 In addition, as ex-plained later in Section 3, we incorporate Viterbi EM with a margin-based criterion (Stern et al., 2017; Gaddy et al., 2018) ; this allows us to avoid explicitly counting each possible discourse constituent pattern symbolically, which is generally too sparse and appears only once.",
"cite_spans": [
{
"start": 85,
"end": 98,
"text": "(Baker, 1979)",
"ref_id": "BIBREF2"
},
{
"start": 141,
"end": 147,
"text": "(2010)",
"ref_id": "BIBREF56"
},
{
"start": 177,
"end": 197,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF6"
},
{
"start": 627,
"end": 647,
"text": "(Stern et al., 2017;",
"ref_id": "BIBREF56"
},
{
"start": 648,
"end": 667,
"text": "Gaddy et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Prior studies (Klein and Manning, 2004; Gimpel and Smith, 2012; Naseem et al., 2010) have shown that initialization or linguistic knowledge plays an important role in EM-based grammar induction. Gimpel and Smith (2012) demonstrated that properly initialized DMV achieves improvements in attachment accuracies by 20 \u223c 40 points (i.e., 21.3% \u2192 64.3%), compared with the uniform initialization. Naseem et al. (2010) also found that controlling the learning process with the prior (universal) linguistic knowledge improves the parsing performance of DMV. These studies usually rely on insights on syntactic structures. In this paper, we explore discourse-level prior knowledge for effective initialization of the Viterbi training of discourse constituency parsers.",
"cite_spans": [
{
"start": 14,
"end": 39,
"text": "(Klein and Manning, 2004;",
"ref_id": "BIBREF29"
},
{
"start": 40,
"end": 63,
"text": "Gimpel and Smith, 2012;",
"ref_id": "BIBREF14"
},
{
"start": 64,
"end": 84,
"text": "Naseem et al., 2010)",
"ref_id": "BIBREF44"
},
{
"start": 195,
"end": 218,
"text": "Gimpel and Smith (2012)",
"ref_id": "BIBREF14"
},
{
"start": 392,
"end": 412,
"text": "Naseem et al. (2010)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our method also relies on recent work on RST parsing. In particular, one of the initialization methods in our EM training (in Section 3.3 (i)) is inspired by the inter-sentential and multisentential approach used in RST parsing (Feng and Hirst, 2014; Joty et al., 2013 Joty et al., , 2015 . We also follow prior studies (Sagae, 2009; Ji and Eisenstein, 2014 ) and utilize syntactic information, i.e., dependency heads, which contributes to further performance gains in our method.",
"cite_spans": [
{
"start": 228,
"end": 250,
"text": "(Feng and Hirst, 2014;",
"ref_id": "BIBREF12"
},
{
"start": 251,
"end": 268,
"text": "Joty et al., 2013",
"ref_id": "BIBREF23"
},
{
"start": 269,
"end": 288,
"text": "Joty et al., , 2015",
"ref_id": "BIBREF22"
},
{
"start": 320,
"end": 333,
"text": "(Sagae, 2009;",
"ref_id": "BIBREF50"
},
{
"start": 334,
"end": 357,
"text": "Ji and Eisenstein, 2014",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The most similar work to that presented here is Kobayashi et al. (2019) , who propose unsupervised RST parsing algorithms in parallel with our work. Their method builds an unlabeled discourse tree by using the CKY dynamic programming algorithm. The tree-merging (splitting) scores in CKY are defined as similarity (dissimilarity) between adjacent text spans. The similarity scores are calculated based on distributed representations using pre-trained embeddings. However, similarity between adjacent elements are not always good indicators of constituentness. Consider tag sequences ''VBD IN'' and ''IN NN''. The former is an example of a distituent sequence, whereas the latter is a constituent. ''VBD'', ''IN'', and ''NN'' may have similar distributed representations because these tags cooccur frequently in corpora. This implies that it is difficult to distinguish constituents and distituents if we use only similarity (dissimilarity) measures. In this paper, we aim to mitigate this issue by introducing parameterized models to learn discourse constituentness.",
"cite_spans": [
{
"start": 48,
"end": 71,
"text": "Kobayashi et al. (2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we first describe the parsing model we develop. Next, we explain how to train the model in an unsupervised manner by using Viterbi EM. Finally, we present the initialization methods we use for further improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "The parsing problem in this study is to find the unlabeled constituent structure with the highest score for an input text x, that is,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T = arg max T \u2208valid (x) s(x, T )",
"eq_num": "(1)"
}
],
"section": "Parsing Model",
"sec_num": "3.1"
},
{
"text": "where s(x, T ) \u2208 R denotes a real-valued score of a tree T , and valid (x) represents a set of all valid trees for x. We assume that x has already been manually segmented into a sequence of EDUs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.1"
},
{
"text": "x = x 0 , . . . , x n\u22121 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.1"
},
{
"text": "Inspired by the success of recent span-based constituency parsers (Stern et al., 2017; Gaddy et al., 2018) , we define the tree scores as the sum of constituent scores over internal nodes, that is,",
"cite_spans": [
{
"start": 66,
"end": 86,
"text": "(Stern et al., 2017;",
"ref_id": "BIBREF56"
},
{
"start": 87,
"end": 106,
"text": "Gaddy et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.1"
},
{
"text": "s(x, T ) = (i,j)\u2208T s(i, j).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.1"
},
{
"text": "( 2)Thus, our parsing model consists of a single scoring function s(i, j) that computes a constituent score of a contiguous text span",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.1"
},
{
"text": "x i:j = x i , . . . , x j , or simply (i, j).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.1"
},
{
"text": "The higher the value of s(i, j), the more likely that x i:j is a discourse constituent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.1"
},
{
"text": "We show our parsing model in Figure 2 . Our implementation of s(i, j) can be decomposed into three modules: EDU-level feature extraction, span-level feature extraction, and span scoring. We discuss each of these in turn. Later, we also explain the decoding algorithm that we use to find the globally best-scoring tree.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 37,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.1"
},
{
"text": "Inspired by existing RST parsers (Ji and Eisenstein, 2014; Li et al., 2014; Joty et al., 2015) , we first encode the beginning and end words of an EDU:",
"cite_spans": [
{
"start": 33,
"end": 58,
"text": "(Ji and Eisenstein, 2014;",
"ref_id": "BIBREF19"
},
{
"start": 59,
"end": 75,
"text": "Li et al., 2014;",
"ref_id": "BIBREF32"
},
{
"start": 76,
"end": 94,
"text": "Joty et al., 2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v bw i = Embed w (b w ),",
"eq_num": "(3)"
}
],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v ew i = Embed w (e w ),",
"eq_num": "(4)"
}
],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "where b w and e w denote the beginning and end words of the i-th EDU, and Embed w is a function that returns a parameterized embedding of the input word. We also encode the POS tags corresponding to b w and e w as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v bp i = Embed p (b p ),",
"eq_num": "(5)"
}
],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v ep i = Embed p (e p ),",
"eq_num": "(6)"
}
],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "where Embed p is an embedding function for POS tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "Prior work (Sagae, 2009; Ji and Eisenstein, 2014) has shown that syntactic cues can accelerate discourse parsing performance. We therefore extract syntactic features from each EDU. We apply a (syntactic) dependency parser to each sentence in the input text, 3 and then choose a head word for each EDU. A head word is a token whose parent in the dependency graph is ROOT or is not within the EDU. 4 We also extract the POS tag and the dependency label corresponding to the head word. A dependency label is a relation between a head word and its parent.",
"cite_spans": [
{
"start": 11,
"end": 24,
"text": "(Sagae, 2009;",
"ref_id": "BIBREF50"
},
{
"start": 25,
"end": 49,
"text": "Ji and Eisenstein, 2014)",
"ref_id": "BIBREF19"
},
{
"start": 396,
"end": 397,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "To sum up, we now have triplets of head information,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "{(h w , h p , h r ) i } n\u22121",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "i=0 , each denoting the head word, the head POS, and the head relation of the i-th EDU, respectively. We embed these symbols using look-up tables:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v hw i = Embed w (h w ),",
"eq_num": "(7)"
}
],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v hp i = Embed p (h p ),",
"eq_num": "(8)"
}
],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v hr i = Embed r (h r ),",
"eq_num": "(9)"
}
],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "where Embed r is an embedding function for dependency relations. Finally, we concatenate these embeddings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v \u2032 i = [v bw i ; v ew i ; v bp i ; v ep i ; v hw i ; v hp i ; v hr i ],",
"eq_num": "(10)"
}
],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "and then transform it using a linear projection and Rectified Linear Unit (ReLU) activation function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v i = ReLU (W v \u2032 i + b).",
"eq_num": "(11)"
}
],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "In the following, we use {v i } n\u22121 i=0 as the feature vectors for the EDUs, {x i } n\u22121 i=0 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "Figure 2: Our span-based discourse parsing model. We first encode each EDU based on the beginning and ending words and POS tags using embeddings. We also embed head information of each EDU. We then run a bidirectional LSTM and concatenate the span differences. The resulting vector is used to predict the constituent score of the text span (i, j). This figure illustrates the process for the span (1, 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "Following the span-based parsing models developed in the syntax domain (Stern et al., 2017; Gaddy et al., 2018) , we then run a bidirectional Long Short-Term Memory (LSTM) over the sequence of EDU representations, {v i } n\u22121 i=0 , resulting in forward and backward representations for",
"cite_spans": [
{
"start": 71,
"end": 91,
"text": "(Stern et al., 2017;",
"ref_id": "BIBREF56"
},
{
"start": 92,
"end": 111,
"text": "Gaddy et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "each step i (0 \u2264 i \u2264 n \u2212 1): \u2212 \u2192 h 0 , . . . , \u2212 \u2192 h n\u22121 = \u2212 \u2212\u2212\u2212 \u2192 LSTM (v 0 , . . . , v n\u22121 ), (12) \u2190 \u2212 h 0 , . . . , \u2190 \u2212 h n\u22121 = \u2190 \u2212\u2212\u2212 \u2212 LSTM (v 0 , . . . , v n\u22121 ). (13)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "We then compute a feature vector for a span (i, j) by concatenating the forward and backward span differences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i,j = [ \u2212 \u2192 h j \u2212 \u2212 \u2192 h i\u22121 ; \u2190 \u2212 h i \u2212 \u2190\u2212 \u2212 h j+1 ].",
"eq_num": "(14)"
}
],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "The feature vector, h i,j , is assumed to represent the content of the contiguous text span x i:j along with contextual information captured by the LSTMs. 5 We did not use any feature templates because we found that they did not improve parsing performance in our unsupervised setting, though we observed that template features roughly following Joty et al. (2015) improved performance in a supervised setting.",
"cite_spans": [
{
"start": 155,
"end": 156,
"text": "5",
"ref_id": null
},
{
"start": 346,
"end": 364,
"text": "Joty et al. (2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
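The span feature of Eq. (14) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the biLSTM hidden states of Eqs. (12)-(13) are replaced by random stand-ins, and zero-padding at the sequence boundaries (so that the forward state at index -1 and the backward state at index n are defined) is an assumed convention that the paper does not spell out.

```python
# Sketch of Eq. (14): span features as forward/backward LSTM state differences.
import numpy as np

def span_feature(fwd, bwd, i, j):
    """h_{i,j} = [fwd[j] - fwd[i-1]; bwd[i] - bwd[j+1]].

    `fwd` and `bwd` are padded with a zero vector at the boundaries so that
    fwd[i-1] for i = 0 and bwd[j+1] for j = n-1 are well defined (an assumed
    convention; the paper does not state its boundary handling).
    """
    d = fwd.shape[1]
    fwd_pad = np.vstack([np.zeros(d), fwd])      # fwd_pad[i] = fwd[i-1]
    bwd_pad = np.vstack([bwd, np.zeros(d)])      # bwd_pad[n] = 0
    forward_diff = fwd_pad[j + 1] - fwd_pad[i]   # fwd[j] - fwd[i-1]
    backward_diff = bwd_pad[i] - bwd_pad[j + 1]  # bwd[i] - bwd[j+1]
    return np.concatenate([forward_diff, backward_diff])

n, d = 4, 3
rng = np.random.default_rng(0)
fwd = rng.normal(size=(n, d))  # toy stand-ins for biLSTM hidden states
bwd = rng.normal(size=(n, d))
h = span_feature(fwd, bwd, 1, 2)
print(h.shape)  # (6,) -- twice the hidden dimensionality
```

The resulting vector is then fed to the MLP scorer of Eq. (15).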
{
"text": "Finally, given a span-level feature vector, h i,j , we use two-layer perceptrons with the ReLU activation function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s(i, j) = \\mathrm{MLP}(h_{i,j}),",
"eq_num": "(15)"
}
],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "which computes the constituent score of the contiguous text span x i:j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Scoring",
"sec_num": null
},
{
"text": "We use a Cocke-Kasami-Younger (CKY)-style dynamic programming algorithm to perform a global search over the space of valid trees and find the highest-scoring tree. For a document with n EDUs, we use an n \u00d7 n table C, whose cell C[i, j] stores the score of the best subtree spanning EDUs i through j. For spans of length one (i.e., i = j), we assign a constant scalar value:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C[i, i] = 1.",
"eq_num": "(16)"
}
],
"section": "Decoding",
"sec_num": null
},
{
"text": "For general spans 0 \u2264 i < j \u2264 n \u2212 1, we define the following recursion:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C[i, j] = s(i, j) + \\max_{i \\le k < j} \\left( C[i, k] + C[k + 1, j] \\right),",
"eq_num": "(17)"
}
],
"section": "Decoding",
"sec_num": null
},
{
"text": "where s(i, j) denotes the constituent score computed by our model. To parse the full document, we first compute C[0, n \u2212 1] in a bottom-up manner and then recursively trace the history of the selected split positions, k, resulting in a binary tree spanning the entire document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": null
},
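The decoding procedure of Eqs. (16)-(17) can be sketched as a standard CKY chart with backpointers. The scorer below is a toy stand-in for the learned s(i, j); everything else follows the recursion in the text.

```python
# Sketch of the CKY decoder of Eqs. (16)-(17): C[i, i] = 1 for single-EDU
# spans, and each larger span adds its constituent score s(i, j) to the best
# split of its two halves. `s` is a toy score function here.
def cky_decode(n, s):
    C = [[0.0] * n for _ in range(n)]
    back = [[None] * n for _ in range(n)]
    for i in range(n):
        C[i][i] = 1.0                              # Eq. (16)
    for length in range(2, n + 1):
        for i in range(0, n - length + 1):
            j = i + length - 1
            best_k, best = None, float("-inf")
            for k in range(i, j):                  # split into (i,k), (k+1,j)
                v = C[i][k] + C[k + 1][j]
                if v > best:
                    best, best_k = v, k
            C[i][j] = s(i, j) + best               # Eq. (17)
            back[i][j] = best_k

    def build(i, j):                               # trace back the splits
        if i == j:
            return i
        k = back[i][j]
        return (build(i, k), build(k + 1, j))

    return build(0, n - 1), C[0][n - 1]

# Toy scorer that strongly favours the span (0, 1):
tree, score = cky_decode(3, lambda i, j: 5.0 if (i, j) == (0, 1) else 0.0)
print(tree)  # ((0, 1), 2)
```

Because the favoured span (0, 1) is kept as a constituent, the decoder brackets EDUs 0 and 1 before attaching EDU 2.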
{
"text": "Viterbi EM",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Learning Using",
"sec_num": "3.2"
},
{
"text": "In this paper, we use Viterbi EM (Brown et al., 1993; Spitkovsky et al., 2010), a variant of the EM algorithm closely related to self-training (McClosky et al., 2006a,b), to train the span-based discourse constituency parser (Section 3.1) in an unsupervised manner. Viterbi EM has suitable properties for discourse processing, as described later in this section.",
"cite_spans": [
{
"start": 33,
"end": 53,
"text": "(Brown et al., 1993;",
"ref_id": "BIBREF6"
},
{
"start": 54,
"end": 78,
"text": "Spitkovsky et al., 2010)",
"ref_id": null
},
{
"start": 129,
"end": 155,
"text": "(McClosky et al., 2006a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Learning Using",
"sec_num": "3.2"
},
{
"text": "We first automatically sample initial trees based on our prior knowledge of document structures (described in Section 3.3) and then perform the M step on the sampled trees to initialize the model parameters. After the initialization step, we alternate between the E step and the M step. For early stopping, we use a held-out development set of 30 documents with annotated trees T*_dev, which are never used as supervision for estimating the parsing model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Procedure",
"sec_num": null
},
{
"text": "In the E step of Viterbi EM, based on the current model, we perform discourse constituency parsing for whole training documents X , resulting in a pseudo treebank with discourse constituent structures, i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Step",
"sec_num": null
},
{
"text": "D = \\{(x, \\hat{T}) \\mid x \\in X, \\hat{T} = \\arg\\max_{T \\in valid(x)} s(x, T)\\} \\quad (18)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Step",
"sec_num": null
},
{
"text": "where valid(x) denotes the set of all valid trees for x, s(x, T) is defined in Equation 2, and \\hat{T} is the highest-scoring parse tree under the current model. Klein and Manning (2001b) and Spitkovsky et al. (2010) count grammatical patterns used to derive the syntactic trees in D, which are then normalized and converted to probabilistic grammars in the next M step.",
"cite_spans": [
{
"start": 158,
"end": 183,
"text": "Klein and Manning (2001b)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "E Step",
"sec_num": null
},
{
"text": "In contrast, ''discourse'' constituents are extremely sparse and tend to appear only once, which implies that explicitly counting discourse constituent patterns symbolically is almost meaningless. We therefore directly use the trees in D to update the model parameters in the next M step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Step",
"sec_num": null
},
{
"text": "In the M step, we re-estimate the next model as if it is supervised by the best parse trees found in the previous E step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M Step",
"sec_num": null
},
{
"text": "More precisely, we update the model parameters so that the next model satisfies the following constraints:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M Step",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s(x, \\hat{T}) \\ge s(x, T') + \\Delta(\\hat{T}, T'),",
"eq_num": "(19)"
}
],
"section": "M Step",
"sec_num": null
},
{
"text": "for each instance (x, \\hat{T}) \\in D, where T' ranges over all valid trees. \\Delta(\\hat{T}, T') is a tree distance we define as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M Step",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\Delta(\\hat{T}, T') = |\\hat{T}| - |\\hat{T} \\cap T'|,",
"eq_num": "(20)"
}
],
"section": "M Step",
"sec_num": null
},
{
"text": "where |\\hat{T}| denotes the number of constituent spans (or internal nodes) in \\hat{T}, and |\\hat{T} \\cap T'| represents the number of spans shared between \\hat{T} and T'. In other words, the score of the best parse tree \\hat{T} should exceed that of any less-probable tree T' by at least the margin \\Delta(\\hat{T}, T').",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M Step",
"sec_num": null
},
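The tree distance of Eq. (20) can be sketched directly over span sets. This is an illustrative computation, with each binary tree represented as nested tuples whose leaves are EDU indices (a representation chosen here for convenience, not taken from the paper).

```python
# Sketch of Eq. (20): Delta(T_hat, T') = |T_hat| - |T_hat intersect T'|,
# with each binary tree represented by its set of internal-node spans (i, j).
def spans(tree):
    """Collect the constituent spans (i, j) of a nested-tuple binary tree
    whose leaves are EDU indices."""
    out = set()

    def walk(t):
        if isinstance(t, int):
            return t, t
        li, _ = walk(t[0])
        _, rj = walk(t[1])
        out.add((li, rj))
        return li, rj

    walk(tree)
    return out

def tree_distance(t_hat, t_other):
    s_hat, s_other = spans(t_hat), spans(t_other)
    return len(s_hat) - len(s_hat & s_other)       # Eq. (20)

t_hat = ((0, 1), (2, 3))       # spans: (0,1), (2,3), (0,3)
t_other = (((0, 1), 2), 3)     # spans: (0,1), (0,2), (0,3)
print(tree_distance(t_hat, t_other))  # 1
```

Since both trees are binary over the same EDUs, this distance equals the number of spans of the reference tree missing from the other tree, which is exactly the per-span +1 used later for loss-augmented decoding.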
{
"text": "Note that |\\hat{T}| = |T'| always holds, because the parse tree \\hat{T} and the negative-sample tree T' are both binary trees, and \\Delta(\\hat{T}, T') = 0 holds if and only if \\hat{T} = T'. These constraints can be rewritten using the following margin-based criterion:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M Step",
"sec_num": null
},
{
"text": "\\max\\left(0, \\max_{T'} \\left[ s(x, T') + \\Delta(\\hat{T}, T') \\right] - s(x, \\hat{T})\\right).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M Step",
"sec_num": null
},
{
"text": "We minimize this criterion using mini-batch stochastic gradient descent and the backpropagation algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M Step",
"sec_num": null
},
{
"text": "The highest-scoring negative tree T' (\\ne \\hat{T}) can be found efficiently by modifying the dynamic programming algorithm in Equation 17. In particular, we replace s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M Step",
"sec_num": null
},
{
"text": "(i, j) with s(i, j) + \\mathbb{1}[(i, j) \\notin \\hat{T}].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M Step",
"sec_num": null
},
{
"text": "Combining Viterbi training and the margin-based objective function allows us to (1) avoid explicitly counting discourse constituent patterns as symbolic variables and (2) directly use the scores of the trees found in the E step to re-estimate the next model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M Step",
"sec_num": null
},
{
"text": "In general, the EM algorithm tends to get stuck in local optima of the objective function (Charniak, 1993) . Therefore, proper initialization is vital in order to avoid trivial solutions. This phenomenon has also been observed in EM-based grammar induction (Klein and Manning, 2004; Gimpel and Smith, 2012) .",
"cite_spans": [
{
"start": 90,
"end": 106,
"text": "(Charniak, 1993)",
"ref_id": "BIBREF9"
},
{
"start": 257,
"end": 282,
"text": "(Klein and Manning, 2004;",
"ref_id": "BIBREF29"
},
{
"start": 283,
"end": 306,
"text": "Gimpel and Smith, 2012)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization in EM",
"sec_num": "3.3"
},
{
"text": "In this section, we introduce the initialization methods we use in Viterbi EM. More precisely, given an input document (i.e., a sequence of EDUs), we automatically build a discourse constituent structure based on our general prior knowledge of document structures. Below, we describe the four pieces of prior knowledge we use for the initial-tree sampling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization in EM",
"sec_num": "3.3"
},
{
"text": "(i) Document Hierarchy It is intuitively reasonable to consider that (elementary) discourse units belonging to the same textual chunk (e.g., sentence, paragraph) tend to form a subtree before crossing over the chunk boundaries. For example, we can assume that EDUs in the same sentence are preferentially connected with each other before being combined with EDUs in other sentences. Indeed, Joty et al. (2013, 2015) and Feng and Hirst (2014) observed that it is effective to incorporate intra-sentential and multi-sentential parsing to build a document-level tree.",
"cite_spans": [
{
"start": 395,
"end": 412,
"text": "Joty et al. (2013",
"ref_id": "BIBREF23"
},
{
"start": 413,
"end": 433,
"text": "Joty et al. ( , 2015",
"ref_id": "BIBREF22"
},
{
"start": 438,
"end": 459,
"text": "Feng and Hirst (2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization in EM",
"sec_num": "3.3"
},
{
"text": "First, we split an input document into sentence-level and paragraph-level segments by detecting sentence and paragraph boundaries, respectively. We obtain sentence segmentation by applying Stanford CoreNLP to the concatenation of EDUs. We also extract paragraph boundaries by detecting empty lines in the raw documents. 6 We then build a discourse constituent structure incrementally, from sentence-level subtrees to paragraph-level subtrees and then to the document-level tree, in a bottom-up manner. Figure 3 shows this process.",
"cite_spans": [
{
"start": 323,
"end": 324,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 502,
"end": 510,
"text": "Figure 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Initialization in EM",
"sec_num": "3.3"
},
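The incremental sentence-to-paragraph-to-document construction can be sketched as follows. The boundary lists are toy stand-ins for the detected sentence and paragraph boundaries, and right branching stands in for the elementary parser at every level; the paper's actual parsers per level vary.

```python
# Sketch of the hierarchical initial-tree construction: build subtrees within
# each sentence, then within each paragraph, then over the whole document.
def right_branching(elems):
    if len(elems) == 1:
        return elems[0]
    return (elems[0], right_branching(elems[1:]))

def hierarchical_tree(edus, sent_bounds, para_bounds, parser=right_branching):
    """`sent_bounds` / `para_bounds` give, for each segment, the slice of the
    previous level's units it covers (toy stand-ins for detected boundaries)."""
    sents = [parser(edus[a:b]) for a, b in sent_bounds]
    paras = [parser(sents[a:b]) for a, b in para_bounds]
    return parser(paras)

edus = [0, 1, 2, 3, 4]
# Two sentences (EDUs 0-2 and 3-4) forming a single paragraph:
tree = hierarchical_tree(edus, [(0, 3), (3, 5)], [(0, 2)])
print(tree)  # ((0, (1, 2)), (3, 4))
```

Note that EDUs 0-2 are fully bracketed before either sentence is combined with the other, reflecting the document-hierarchy assumption.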
{
"text": "The second piece of prior knowledge relates to information order in discourse and the branching tendencies of discourse trees. In general, an important text element tends to appear at an earlier position in the document, and the text following it complements the message, which is reflected in the Right Frontier Constraint (Polanyi, 1985) [Footnote 6: Therefore, our ''paragraph'' boundaries do not strictly correspond to paragraph segmentation. However, we found that this pseudo ''paragraph'' segmentation improves parsing accuracy. We used the raw WSJ files (''*.out'') in RST-DT, e.g., ''wsj 1135.out.''] [Figure 4 caption: (a) We assume that an important text element tends to appear at earlier positions in the text, and the text following it complements the message, which leads to the right-heavy structure. (b)-(c) We split an intra-sentential EDU sequence into two subsequences based on the location of the EDU with the ROOT word. We build right-branching trees for each subsequence individually and finally bracket them. Head words are underlined.]",
"cite_spans": [
{
"start": 319,
"end": 334,
"text": "(Polanyi, 1985)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [
{
"start": 597,
"end": 605,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "(ii) Discourse Branching Tendency",
"sec_num": null
},
{
"text": "in Segmented Discourse Representation Theory (Asher and Lascarides, 2003) . This tendency can be assumed to hold recursively. Therefore, it is reasonable to consider that discourse structures tend to form right-heavy trees, as shown in Figure 4(a) . Based on this assumption, we build right-branching trees for sentence-level, paragraph-level, and document-level discourse structures in the initial-tree sampling.",
"cite_spans": [
{
"start": 45,
"end": 73,
"text": "(Asher and Lascarides, 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 236,
"end": 247,
"text": "Figure 4(a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "(ii) Discourse Branching Tendency",
"sec_num": null
},
{
"text": "As already discussed, this work assumes that discourse structures tend to form right-heavy trees. However, in our preliminary experiments, we found that this naive assumption produces about 44% erroneous trees for sentence-level structures with 3 EDUs. For sentences with 4 EDUs, the error rate increases to about 70%, which is a non-negligible number in the initialization step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(iii) Syntax-Aware Branching Tendency",
"sec_num": null
},
{
"text": "To resolve this problem, we introduce another, more fine-grained, piece of knowledge for sentence-level discourse structures. We expect that sentence-level trees are more strongly affected by syntactic cues (e.g., dependency graphs) than paragraph-level or document-level trees. More specifically, given an EDU sequence of one sentence,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(iii) Syntax-Aware Branching Tendency",
"sec_num": null
},
{
"text": "x_i, \\ldots, x_j,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(iii) Syntax-Aware Branching Tendency",
"sec_num": null
},
{
"text": "we focus on the position of the EDU x_k whose head word is in a ROOT relation with its parent in the dependency graph. We assume that the sub-sequence from the ROOT EDU onward, x_{k:j}, roughly corresponds to the predicate of the sentence, and the sub-sequence before the ROOT EDU, x_{i:k-1}, corresponds to the subject. We build right-branching trees for each sub-sequence individually and finally bracket them. We illustrate the procedure in Figure 4(b)-(c).",
"cite_spans": [],
"ref_spans": [
{
"start": 439,
"end": 450,
"text": "Figure 4(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "(iii) Syntax-Aware Branching Tendency",
"sec_num": null
},
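The syntax-aware splitting can be sketched in a few lines. The ROOT position is passed in directly here; in the paper it comes from a dependency parse, which this sketch does not reproduce.

```python
# Sketch of the syntax-aware branching (RB*): split a sentence's EDU sequence
# at the EDU whose head word holds the ROOT dependency relation, build a
# right-branching tree for each part, and bracket the two parts.
def right_branching(elems):
    if len(elems) == 1:
        return elems[0]
    return (elems[0], right_branching(elems[1:]))

def rb_star(edus, root_index):
    """`root_index` is the position (within the sentence) of the ROOT EDU."""
    if root_index == 0:                   # no pre-ROOT part: plain RB
        return right_branching(edus)
    left = right_branching(edus[:root_index])
    right = right_branching(edus[root_index:])
    return (left, right)

# Four EDUs whose third (index 2) contains the ROOT word:
print(rb_star([0, 1, 2, 3], 2))  # ((0, 1), (2, 3))
# Plain RB would instead give (0, (1, (2, 3))).
```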
{
"text": "Inspired by Smith and Eisner (2006), we introduce a structural locality bias as the last piece of prior knowledge. Such a locality bias has been observed to improve the accuracy of dependency grammar induction. We hypothesize that discourse constituents with shorter spans are preferable to those with longer ones.",
"cite_spans": [
{
"start": 12,
"end": 35,
"text": "Smith and Eisner (2006)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(iv) Locality Bias",
"sec_num": null
},
{
"text": "Instead of introducing the locality bias into the initial-tree sampling, we encode it into the decoding algorithm used in training and inference. More precisely, we rewrite the CKY recursion in Equation 17 as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(iv) Locality Bias",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C[i, j] = s(i, j) + \\frac{\\lambda}{|i - j| + 1} + \\max_{i \\le k < j} \\left( C[i, k] + C[k + 1, j] \\right),",
"eq_num": "(21)"
}
],
"section": "(iv) Locality Bias",
"sec_num": null
},
{
"text": "where \\lambda denotes a hyperparameter, which we empirically set to \\lambda = 10. The second term decreases in inverse proportion to the span length |i - j| + 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(iv) Locality Bias",
"sec_num": null
},
{
"text": "4 Experiment Setup",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(iv) Locality Bias",
"sec_num": null
},
{
"text": "We use the RST Discourse Treebank (RST-DT) built by Carlson et al. (2001) , 7 which consists of 385 Wall Street Journal articles manually annotated with RST structures (Mann and Thompson, 1988) . We use the predefined split of 347 training articles and 38 test articles. We also prepare a development set with 30 instances randomly sampled from the training set, which is used only for hyper-parameter tuning and early stopping. We tokenized the documents using Stanford CoreNLP tokenizer and converted them to lowercase. We also replaced digits with ''7'' (e.g., ''12.34'' \u2192 ''77.77'') to reduce data sparsity.",
"cite_spans": [
{
"start": 52,
"end": 73,
"text": "Carlson et al. (2001)",
"ref_id": "BIBREF7"
},
{
"start": 168,
"end": 193,
"text": "(Mann and Thompson, 1988)",
"ref_id": "BIBREF34"
},
{
"start": 564,
"end": 586,
"text": "''12.34'' \u2192 ''77.77'')",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "We also replaced out-of-vocabulary tokens with special symbols '' UNK .''",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "Following existing studies in unsupervised syntactic parsing (Klein, 2005; Smith, 2006) , we quantitatively evaluate unsupervised parsers by comparing parse trees with the manually annotated ones. We use the standard (unlabeled) constituency metrics in PARSEVAL: Unlabeled Precision (UP), Unlabeled Recall (UR), and their Micro F 1 , which can indicate how well the parser identifies the linguistically reasonable structures.",
"cite_spans": [
{
"start": 61,
"end": 74,
"text": "(Klein, 2005;",
"ref_id": "BIBREF25"
},
{
"start": 75,
"end": 87,
"text": "Smith, 2006)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.2"
},
{
"text": "The traditional evaluation procedure for RST parsing is RST-PARSEVAL (Marcu, 2000) , which adapts the PARSEVAL for the RST representation shown in Figure 5 (a)-(b). However, Morey et al. (2018) showed that, as shown in Figure 5(c) , traditional RST-PARSEVAL gives a higher-thanexpected score because it considers pre-terminals (i.e., spans of length 1), which cannot be incorrect in the unlabeled constituency metrics. We therefore follow Morey et al. (2018) and perform the encoding of RST trees as shown in Figure 5 ",
"cite_spans": [
{
"start": 69,
"end": 82,
"text": "(Marcu, 2000)",
"ref_id": "BIBREF37"
},
{
"start": 174,
"end": 193,
"text": "Morey et al. (2018)",
"ref_id": "BIBREF43"
},
{
"start": 439,
"end": 458,
"text": "Morey et al. (2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 147,
"end": 155,
"text": "Figure 5",
"ref_id": "FIGREF1"
},
{
"start": 219,
"end": 230,
"text": "Figure 5(c)",
"ref_id": "FIGREF1"
},
{
"start": 509,
"end": 517,
"text": "Figure 5",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.2"
},
{
"text": "(d)-(f).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.2"
},
{
"text": "That is, we exclude spans of length 1 and include the root node. We also do not binarize the gold-standard trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.2"
},
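The corrected metric can be sketched as a span-matching computation: spans of length 1 are excluded, the root span is included, and Micro F1 pools span counts over all documents. The two toy trees below are illustrative; note that the paper's gold trees are non-binarized, whereas this sketch uses binary trees for simplicity.

```python
# Sketch of the corrected unlabeled constituency metrics (Morey et al., 2018
# style encoding): exclude length-1 spans, keep the root span, pool counts.
def eval_spans(tree):
    out = set()

    def walk(t):
        if isinstance(t, int):
            return t, t
        li, _ = walk(t[0])
        _, rj = walk(t[1])
        if rj > li:                       # exclude length-1 spans
            out.add((li, rj))
        return li, rj

    walk(tree)
    return out

def micro_f1(pred_trees, gold_trees):
    match = pred_total = gold_total = 0
    for p, g in zip(pred_trees, gold_trees):
        ps, gs = eval_spans(p), eval_spans(g)
        match += len(ps & gs)             # pooled over all documents
        pred_total += len(ps)
        gold_total += len(gs)
    up, ur = match / pred_total, match / gold_total   # Unlabeled P and R
    return 2 * up * ur / (up + ur)

pred = [((0, 1), (2, 3))]   # spans {(0,1), (2,3), (0,3)}
gold = [(((0, 1), 2), 3)]   # spans {(0,1), (0,2), (0,3)}
print(round(micro_f1(pred, gold), 3))  # 0.667
```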
{
"text": "To quantitatively evaluate our unsupervised discourse constituency parser, it is necessary to develop strong baseline parsers. We thus propose Combinational Incremental Parsers (CIPs), which automatically and incrementally build a discourse (unlabeled) constituent structure from an EDU sequence based on the prior knowledge introduced in Section 3.3. That is, CIPs first build sentence-level discourse trees based on sentence segmentation using an elementary parser f s . They then build paragraph-level trees using another elementary parser f p , and finally output the document-level tree using f d . An elementary parser is a function that returns a single tree given a sequence of EDUs or subtrees. A CIP can thus be represented as a triplet of elementary parsers, f s , f p , f d . We consider the following four candidates for the elementary parsers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "Right Branching (RB) Given a sequence of elements (i.e., EDUs or subtrees), RB always chooses the left-most element as the left terminal node and then treats the remaining elements as a right nonterminal (or terminal). This procedure is applied recursively to the remaining elements on the right, resulting in (x_0 (x_1 (x_2 \\ldots))). As described in Section 3.3, we expect that RB somewhat captures the branching tendency of discourse informational structures. RB was also used as a strong baseline for unsupervised syntactic constituency parsing in Klein and Manning (2001b).",
"cite_spans": [
{
"start": 551,
"end": 576,
"text": "Klein and Manning (2001b)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "Left Branching (LB) Contrary to RB, LB always chooses the right-most element as the right terminal and then transforms the remaining elements on the left to a subtree, resulting in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "(((\\ldots x_{n-3}) x_{n-2}) x_{n-1}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "Adaptive Right Branching (RB * ) We augment RB by considering the syntax-aware branching tendency, described in Section 3.3(iii). That is, based on the position of the head EDU (with the ROOT relation), we split the sentence into two parts and then perform RB for each sub-sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "Random Bottom-Up (BU) BU randomly selects two adjacent elements and brackets them. This operation is repeated in a bottom-up manner until we obtain a single binary tree spanning the whole sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
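The four elementary parsers above are simple enough to sketch directly; RB* was sketched in Section 3.3(iii), so only RB, LB, and BU are shown here, again over nested tuples of EDU indices (an illustrative representation, not the paper's).

```python
# Sketches of three elementary parsers: right branching, left branching, and
# a random bottom-up combiner.
import random

def rb(elems):
    if len(elems) == 1:
        return elems[0]
    return (elems[0], rb(elems[1:]))      # (x0 (x1 (x2 ...)))

def lb(elems):
    if len(elems) == 1:
        return elems[0]
    return (lb(elems[:-1]), elems[-1])    # (((...) x_{n-2}) x_{n-1})

def bu(elems, rng=random):
    elems = list(elems)
    while len(elems) > 1:                 # bracket two random neighbours
        k = rng.randrange(len(elems) - 1)
        elems[k:k + 2] = [(elems[k], elems[k + 1])]
    return elems[0]

print(rb([0, 1, 2, 3]))  # (0, (1, (2, 3)))
print(lb([0, 1, 2, 3]))  # (((0, 1), 2), 3)
```

A CIP then composes three such functions, one per level of the document hierarchy.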
{
"text": "We set the dimensionalities of the word embeddings, POS embeddings, relation embeddings, forward/backward LSTM hidden layers, and MLP to 300, 10, 10, 125, and 100, respectively. We initialized the word embeddings with the GloVe vectors trained on 840 billion tokens (Pennington et al., 2014) . During the training, we did not fine-tune the word embeddings. We run the initialization steps for 3 epochs. We used a minibatch size of 10. We also used the Adam optimizer (Kingma and Ba, 2015) .",
"cite_spans": [
{
"start": 266,
"end": 291,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF46"
},
{
"start": 467,
"end": 488,
"text": "(Kingma and Ba, 2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters",
"sec_num": "4.4"
},
{
"text": "In this section we report the results of the experiments and discuss them. We first discuss the comparison results of our method with baselines and the fully supervised RST parsers, including the results published in literature (Section 5.1). We then investigate the impact of initialization methods (Section 5.2). Finally, we provide our analysis on discourse constituents induced by our method (Section 5.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "We compared our method with the baselines described in Section 4.3. We also included the previous work (Kobayashi et al., 2019), evaluated against the gold trees. 8 For reference, we also compared our method with fully supervised parsers: the supervised version of our model 9 and recent supervised parsers (Feng and Hirst, 2014; Joty et al., 2015) that incorporate intra-sentential and multi-sentential parsing as in our parser. Table 1 shows the unlabeled constituency scores in the corrected RST-PARSEVAL (Morey et al., 2018) against non-binarized trees. We also show the traditional RST-PARSEVAL Micro F1 scores in parentheses. f s , f d indicates that we used only sentence boundaries and discarded paragraph boundaries. The scores of the external supervised parsers (Feng and Hirst, 2014; Joty et al., 2015) are borrowed from Morey et al. (2018). 8 However, scores against the binarized trees and the original trees are quite similar (Morey et al., 2018). 9 We used the same model and hyperparameters as the unsupervised model. The only difference is that we used conventional supervised learning with manually annotated trees instead of Viterbi EM.",
"cite_spans": [
{
"start": 103,
"end": 127,
"text": "(Kobayashi et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 301,
"end": 323,
"text": "(Feng and Hirst, 2014;",
"ref_id": "BIBREF12"
},
{
"start": 324,
"end": 342,
"text": "Joty et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 502,
"end": 522,
"text": "(Morey et al., 2018)",
"ref_id": "BIBREF43"
},
{
"start": 763,
"end": 785,
"text": "(Feng and Hirst, 2014;",
"ref_id": "BIBREF12"
},
{
"start": 786,
"end": 804,
"text": "Joty et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 823,
"end": 842,
"text": "Morey et al. (2018)",
"ref_id": "BIBREF43"
},
{
"start": 845,
"end": 846,
"text": "8",
"ref_id": null
},
{
"start": 932,
"end": 952,
"text": "(Morey et al., 2018)",
"ref_id": "BIBREF43"
},
{
"start": 955,
"end": 956,
"text": "9",
"ref_id": null
}
],
"ref_spans": [
{
"start": 424,
"end": 431,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "5.1"
},
{
"text": "We observe that: (1) the incremental tree-construction approach with boundary information consistently improves the parsing performance of the baselines; (2) RB-based CIPs are better than those with LB or BU; and (3) replacing RB with RB* yields further improvements. These results confirm the validity of the prior knowledge of document structures. The best baseline is RB*_s, RB_p, LB_d, which achieves a Micro F1 score of 66.8% (83.7%) without any learning. Surprisingly, this score is competitive with those of the supervised parsers. Table 1 also demonstrates that our method outperforms all the baselines and achieves an F1 score of 67.5% (84.0%). If we use the best baseline for initial-tree sampling in Viterbi EM, the performance further improves to 68.0% (84.3%).",
"cite_spans": [],
"ref_spans": [
{
"start": 549,
"end": 556,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "5.1"
},
{
"text": "To investigate the potential of our unsupervised parser, we also augmented the training dataset with an external unlabeled corpus. We used about 2,000 news articles from the Wall Street Journal in the Penn Treebank (Marcus et al., 1993) that are not shared with the RST-DT test set. We split the raw documents into EDUs using an external pre-trained EDU segmenter (Wang et al., 2018) 10 and found that the larger unlabeled dataset improves parsing performance to 68.6%.",
"cite_spans": [
{
"start": 207,
"end": 228,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF38"
},
{
"start": 372,
"end": 391,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "5.1"
},
{
"text": "It is worth noting that our method outperforms the baselines used for the initialization, which implies that our method learns some knowledge of discourse constituentness in an unsupervised manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "5.1"
},
{
"text": "Our method also achieves comparable or superior results to supervised models. We suspect that the reason the supervised version of our model outperforms the external supervised parsers (Feng and Hirst, 2014; Joty et al., 2015) lies mostly in the feature extraction and the introduction of paragraph boundaries.",
"cite_spans": [
{
"start": 189,
"end": 211,
"text": "(Feng and Hirst, 2014;",
"ref_id": "BIBREF12"
},
{
"start": 212,
"end": 230,
"text": "Joty et al., 2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "5.1"
},
{
"text": "Here, we evaluate the importance of initialization in Viterbi EM. Beginning with uniform initialization, we incrementally applied the initialization techniques introduced in Section 3.3 and investigated their impact on the results. Table 2 shows the results. We observe that our model yields the lowest score of 58.9% with uniform initialization (cf. Table 1). Introducing Document Hierarchy in Section 3.3(i) slightly improved the score to 59.1%. We then introduced Discourse Branching Tendency in Section 3.3(ii) by replacing BU with RB in the CIP, which also improved the performance slightly, to 59.7%. We then introduced Syntax-Aware Branching Tendency in Section 3.3(iii) by replacing RB with RB* only at the sentence level, which brought a considerable performance gain of 6.6 points (66.3%). Finally, we introduced Locality Bias in Section 3.3(iv) and achieved 67.5%. We also found that our model can be improved further, to 68.0%, if we use the best baseline for initialization. In total, these initialization techniques made a difference of 9.1 points compared with uniform initialization (i.e., 58.9 \u2192 68.0), which implies that initialization should be carefully considered in unsupervised discourse (constituency) parsing using EM and that the prior knowledge we proposed in Section 3.3(i)-(iv) captures some of the tendencies of document structures. We also found that Syntax-Aware Branching Tendency is the most effective among the techniques, which suggests that more detailed knowledge can yield further improvements.",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 239,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 323,
"end": 330,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Impact of Initialization Methods",
"sec_num": "5.2"
},
{
"text": "Knowledge | Initial Trees | Micro F1\n No (Uniform) | BU | 58.9\n (i) | BU_s, BU_p, BU_d | 59.1\n (i)+(ii) | RB_s, RB_p, RB_d | 59.7\n (i)+(ii)+(iii) | RB*_s, RB_p, RB_d | 66.3\n (i)+(ii)+(iii)+(iv) | RB*_s, RB_p, RB_d | 67.5\n Best baseline | RB*_s, RB_p, LB_d | 68.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Initialization Methods",
"sec_num": "5.2"
},
{
"text": "Here, we further investigate the discourse constituentness learned by our method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learned Discourse Constituentness",
"sec_num": "5.3"
},
{
"text": "First, we calculated Unlabeled Recall (UR) scores for each relation class in RST-DT. We used 18 coarse-grained classes. Note that we only focus on constituent spans {(i, j)} because our method does not predict relation labels. Table 3 shows the results for the best four and the worst four relation classes of our method. We compare the results with the supervised version. We observe that, although our method uses an unsupervised approach and does not rely on structural annotations, some scores are comparable to those of the supervised version. We also found that relation classes with relatively higher scores can be assumed to form right-heavy structures (e.g., ATTRIBUTION, ENABLEMENT), whereas relations with lower scores can be considered to form left-heavy structures (e.g., EVALUATION, SUMMARY). These results are natural because the initialization methods we used in the Viterbi training strongly rely on RB-based CIPs. This implies that, to capture the discourse constituency phenomena of SUMMARY or EVALUATION relations, it is necessary to introduce other initialization techniques (or prior knowledge) in the future.",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 241,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Learned Discourse Constituentness",
"sec_num": "5.3"
},
{
"text": "Lastly, we qualitatively inspected the discourse constituentness learned by our method. We computed span scores s(i, j) for all possible spans (i, j) in the RST-DT test set without using any boundary information. We then sampled text spans x i:j with relatively higher constituent scores, s(i, j) > 10.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learned Discourse Constituentness",
"sec_num": "5.3"
},
{
"text": "As shown in the upper part of Table 4 , we can observe that our method learns some aspects of discourse constituentness that seems linguistically reasonable. In particular, we found that our method has a potential to predict brackets for (1) clauses with connectives qualifying other clauses from right to left (e.g., ''X [because B.]'') and (2) attribution structures (e.g., ''say that [B]''). These results indicate that our method is good at identifying discourse constituents near the end Table 4 : Discourse constituents and their predicted scores (in parentheses). We show the discourse constituents (in bold) in the RST-DT test set, which have relatively high span scores. We did NOT use any sentence/paragraph boundaries for scoring. of sentences (or paragraphs), which is natural because RB is mainly used for generating initial trees in EM training. The bottom part of Table 4 demonstrates that the beginning position of the text span is also important to estimate constituenthood, along with the ending position.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 4",
"ref_id": null
},
{
"start": 493,
"end": 500,
"text": "Table 4",
"ref_id": null
},
{
"start": 879,
"end": 886,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learned Discourse Constituentness",
"sec_num": "5.3"
},
{
"text": "In this paper, we introduced an unsupervised discourse constituency parsing algorithm that uses Viterbi EM with a margin-based criterion to train a span-based neural parser. We also introduced initialization methods for the Viterbi training of discourse constituents. We observed that our unsupervised parser achieves comparable or even superior performance to the baselines and fully supervised parsers. We also found that learned discourse constituents depend strongly on initialization used in Viterbi EM, and it is necessary to explore other initialization techniques to capture more diverse discourse phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We have two limitations in this study. First, this work focuses only on unlabeled discourse constituent structures. Although such hierarchical information is useful in downstream applications (Louis et al., 2010) , both nuclearity statuses and rhetorical relations are also necessary for a more complete RST analysis. Second, our study uses only English documents for evaluation. However, different languages may have different structural regularities. Hence, it would be interesting to investigate whether the initialization methods are effective in different languages, which we believe gives suggestions on discourse-level universals. We leave these issues as a future work.",
"cite_spans": [
{
"start": 192,
"end": 212,
"text": "(Louis et al., 2010)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Transactions of the Association for Computational Linguistics, vol. 8, pp. 215-230, 2020. https://doi.org/10.1162/tacl a 00312 Action Editor: Yuji Matsumoto. Submission batch: 07/2019; Revision batch: 12/2019; Published 4/2020. c 2020 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our code can be found at https://github.com/ norikinishida/DiscourseConstituencyInduction-ViterbiEM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Prior studies on grammar induction generally use sentences up to length 10, 15, or 40. On the other hand, about half the documents in the RST-DT corpus(Carlson et al., 2001) are longer than 40.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We apply the Stanford CoreNLP parser to the concatenation of the EDUs; https:// stanfordnlp.github.io/CoreNLP/.4 If there are multiple head words in an EDU, we choose the left most one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A detailed investigation of the span-based parsing model using LSTM can be found inGaddy et al. (2018).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://catalog.ldc.upenn.edu/ LDC2002T07.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "f s , f p , f d .(22)Inspired by earlier studies in unsupervised syntactic constituency parsing(Klein and Manning, 2001a,b;Klein, 2005;Seginer, 2007), we prepare",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/PKU-TANGENT/ NeuralEDUSeg.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research results have been achieved by ''Research and Development of Deep Learning Technology for Advanced Multilingual Speech Translation'', the Commissioned Research of National Institute of Information and Communications Technology (NICT), Japan. This work was also supported by JSPS KAKENHI grant number JP19K22861, JP18J12366.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "After 93 hours of deliberation, the jurors in the second trial said",
"authors": [],
"year": null,
"venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The bankruptcy-court reorganization is being challenged ... by a dissident group of claimants] [because it places a cap on the total amount of money available] [to settle claims.] [It also bars future suits against ...] (11.74) [The first two GAF trials were watched closely on Wall Street] [because they were considered to be important tests of goverment's ability] [to convince a jury of allegations] [stemming from its insider-trading investigations.] [In an eight-court indictment, the goverment charged GAF, ...] (10.16) [The posters were sold for $1,300 to $6,000,] [although the government says] [they had a value of only $53 to $200 apiece.] [Henry Pitman, the assistant U.S. attorney] [handling the case,] [said] [about ...] (11.31) [The office, an arm of the Treasury, said] [it doesn't have data on the financial position of applications] [and thus can't determine] [why blacks are rejected more often.] [Nevertheless, on Capital Hill,] [where ...] (11.57) [After 93 hours of deliberation, the jurors in the second trial said] [they were hopelessly deadlocked,] [and another mistrial was declared on March 22.] [Meanwhile, a federal jury found Mr. Bilzerian ...] (11.66) [(''I think | she knows me,] [but I'm not sure '')] [and Bridget Fonda, the actress] [(''She knows me,] [but we're not really the best of friends'').] [Mr. Revson, the gossip columnist, says] [there are people] [who ...] (11.11) [its vice president ... resigned] [and its Houston work force has been trimmed by 40 people, of about 15%.] [The maker of hand-held computers and computer systems said] [the personnel changes were needed] [to improve the efficiency of its manufacturing operation.] [The company said] [it hasn't named a successor ...] (4.44) [its vice president ... resigned] [and its Houston work force has been trimmed by 40 people, of about 15%.] 
[The maker of hand-held computers and computer systems said] [the personnel changes were needed] [to improve the efficiency of its manufacturing operation.] [The company said] [it hasn't named a successor...] (11.04) [its vice president ... resigned] [and its Houston work force has been trimmed by 40 people, of about 15%.] [The maker of hand-held computers and computer systems said ] [the personnel changes were needed] [to improve the efficiency of its manufacturing operation.] [The company said] [it hasn't named a successor...] (5.50) [its vice president ... resigned] [and its Houston work force has been trimmed by 40 people, of about 15%.] [The maker of hand-held computers and computer systems said] [the personnel changes were needed] [to improve the efficiency of its manufacturing operation.] [The company said] [it hasn't named a successor...] (7.68) References Hiyan Alshawi. 1996. Head automata and bilin- gual tiling: Translation with minimal represen- tations. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Logics and Conversation",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Asher and Alex Lascarides. 2003. Logics and Conversation, Cambridge University Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Trainable grammars for speech recognition",
"authors": [
{
"first": "James",
"middle": [
"K"
],
"last": "Baker",
"suffix": ""
}
],
"year": 1979,
"venue": "Speech Communication Papers for the 97th Meeting of the Acoustic Society of America",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James K. Baker. 1979. Trainable grammars for speech recognition. In Speech Communication Papers for the 97th Meeting of the Acoustic Society of America.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Painless unsupervised learning with features",
"authors": [
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Bouchard-C\u00f4t\u00e9",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taylor Berg-Kirkpatrick, Alexandre Bouchard- C\u00f4t\u00e9, John DeNero, and Dan Klein. 2010. Painless unsupervised learning with features. In Proceedings of the 2010 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Better document-level sentiment analysis from RST discourse parsing",
"authors": [
{
"first": "Parminder",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parminder Bhatia, Yangfeng Ji, and Jacob Eisenstein. 2015. Better document-level senti- ment analysis from RST discourse parsing.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine trans- lation: Parameter estimation. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory",
"authors": [
{
"first": "Lynn",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ellen"
],
"last": "Okurowski",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 2nd SIGdial Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2001. Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory. In Proceedings of the 2nd SIGdial Workshop on Discourse and Dialogue.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Two experiments on learning probabilistic dependency grammars from corpora",
"authors": [
{
"first": "Glenn",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1992,
"venue": "Working Notes of the Workshop Statistically-based NLP Techniques",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glenn Carroll and Eugene Charniak. 1992. Two experiments on learning probabilistic depen- dency grammars from corpora. In Working Notes of the Workshop Statistically-based NLP Techniques.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Statistical language learning",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak. 1993. Statistical language learning. MIT Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Structured local training and biased potential functions for conditional random fields with application to coreference resolution",
"authors": [
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yejin Choi and Claire Cardie. 2007. Structured local training and biased potential functions for conditional random fields with application to coreference resolution. In Proceedings of the 2007 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The complexity of phrase alighment problems",
"authors": [
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John DeNero and Dan Klein. 2008. The com- plexity of phrase alighment problems. In Pro- ceedings of the 46th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A linear-time bottom-up discourse parser with constraints and post-editing",
"authors": [
{
"first": "Vanessa",
"middle": [],
"last": "Wei Feng",
"suffix": ""
},
{
"first": "Graema",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vanessa Wei Feng and Graema Hirst. 2014. A linear-time bottom-up discourse parser with constraints and post-editing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "What's going on in neural constituency parsers? An analysis",
"authors": [
{
"first": "David",
"middle": [],
"last": "Gaddy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Gaddy, Mitchell Stern, and Dan Klein. 2018. What's going on in neural constituency parsers? An analysis. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Concavity and initialization for unsupervised dependency parsing",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Gimpel and Noah A. Smith. 2012. Con- cavity and initialization for unsupervised de- pendency parsing. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Representation bias in unsupervised learning of syllable structure",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 9th Conference on Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater and Mark Johnson. 2005. Representation bias in unsupervised learning of syllable structure. In Proceedings of the 9th Conference on Natural Language Learning.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A feature-rich constituent context model for grammar induction",
"authors": [
{
"first": "Dave",
"middle": [],
"last": "Golland",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dave Golland, John DeNero, and Jakob Uszkoreit. 2012. A feature-rich constituent context model for grammar induction. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "HILDA: A discourse parser using support vector machine classification",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Hernault",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Prendinger",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Duverle",
"suffix": ""
},
{
"first": "Mitsuru",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2010,
"venue": "Dialogue & Discourse",
"volume": "1",
"issue": "3",
"pages": "1--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugo Hernault, Helmut Prendinger, David A. DuVerle, and Mitsuru Ishizuka. 2010. HILDA: A discourse parser using support vector machine classification. Dialogue & Discourse, 1(3):1-33.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Representation learning for text-level discourse parsing",
"authors": [
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangfeng Ji and Jacob Eisenstein. 2014. Repre- sentation learning for text-level discourse pars- ing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Unsupervised neural dependency parsing",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Wenjuan",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Kewei",
"middle": [],
"last": "Tu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong Jiang, Wenjuan Han, and Kewei Tu. 2016. Unsupervised neural dependency parsing. In Proceedings of the 2016 Conference of Empir- ical Methods in Natural Language Processing.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Unsupervised grammar induction with depthbounded pcfg",
"authors": [
{
"first": "Lifeng",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Finale",
"middle": [],
"last": "Doshi-Velez",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Schuler",
"suffix": ""
},
{
"first": "Lane",
"middle": [],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "211--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lifeng Jin, Finale Doshi-Velez, Timothy Miller, William Schuler, and Lane Schwartz. 2018. Unsupervised grammar induction with depth- bounded pcfg. Transactions of the Association for Computational Linguistics, 6:211-224.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "CODRA a novel discriminative framework for rhetorical analysis",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "3",
"pages": "385--435",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shafiq Joty, Giuseppe Carenini, and Raymond T. Ng. 2015. CODRA a novel discriminative framework for rhetorical analysis. Computa- tional Linguistics, 41(3):385-435.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Combining intra-and multi-sentential rhetorical parsing for document-level discourse analysis",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shafiq Joty, Giuseppe Carenini, Raymond T. Ng, and Yashar Mehdad. 2013. Combining intra-and multi-sentential rhetorical parsing for document-level discourse analysis. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference Learning Representations.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The unsupervised learning of natural language structure",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein. 2005. The unsupervised learning of natural language structure. Ph.D. Thesis, Stanford University",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Distributional phrase structure induction",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 2001 Workshop on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2001a. Distributional phrase structure induction. In Proceedings of the 2001 Workshop on Compu- tational Natural Language Learning.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Natural language grammar induction using a constituent-context model",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2001,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2001b. Natural language grammar induction using a constituent-context model. In Advances in Neural Information Processing Systems.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A generative constituent-context model for improved grammar induction",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2002. A generative constituent-context model for improved grammar induction. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Corpus-based induction of syntactic structure: Models of constituency and dependency",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2004. Corpus-based induction of syntactic structure: Models of constituency and dependency. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Split of merge: Which is better for unsupervised RST parsing?",
"authors": [
{
"first": "Naoki",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Kengo",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Hidetaka",
"middle": [],
"last": "Kamigaito",
"suffix": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Okumura",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naoki Kobayashi, Tsutomu Hirao, Kengo Nakamura, Hidetaka Kamigaito, Manabu Okumura, and Masaaki Nagata. 2019. Split of merge: Which is better for unsupervised RST parsing? In Proceedings of the 2019 Conference of Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The estimation of stochastic context-free grammars using the Inside-Outside algorithm",
"authors": [
{
"first": "Karim",
"middle": [],
"last": "Lari",
"suffix": ""
},
{
"first": "Steve",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Speech and Language",
"volume": "4",
"issue": "",
"pages": "35--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karim Lari and Steve J. Young. 1990. The esti- mation of stochastic context-free grammars using the Inside-Outside algorithm. Computer Speech and Language, 4:35-56.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Text-level discourse dependency parsing",
"authors": [
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sujian Li, Liang Wang, Ziqiang Cao, and Wenjie Li. 2014. Text-level discourse dependency parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Discourse indicators for content selection in summarization",
"authors": [
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2010,
"venue": "SIGDIAL'10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annie Louis, Aravind Joshi, and Ani Nenkova. 2010. Discourse indicators for content selection in summarization. In SIGDIAL'10.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Rhetorical Structure Theory: Towards a functional theory of text organization",
"authors": [
{
"first": "William",
"middle": [
"C"
],
"last": "Mann",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "Text-Interdisciplinary Journal for the Study of Discourse",
"volume": "8",
"issue": "3",
"pages": "243--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William C. Mann and Sandra A. Thompson. 1988. Rhetorical Structure Theory: Towards a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse, 8(3):243-281.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "McClosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Experiments in constructing a corpus of discourse trees: Problems, annotation choices, issues",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Magdalena",
"middle": [],
"last": "Romera",
"suffix": ""
},
{
"first": "Estibaliz",
"middle": [],
"last": "Amorrortu",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the ACL'99 Workshop on Standards and Tools for Discourse Tagging",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Marcu, Magdalena Romera, and Estibaliz Amorrortu. 1999. Experiments in constructing a corpus of discourse trees: Problems, annotation choices, issues. In Proceedings of the ACL'99 Workshop on Standards and Tools for Discourse Tagging.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "The Theory and Practice of Discourse Parsing and Summarization",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization, MIT Press.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Effective self-training for parsing",
"authors": [
{
"first": "David",
"middle": [],
"last": "McClosky",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2006a. Effective self-training for parsing. In Proceedings of the 2006 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Reranking and self-training for parser adaptation",
"authors": [
{
"first": "David",
"middle": [],
"last": "McClosky",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2006b. Reranking and self-training for parser adaptation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Evaluation of text coherence for electronic essay scoring systems",
"authors": [
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Kukich",
"suffix": ""
}
],
"year": 2004,
"venue": "Natural Language Engineering",
"volume": "10",
"issue": "1",
"pages": "25--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eleni Miltsakaki and Karen Kukich. 2004. Evaluation of text coherence for electronic essay scoring systems. Natural Language Engineering, 10(1):25-55.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "How much progress have we made on RST discourse parsing? A replication study of recent results on the RST-DT",
"authors": [
{
"first": "Mathieu",
"middle": [],
"last": "Morey",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathieu Morey, Philippe Muller, and Nicholas Asher. 2017. How much progress have we made on RST discourse parsing? A replication study of recent results on the RST-DT. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "A dependency perspective on RST discourse parsing and evaluation",
"authors": [
{
"first": "Mathieu",
"middle": [],
"last": "Morey",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Linguistics",
"volume": "44",
"issue": "2",
"pages": "197--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathieu Morey, Philippe Muller, and Nicholas Asher. 2018. A dependency perspective on RST discourse parsing and evaluation. Computational Linguistics, 44(2):197-235.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Using universal linguistic knowledge to guide grammar induction",
"authors": [
{
"first": "Tahira",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Harr",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tahira Naseem, Harr Chen, Regina Barzilay, and Mark Johnson. 2010. Using universal linguistic knowledge to guide grammar induction. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "A view of the EM algorithm that justifies incremental, sparse, and other variants",
"authors": [
{
"first": "Radford",
"middle": [
"M"
],
"last": "Neal",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 1998,
"venue": "Learning in Graphical Models",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radford M. Neal and Geoffrey E. Hinton. 1998. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "A theory of discourse structure and discourse coherence",
"authors": [
{
"first": "Livia",
"middle": [],
"last": "Polanyi",
"suffix": ""
}
],
"year": 1985,
"venue": "Proceedings of the 21st Regional Meeting of the Chicago Linguistics Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Livia Polanyi. 1985. A theory of discourse structure and discourse coherence. In Proceedings of the 21st Regional Meeting of the Chicago Linguistics Society.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Discourse structure and sentiment",
"authors": [
{
"first": "Livia",
"middle": [],
"last": "Polanyi",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Van den Berg",
"suffix": ""
}
],
"year": 2011,
"venue": "2011 IEEE 11th International Conference on Data Mining Workshops",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Livia Polanyi and Martin Van den Berg. 2011. Discourse structure and sentiment. In 2011 IEEE 11th International Conference on Data Mining Workshops.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Transformational Grammar",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Radford",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Radford. 1988. Transformational Grammar, Cambridge University Press.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Analysis of discourse structure with syntactic dependencies and datadriven shift-reduce parsing",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 11th International Workshop on Parsing Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Sagae. 2009. Analysis of discourse structure with syntactic dependencies and data-driven shift-reduce parsing. In Proceedings of the 11th International Workshop on Parsing Technology.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Fast unsupervised incremental parsing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Seginer",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Seginer. 2007. Fast unsupervised incremental parsing. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Annealing structural bias in multilingual weighted grammar induction",
"authors": [
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah A. Smith and Jason Eisner. 2006. Annealing structural bias in multilingual weighted grammar induction. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Novel estimation methods for unsupervised discovery of latent structure in natural language text",
"authors": [
{
"first": "Noah Ashton",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah Ashton Smith. 2006. Novel estimation methods for unsupervised discovery of latent structure in natural language text. Ph.D. Thesis, Johns Hopkins University.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "A minimal span-based neural constituency parser",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Toward fast and accurate neural discourse segmentation",
"authors": [
{
"first": "Yizhong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jingfeng",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yizhong Wang, Sujian Li, and Jingfeng Yang. 2018. Toward fast and accurate neural discourse segmentation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Dependency-based discourse parser for single-document summarization",
"authors": [
{
"first": "Yasuhisa",
"middle": [],
"last": "Yoshida",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yasuhisa Yoshida, Jun Suzuki, Tsutomu Hirao, and Masaaki Nagata. 2014. Dependency-based discourse parser for single-document summarization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "We build a discourse constituent structure incrementally in a bottom-up manner. Sentence-level subtrees are shown in red rectangles, paragraphlevel subtrees in green rectangles, and the documentlevel tree in a blue rectangle."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Variants of RST encodings and the corresponding unlabeled constituency scores: Unlabeled Recall (UR) and Unlabeled Precision (UP)."
},
"TABREF1": {
"html": null,
"content": "<table><tr><td>: Unlabeled constituency scores in the cor-</td></tr><tr><td>rected RST-PARSEVAL (Morey et al., 2018)</td></tr><tr><td>against non-binarized trees. UP and UR represent</td></tr><tr><td>Unlabeled Precision and Unlabeled Recall, respec-</td></tr><tr><td>tively. For reference, we also show the traditional</td></tr><tr><td>RST-PARSEVAL Micro F 1 scores in parentheses.</td></tr><tr><td>Asterisk indicates that we have borrowed the score</td></tr><tr><td>from Morey et al. (2018).</td></tr></table>",
"text": "",
"num": null,
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>: Comparison of initialization methods in</td></tr><tr><td>our Viterbi training.</td></tr><tr><td>uniform initialization (no prior knowledge). By</td></tr><tr><td>introducing Document Hierarchy in Section 3.3(i),</td></tr><tr><td>parsing performance improves slightly to 59.1%.</td></tr><tr><td>This result is interesting because the unlabeled</td></tr><tr><td>constituency scores of</td></tr></table>",
"text": "BU and BU_s, BU_p, BU_d are quite different (19.5 vs. 55.5; see",
"num": null,
"type_str": "table"
},
"TABREF4": {
"html": null,
"content": "<table/>",
"text": "The best four and worst four rhetorical relations with their corresponding Unlabeled Recall scores. The relations are ordered according to scores of the unsupervised parser.",
"num": null,
"type_str": "table"
}
}
}
}