{ "paper_id": "S17-1027", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:29:00.617592Z" }, "title": "Classifying Semantic Clause Types: Modeling Context and Genre Characteristics with Recurrent Neural Networks and Attention", "authors": [ { "first": "Maria", "middle": [], "last": "Becker", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of North Texas", "location": {} }, "email": "mbecker@cl.uni-heidelberg.de" }, { "first": "Michael", "middle": [], "last": "Staniek \u2666\u2660", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of North Texas", "location": {} }, "email": "" }, { "first": "Vivi", "middle": [], "last": "Nastase \u2666\u2660", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of North Texas", "location": {} }, "email": "" }, { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of North Texas", "location": {} }, "email": "alexis.palmer@unt.edu" }, { "first": "Anette", "middle": [], "last": "Frank", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of North Texas", "location": {} }, "email": "frank@cl.uni-heidelberg.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Detecting aspectual properties of clauses in the form of situation entity types has been shown to depend on a combination of syntactic-semantic and contextual features. We explore this task in a deeplearning framework, where tuned word representations capture lexical, syntactic and semantic features. We introduce an attention mechanism that pinpoints relevant context not only for the current instance, but also for the larger context. Apart from implicitly capturing task relevant features, the advantage of our neural model is that it avoids the need to reproduce linguistic features for other languages and is thus more easily transferable. We present experiments for English and German that achieve competitive performance. We present a novel take on modeling and exploiting genre information and showcase the adaptation of our system from one language to another.", "pdf_parse": { "paper_id": "S17-1027", "_pdf_hash": "", "abstract": [ { "text": "Detecting aspectual properties of clauses in the form of situation entity types has been shown to depend on a combination of syntactic-semantic and contextual features. We explore this task in a deeplearning framework, where tuned word representations capture lexical, syntactic and semantic features. We introduce an attention mechanism that pinpoints relevant context not only for the current instance, but also for the larger context. Apart from implicitly capturing task relevant features, the advantage of our neural model is that it avoids the need to reproduce linguistic features for other languages and is thus more easily transferable. We present experiments for English and German that achieve competitive performance. 
We present a novel take on modeling and exploiting genre information and showcase the adaptation of our system from one language to another.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic clause types, called Situation Entity (SE) types (Smith, 2003; Palmer et al., 2007) are linguistic characterizations of aspectual properties shown to be useful for argumentation structure analysis (Becker et al., 2016b) , genre characterization , and detection of generic and generalizing sentences . Recent work on automatic identification of SE types relies on feature-based classifiers for English that have been successfully applied to various textual genres (Friedrich et al., 2016) , and also show that a sequence labeling approach that models contextual clause labels yields improved classification performance.", "cite_spans": [ { "start": 58, "end": 71, "text": "(Smith, 2003;", "ref_id": "BIBREF33" }, { "start": 72, "end": 92, "text": "Palmer et al., 2007)", "ref_id": "BIBREF27" }, { "start": 206, "end": 228, "text": "(Becker et al., 2016b)", "ref_id": "BIBREF2" }, { "start": 472, "end": 496, "text": "(Friedrich et al., 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Deep learning provides a powerful framework in which linguistic and semantic regularities can be implicitly captured through word embeddings (Mikolov et al., 2013b) . Patterns in larger text fragments can be encoded and exploited by recurrent (RNNs) or convolutional neural networks (CNNs) which have been successfully used for various sentence-based classification tasks, e.g. sentiment (Kim, 2014) or relation classification (Vu et al., 2016; Tai et al., 2015) .", "cite_spans": [ { "start": 141, "end": 164, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF24" }, { "start": 388, "end": 399, "text": "(Kim, 2014)", "ref_id": "BIBREF18" }, { "start": 427, "end": 444, "text": "(Vu et al., 2016;", "ref_id": "BIBREF36" }, { "start": 445, "end": 462, "text": "Tai et al., 2015)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We frame the task of classifying clauses with respect to their aspectual properties -i.e., situation entity types -in a recurrent neural network architecture. We adopt a Gated Recurrent Unit (GRU)based RNN architecture that is well suited to modeling long sequences (Yin et al., 2017) . This initial model is enhanced with an attention mechanism shown to be beneficial for sentence classification and sequence modeling (Dong and Lapata, 2016) . We explore the usefulness of attention in two settings: (i) the individual classification task and (ii) in a setting approximating sequential labeling in which the attention vector provides features that describe the clauses preceding the current instance. Compared to the strong baseline provided by the feature based system of Friedrich et al. (2016) , we achieve competitive performance and find that attention as well as context representation using predicted or goldstandard labels of the previous N clauses, and text genre information improve our model.", "cite_spans": [ { "start": 266, "end": 284, "text": "(Yin et al., 2017)", "ref_id": "BIBREF38" }, { "start": 419, "end": 442, "text": "(Dong and Lapata, 2016)", "ref_id": "BIBREF9" }, { "start": 774, "end": 797, "text": "Friedrich et al. 
(2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A strong motivation for developing NN-based systems is that they can be transferred with low cost to other languages without major feature engineering or use of hand-crafted linguistic knowledge resources. Given the highly-engineered feature sets used for SE classification so far (Friedrich et al., 2016) , porting such classifiers to other languages is a non-trivial issue. We test the portability of our system by applying it to German.", "cite_spans": [ { "start": 281, "end": 305, "text": "(Friedrich et al., 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We present a novel take on modeling and exploiting genre information and test it on the English multi-genre corpus of Friedrich et al. (2016) .", "cite_spans": [ { "start": 118, "end": 141, "text": "Friedrich et al. (2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our aims and contributions are: (i) We study the performance of GRU-based models enhanced with attention for modeling local and non-local characteristics of semantic clause types. (ii) We compare the effectiveness of the learned attention weights as features for a sequence labeling system to the explicitly defined syntactic-semantic features in (Friedrich et al., 2016) . (iii) We define extensions of our models that integrate external knowledge about genre and show that this can be used to improve classification performance across genres. (iv) We test the portability of our models to other languages by applying them to a smaller, manually annotated German dataset. The performance is comparable to English.", "cite_spans": [ { "start": 347, "end": 371, "text": "(Friedrich et al., 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Semantic clause types can be distinguished by the function they have within a text or discourse. We use the inventory of semantic clause types, also known as situation entity (SE) types, developed by Smith (2003) and extended in later work (Palmer et al., 2007; . SE types describe abstract semantic types of situations evoked in discourse through clauses. As such, they capture the manner of presentation of content, along with the information content itself. The seven SE types we use are described below.", "cite_spans": [ { "start": 200, "end": 212, "text": "Smith (2003)", "ref_id": "BIBREF33" }, { "start": 240, "end": 261, "text": "(Palmer et al., 2007;", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Clause Types", "sec_num": "2" }, { "text": "1. STATE (S): Armin has brown eyes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Clause Types", "sec_num": "2" }, { "text": "Bonnie ate three tacos.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EVENT (EV):", "sec_num": "2." }, { "text": "The agency said costs had increased. 4. GENERIC SENTENCE (GEN) predicates over classes or kinds: Birds can fly. -Scientists make arguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "REPORT (R) provides attribution:", "sec_num": "3." }, { "text": "Fei travels to India every year. 6. QUESTION (Q): Why do you torment me so? 7. IMPERATIVE (IMP): Listen to this. An eighth class OTHER is assigned to clauses without an SE label, e.g. 
bylines or email headers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GENERALIZING SENTENCE (GS) describes regularly occurring events:", "sec_num": "5." }, { "text": "Features that distinguish SE types are a combination of linguistic features of the clause and its main verb, and the nature of the main referent of the clause. 1 There is a correlation between the", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GENERALIZING SENTENCE (GS) describes regularly occurring events:", "sec_num": "5." }, { "text": "Feature-based classification of situation entity types. The first robust system for SE type classification (Friedrich et al., 2016) combines taskspecific syntactic and semantic features with distributional word features, as captured by Brown clusters (Brown et al., 1992) . This system segments each text into a sequence of clauses and then predicts the best sequence of SE labels for the text using a linear chain conditional random field (CRF) with label bigram features. 2 Although SE types are relevant across languages, their linguistic realization varies across languages.", "cite_spans": [ { "start": 107, "end": 131, "text": "(Friedrich et al., 2016)", "ref_id": "BIBREF12" }, { "start": 251, "end": 271, "text": "(Brown et al., 1992)", "ref_id": "BIBREF7" }, { "start": 474, "end": 475, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "Accordingly, some of Friedrich et al. 2016 Friedrich et al. (2016)'s system is trained and evaluated on data sets from MASC and Wikipedia (Section 5), reaching accuracies of 76.4% (F1 71.2) with 10-fold cross-validation, and 74.7% (F1 69.3) on a held-out test set. To evaluate the contribution of sequence information, Friedrich et al. (2016) compare the CRF model to a Maximum Entropy baseline, with the result that the sequential model significantly outperforms the model which classifies clauses in isolation, particularly for the less-frequent SE types of GENERIC SEN-TENCE and GENERALIZING SENTENCE.", "cite_spans": [ { "start": 319, "end": 342, "text": "Friedrich et al. (2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "When trained and tested within a single genre (of the 13 genres represented in the data sets), Friedrich et al. (2016)'s system performance ranges from 26.6 F1 (for government documents) to 66.2 F1 (for jokes). Training on all genres levels out this performance difference, with a range of F1 scores from 58.1-69.8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "Neural approaches to sentence classification, sequence and context modeling. Inspired by research in vision, sentence classification tasks have initially been modeled using Convolutional Neural Networks (Kim, 2014; Kalchbrenner et al., 2014) . RNN variations -with Gated Recurrent Units (GRU) (Cho et al., 2014) or Long Short-Term Memory units (LSTM) (Hochreiter and Schmidhuber, 1997) -have since achieved state of the art performance in both sequence modeling and classification tasks. Recent work applies bi-LSTM models in sequence modeling (PoS tagging, Plank et al. (2016) , NER Lample et al. (2016) ) and structure prediction tasks (Semantic Role Labeling, Zhou and Xu (2015) or semantic parsing into logical forms Dong and Lapata (2016) ). 
Tree-based LSTM models have been shown to often perform better than purely sequential bi-LSTMs (Tai et al., 2015; Miwa and Bansal, 2016) , but depend on parsed input.", "cite_spans": [ { "start": 203, "end": 214, "text": "(Kim, 2014;", "ref_id": "BIBREF18" }, { "start": 215, "end": 241, "text": "Kalchbrenner et al., 2014)", "ref_id": "BIBREF17" }, { "start": 293, "end": 311, "text": "(Cho et al., 2014)", "ref_id": "BIBREF8" }, { "start": 351, "end": 385, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF15" }, { "start": 558, "end": 577, "text": "Plank et al. (2016)", "ref_id": "BIBREF29" }, { "start": 580, "end": 604, "text": "NER Lample et al. (2016)", "ref_id": null }, { "start": 663, "end": 681, "text": "Zhou and Xu (2015)", "ref_id": "BIBREF40" }, { "start": 721, "end": 743, "text": "Dong and Lapata (2016)", "ref_id": "BIBREF9" }, { "start": 842, "end": 860, "text": "(Tai et al., 2015;", "ref_id": "BIBREF35" }, { "start": 861, "end": 883, "text": "Miwa and Bansal, 2016)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "Attention. Attention has been established as an effective mechanism that allows models to focus on specific words in the larger context. A model with attention learns what input tokens or token sequences to attend to and thus does not need to capture the complete input information in its hidden state. Attention has been used successfully e.g. in aspect-based sentiment classification , for modeling relations between words or phrases in encoder-decoder models for translation (Bahdanau et al., 2015) , or bi-clausal classification tasks such as textual entailment (Rockt\u00e4schel et al., 2016) . We use attention to larger context windows and previous labeling decisions to capture sequential information relevant for our classification task. We investigate the learned weights to gain information about what the models learn, and we start to explore how they can be used to provide features for a sequential labeling approach.", "cite_spans": [ { "start": 478, "end": 501, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" }, { "start": 566, "end": 592, "text": "(Rockt\u00e4schel et al., 2016)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "We aim for a system that can fine-tune input word embeddings to the task, and can process clauses as sequences of words from which to encode larger patterns that help our particular clause classification task. GRU RNNs are used because they can successfully process long sequences and capture long-term dependencies. Attention can encode which parts of the input contain relevant information. These modeling choices are described and justified in detail below. The performance of the models is reported in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "4" }, { "text": "Recurrent Neural Networks (RNNs) are modifications of feed-forward neural networks with recur-rent connections, which allow them to find patterns in -and thus model -sequences. Simple RNNs cannot capture long-term dependencies (Bengio et al., 1994) because the gradients tend to vanish or grow out of control with long sequences. Gated Recurrent Unit (GRU) RNNs, proposed by Cho et al. (2014) , address this shortcoming. 
GRUs have fewer parameters and thus need less data to generalize (Zhou et al., 2016) than LSTM RNNs, and also outperform the LSTM in many cases (Yin et al., 2017) , which makes them a good choice for our relatively small dataset. 3 The relevant equations for a GRU are below. x t is the input at time t, r t is a reset gate which determines how to combine the new input with the previous memory, and the update gate z t defines how much of the previous memory to keep. h t is the hidden state (memory) at time t, andh t is the candidate activation at time t. W * and U * are weights that are learned. denotes the element-wise multiplication of two vectors.", "cite_spans": [ { "start": 227, "end": 248, "text": "(Bengio et al., 1994)", "ref_id": "BIBREF3" }, { "start": 375, "end": 392, "text": "Cho et al. (2014)", "ref_id": "BIBREF8" }, { "start": 486, "end": 505, "text": "(Zhou et al., 2016)", "ref_id": "BIBREF39" }, { "start": 565, "end": 583, "text": "(Yin et al., 2017)", "ref_id": "BIBREF38" }, { "start": 651, "end": 652, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Basic Model: Gated Recurrent Unit", "sec_num": "4.1" }, { "text": "r", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Model: Gated Recurrent Unit", "sec_num": "4.1" }, { "text": "t = \u03c3(W r x t + U r h t\u22121 ) h t = tanh(W x t + U (r t h t\u22121 )) z t = \u03c3(W z x t + U z h t\u22121 ) h t = (1 \u2212 z t ) h t\u22121 + z t h t (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Model: Gated Recurrent Unit", "sec_num": "4.1" }, { "text": "The last hidden vector h t will be taken as the representation of the input clause. After compressing it into a vector whose length is equal to the number of class labels (=8) using a fully connected layer with sigmoid function, we apply sof tmax.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Model: Gated Recurrent Unit", "sec_num": "4.1" }, { "text": "We extend our GRU model with a neural attention mechanism to capture the most relevant words in the input clauses for classifying SE types. Specifically, we adapt the implementation of attention used in Rockt\u00e4schel et al. (2016) for our clause classification task as follows:", "cite_spans": [ { "start": 203, "end": 228, "text": "Rockt\u00e4schel et al. (2016)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Attention Model", "sec_num": "4.2" }, { "text": "M = tanh(W h H + W v h t \u2297 e L ) \u03b1 = sof tmax(w T M ) r = H\u03b1 T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention Model", "sec_num": "4.2" }, { "text": "where H is a matrix consisting of the hidden vectors [h 1 , ..., h t ] produced by the GRU, h t is the last output vector of the GRU, and e L is a vector of 1s where L denotes the L words of the input clause. \u2297 denotes the outer product of the two vectors. \u03b1 is a vector consisting of attention weights and r is a weighted representation of the input clause. 
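As a concrete illustration of the GRU recurrence in Eq. (1) and of this attention weighting, the following is a minimal NumPy sketch of encoding and classifying a single clause. The input dimension (300), the cell size (350) and the eight output classes follow the settings reported in this paper; the single-layer setup, the random parameter values and the helper names (gru_step, encode_clause) are illustrative assumptions rather than the authors' implementation, which trains all weights and fine-tunes the input embeddings.

```python
# Minimal sketch (NumPy) of Eq. (1) and the attention weighting; parameters
# are random here, whereas the real model learns them during training.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

rng = np.random.default_rng(0)
d_in, d_h = 300, 350                      # embedding size and GRU cell size
W_r, W_z, W = (rng.normal(0, 0.1, (d_h, d_in)) for _ in range(3))
U_r, U_z, U = (rng.normal(0, 0.1, (d_h, d_h)) for _ in range(3))

def gru_step(x_t, h_prev):
    """One step of Eq. (1): reset gate, candidate activation, update gate."""
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev)
    h_cand = np.tanh(W @ x_t + U @ (r_t * h_prev))    # * = element-wise product
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev)
    return (1.0 - z_t) * h_prev + z_t * h_cand

def encode_clause(word_vecs):
    """Run the GRU over a clause, then apply the attention of Section 4.2."""
    h, hidden = np.zeros(d_h), []
    for x_t in word_vecs:                 # one pre-trained embedding per token
        h = gru_step(x_t, h)
        hidden.append(h)
    H = np.stack(hidden, axis=1)          # d_h x L matrix of hidden vectors
    L = H.shape[1]
    W_h, W_v, W_p, W_x = (rng.normal(0, 0.1, (d_h, d_h)) for _ in range(4))
    w = rng.normal(0, 0.1, d_h)
    M = np.tanh(W_h @ H + np.outer(W_v @ h, np.ones(L)))
    alpha = softmax(w @ M)                # one attention weight per token
    r = H @ alpha                         # attention-weighted clause vector
    return alpha, np.tanh(W_p @ r + W_x @ h)          # h* of Eq. (2)

# Toy usage: a clause of six tokens with random "embeddings".
alpha, h_star = encode_clause(rng.normal(0, 1, (6, d_in)))
W_out = rng.normal(0, 0.1, (8, d_h))      # 8 = number of SE-type labels
class_probs = softmax(W_out @ h_star)
```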
W h , W v , and w are parameters to be learned during training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention Model", "sec_num": "4.2" }, { "text": "The final clause representation is obtained from a combination of the attention-weighted representation r of the clause and the last output vector v.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention Model", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h * = tanh(W p r + W x h t )", "eq_num": "(2)" } ], "section": "Attention Model", "sec_num": "4.2" }, { "text": "where W p and W x are trained projection matrices. We convert h * to a real-valued vector with length 8 (the number of target classes) and apply sof tmax to transform it to a probability distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention Model", "sec_num": "4.2" }, { "text": "Text types differ in their situation entity type distributions: Palmer et al. (2007) find that GENERIC SENTENCES and GENERALIZING SENTENCES play a predominant role for texts associated with the argument or commentary mode (such as essays), and EVENTS and STATES for texts associated with the report mode (such as news texts). (Becker et al., 2016a) find that argumentative texts are characterized by a high proportion of GENERIC and GENERALIZING SENTENCES and very few EVENTS, while reports and talks contain a high proportion of STATES, and fiction is characterized by a high number of EVENTS. Ngram analyses show that sequences of SE types differ among different genres: e.g. while ST-ST is the most frequent bigram within journal articles, the most frequent bigram in Wikipedia articles is GEN-GEN. The most frequent trigram in Jokes is EV-EV-EV, followed by ST-ST-ST, whereas in government documents the most frequent trigrams are ST-ST-ST and EV-ST-ST. These results show that n-grams cluster in texts (cf. ), and they differ among genres. This supports the choice of incorporating (sequential) context information for classification of SE types. Fig. 1 illustrates both the context and the genre information our models consider for classifying SE types, while Fig. 2 illustrates our model's architecture.", "cite_spans": [ { "start": 64, "end": 84, "text": "Palmer et al. (2007)", "ref_id": "BIBREF27" }, { "start": 326, "end": 348, "text": "(Becker et al., 2016a)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 1152, "end": 1158, "text": "Fig. 1", "ref_id": "FIGREF1" }, { "start": 1266, "end": 1272, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "Modeling Context and Genre", "sec_num": "4.3" }, { "text": "We develop two models that not only consider the local sentence for SE classification in model training, but also the previous clauses' token sequences or the labels of previous clauses. When attending to tokens of previous clauses we add one GRU model with attention mechanism for each previous clause (N denotes the number of previous clauses) and concatenate their final outputs with the final output of the GRU with attention for the current clause (cf. Fig. 2 ).", "cite_spans": [], "ref_spans": [ { "start": 458, "end": 464, "text": "Fig. 
2", "ref_id": null } ], "eq_spans": [], "section": "Context Modeling: Clauses and Labels", "sec_num": "4.3.1" }, { "text": "h * con1 =< tanh(W p r 1 + W x v 1 ); ...; tanh(W p r N + W x v N ) >", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Modeling: Clauses and Labels", "sec_num": "4.3.1" }, { "text": "We then transform the concatenated vector into a dense vector equal to the number of class labels and apply sof tmax.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Modeling: Clauses and Labels", "sec_num": "4.3.1" }, { "text": "For attending to labels of the previous clauses, we first transform the gold labels used during training into embeddings and apply attention as described in section 4.2 to these representations. We then concatenate the last output of the current clause with the embeddings for the labels of the previous clauses (here N denotes the number of previous labels):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Modeling: Clauses and Labels", "sec_num": "4.3.1" }, { "text": "h * con2 =< tanh(W p r+W x v); y t\u22121 ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Modeling: Clauses and Labels", "sec_num": "4.3.1" }, { "text": "...; y t\u2212N > where y t\u2212i is the embedding representation for the previous t-i label. At test time we use the predicted probability distribution vector as the labels of the previous clauses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Modeling: Clauses and Labels", "sec_num": "4.3.1" }, { "text": "The English corpus we use consists of texts from 13 genres; the German corpus covers 7 genres (Section 5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Modeling: Textual Genres", "sec_num": "4.3.2" }, { "text": "Information about genre is encoded as dense embeddings g of size 10 initialized randomly, and we apply attention mechanism to these representations. Adding genre information produces three new versions of the model: (i) genre+basic model: < g; h t > (h t from eq.1), (ii) genre+attention model < g; h * > (h * from eq.2), (iii) genre+context in form of previous labels (cf. Fig.2) . Results for all three combinations are reported in Section 6.", "cite_spans": [], "ref_spans": [ { "start": 374, "end": 380, "text": "Fig.2)", "ref_id": null } ], "eq_spans": [], "section": "Feature Modeling: Textual Genres", "sec_num": "4.3.2" }, { "text": "Word embeddings have been shown to capture syntactic and semantic regularities (Mikolov et al., 2013b) and to benefit from fine tuning for specific tasks. The features used by Friedrich et al. (2016) cover a variety of linguistic features -such as tense, voice, number, POS, semantic clusters -some of which we expect to be encoded in pre-trained embeddings, while others will emerge through model training. We start with pre-trained embeddings for both English and German, because this leads to better results than random ini- Fig. 1 ).", "cite_spans": [ { "start": 79, "end": 102, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF24" }, { "start": 176, "end": 199, "text": "Friedrich et al. (2016)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 528, "end": 534, "text": "Fig. 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Word embeddings", "sec_num": "4.4" }, { "text": "tialization. For German, we use 100-dimensional word2vec embeddings trained on a large German corpus of 116 million sentences (Reimers et al., 2014) . 
4 For English, we use 300-dimensional word2vec embeddings (Mikolov et al., 2013a) trained on part of the Google News dataset (about 100 billion words). The pre-trained embeddings are tuned during training.", "cite_spans": [ { "start": 126, "end": 148, "text": "(Reimers et al., 2014)", "ref_id": "BIBREF30" }, { "start": 151, "end": 152, "text": "4", "ref_id": null }, { "start": 209, "end": 232, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Word embeddings", "sec_num": "4.4" }, { "text": "Hyperparameter settings were determined through exhaustive random search using optunity (Bergstra and Bengio, 2012) on the development set, and we use the best setting for evaluating on the test set. We tune batch size, number of layers, GRU cell size, and regularization parameter (L2). For learning rate optimization we use AdaGrad (Duchi et al., 2011) and tune the initial learning rate. For the basic model (without attention), the best result on the development set is achieved for GRU with batch size 100, 2 layers, cell size 350, learning rate 0.05, and L2 regularization parameter (0.01). For the model using attention mechanism the parameters are identical except for L2 (0.0001). ", "cite_spans": [ { "start": 88, "end": 115, "text": "(Bergstra and Bengio, 2012)", "ref_id": "BIBREF6" }, { "start": 334, "end": 354, "text": "(Duchi et al., 2011)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Parameters and Tuning", "sec_num": "4.5" }, { "text": "We use the English dataset described in . 5 The texts, drawn from Wikipedia and MASC (Ide et al., 2010) , range across 13 genres, e.g. news texts, government documents, essays, fiction, jokes, emails. For German, we combine two data sets described in Mavridou et al. (2015) and Becker et al. (2016a) and additional data annotated by ourselves. 6 The German texts cover 7 genres: argumentative essays with STATES second at 24.3%. 7 For the 12 MASC genres, STATE is the most frequent type (49.8%), with EVENTS second at 24.3%. GENERIC SEN-TENCES make up only 7.3% of the SE types in the MASC texts. In the German data, the distributions of SE types also differ according to genre: in argumentative texts, for example, GENERIC SEN-TENCES make up 48% of the SE types, followed by STATES with a frequency of 32%, while in most other genres the most frequent class is STATE.", "cite_spans": [ { "start": 42, "end": 43, "text": "5", "ref_id": null }, { "start": 85, "end": 103, "text": "(Ide et al., 2010)", "ref_id": "BIBREF16" }, { "start": 251, "end": 273, "text": "Mavridou et al. (2015)", "ref_id": "BIBREF22" }, { "start": 278, "end": 299, "text": "Becker et al. (2016a)", "ref_id": "BIBREF1" }, { "start": 344, "end": 345, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5" }, { "text": "The texts of the English dataset are split into clauses using SPADE (Soricut and Marcu, 2003) . For segmenting the German dataset into clauses we use DiscourseSegmenter's rule-based segmenter (edseg, Sidarenka et al. (2015)), which employs German-specific rules. Because Dis-courseSegmenter occasionally oversplit segments, we did a small amount of post-processing.", "cite_spans": [ { "start": 68, "end": 93, "text": "(Soricut and Marcu, 2003)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5" }, { "text": "For the English dataset, we use the same testtrain split as Friedrich et al. (2016) . 
8 The German dataset was split into training and testing with a balanced distribution of genres (as is the case for the English dataset). Both datasets have a 80-20 split between training and testing (20% of training is used for development).", "cite_spans": [ { "start": 60, "end": 83, "text": "Friedrich et al. (2016)", "ref_id": "BIBREF12" }, { "start": 86, "end": 87, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Evaluation", "sec_num": "6" }, { "text": "We report results in terms of accuracy and macro-average F1 score on the held-out test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Evaluation", "sec_num": "6" }, { "text": "Baseline systems. The feature-based system of Palmer07 (Palmer et al., 2007) (Palmer07 in Table 2 ) simulates context through predicted labels from previous clauses. Friedrich et al. (2016) type labeler for different feature sets, with 10-fold cross validation and on a held-out test set. To test if the context is useful they extend their classifier with a CRF that includes the predicted label of the preceding clause. In the oracle setting it includes the gold label of the previous clause.", "cite_spans": [ { "start": 55, "end": 76, "text": "(Palmer et al., 2007)", "ref_id": "BIBREF27" }, { "start": 167, "end": 190, "text": "Friedrich et al. (2016)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 77, "end": 98, "text": "(Palmer07 in Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experiments and Evaluation", "sec_num": "6" }, { "text": "Feature set A consists of standard NLP features including POS tags and Brown clusters. Feature set B includes more detailed features such as tense, lemma, negation, modality, WordNet sense, Word-Net supersense and WordNet hypernym sense. We presume that some of the information captured by feature set B, particularly sense and hypernym information, may not be captured in the word embeddings we use in our approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Evaluation", "sec_num": "6" }, { "text": "Evaluation of our neural systems. Our local system (cf. Section 4.1) achieves an accuracy of 66.55 (Table 3) . Adding genre information does not help, but adding attention within the local clause yields an improvement of 2.44 percentage points (pp). Using both attention and genre information leads to a 2.13 pp increase over the model that uses only attention. Adding context information beyond the local clause -a window of up to three previous clauses -improves the wordbased attention models slightly, but a wider window (four or more clauses) causes a major drop Table 4 : SE-type classification on English test set, sequence oracle model using gold labels (gLab).", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 108, "text": "(Table 3)", "ref_id": "TABREF5" }, { "start": 568, "end": 575, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experiments and Evaluation", "sec_num": "6" }, { "text": "in accuracy. 9 Using context as predicted labels of previous clauses improves the model slightly (up to 0.38 pp), but adding genre on top of that improves the model by up to 2.62 pp compared to the basic model with attention. The oracle model (cf . Table 4 ), which uses the gold labels of previous clauses, gives an upper bound for the impact of sequence information: 73.40% accuracy for previous 5 gold labels. 
Combined with genre information, the upper bound reaches 73.45% accuracy when using the previous 2 gold labels. The best accuracy on the English data (ignoring the oracle) is achieved by the model that uses 2 previous predicted labels plus genre information (71.61%). This model outperforms Friedrich et al. (2016)'s results when using standard NLP features (feature set A) and their model using feature set B separately. Our model comes close to Friedrich et al.'s best results obtained by applying their entire set of features, particularly considering that our system only uses generic word embeddings.", "cite_spans": [], "ref_spans": [ { "start": 247, "end": 257, "text": ". Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experiments and Evaluation", "sec_num": "6" }, { "text": "Window size as hyper-parameter? We achieve best results when incorporating two previous labels or two previous clauses (cf. Table 3 ). This is in line with Palmer et al. (2007) who report that in most cases performance starts to degrade as the model incorporates more than two previous labels. A window size of two does not always lead to best performance on the German dataset (cf. Section 7), where the model using predicted labels from the maximum window size (5) performs best. When adding genre information, we achieve best results with window size two (cf. Table 5 and 6). This inconsistency can possibly be traced back to the fact that we applied the best-performing vari- 9 We achieve 36.24 acc for 4 and 36.17 acc for 5 clauses. Results for single classes. Fig. 6 shows macroaverage F1 scores of our best performing system for the single SE classes. The scores are very similar to the results of Friedrich et al. (2016) . Scores for GENERALIZING SENTENCE are the lowest as this class is very infrequent in the data set, while scores for the classes STATE, EVENT, and RE-PORT are the highest. In addition, we explored our system's performance for binary classification (Fig. 6 ): here we classified STATE vs. the remaining classes, EVENT vs. the remaining classes etc. Binary classification achieves better performance and can be helpful for downstream applications which only need information about specific Analysis of attention. Attention is not only an effective mechanism that allows models to focus on specific parts of the input, but it may also enable interesting linguistic insights: (1) the attention to specific words or POS for specific SE types, (2) the overall distribution of attention weights among POS tag labels and SE types, and (3) the position of words with maximum/high attention scores within a clause. Fig. 3 shows example clauses with their attention weights. In the first clause, a STATE, the model attends most to the nouns \"China\" and \"Japan\". In the next clause, a GENERALIZING SENTENCE, the noun \"system\" is assigned the highest attention weight. The highest weighted word in the GENERIC SENTENCE is the pronoun \"their\", and in REPORT it is the verb \"answered\". Fig. 4 visualizes the mean attention score per POS tag for all SE types (gold labels). 10 Interestingly, attention seems to be especially important for classes that are rare, such as IMPERA-TIVE or REPORT, each less than 5% of the English dataset. The heat map indicates that the model especially attends to verbs when classifying the SE type REPORT. This is not surprising, since REPORT clauses are signaled by verbs of speech. GENERALIZING SENTENCE attend to symbols, mainly punctuation, and genitive markers such as \"'s\". 
The OTHER class, which includes clauses without an assigned SE type label, attends mostly to interjections. Indeed, OTHER is frequent in genres with fragmented sentences (emails, blogs), and numerous interjections such as \"wow\" or \"um\". Fig. 5 shows the relative positions of words with maximum attention within clauses. The model mostly attends to words at the end of clauses and almost never to words in the first half of clauses. This distribution shifts to the left when considering more words with high attention scores instead of only the word with maximum attention -words with 2 nd (3 rd , 4 th , 5 th ) highest attention score can often be found at the beginning of clauses. The model seems to draw information from a broad range of positions.", "cite_spans": [ { "start": 156, "end": 176, "text": "Palmer et al. (2007)", "ref_id": "BIBREF27" }, { "start": 680, "end": 681, "text": "9", "ref_id": null }, { "start": 905, "end": 928, "text": "Friedrich et al. (2016)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 124, "end": 131, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 766, "end": 772, "text": "Fig. 6", "ref_id": "FIGREF5" }, { "start": 1177, "end": 1184, "text": "(Fig. 6", "ref_id": "FIGREF5" }, { "start": 1834, "end": 1840, "text": "Fig. 3", "ref_id": "FIGREF2" }, { "start": 2200, "end": 2206, "text": "Fig. 4", "ref_id": "FIGREF3" }, { "start": 2962, "end": 2968, "text": "Fig. 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Experiments and Evaluation", "sec_num": "6" }, { "text": "We explored the impact of the attention vectors as inputs to a sequence labeling modeleach clause is described through the words with the highest attention weights and these weights, and used in a conditional random field system (CRF++ 12 ). The best performance was obtained when using the attention vector of the current clause (and no additional context) -61.68% accuracy (47.18% F1 score). CRF++ maps the attention information to binary features, and as such cannot take advantage of information captured in the numerical values of the attention weights, or the embeddings of the given words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Evaluation", "sec_num": "6" }, { "text": "One advantage for developing NN-based systems that do not rely on hand-crafted features is that they can be used with different language data. We use the system described above with German data, only adjusting the size of the input embeddings. 13 Compared to the English dataset, the German dataset is smaller (44% in size) and less diverse with respect to genre (7 genres). The genres in the German dataset (argumentative texts, wikipedia, commentary, news, fiction, report, talk) are more similar to one another than the ones in the English dataset. The results comparing the effectiveness of integrating context and genre information are in Table 5 . The results of the oracle model using gold labels for previous clauses are in Table 6 . Compared to English, the models achieve higher performance, but attention by itself does not improve the results, and neither does the inclusion of genre information. Used jointly, attention and genre information yield a moderate increase of 1.06 pp. accuracy compared to the basic GRU. Attention may need more data and possibly more diversity to be learned effectively, and we will explore this in future work. 
Modeling context seems to have a larger impact:", "cite_spans": [ { "start": 244, "end": 246, "text": "13", "ref_id": null } ], "ref_spans": [ { "start": 644, "end": 651, "text": "Table 5", "ref_id": "TABREF8" }, { "start": 732, "end": 739, "text": "Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Porting the System to German", "sec_num": "7" }, { "text": "compared to the basic GRU using attention, information about the current and the previous clauses improves the model by up to 1.67 pp. More contextual information leads to higher accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Porting the System to German", "sec_num": "7" }, { "text": "We presented an RNN-based approach to situation entity classification that bears clear advantages compared to previous classifier models that rely on carefully hand-engineered features and lexical semantic resources: it is easily transferable to other languages as it can tune pre-trained embeddings to encode semantic information relevant for the task, and can develop attention models to capture -and reveal -relevant information from the larger context. We designed and compared several GRU-based RNN models that jointly model local and contextual information in a unified architecture. Genre information was added to model common properties of specific textual genres. What makes our work interesting for linguistically informed semantic models is the exploration of different model variants that combine local classification with sequence information gained from the contextual history, and how these properties interact with genre characteristics. We specifically explore attention mechanisms that help our models focus on specific characteristics of the local and non-local contexts. Attention models jointly using genre and context information in the form of previous predicted labels perform best for our task, for both languages. The performance results of our best models outperform the state of the art models of Fried16 for English when using either off-the-shelf NLP features (set A) or, separately, hand-crafted features based on lexical resources (set B). A small margin of ca. 
3 pp accuracy is left to achieve in future work to compete with the knowledge-rich models of (Friedrich et al., 2016) .", "cite_spans": [ { "start": 1587, "end": 1611, "text": "(Friedrich et al., 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "The main referent of a clause is roughly the per-distribution of SE types in text passages and discourse modes, e.g., narrative, informative, or argumentativeMavridou et al., 2015;Becker et al., 2016a).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "son/thing/situation the clause is about, often realized as its grammatical subject.2 Code and data: https://github.com/annefried/sitent", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Comparison of GRUs, bi-GRUs, LSTMs and bi-LSTMs on our dataset for our classification task showed that GRUs outperform the latter three, confirming this assumption.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://public.ukp.informatik.tu-darmstadt.de /reimers/2014 german embeddings", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available at: https://github.com/annefried/sitent 6 The data is available at http://www.cl.uni-heidelberg.de/ english/research/downloads/resource pages/GER SET/GER SET data.shtml. This dataset only contains the German data that has been annotated within the Leibniz Science campus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The Wiki texts were selected by precisely in order to target GENERIC SENTENCE clauses.8 The cross validation splits of the data used byFriedrich et al. (2016) are not available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We post-process our data with POS tags using spaCy 11 with the PTB tagset(Marcus et al., 1993).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://taku910.github.io/crfpp/ 13 The different size of the embeddings (for English and German cf. section 4.4, may have an impact on the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Acknowledgments. We thank Sabrina Effenberger, Jesper Klein, Sarina Meyer, and Rebekka Sons for the annotations, and the reviewers for their insightful comments. This research is funded by the Leibniz Science Campus Empirical Linguistics & Computational Language Modeling, supported by Leibniz Association grant no. SAS-2015-IDS-LWC and by the Ministry of Science, Research, and Art of Baden-W\u00fcrttemberg.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. 
In ICLR.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Argumentative texts and clause types", "authors": [ { "first": "Maria", "middle": [], "last": "Becker", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Anette", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 3rd Workshop on Argument Mining", "volume": "", "issue": "", "pages": "21--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Becker, Alexis Palmer, and Anette Frank. 2016a. Argumentative texts and clause types. In Proceed- ings of the 3rd Workshop on Argument Mining. pages 21-30.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Clause Types and Modality in Argumentative Microtexts", "authors": [ { "first": "Maria", "middle": [], "last": "Becker", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Anette", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2016, "venue": "Workshop on Foundations of the Language of Argumentation (in conjunction with COMMA)", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Becker, Alexis Palmer, and Anette Frank. 2016b. Clause Types and Modality in Argumentative Mi- crotexts. In Workshop on Foundations of the Language of Argumentation (in conjunction with COMMA). Potsdam, Germany, pages 1-9.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning long-term dependencies with gradient descent is difficult", "authors": [ { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "P", "middle": [], "last": "Simard", "suffix": "" }, { "first": "P", "middle": [], "last": "Frasconi", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Bengio, P. Simard, and P. Frasconi. 1994. Learn- ing long-term dependencies with gradient descent is difficult.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Random search for hyper-parameter optimization", "authors": [ { "first": "James", "middle": [], "last": "Bergstra", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2012, "venue": "Journal of Machine Learning Research", "volume": "13", "issue": "", "pages": "281--305", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research 13(Feb):281-305.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Class-based n-gram models of natural language", "authors": [ { "first": "F", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Peter", "middle": [ "V" ], "last": "Brown", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Desouza", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Mercer", "suffix": "" }, { "first": "Jenifer", "middle": [ "C" ], "last": "Della Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Lai", "suffix": "" } ], "year": 1992, "venue": "Computational Linguistics", "volume": "18", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown, Peter V. Desouza, Robert L. Mer- cer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural lan- guage. 
Computational Linguistics 18(4):467479.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "On the properties of neural machine translation: Encoder-decoder approaches", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Van Merrienboer", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, B van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Language to logical form with neural attention", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "33--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Pro- ceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers). Association for Computa- tional Linguistics, Berlin, Germany, pages 33-43. http://www.aclweb.org/anthology/P16-1004.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2121--2159", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12(Jul):2121-2159.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Situation entity annotation", "authors": [ { "first": "Annemarie", "middle": [], "last": "Friedrich", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Linguistic Annotation Workshop VIII", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annemarie Friedrich and Alexis Palmer. 2014. Situ- ation entity annotation. In Proceedings of the Lin- guistic Annotation Workshop VIII.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Situation entity types: automatic classification of clause-level aspect", "authors": [ { "first": "Annemarie", "middle": [], "last": "Friedrich", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1757--1768", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annemarie Friedrich, Alexis Palmer, and Manfred Pinkal. 2016. Situation entity types: automatic classification of clause-level aspect. 
In Proceed- ings of the 54th Annual Meeting of the Asso- ciation for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany, pages 1757-1768. http://www.aclweb.org/anthology/P16-1166.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Annotating genericity: a survey, a scheme, and a corpus", "authors": [ { "first": "Annemarie", "middle": [], "last": "Friedrich", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Melissa", "middle": [ "Peate" ], "last": "S\u00f8rensen", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" } ], "year": 2015, "venue": "The 9th Linguistic Annotation Workshop held in conjuncion with NAACL 2015", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annemarie Friedrich, Alexis Palmer, Melissa Peate S\u00f8rensen, and Manfred Pinkal. 2015. Annotating genericity: a survey, a scheme, and a corpus. In The 9th Linguistic Annotation Workshop held in conjun- cion with NAACL 2015. page 21.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Discourse-sensitive Automatic Identification of Generic Expressions", "authors": [ { "first": "Annemarie", "middle": [], "last": "Friedrich", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annemarie Friedrich and Manfred Pinkal. 2015. Discourse-sensitive Automatic Identification of Generic Expressions. In Proceedings of the 53rd Annual Meeting of the Association for Computa- tional Linguistics (ACL). Beijing, China.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735-1780.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The Manually Annotated Sub-Corpus: A community resource for and by the people", "authors": [ { "first": "Nancy", "middle": [], "last": "Ide", "suffix": "" }, { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" }, { "first": "Collin", "middle": [], "last": "Baker", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Passonneau", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the ACL2010 Conference Short Papers", "volume": "", "issue": "", "pages": "68--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nancy Ide, Christiane Fellbaum, Collin Baker, and Re- becca Passonneau. 2010. The Manually Annotated Sub-Corpus: A community resource for and by the people. In Proceedings of the ACL2010 Conference Short Papers. 
pages 68-73.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A convolutional neural network for modelling sentences", "authors": [ { "first": "Nal", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "655--665", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nal Kalchbrenner, Edward Grefenstette, and Phil Blun- som. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics. Baltimore, Maryland, pages 655- 665.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar, page 17461751.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Neural architectures for named entity recognition", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "260--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recog- nition. In Proceedings of the 2016 Conference of the North American Chapter of the Associa- tion for Computational Linguistics: Human Lan- guage Technologies. Association for Computational Linguistics, San Diego, California, pages 260-270. http://www.aclweb.org/anthology/N16-1030.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "English-french verb phrase alignment in europarl for tense translation modeling", "authors": [ { "first": "Sharid", "middle": [], "last": "Loaiciga", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Meyer", "suffix": "" }, { "first": "Andrei", "middle": [], "last": "Popescu-Belis", "suffix": "" } ], "year": 2014, "venue": "Proceedings of The Ninth Language Resources and Evaluation Conference (LREC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharid Loaiciga, Thomas Meyer, and Andrei Popescu- Belis. 2014. English-french verb phrase alignment in europarl for tense translation modeling. 
In Proceedings of The Ninth Language Resources and Evaluation Conference (LREC).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Linking discourse modes and situation entities in a cross-linguistic corpus study", "authors": [ { "first": "Kleio-Isidora", "middle": [], "last": "Mavridou", "suffix": "" }, { "first": "Annemarie", "middle": [], "last": "Friedrich", "suffix": "" }, { "first": "Melissa", "middle": [ "Peate" ], "last": "Sorensen", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the EMNLP Workshop LSDSem 2015: Linking Models of Lexical", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kleio-Isidora Mavridou, Annemarie Friedrich, Melissa Peate Sorensen, Alexis Palmer, and Manfred Pinkal. 2015. Linking discourse modes and situation entities in a cross-linguistic corpus study. In Proceedings of the EMNLP Workshop LSDSem 2015: Linking Models of Lexical, Sentential and Discourse-level Semantics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013a. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111-3119.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Linguistic regularities in continuous space word representations", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Yih", "middle": [], "last": "Wen-Tau", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "746--751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b.
Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Atlanta, Georgia, pages 746-751. http://www.aclweb.org/anthology/N13-1090.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "End-to-end relation extraction using LSTMs on sequences and tree structures", "authors": [ { "first": "Makoto", "middle": [], "last": "Miwa", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1105--1116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany, pages 1105-1116. http://www.aclweb.org/anthology/P16-1105.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Genre distinctions and discourse modes: Text types differ in their situation type distributions", "authors": [ { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Annemarie", "middle": [], "last": "Friedrich", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Workshop on Frontiers and Connections between Argumentation Theory and NLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Palmer and Annemarie Friedrich. 2014. Genre distinctions and discourse modes: Text types differ in their situation type distributions. In Proceedings of the Workshop on Frontiers and Connections between Argumentation Theory and NLP.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A sequencing model for situation entity classification", "authors": [ { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Elias", "middle": [], "last": "Ponvert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" }, { "first": "Carlota", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Palmer, Elias Ponvert, Jason Baldridge, and Carlota Smith. 2007. A sequencing model for situation entity classification. In Proceedings of ACL.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "An annotated corpus of argumentative microtexts", "authors": [ { "first": "Andreas", "middle": [], "last": "Peldszus", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the First European Conference on Argumentation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Peldszus and Manfred Stede. 2015. An annotated corpus of argumentative microtexts.
In Proceedings of the First European Conference on Argumentation.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss", "authors": [ { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "412--418", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Plank, Anders S\u00f8gaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, pages 412-418. http://anthology.aclweb.org/P16-2067.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Germeval-2014: Nested Named Entity Recognition with neural networks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Judith", "middle": [], "last": "Eckle-Kohler", "suffix": "" }, { "first": "Carsten", "middle": [], "last": "Schnober", "suffix": "" }, { "first": "Jungi", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 12th Edition of the KONVENS Conference", "volume": "117120", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers, Judith Eckle-Kohler, Carsten Schnober, Jungi Kim, and Iryna Gurevych. 2014. Germeval-2014: Nested Named Entity Recognition with neural networks. In Proceedings of the 12th Edition of the KONVENS Conference. page 117120.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Reasoning about entailment with neural attention", "authors": [ { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "Tom\u00e1\u0161", "middle": [], "last": "Ko\u010disk\u1ef3", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 4th International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim Rockt\u00e4schel, Edward Grefenstette, Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u1ef3, and Phil Blunsom. 2016. Reasoning about entailment with neural attention. In Proceedings of the 4th International Conference on Learning Representations (ICLR).
San Juan, Puerto Rico.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Discourse Segmentation of German Texts", "authors": [ { "first": "Uladzimir", "middle": [], "last": "Sidarenka", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Peldszus", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 2015, "venue": "Journal for Language Technology and Computational Linguistics", "volume": "30", "issue": "", "pages": "71--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Uladzimir Sidarenka, Andreas Peldszus, and Manfred Stede. 2015. Discourse Segmentation of German Texts. In Journal for Language Technology and Computational Linguistics. volume 30, pages 71-98.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Modes of discourse: The local structure of texts", "authors": [ { "first": "S", "middle": [], "last": "Carlota", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2003, "venue": "", "volume": "103", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlota S Smith. 2003. Modes of discourse: The local structure of texts, volume 103. Cambridge University Press.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Sentence level discourse parsing using syntactic and lexical information", "authors": [ { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radu Soricut and Daniel Marcu. 2003. Sentence level discourse parsing using syntactic and lexical information. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Improved semantic representations from tree-structured long short-term memory networks", "authors": [ { "first": "Kai Sheng", "middle": [], "last": "Tai", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1556--1566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 1556-1566.
http://www.aclweb.org/anthology/P15-1150.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Combining recurrent and convolutional neural networks for relation classification", "authors": [ { "first": "Ngoc", "middle": [ "Thang" ], "last": "Vu", "suffix": "" }, { "first": "Heike", "middle": [], "last": "Adel", "suffix": "" }, { "first": "Pankaj", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "534--539", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ngoc Thang Vu, Heike Adel, Pankaj Gupta, and Hinrich Sch\u00fctze. 2016. Combining recurrent and convolutional neural networks for relation classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego, California, pages 534-539. http://www.aclweb.org/anthology/N16-1065.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Attention-based LSTM for Aspect-level Sentiment Classification", "authors": [ { "first": "Yequan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Li", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "606--615", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for Aspect-level Sentiment Classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 606-615. https://aclweb.org/anthology/D16-1058.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Comparative study of CNN and RNN for natural language processing", "authors": [ { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Kann", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenpeng Yin, Katharina Kann, Mo Yu, and Hinrich Sch\u00fctze. 2017. Comparative study of CNN and RNN for natural language processing. CoRR abs/1702.01923.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Deep recurrent models with fast-forward connections for neural machine translation", "authors": [ { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Xuguang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "371--383", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016.
Deep recurrent models with fast-forward connections for neural machine translation. Transactions of the Association for Computational Linguistics pages 371-383.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "End-to-end learning of semantic role labeling using recurrent neural networks", "authors": [ { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1127--1137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 1127-1137. http://www.aclweb.org/anthology/P15-1109.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "'s syntactic and semantic features are language-specific and are extracted using English-specific resources such as WordNet and Loaiciga et al. (2014)'s rules for extracting tense and voice information from POS tag sequences.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF1": { "text": "Context and genre information modeled in our system, example from Wikipedia Figure 2: Model Architecture, illustrated with an example (cf.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "Visualization of attention for ST, GS, GEN, and REP.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF3": { "text": "Mean attention scores per POS tag on the English dataset. POS tags from PTB. ations of our system developed on English data to our German dataset without further hyperparameter tuning.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF4": { "text": "Position of words with maximum attention within clauses. x-axis represents the normalized position within the clause, y-axis the number of words with maximum attention at that position.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF5": { "text": "Macro-average F1 scores of our best performing system for single SE classes, multiclass vs. binary classification. SE types, for example for distinguishing generic from non-generic sentences.", "uris": null, "num": null, "type_str": "figure" }, "TABREF1": { "html": null, "text": "", "num": null, "content": "", "type_str": "table" }, "TABREF3": { "html": null, "text": "Reported results of baseline models for English (accuracy and macro-average F1 score). CV=10-fold cross validation, test=eval. on test set.", "num": null, "content": "
", "type_str": "table" }, "TABREF5": { "html": null, "text": "", "num": null, "content": "
", "type_str": "table" }, "TABREF8": { "html": null, "text": "SE-type classification on German test set.", "num": null, "content": "
Model | Acc | F1
GRU + att + gLab (1) | 71.33 | 58.32
GRU + att + gLab (2) | 72.23 | 59.43
GRU + att + gLab (3) | 73.81 | 59.12
GRU + att + gLab (4) | 75.74 | 60.39
GRU + att + gLab (5) | 76.32 | 61.01
GRU + att + gLab + genre (1) | 74.79 | 59.34
GRU + att + gLab + genre (2) | 77.97 | 61.47
GRU + att + gLab + genre (3) | 74.28 | 59.84
GRU + att + gLab + genre (4) | 74.10 | 59.70
GRU + att + gLab + genre (5) | 74.96 | 58.18
", "type_str": "table" }, "TABREF9": { "html": null, "text": "SE-type classification on German test set, sequence oracle model .", "num": null, "content": "", "type_str": "table" } } } }