{ "paper_id": "P19-1030", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:31:12.536703Z" }, "title": "You Only Need Attention to Traverse Trees", "authors": [ { "first": "Mahtab", "middle": [], "last": "Ahmed", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Western Ontario", "location": {} }, "email": "" }, { "first": "Muhammad", "middle": [ "Rifayat" ], "last": "Samee", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Western Ontario", "location": {} }, "email": "msamee@uwo.ca" }, { "first": "Robert", "middle": [ "E" ], "last": "Mercer", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Western Ontario", "location": {} }, "email": "rmercer@uwo.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In recent NLP research, a topic of interest is universal sentence encoding, sentence representations that can be used in any supervised task. At the word sequence level, fully attention-based models suffer from two problems: a quadratic increase in memory consumption with respect to the sentence length and an inability to capture and use syntactic information. Recursive neural nets can extract very good syntactic information by traversing a tree structure. To this end, we propose Tree Transformer, a model that captures phrase level syntax for constituency trees as well as word-level dependencies for dependency trees by doing recursive traversal only with attention. Evaluation of this model on four tasks gets noteworthy results compared to the standard transformer and LSTM-based models as well as tree-structured LSTMs. Ablation studies to find whether positional information is inherently encoded in the trees and which type of attention is suitable for doing the recursive traversal are provided.", "pdf_parse": { "paper_id": "P19-1030", "_pdf_hash": "", "abstract": [ { "text": "In recent NLP research, a topic of interest is universal sentence encoding, sentence representations that can be used in any supervised task. At the word sequence level, fully attention-based models suffer from two problems: a quadratic increase in memory consumption with respect to the sentence length and an inability to capture and use syntactic information. Recursive neural nets can extract very good syntactic information by traversing a tree structure. To this end, we propose Tree Transformer, a model that captures phrase level syntax for constituency trees as well as word-level dependencies for dependency trees by doing recursive traversal only with attention. Evaluation of this model on four tasks gets noteworthy results compared to the standard transformer and LSTM-based models as well as tree-structured LSTMs. Ablation studies to find whether positional information is inherently encoded in the trees and which type of attention is suitable for doing the recursive traversal are provided.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Following the breakthrough in NLP research with word embeddings by Mikolov et al. (2013) , recent research has focused on sentence representations. Having good sentence representations can help accomplish many NLP tasks because we eventually deal with sentences, e.g., question answering, sentiment analysis, semantic similarity, and natural language inference. 
Most of the existing task specific sequential sentence encoders are based on recurrent neural nets such as LSTMs or GRUs (Conneau et al., 2017; Lin et al., 2017; Liu et al., 2016) . All of these works follow a common paradigm: use an LSTM/GRU over the word sequence, extract contextual features at each time step, and apply some kind of pooling on top of that. However, a few works adopt some different methods. Kiros et al. (2015) propose a skip-gram-like objective function at the sentence level to obtain the sentence embeddings. Logeswaran and Lee (2018) reformulate the task of predicting the next sentence given the current one into a classification problem where instead of a decoder they use a classifier to predict the next sentence from a set of candidates.", "cite_spans": [ { "start": 67, "end": 88, "text": "Mikolov et al. (2013)", "ref_id": "BIBREF15" }, { "start": 483, "end": 505, "text": "(Conneau et al., 2017;", "ref_id": "BIBREF3" }, { "start": 506, "end": 523, "text": "Lin et al., 2017;", "ref_id": "BIBREF10" }, { "start": 524, "end": 541, "text": "Liu et al., 2016)", "ref_id": "BIBREF11" }, { "start": 774, "end": 793, "text": "Kiros et al. (2015)", "ref_id": "BIBREF8" }, { "start": 895, "end": 920, "text": "Logeswaran and Lee (2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The attention mechanism adopted by most of the RNN based models require access to the hidden states at every time step (Yang et al., 2016; Kumar et al., 2016) . These models are inefficient and at the same time very hard to parallelize. To overcome this, Parikh et al. (2016) propose a fully attention-based neural network which can adequately model the word dependencies and at the same time is parallelizable. Vaswani et al. (2017) adopt the multi-head version in both the encoder and decoder of their Transformer model along with positional encoding. Ahmed et al. (2017) propose a multi-branch attention framework where each branch captures a different semantic subspace and the model learns to combine them during training. Cer et al. (2018) propose an unsupervised sentence encoder by leveraging only the encoder part of the Transformer where they train on the large Stanford Natural Language Inference (SNLI) corpus and then use transfer learning on smaller task specific corpora.", "cite_spans": [ { "start": 119, "end": 138, "text": "(Yang et al., 2016;", "ref_id": "BIBREF11" }, { "start": 139, "end": 158, "text": "Kumar et al., 2016)", "ref_id": "BIBREF9" }, { "start": 255, "end": 275, "text": "Parikh et al. (2016)", "ref_id": "BIBREF16" }, { "start": 412, "end": 433, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF26" }, { "start": 554, "end": 573, "text": "Ahmed et al. (2017)", "ref_id": "BIBREF0" }, { "start": 728, "end": 745, "text": "Cer et al. (2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Apart from these sequential models, there has been extensive work done on the tree structure of natural language sentences. Socher et al. (2011b Socher et al. ( , 2013 propose a family of recursive neural net (RvNN) based models where a composition function is applied recursively bottom-up on children nodes to compute the parent node representation until the root is reached. Tai et al. (2015) propose two variants of sequential LSTM, child sum tree LSTM and N-ary tree LSTM. The same gating structures as in standard LSTM are used except Recently, Shen et al. 
(2018) propose a Parsing-Reading-Predict Network (PRPN) which can induce syntactic structure automatically from an unannotated corpus and can learn a better language model with that induced structure. Later, Htut et al. (2018) test this PRPN under various configurations and datasets and further verified its empirical success for neural network latent tree learning. Williams et al. (2018) also validate the effectiveness of two latent tree based models but found some issues such as being biased towards producing shallow trees, inconsistencies during negation handling, and a tendency to consider the last two words of a sentence as constituents.", "cite_spans": [ { "start": 124, "end": 144, "text": "Socher et al. (2011b", "ref_id": "BIBREF22" }, { "start": 145, "end": 167, "text": "Socher et al. ( , 2013", "ref_id": "BIBREF23" }, { "start": 378, "end": 395, "text": "Tai et al. (2015)", "ref_id": "BIBREF25" }, { "start": 551, "end": 569, "text": "Shen et al. (2018)", "ref_id": "BIBREF18" }, { "start": 931, "end": 953, "text": "Williams et al. (2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a novel recursive neural network architecture consisting of a decomposable attention framework in every branch. We call this model Tree Transformer as it is solely dependent on attention. In a subtree, the use of a composition function is justified by a claim of Socher et al. (2011b . In this work, we replace this composition function with an attention module. While Socher et al. (2011b consider only the child representations for both dependency and constituency syntax trees, in this work, for dependency trees, the attention module takes both the child and parent representations as input and produces weighted attentive copies of them. For constituency trees, as the parent vector is entirely dependent on the upward propagation, the attention module works only with the child representations. Our extensive evaluation proves that our model is better or at least on par with the existing sequential (i.e., LSTM and Transformer) and tree structured (i.e., Tree LSTM and RvNN) models.", "cite_spans": [ { "start": 289, "end": 309, "text": "Socher et al. (2011b", "ref_id": "BIBREF22" }, { "start": 395, "end": 415, "text": "Socher et al. (2011b", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our model is designed to address the following general problem. Given a dependency or constituency tree structure, the task is to traverse every subtree within it attentively and infer the root representation as a vector. Our idea is inspired by the RvNN models from Socher et al. (2013 Socher et al. ( , 2011b where a composition function is used to transform a set of child representations into one single parent representation. In this section, we describe how we use the attention module as a composition function to build our Tree Transformer. Figure 1 gives a sketch of our model.", "cite_spans": [ { "start": 267, "end": 286, "text": "Socher et al. (2013", "ref_id": "BIBREF23" }, { "start": 287, "end": 310, "text": "Socher et al. ( , 2011b", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 549, "end": 557, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "A dependency tree contains a word at every node. 
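The shared bottom-up computation can be sketched as follows (a minimal illustration with our own function and variable names, not the released implementation; compose stands in for the attention-based composition function f introduced below):

def encode(node, embed, compose):
    # A leaf is represented by its word embedding.
    if not node.children:
        return embed(node.word)
    # Encode every child recursively, then combine the collected
    # vectors with the attention-based composition function.
    child_vecs = [encode(child, embed, compose) for child in node.children]
    if node.word is not None:
        # Dependency subtree: the head word itself is part of the input
        # (p_v in Eqn. 1), together with the child vectors.
        inputs = [embed(node.word)] + child_vecs
    else:
        # Constituency subtree: nonterminals carry no word, so only the
        # child vectors are composed.
        inputs = child_vecs
    return compose(inputs)  # plays the role of f in Eqn. 2

The two tree types differ only in what is stacked into the input matrix, which the following equations make precise.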
To traverse a subtree in a dependency tree, we look at both the parent and child representations (X_d in Eqn. 1). In contrast, in a constituency tree, only leaf nodes contain words. The nonterminal vectors are calculated only after traversing each subtree. Consequently, only the child representations (X_c in Eqn. 1) are considered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X_d = [p_v; c_1v; . . . ; c_nv]    X_c = [c_1v; c_2v; . . . ; c_nv]", "eq_num": "(1)" } ], "section": "Proposed Model", "sec_num": "2" }, { "text": "Here, p_v is the parent representation and the c_iv's are the child representations. For both of these trees, Eqn. 2 computes the attentive transformed representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "P = f(x), where x \u2208 {X_d, X_c} (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "Here, f is the composition function using the multi-branch attention framework (Ahmed et al., 2017). This multi-branch attention is built upon the multi-head attention framework (Vaswani et al., 2017) which further uses scaled dot-product attention (Parikh et al., 2016) as the building block. It operates on a query Q, key K and value V as follows", "cite_spans": [ { "start": 79, "end": 99, "text": "(Ahmed et al., 2017)", "ref_id": "BIBREF0" }, { "start": 179, "end": 201, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF26" }, { "start": 250, "end": 271, "text": "(Parikh et al., 2016)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "Attention(Q, K, V) = softmax(QK^T / \u221a(d_k)) V (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "where d_k is the dimension of the key. As we are interested in n branches, n copies are created for each (Q, K, V), converted to a 3D tensor, and then a scaled dot-product attention is applied using", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "B_i = Attention(Q_i W_i^Q, K_i W_i^K, V_i W_i^V) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "where i \u2208 [1, n] and the W_i's are the parameters that are learned. Note that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "W_i^Q, W_i^K and W_i^V \u2208 R^(d_m \u00d7 d_k).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "Instead of having separate parameters for the transformation of leaves, internal nodes and parents, we keep W_i^Q, W_i^K and W_i^V the same for all these components. We then project each of the resultant tensors into different semantic sub-spaces and employ a residual connection (Srivastava et al., 2015) around them. Lastly, we normalize the resultant outputs using a layer normalization block (Ba et al., 2016) and apply a scaling factor \u03ba to get the branch representation. All of these are summarized in Eqn. 
5.", "cite_spans": [ { "start": 282, "end": 306, "text": "Srivastava et al., 2015)", "ref_id": null }, { "start": 397, "end": 414, "text": "(Ba et al., 2016)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "B i = LayerNorm(B i W b i + B i ) \u00d7 \u03ba i (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "Here, W b i \u2208 R n\u00d7dv\u00d7dm and \u03ba \u2208 R n are the parameters to be learned. Note that we choose", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "d k = d q = d v = d m /n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "Following this, we take each of these B's and apply a convolutional neural network (see Eqn. 6) consisting of two transformations on each position separately and identically with a ReLU activation (R) in between.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "PCNN(x) = Conv(R(Conv(x) + b 1 )) + b 2 (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "We compute the final attentive representation of these subspace semantics by doing a linearly weighted summation (see Eqn. 7) where \u03b1 \u2208 R n is learned as a model parameter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "BranchAttn(Q, K, V) = n i=1 \u03b1 i PCNN(B i ) (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "Lastly, we employ another residual connection with the output of Eqn. 7, transform it non-linearly and perform an element-wise summation (EwS) to get the final parent representation as in Eqn. 8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P = EwS(tanh((x + x)W + b))", "eq_num": "(8)" } ], "section": "Proposed Model", "sec_num": "2" }, { "text": "Here, x andx depict the input and output of the attention module.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "2" }, { "text": "In this section, we present the effectiveness of our Tree Transformer model by reporting its evaluation on four NLP tasks. We present a detailed ablation study on whether positional encoding is important for trees and also demonstrate which attention module is most suitable as a composition function for the recursive architectures. Experimental Setup: We initialize the word embedding layer weights with GloVe 300dimensional word vectors (Pennington et al., 2014) . These embedding weights are not updated during training. In the multi-head attention block, the dimension of the query, key and value matrices are set to 50 and we use 6 parallel heads on each input. The multi-branch attention block is composed of 6 position-wise convolutional layers. The number of branches is also set to 6. We use two layers of convolutional neural network as the composition function for the PCNN layer. The first layer uses 341 1d kernels with no dropout and the second layer uses 300 1d kernels with dropout 0.1. 
During training, the model parameters are updated using the Adagrad algorithm (Duchi et al., 2011) with a fixed learning rate of 0.0002. We trained our model on an Nvidia GeForce GTX 1080 GPU and used PyTorch 0.4 for the implementation under the Linux environment. Datasets: Evaluation is done on four tasks: the Stanford Sentiment Treebank (SST) (Socher et al., 2011b) for sentiment analysis, Sentences Involving Compositional Knowledge (SICK) (Marelli et al., 2014) for semantic relatedness (-R) and natural language inference (-E), and the Microsoft Research Paraphrase (MSRP) corpus (Dolan et al., 2004) for paraphrase identification.", "cite_spans": [ { "start": 440, "end": 465, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF17" }, { "start": 1082, "end": 1102, "text": "(Duchi et al., 2011)", "ref_id": "BIBREF5" }, { "start": 1351, "end": 1373, "text": "(Socher et al., 2011b)", "ref_id": "BIBREF22" }, { "start": 1449, "end": 1471, "text": "(Marelli et al., 2014)", "ref_id": "BIBREF14" }, { "start": 1591, "end": 1611, "text": "(Dolan et al., 2004)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "The samples in the SST dataset are labelled for both the binary and the 5-class classification task. In this work we are using only the binary classification labels. The MSRP dataset is labelled with two classes. The samples in the SICK dataset are labelled for both the 3-class SICK-E classification task and the SICK-R regression task which uses real-valued labels between 1 and 5. Instead of doing a regression on SICK-R to predict the score, we are using the same setup as Tai et al. (2015) who compute a target distribution p as a function of the predicted score y given by Eqn. 9.", "cite_spans": [ { "start": 478, "end": 495, "text": "Tai et al. 
(2015)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p i = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 y \u2212 y , if i = y + 1 y \u2212 y + 1, if i = y 0, otherwise", "eq_num": "(9)" } ], "section": "Experiments", "sec_num": "3" }, { "text": "The SST dataset includes already generated dependency and constituency trees. As the other two datasets do not provide tree structures, we parsed each sentence using the Stanford dependency and constituency parser .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "For the sentiment classification (SST), natural language inference (SICK-E), and paraphrase identification (MSRP) tasks, accuracy, the standard evaluation metric, is used. For the semantic relatedness task (SICK-R), we are using mean squared error (MSE) as the evaluation metric.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "We use KL-divergence as the loss function for SICK-R to measure the distance between the predicted and target distribution. For the other three tasks, we use cross entropy as the loss function. Table 1 shows the results of the evaluation of the model on the four tasks in terms of task specific evaluation metrics. We compare our Tree Transformer against tree structured RvNNs, LSTM based, and Transformer based architectures.", "cite_spans": [], "ref_spans": [ { "start": 194, "end": 201, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "To do a fair comparison, we implemented both variants of Tree LSTM and Transformer based architectures and some of the RvNN and LSTM based models which do not have reported results for every task. Instead of assessing on transfer performance, the evaluation is performed on each corpus separately following the standard train/test/valid split.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "For SICK-E, our model achieved 82.95% and 82.72% accuracy with dependency and constituency tree, respectively, which is on par with DT-LSTM (83.11%) as well as CT-LSTM (82.00%) and somewhat better than the standard Transformer (81.15%). As can be seen, all of the previous recursive architectures are somewhat inferior to the Tree Transformer results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "For SICK-R, we are getting .2774 and .3012 MSE whereas the reported MSE for DT-LSTM and CT-LSTM are .2532 and .2734, respectively. However, in our implementation of those models with the same hyperparameters, we haven't been able to reproduce the reported results. Instead we ended up getting .2625 and .2891 MSE for DT-LSTM and CT-LSTM, respectively. On this task, our model is doing significantly better than the standard Transformer (.5241 MSE).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "On the SST dataset, our model (86.66% Acc.) is again on par with tree LSTM (87.27% Acc.) and better than Transformer (85.38% Acc.) as well as Infersent (86.00% Acc.) 
1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "On the MSRP dataset, our dependency tree version (70.34% Acc.) is below DT-LSTM (72.07% Acc.). However, for the constituency tree version, we are getting better accuracy (71.73%) than CT-LSTM (70.07%). It is to be noted that all of the sequential models, i.e., Transformer, Infersent and LSTMs, are doing better compared to the tree structured models on this paraphrase identification task. Since positional encoding is a crucial part of the standard Transformer, Table 2 presents its effect on trees. In constituency trees, positional information is inherently encoded in the tree structure. However, this is not the case with dependency trees. Nonetheless, our experiments suggest that for trees, positional encoding is irrelevant information as the performance drops in all but one case. We also did an experiment to see which attention module is best suited as a composition function and report the results in Table 3 . As can be seen, in almost all the cases, multi-branch attention has much better performance compared to the other two. This gain by multi-branch attention is much more significant for CTT than for DTT. Figure 2 visualizes how our CTT model puts attention on different phrases in a tree to compute the correct sentiment. Space limitations allow only portions of the tree to be visualized. As can be seen, the sentiment is positive (+1) at the root and the model puts more attention on the right branch as it has all of the positive words, whereas the left branch (NP) is neutral (0). The bottom three trees are the phrases which contain the positive words. The model again puts more attention on the relevant branches. The words 'well' and 'sincere' are inherently positive. In the corpus the Doug Liman the director of Bourne directs the traffic well gets a nice wintry look from his locations absorbs us with the movie 's spycraft and uses Damon 's ability to be focused and sincere word 'us' is tagged as positive for this sentence.", "cite_spans": [], "ref_spans": [ { "start": 464, "end": 471, "text": "Table 2", "ref_id": "TABREF4" }, { "start": 914, "end": 921, "text": "Table 3", "ref_id": "TABREF6" }, { "start": 1126, "end": 1134, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "In this paper, we propose Tree Transformer which successfully encodes natural language grammar trees utilizing the modules designed for the standard Transformer. We show that we can effectively use the attention module as the composition function together with grammar information instead of just bag of words and can achieve performance on par with Tree LSTMs and even better performance than the standard Transformer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "The official implementation available at https: //github.com/facebookresearch/InferSent is used. Reported hyperparameters are used except LSTM hidden state, 1024d is chosen due to hardware limitations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research is partially funded by The Natural Sciences and Engineering Research Council of Canada (NSERC) through a Discovery Grant to Robert E. Mercer. 
We also acknowledge the helpful comments provided by the reviewers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Weighted transformer network for machine translation", "authors": [ { "first": "Karim", "middle": [], "last": "Ahmed", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Shirish Keskar", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1711.02132" ] }, "num": null, "urls": [], "raw_text": "Karim Ahmed, Nitish Shirish Keskar, and Richard Socher. 2017. Weighted transformer net- work for machine translation. arXiv preprint arXiv:1711.02132.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Sheng-Yi", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Hua", "suffix": "" }, { "first": "Nicole", "middle": [], "last": "Limtiacob", "suffix": "" }, { "first": "Rhomni", "middle": [], "last": "St John", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Guajardo-C\u00e9spedes", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Yuanc", "suffix": "" } ], "year": 2018, "venue": "Universal sentence encoder", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.11175" ] }, "num": null, "urls": [], "raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiacob, Rhomni St John, Noah Constant, Mario Guajardo-C\u00e9spedes, Steve Yuanc, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Supervised learning of universal sentence representations from natural language inference data", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "670--680", "other_ids": { "DOI": [ "10.18653/v1/D17-1070" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. 
In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources", "authors": [ { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "350--356", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Un- supervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Pro- ceedings of the 20th International Conference on Computational Linguistics, pages 350-356.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2121--2159", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Deep residual learning for image recognition", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiangyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoqing", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "770--778", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Grammar induction with neural language models: An unusual replication", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Phu Mon Htut", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Cho", "suffix": "" }, { "first": "", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "371--373", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phu Mon Htut, Kyunghyun Cho, and Samuel Bowman. 2018. Grammar induction with neural language models: An unusual replication. 
In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: An- alyzing and Interpreting Neural Networks for NLP, pages 371-373.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Skip-thought vectors", "authors": [ { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Yukun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "R", "middle": [], "last": "Ruslan", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Urtasun", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "", "middle": [], "last": "Fidler", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3294--3302", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Ad- vances in Neural Information Processing Systems, pages 3294-3302.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Ask me anything: Dynamic memory networks for natural language processing", "authors": [ { "first": "Ankit", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Ozan", "middle": [], "last": "Irsoy", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Ondruska", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Ishaan", "middle": [], "last": "Gulrajani", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Romain", "middle": [], "last": "Paulus", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2016, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1378--1387", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In International Con- ference on Machine Learning, pages 1378-1387.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A structured self-attentive sentence embedding", "authors": [ { "first": "Zhouhan", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Minwei", "middle": [], "last": "Feng", "suffix": "" }, { "first": "C\u00edcero", "middle": [], "last": "Nogueira", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Santos", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations (ICLR) Conference Track Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhouhan Lin, Minwei Feng, C\u00edcero Nogueira dos San- tos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. 
In 5th International Conference on Learning Representations (ICLR) Conference Track Proceedings.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning natural language inference using bidirectional lstm model and inner-attention", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Chengjie", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Xiaolong", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1605.09090" ] }, "num": null, "urls": [], "raw_text": "Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. 2016. Learning natural language inference using bidirectional lstm model and inner-attention. arXiv preprint arXiv:1605.09090.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "An efficient framework for learning sentence representations", "authors": [ { "first": "Lajanugen", "middle": [], "last": "Logeswaran", "suffix": "" }, { "first": "Honglak", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2018, "venue": "6th International Conference on Learning Representations (ICLR) Conference Track Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lajanugen Logeswaran and Honglak Lee. 2018. An ef- ficient framework for learning sentence representa- tions. In 6th International Conference on Learning Representations (ICLR) Conference Track Proceed- ings.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [ "J" ], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mc-Closky", "suffix": "" } ], "year": 2014, "venue": "Association for Computational Linguistics (ACL) System Demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55-60.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment", "authors": [ { "first": "Marco", "middle": [], "last": "Marelli", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Raffaella", "middle": [], "last": "Bernardi", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Menini", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Zamparelli", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Marelli, Luisa Bentivogli, Marco Baroni, Raf- faella Bernardi, Stefano Menini, and Roberto Zam- parelli. 2014. 
Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 1-8.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems, pages 3111-3119.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A decomposable attention model for natural language inference", "authors": [ { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2249--2255", "other_ids": { "DOI": [ "10.18653/v1/D16-1244" ] }, "num": null, "urls": [], "raw_text": "Ankur Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249-2255.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. 
In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Neural language modeling by jointly learning syntax and lexicon", "authors": [ { "first": "Yikang", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Zhouhan", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Chin-Wei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" } ], "year": 2018, "venue": "6th International Conference on Learning Representations (ICLR) Conference Track Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron Courville. 2018. Neural language model- ing by jointly learning syntax and lexicon. In 6th International Conference on Learning Representa- tions (ICLR) Conference Track Proceedings.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Dynamic pooling and unfolding recursive autoencoders for paraphrase detection", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "H", "middle": [], "last": "Eric", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Huang", "suffix": "" }, { "first": "", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Andrew Y", "middle": [], "last": "Manning", "suffix": "" }, { "first": "", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2011, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "801--809", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Eric H Huang, Jeffrey Pennington, Christopher D Manning, and Andrew Y Ng. 2011a. Dynamic pooling and unfolding recursive autoen- coders for paraphrase detection. In Advances in Neural Information Processing Systems, pages 801- 809.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Semantic compositionality through recursive matrix-vector spaces", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Brody", "middle": [], "last": "Huval", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Andrew Y", "middle": [], "last": "Manning", "suffix": "" }, { "first": "", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "1201--1211", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositional- ity through recursive matrix-vector spaces. 
In Pro- ceedings of the 2012 Joint Conference on Empiri- cal Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201-1211.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Grounded compositional semantics for finding and describing images with sentences", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Andrej", "middle": [], "last": "Karpathy", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2014, "venue": "Transactions of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "207--218", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Andrej Karpathy, Quoc V. Le, Christo- pher D. Manning, and Andrew Y. Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics, 2:207-218.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Parsing natural scenes and natural language with recursive neural networks", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Cliff", "middle": [], "last": "Chiung", "suffix": "" }, { "first": "-Yu", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML'11", "volume": "", "issue": "", "pages": "129--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. 2011b. Parsing natu- ral scenes and natural language with recursive neu- ral networks. In Proceedings of the 28th Interna- tional Conference on International Conference on Machine Learning, ICML'11, pages 129-136.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1631--1642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. 
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Process- ing, pages 1631-1642.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Improved semantic representations from tree-structured long short-term memory networks", "authors": [ { "first": "Kai Sheng", "middle": [], "last": "Tai", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1556--1566", "other_ids": { "DOI": [ "10.3115/v1/P15-1150" ] }, "num": null, "urls": [], "raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory net- works. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 1556-1566.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Do latent tree learning models identify meaningful structure in sentences? Transactions of the Association for Computational Linguistics", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Drozdov", "suffix": "" }, { "first": "*", "middle": [], "last": "", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "", "volume": "6", "issue": "", "pages": "253--267", "other_ids": { "DOI": [ "10.1162/tacl_a_00019" ] }, "num": null, "urls": [], "raw_text": "Adina Williams, Andrew Drozdov*, and Samuel R. Bowman. 2018. Do latent tree learning models iden- tify meaningful structure in sentences? 
Transac- tions of the Association for Computational Linguis- tics, 6:253-267.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Hierarchical attention networks for document classification", "authors": [ { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Diyi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Smola", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1480--1489", "other_ids": { "DOI": [ "10.18653/v1/N16-1174" ] }, "num": null, "urls": [], "raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchi- cal attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1480-1489.", "links": null } }, "ref_entries": { "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "Attention over the tree structure the hidden and cell states of a parent are dependent only on the hidden and cell states of its children." }, "TABREF2": { "html": null, "type_str": "table", "text": "Performance comparison of the Tree Transformer against some state-of-the-art sentence encoders. Models that we implemented are marked with \u2020.", "content": "", "num": null }, "TABREF4": { "html": null, "type_str": "table", "text": "Effect of Positional Encoding (PE).", "content": "
", "num": null }, "TABREF6": { "html": null, "type_str": "table", "text": "Effect of different attention modules as a composition function. S: single-head attention, M: multihead attention, B: multi-branch attention.", "content": "
", "num": null } } } }