{ "paper_id": "D16-1011", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:35:08.118369Z" }, "title": "Rationalizing Neural Predictions", "authors": [ { "first": "Tao", "middle": [], "last": "Lei", "suffix": "", "affiliation": { "laboratory": "Artificial Intelligence Laboratory", "institution": "Massachusetts Institute of Technology", "location": {} }, "email": "taolei@csail.mit.edu" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "", "affiliation": { "laboratory": "Artificial Intelligence Laboratory", "institution": "Massachusetts Institute of Technology", "location": {} }, "email": "regina@csail.mit.edu" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "", "affiliation": { "laboratory": "Artificial Intelligence Laboratory", "institution": "Massachusetts Institute of Technology", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Prediction without justification has limited applicability. As a remedy, we learn to extract pieces of input text as justifications-rationales-that are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach combines two modular components, generator and encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales and these are passed through the encoder for prediction. Rationales are never given during training. Instead, the model is regularized by desiderata for rationales. We evaluate the approach on multi-aspect sentiment analysis against manually annotated test cases. Our approach outperforms attention-based baseline by a significant margin. We also successfully illustrate the method on the question retrieval task. 1", "pdf_parse": { "paper_id": "D16-1011", "_pdf_hash": "", "abstract": [ { "text": "Prediction without justification has limited applicability. As a remedy, we learn to extract pieces of input text as justifications-rationales-that are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach combines two modular components, generator and encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales and these are passed through the encoder for prediction. Rationales are never given during training. Instead, the model is regularized by desiderata for rationales. We evaluate the approach on multi-aspect sentiment analysis against manually annotated test cases. Our approach outperforms attention-based baseline by a significant margin. We also successfully illustrate the method on the question retrieval task. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many recent advances in NLP problems have come from formulating and training expressive and elaborate neural models. This includes models for sentiment classification, parsing, and machine translation among many others. The gains in accuracy have, however, come at the cost of interpretability since complex neural models offer little transparency concerning their inner workings. In many applications, such as medicine, predictions are used to drive critical decisions, including treatment options. It is necessary in such cases to be able to verify and under-the beer was n't what i expected, and i'm not sure it's \"true to style\", but i thought it was delicious. 
a very pleasant ruby red-amber color with a rela9vely brilliant finish, but a limited amount of carbona9on, from the look of it. aroma is what i think an amber ale should be -a nice blend of caramel and happiness bound together.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Look: 5 stars Smell: 4 stars stand the underlying basis for the decisions. Ideally, complex neural models would not only yield improved performance but would also offer interpretable justifications -rationales -for their predictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Review Ratings", "sec_num": null }, { "text": "In this paper, we propose a novel approach to incorporating rationale generation as an integral part of the overall learning problem. We limit ourselves to extractive (as opposed to abstractive) rationales. From this perspective, our rationales are simply subsets of the words from the input text that satisfy two key properties. First, the selected words represent short and coherent pieces of text (e.g., phrases) and, second, the selected words must alone suffice for prediction as a substitute of the original text. More concretely, consider the task of multi-aspect sentiment analysis. Figure 1 illustrates a product review along with user rating in terms of two categories or aspects. If the model in this case predicts five star rating for color, it should also identify the phrase \"a very pleasant ruby red-amber color\" as the rationale underlying this decision.", "cite_spans": [], "ref_spans": [ { "start": 591, "end": 599, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Review Ratings", "sec_num": null }, { "text": "In most practical applications, rationale genera-tion must be learned entirely in an unsupervised manner. We therefore assume that our model with rationales is trained on the same data as the original neural models, without access to additional rationale annotations. In other words, target rationales are never provided during training; the intermediate step of rationale generation is guided only by the two desiderata discussed above. Our model is composed of two modular components that we call the generator and the encoder. Our generator specifies a distribution over possible rationales (extracted text) and the encoder maps any such text to task specific target values. They are trained jointly to minimize a cost function that favors short, concise rationales while enforcing that the rationales alone suffice for accurate prediction.", "cite_spans": [ { "start": 594, "end": 610, "text": "(extracted text)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Review Ratings", "sec_num": null }, { "text": "The notion of what counts as a rationale may be ambiguous in some contexts and the task of selecting rationales may therefore be challenging to evaluate. We focus on two domains where ambiguity is minimal (or can be minimized). The first scenario concerns with multi-aspect sentiment analysis exemplified by the beer review corpus (McAuley et al., 2012) . A smaller test set in this corpus identifies, for each aspect, the sentence(s) that relate to this aspect. We can therefore directly evaluate our predictions on the sentence level with the caveat that our model makes selections on a finer level, in terms of words, not complete sentences. The second scenario concerns with the problem of retrieving similar questions. 
The extracted rationales should capture the main purpose of the questions. We can therefore evaluate the quality of rationales as a compressed proxy for the full text in terms of retrieval performance. Our model achieves high performance on both tasks. For instance, on the sentiment prediction task, our model achieves extraction accuracy of 96%, as compared to 38% and 81% obtained by the bigram SVM and a neural attention baseline.", "cite_spans": [ { "start": 331, "end": 353, "text": "(McAuley et al., 2012)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Review Ratings", "sec_num": null }, { "text": "Developing sparse interpretable models is of considerable interest to the broader research community (Letham et al., 2015; Kim et al., 2015) . The need for interpretability is even more pronounced with recent neural models. Efforts in this area include analyzing and visualizing state activation (Hermans and Schrauwen, 2013; Karpathy et al., 2015; , learning sparse interpretable word vectors (Faruqui et al., 2015b) , and linking word vectors to semantic lexicons or word properties (Faruqui et al., 2015a; Herbelot and Vecchi, 2015) .", "cite_spans": [ { "start": 101, "end": 122, "text": "(Letham et al., 2015;", "ref_id": "BIBREF19" }, { "start": 123, "end": 140, "text": "Kim et al., 2015)", "ref_id": "BIBREF15" }, { "start": 296, "end": 325, "text": "(Hermans and Schrauwen, 2013;", "ref_id": "BIBREF10" }, { "start": 326, "end": 348, "text": "Karpathy et al., 2015;", "ref_id": "BIBREF14" }, { "start": 394, "end": 417, "text": "(Faruqui et al., 2015b)", "ref_id": "BIBREF7" }, { "start": 485, "end": 508, "text": "(Faruqui et al., 2015a;", "ref_id": "BIBREF6" }, { "start": 509, "end": 535, "text": "Herbelot and Vecchi, 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Beyond learning to understand or further constrain the network to be directly interpretable, one can estimate interpretable proxies that approximate the network. Examples include extracting \"if-then\" rules (Thrun, 1995) and decision trees (Craven and Shavlik, 1996) from trained networks. More recently, Ribeiro et al. (2016) propose a modelagnostic framework where the proxy model is learned only for the target sample (and its neighborhood) thus ensuring locally valid approximations. Our work differs from these both in terms of what is meant by an explanation and how they are derived. In our case, an explanation consists of a concise yet sufficient portion of the text where the mechanism of selection is learned jointly with the predictor.", "cite_spans": [ { "start": 206, "end": 219, "text": "(Thrun, 1995)", "ref_id": "BIBREF28" }, { "start": 239, "end": 265, "text": "(Craven and Shavlik, 1996)", "ref_id": "BIBREF4" }, { "start": 304, "end": 325, "text": "Ribeiro et al. (2016)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Attention based models offer another means to explicate the inner workings of neural models (Bahdanau et al., 2015; Cheng et al., 2016; Martins and Astudillo, 2016; Chen et al., 2015; Xu and Saenko, 2015; Yang et al., 2015) . Such models have been successfully applied to many NLP problems, improving both prediction accuracy as well as visualization and interpretability (Rush et al., 2015; Rockt\u00e4schel et al., 2016; Hermann et al., 2015) . 
introduced a stochastic attention mechanism together with a more standard soft attention on image captioning task. Our rationale extraction can be understood as a type of stochastic attention although architectures and objectives differ. Moreover, we compartmentalize rationale generation from downstream encoding so as to expose knobs to directly control types of rationales that are acceptable, and to facilitate broader modular use in other applications.", "cite_spans": [ { "start": 92, "end": 115, "text": "(Bahdanau et al., 2015;", "ref_id": "BIBREF1" }, { "start": 116, "end": 135, "text": "Cheng et al., 2016;", "ref_id": "BIBREF3" }, { "start": 136, "end": 164, "text": "Martins and Astudillo, 2016;", "ref_id": "BIBREF22" }, { "start": 165, "end": 183, "text": "Chen et al., 2015;", "ref_id": "BIBREF2" }, { "start": 184, "end": 204, "text": "Xu and Saenko, 2015;", "ref_id": "BIBREF31" }, { "start": 205, "end": 223, "text": "Yang et al., 2015)", "ref_id": "BIBREF33" }, { "start": 372, "end": 391, "text": "(Rush et al., 2015;", "ref_id": "BIBREF27" }, { "start": 392, "end": 417, "text": "Rockt\u00e4schel et al., 2016;", "ref_id": "BIBREF26" }, { "start": 418, "end": 439, "text": "Hermann et al., 2015)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Finally, we contrast our work with rationale-based classification (Zaidan et al., 2007; Marshall et al., 2015; Zhang et al., 2016) which seek to improve prediction by relying on richer annotations in the form of human-provided rationales. In our work, rationales are never given during training. The goal is to learn to generate them.", "cite_spans": [ { "start": 66, "end": 87, "text": "(Zaidan et al., 2007;", "ref_id": "BIBREF34" }, { "start": 88, "end": 110, "text": "Marshall et al., 2015;", "ref_id": "BIBREF21" }, { "start": 111, "end": 130, "text": "Zhang et al., 2016)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We formalize here the task of extractive rationale generation and illustrate it in the context of neural models. To this end, consider a typical NLP task where we are provided with a sequence of words as input, namely", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extractive Rationale Generation", "sec_num": "3" }, { "text": "x = {x 1 , \u2022 \u2022 \u2022 , x l },", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extractive Rationale Generation", "sec_num": "3" }, { "text": "where each x t \u2208 R d denotes the vector representation of the ith word. The learning problem is to map the input sequence x to a target vector in R m . For example, in multi-aspect sentiment analysis each coordinate of the target vector represents the response or rating pertaining to the associated aspect. In text retrieval, on the other hand, the target vectors are used to induce similarity assessments between input sequences. Broadly speaking, we can solve the associated learning problem by estimating a complex parameterized mapping enc(x) from input sequences to target vectors. We call this mapping an encoder. The training signal for these vectors is obtained either directly (e.g., multi-sentiment analysis) or via similarities (e.g., text retrieval). 
The challenge is that a complex neural encoder enc(x) reveals little about its internal workings and thus offers little in the way of justification for why a particular prediction was made.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extractive Rationale Generation", "sec_num": "3" }, { "text": "In extractive rationale generation, our goal is to select a subset of the input sequence as a rationale. In order for the subset to qualify as a rationale it should satisfy two criteria: 1) the selected words should be interpretable and 2) they ought to suffice to reach nearly the same prediction (target vector) as the original input. In other words, a rationale must be short and sufficient. We will assume that a short selection is interpretable and focus on optimizing sufficiency under cardinality constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extractive Rationale Generation", "sec_num": "3" }, { "text": "We encapsulate the selection of words as a rationale generator which is another parameterized mapping gen(x) from input sequences to shorter sequences of words. Thus gen(x) must include only a few words and enc(gen(x)) should result in nearly the same target vector as the original input passed through the encoder or enc(x). We can think of the generator as a tagging model where each word in the input receives a binary tag pertaining to whether it is selected to be included in the rationale. In our case, the generator is probabilistic and specifies a distribution over possible selections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extractive Rationale Generation", "sec_num": "3" }, { "text": "The rationale generation task is entirely unsupervised in the sense that we assume no explicit annotations about which words should be included in the rationale. Put another way, the rationale is introduced as a latent variable, a constraint that guides how to interpret the input sequence. The encoder and generator are trained jointly, in an end-to-end fashion so as to function well together.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extractive Rationale Generation", "sec_num": "3" }, { "text": "We use multi-aspect sentiment prediction as a guiding example to instantiate the two key componentsthe encoder and the generator. The framework itself generalizes to other tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "Encoder enc(\u2022): Given a training instance (x, y) where x = {x t } l", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "t=1 is the input text sequence of length l and y \u2208 [0, 1] m is the target m-dimensional sentiment vector, the neural encoder predicts\u1ef9 = enc(x). If trained on its own, the encoder would aim to minimize the discrepancy between the predicted sentiment vector\u1ef9 and the gold target vector y. We will use the squared error (i.e. L 2 distance) as the sentiment loss function,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "L(x, y) = \u1ef9 \u2212 y 2 2 = enc(x) \u2212 y 2 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "The encoder could be realized in many ways such as a recurrent neural network. 
For example, let h t = f e (x t , h t\u22121 ) denote a parameterized recurrent unit mapping input word x t and previous state h t\u22121 to next state h t . The target vector is then generated on the basis of the final state reached by the recurrent unit after processing all the words in the input sequence. Specifically,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "h t = f e (x t , h t\u22121 ), t = 1, . . . , l y = \u03c3 e (W e h l + b e )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "Generator gen(\u2022): The rationale generator extracts a subset of text from the original input x to function as an interpretable summary. Thus the rationale for a given sequence x can be equivalently defined in terms of binary variables", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "{z 1 , \u2022 \u2022 \u2022 , z l }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "where each z t \u2208 0, 1 indicates whether word x t is selected or not. From here on, we will use z to specify the binary selections and thus (z, x) is the actual rationale generated (selections, input). We will use generator gen(x) as synonymous with a probability distribution over binary selections, i.e., z \u223c gen(x) \u2261 p(z|x) where the length of z varies with the input x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "In a simple generator, the probability that the t th word is selected can be assumed to be conditionally independent from other selections given the input x. That is, the joint probability p(z|x) factors according to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "p(z|x) = l t=1 p(z t |x) (independent selection)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "The component distributions p(z t |x) can be modeled using a shared bi-directional recurrent neural network. Specifically, let \u2212 \u2192 f () and \u2190 \u2212 f () be the forward and backward recurrent unit, respectively, then", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "\u2212 \u2192 h t = \u2212 \u2192 f (x t , \u2212\u2212\u2192 h t\u22121 ) \u2190 \u2212 h t = \u2190 \u2212 f (x t , \u2190\u2212\u2212 h t+1 ) p(z t |x) = \u03c3 z (W z [ \u2212 \u2192 h t ; \u2190 \u2212 h t ] + b z )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "Independent but context dependent selection of words is often sufficient. However, the model is unable to select phrases or refrain from selecting the same word again if already chosen. To this end, we also introduce a dependent selection of words,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "p(z|x) = l t=1 p(z t |x, z 1 \u2022 \u2022 \u2022 z t\u22121 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "which can be also expressed as a recurrent neural network. To this end, we introduce another hidden state s t whose role is to couple the selections. 
For example,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "p(z_t | x, z_{1,t-1}) = \sigma_z(W^z [\overrightarrow{h}_t ; \overleftarrow{h}_t ; s_{t-1}] + b^z), \quad s_t = f_z([\overrightarrow{h}_t ; \overleftarrow{h}_t ; z_t], s_{t-1})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "Joint objective: A rationale in our definition corresponds to the selected words, i.e., {x_k | z_k = 1}. We will use (z, x) as the shorthand for this rationale and, thus, enc(z, x) refers to the target vector obtained by applying the encoder to the rationale as the input. Our goal here is to formalize how the rationale can be made short and meaningful yet function well in conjunction with the encoder. Our generator and encoder are learned jointly to interact well but they are treated as independent units for modularity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "The generator is guided in two ways during learning. First, the rationale that it produces must suffice as a replacement for the input text. In other words, the target vector (sentiment) arising from the rationale should be close to the gold sentiment. The corresponding loss function is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "L(z, x, y) = \| enc(z, x) - y \|_2^2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "Note that the loss function depends directly (parametrically) on the encoder but only indirectly on the generator via the sampled selection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "Second, we must guide the generator to realize short and coherent rationales. It should select only a few words and those selections should form phrases (consecutive words) rather than represent isolated, disconnected words. We therefore introduce an additional regularizer over the selections", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "\Omega(z) = \lambda_1 \|z\| + \lambda_2 \sum_t |z_t - z_{t-1}|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "where the first term penalizes the number of selections while the second one discourages transitions (encourages continuity of selections). Note that this regularizer also depends on the generator only indirectly via the selected rationale. This is because it is easier to assess the rationale once produced rather than directly guide how it is obtained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "Our final cost function is the combination of the two, cost(z, x, y) = L(z, x, y) + \Omega(z).
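As a rough illustration (a minimal sketch, not the authors' implementation), the cost of a single sampled rationale could be computed as below; enc_fn, lambda1 and lambda2 are placeholder names, and the default regularization values mirror the ranges reported later for the sentiment experiments (lambda2 = 2 * lambda1).

import numpy as np

def rationale_cost(z, x, y, enc_fn, lambda1=2e-4, lambda2=4e-4):
    # cost(z, x, y) = L(z, x, y) + Omega(z) for one sampled selection z.
    # z      : binary vector of length l (1 = word kept in the rationale)
    # x      : word vectors of the full text, shape (l, d)
    # y      : gold target vector in [0, 1]^m
    # enc_fn : any encoder mapping the selected word vectors to a prediction
    z = np.asarray(z, dtype=np.float32)
    y_hat = enc_fn(x[z > 0.5])                      # enc(z, x): encode selected words only
    loss = np.sum((y_hat - np.asarray(y)) ** 2)     # squared-error term L(z, x, y)
    omega = lambda1 * z.sum() + lambda2 * np.abs(np.diff(z)).sum()  # sparsity + continuity
    return loss + omega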
Since the selections are not provided during training, we minimize the expected cost:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "\min_{\theta_e, \theta_g} \sum_{(x, y) \in D} \mathbb{E}_{z \sim gen(x)} [cost(z, x, y)]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "where \theta_e and \theta_g denote the set of parameters of the encoder and generator, respectively, and D is the collection of training instances. Our joint objective encourages the generator to compress the input text into coherent summaries that work well with the associated encoder it is trained with. Minimizing the expected cost is challenging since it involves summing over all the possible choices of rationales z. This summation could potentially be made feasible with additional restrictive assumptions about the generator and encoder. However, we assume only that it is possible to efficiently sample from the generator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "Doubly stochastic gradient We now derive a sampled approximation to the gradient of the expected cost objective. This sampled approximation is obtained separately for each input text x so as to work well with an overall stochastic gradient method. Consider therefore a training pair (x, y). For the parameters of the generator \theta_g,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "\frac{\partial \mathbb{E}_{z \sim gen(x)} [cost(z, x, y)]}{\partial \theta_g} = \sum_z cost(z, x, y) \cdot \frac{\partial p(z|x)}{\partial \theta_g} = \sum_z cost(z, x, y) \cdot \frac{\partial p(z|x)}{\partial \theta_g} \cdot \frac{p(z|x)}{p(z|x)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "Using the fact", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "(\log f(\theta))' = f'(\theta) / f(\theta), we get \sum_z cost(z, x, y) \cdot \frac{\partial p(z|x)}{\partial \theta_g} \cdot \frac{p(z|x)}{p(z|x)} = \sum_z cost(z, x, y) \cdot \frac{\partial \log p(z|x)}{\partial \theta_g} \cdot p(z|x) = \mathbb{E}_{z \sim gen(x)} \left[ cost(z, x, y) \, \frac{\partial \log p(z|x)}{\partial \theta_g} \right]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "The last term is the expected gradient where the expectation is taken with respect to the generator distribution over rationales z. Therefore, we can simply sample a few rationales z from the generator gen(x) and use the resulting average gradient in an overall stochastic gradient method.
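The sketch below illustrates this estimator for the generator parameters only (a minimal PyTorch sketch, not the authors' code); it assumes an independent-selection generator gen returning per-word probabilities p(z_t = 1 | x) and a helper cost_fn that returns cost(z, x, y) as a scalar tensor, for instance built from the cost computation sketched earlier.

import torch

def generator_step(gen, cost_fn, x, y, optimizer, n_samples=4):
    # Approximate E_z[ cost(z, x, y) * d log p(z|x) / d theta_g ]
    # by averaging over a few rationales sampled from the generator.
    probs = gen(x)                                   # p(z_t = 1 | x), shape (l,)
    dist = torch.distributions.Bernoulli(probs=probs)
    surrogate = 0.0
    for _ in range(n_samples):
        z = dist.sample()                            # one sampled rationale (no gradient)
        cost = cost_fn(z, x, y).detach()             # cost(z, x, y), treated as a constant
        surrogate = surrogate + cost * dist.log_prob(z).sum()
    optimizer.zero_grad()
    (surrogate / n_samples).backward()               # gradient of cost-weighted log p(z|x)
    optimizer.step()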
A sampled approximation to the gradient with respect to the encoder parameters \u03b8 e can be derived similarly, ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "\u2202E z\u223cgen(x) [cost(z, x, y)] \u2202\u03b8 e = z \u2202cost(z, x, y) \u2202\u03b8 e \u2022 p(z|x) = E z\u223cgen(x) \u2202cost(z, x, y) \u2202\u03b8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "\u03bb t = \u03c3(W \u03bb x t + U \u03bb h t\u22121 + b \u03bb ) c (1) t = \u03bb t c (1) t\u22121 + (1 \u2212 \u03bb t ) (W 1 x t ) c (2) t = \u03bb t c (2) t\u22121 + (1 \u2212 \u03bb t ) (c (1) t\u22121 + W 2 x t ) h t = tanh(c (2) t + b)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "RCNN has been shown to work remarkably in classification and retrieval applications (Lei et al., 2015; Lei et al., 2016) compared to other alternatives such CNNs and LSTMs. We use it for all the recurrent units introduced in our model.", "cite_spans": [ { "start": 84, "end": 102, "text": "(Lei et al., 2015;", "ref_id": "BIBREF17" }, { "start": 103, "end": 120, "text": "Lei et al., 2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Encoder and Generator", "sec_num": "4" }, { "text": "We evaluate the proposed joint model on two NLP applications: (1) multi-aspect sentiment analysis on product reviews and (2) similar text retrieval on AskUbuntu question answering forum.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Dataset We use the BeerAdvocate 2 review dataset used in prior work (McAuley et al., 2012) . 3 This dataset contains 1.5 million reviews written by the website users. The reviews are naturally multiaspect -each of them contains multiple sentences describing the overall impression or one particular aspect of a beer, including appearance, smell (aroma), palate and the taste. In addition to the written text, the reviewer provides the ratings (on a scale of 0 to 5 stars) for each aspect as well as an overall rating. The ratings can be fractional (e.g. 3.5 stars), so we normalize the scores to [0, 1] and use them as the (only) supervision for regression.", "cite_spans": [ { "start": 68, "end": 90, "text": "(McAuley et al., 2012)", "ref_id": "BIBREF23" }, { "start": 93, "end": 94, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Multi-aspect Sentiment Analysis", "sec_num": "5.1" }, { "text": "McAuley et al. (2012) also provided sentencelevel annotations on around 1,000 reviews. Each sentence is annotated with one (or multiple) aspect label, indicating what aspect this sentence covers. We use this set as our test set to evaluate the precision of words in the extracted rationales. Table 1 shows several statistics of the beer review dataset. The sentiment correlation between any pair of aspects (and the overall score) is quite high, getting 63.5% on average and a maximum of 79.1% (between the taste and overall score). If directly training the model on this set, the model can be confused due to such strong correlation. We therefore perform a preprocessing step, picking \"less correlated\" examples from the dataset. 4 This gives us a de-correlated subset for each aspect, each containing about 80k to 90k reviews. We use 10k as the development set. 
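For concreteness, the "less correlated" selection heuristic described above (detailed in footnote 4) could be sketched as follows; this is an illustrative reconstruction rather than the exact preprocessing script, and the stopping point is left to the caller, who monitors the aspect correlation of the growing subset.

import numpy as np

def rank_reviews_by_decorrelation(ratings, aspect):
    # Rank reviews so that those whose target-aspect rating is least
    # predictable from the other aspects' ratings come first.
    # ratings : array of shape (n_reviews, n_aspects), scores normalized to [0, 1]
    # aspect  : column index of the target aspect
    target = ratings[:, aspect]
    others = np.delete(ratings, aspect, axis=1)
    X = np.hstack([others, np.ones((len(ratings), 1))])   # features plus a bias column
    w, *_ = np.linalg.lstsq(X, target, rcond=None)         # simple linear regression fit
    errors = np.abs(X @ w - target)                        # per-review prediction error
    return np.argsort(-errors)                             # indices, largest error first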
We focus on three aspects since the fourth aspect taste still gets > 50% correlation with the overall sentiment.", "cite_spans": [], "ref_spans": [ { "start": 292, "end": 299, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Multi-aspect Sentiment Analysis", "sec_num": "5.1" }, { "text": "Before training the joint model, it is worth assessing the neural encoder separately to check how accurately the neural network predicts the sentiment. To this end, we compare neural encoders with bigram SVM model, training medium and large SVM models using 260k and all 4 Specifically, for each aspect we train a simple linear regression model to predict the rating of this aspect given the ratings of the other four aspects. We then keep picking reviews with largest prediction error until the sentiment correlation in the selected subset increases dramatically. 1580k reviews respectively. As shown in Table 3 , the recurrent neural network models outperform the SVM model for sentiment prediction and also require less training data to achieve the performance. The LSTM and RCNN units obtain similar test error, getting 0.0094 and 0.0087 mean squared error respectively. The RCNN unit performs slightly better and uses less parameters. Based on the results, we choose the RCNN encoder network with 2 stacking layers and 200 hidden states.", "cite_spans": [], "ref_spans": [ { "start": 605, "end": 612, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Sentiment Prediction", "sec_num": null }, { "text": "To train the joint model, we also use RCNN unit with 200 states as the forward and backward recurrent unit for the generator gen(). The dependent generator has one additional recurrent layer. For this layer we use 30 states so the dependent version still has a number of parameters comparable to the independent version. The two versions of the generator have 358k and 323k parameters respectively. Figure 2 shows the performance of our joint dependent model when trained to predict the sentiment of all aspects. We vary the regularization \u03bb 1 and \u03bb 2 to show various runs that extract different amount of text as rationales. Our joint model gets performance close to the best encoder run (with full text) when few words are extracted. a beer that is not sold in my neck of the woods , but managed to get while on a roadtrip . poured into an imperial pint glass with a generous head that sustained life throughout . nothing out of the ordinary here , but a good brew s9ll . body was kind of heavy , but not thick . the hop smell was excellent and en9cing . very drinkable very dark beer . pours a nice finger and a half of creamy foam and stays throughout the beer . smells of coffee and roasted malt . has a major coffee-like taste with hints of chocolate . if you like black coffee , you will love this porter . creamy smooth mouthfeel and definitely gets smoother on the palate once it warms . it 's an ok porter but i feel there are much beAer one 's out there .", "cite_spans": [], "ref_spans": [ { "start": 399, "end": 407, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Sentiment Prediction", "sec_num": null }, { "text": "poured into a sniBer . produces a small coffee head that reduces quickly . black as night . preAy typical imp . roasted malts hit on the nose . a liAle sweet chocolate follows . big toasty character on the taste . in between i 'm geDng plenty of dark chocolate and some biAer espresso . it finishes with hop biAerness . 
nice smooth mouthfeel with perfect carbona9on for the style . overall a nice stout i would love to have again , maybe with some age on it .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Prediction", "sec_num": null }, { "text": "i really did not like this . it just seemed extremely watery . i dont ' think this had any carbona9on whatsoever . maybe it was flat , who knows ? but even if i got a bad brew i do n't see how this would possibly be something i 'd get 9me and 9me again . i could taste the hops towards the middle , but the beer got preAy nasty towards the boAom . i would never drink this again , unless it was free . i 'm kind of upset i bought this . a : poured a nice dark brown with a tan colored head about half an inch thick , nice red/garnet accents when held to the light . liAle clumps of lacing all around the glass , not too shabby . not terribly impressive though s : smells like a more guinness-y guinness really , there are some roasted malts there , signature guinness smells , less burnt though , a liAle bit of chocolate \u2026 \u2026 m : rela9vely thick , it is n't an export stout or imperial stout , but s9ll is preAy heBy in the mouth , very smooth , not much carbona9on . not too shabby d : not quite as drinkable as the draught , but s9ll not too bad . i could easily see drinking a few of these . Rationale Selection To evaluate the supporting rationales for each aspect, we train the joint encodergenerator model on each de-correlated subset. We set the cardinality regularization \u03bb 1 between values {2e \u2212 4, 3e \u2212 4, 4e \u2212 4} so the extracted rationale texts are neither too long nor too short. For simplicity, we set \u03bb 2 = 2\u03bb 1 to encourage local coherency of the extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Prediction", "sec_num": null }, { "text": "For comparison we use the bigram SVM model and implement an attention-based neural network model. The SVM model successively extracts unigram or bigram (from the test reviews) with the highest feature. The attention-based model learns a normalized attention vector of the input tokens (using similarly the forward and backward RNNs), then the model averages over the encoder states accordingly to the attention, and feed the averaged vector to the output layer. Similar to the SVM model, the attention-based model can selects words based on their attention weights. The smell (aroma) aspect is the target aspect. Table 2 presents the precision of the extracted rationales calculated based on sentence-level aspect annotations. The \u03bb 1 regularization hyper-parameter is tuned so the two versions of our model extract similar number of words as rationales. The SVM and attention-based model are constrained similarly for comparison. Figure 4 further shows the precision when different amounts of text are extracted. Again, for our model this corresponds to changing the \u03bb 1 regularization. As shown in the table and the figure, our encoder-generator networks extract text pieces describing the target aspect with high precision, ranging from 80% to 96% across the three aspects appearance, smell and palate. The SVM baseline performs poorly, achieving around 30% accuracy. The attention-based model achieves reasonable but worse performance than the rationale generator, suggesting the potential of directly modeling rationales as explicit extraction. Figure 5 shows the learning curves of our model for the smell aspect. 
In the early training epochs, both the independent and (recurrent) dependent selection models fail to produce good rationales, getting low precision as a result. After a few epochs of exploration however, the models start to achieve high accuracy. We observe that the dependent version learns more quickly in general, but both versions obtain close results in the end.", "cite_spans": [], "ref_spans": [ { "start": 613, "end": 620, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 931, "end": 939, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 1550, "end": 1558, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Sentiment Prediction", "sec_num": null }, { "text": "Finally we conduct a qualitative case study on the extracted rationales. Figure 3 presents several reviews, with highlighted rationales predicted by the model. Our rationale generator identifies key phrases or adjectives that indicate the sentiment of a particular aspect.", "cite_spans": [], "ref_spans": [ { "start": 73, "end": 81, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Sentiment Prediction", "sec_num": null }, { "text": "Dataset For our second application, we use the real-world AskUbuntu 5 dataset used in recent work (dos Santos et al., 2015; Lei et al., 2016) . This set contains a set of 167k unique questions (each consisting a question title and a body) and 16k useridentified similar question pairs. Following previous work, this data is used to train the neural encoder that learns the vector representation of the input question, optimizing the cosine distance (i.e. cosine similarity) between similar questions against random non-similar ones. We use the \"one-versusall\" hinge loss (i.e. positive versus other negatives) for the encoder, similar to (Lei et al., 2016) . During development and testing, the model is used to score 20 candidate questions given each query question, and a total of 400\u00d720 query-candidate question pairs are annotated for evaluation 6 .", "cite_spans": [ { "start": 98, "end": 123, "text": "(dos Santos et al., 2015;", "ref_id": "BIBREF5" }, { "start": 124, "end": 141, "text": "Lei et al., 2016)", "ref_id": "BIBREF18" }, { "start": 638, "end": 656, "text": "(Lei et al., 2016)", "ref_id": "BIBREF18" }, { "start": 850, "end": 851, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Similar Text Retrieval on QA Forum", "sec_num": "5.2" }, { "text": "Task/Evaluation Setup The question descriptions are often long and fraught with irrelevant details. In this set-up, a fraction of the original question text should be sufficient to represent its content, and be used for retrieving similar questions. Therefore, we will evaluate rationales based on the accuracy of the question retrieval task, assuming that better rationales achieve higher performance. To put this performance in context, we also report the accuracy when full body of a question is used, as well as titles alone. The latter constitutes an upper bound on 5 askubuntu.com 6 https://github.com/taolei87/askubuntu the model performance as in this dataset titles provide short, informative summaries of the question content. We evaluate the rationales using the mean average precision (MAP) of retrieval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similar Text Retrieval on QA Forum", "sec_num": "5.2" }, { "text": "Results Table 4 presents the results of our rationale model. We explore a range of hyper-parameter values 7 . 
We include two runs for each version. The first one achieves the highest MAP on the development set, The second run is selected to compare the models when they use roughly 10% of question text (7 words on average). We also show the results of different runs in Figure 6 . The rationales achieve the MAP up to 56.5%, getting close to using the titles. The models also outperform the baseline of using the noisy question bodies, indicating the the models' capacity of extracting short but important fragments. Figure 7 shows the rationales for several questions in the AskUbuntu domain, using the recurrent version with around 10% extraction. Interestingly, the model does not always select words from the question title. The reasons are that the question body can contain the same or even complementary information useful for retrieval. Indeed, some rationale fragments shown in the figure are error messages, i accidentally removed the ubuntu soBware centre , when i was actually trying to remove my ubuntu one applica9ons . although i do n't remember directly uninstalling the centre , i think dele9ng one of those packages might have triggered it . i can not look at history of applica9on changes , as the soBware centre is missing . please advise on how to install , or rather reinstall , ubuntu soBware centre on my computer . how do i install ubuntu soBware centre applica9on ? i know this will be an odd ques9on , but i was wondering if anyone knew how to install the ubuntu installer package in an ubuntu installa9on . to clarify , when you boot up to an ubuntu livecd , it 's got the installer program available so that you can install ubuntu to a drive . naturally , this program is not present in the installed ubuntu . is there , though , a way to download and install it like other packages ? invariably , someone will ask what i 'm trying to do , and the answer \u2026 install installer package on an installed system ? what is the easiest way to install all the media codec available for ubuntu ? i am having issues with mul9ple applica9ons promp9ng me to install codecs before they can play my files . how do i install media codecs ? what should i do when i see report this ? an unresolvable problem occurred while ini9alizing the package informa9on . please report this bug against the 'update-manager ' package and include the following error message : e : encountered a sec9on with no package : header e : problem with mergelist e : the package lists or status file could not be parsed or opened . please any one give the solu9on for this whenever i try to convert the rpm file to deb file i always get this problem error : : not an rpm package ( or package manifest ) error execu9ng `` lang=c rpm -qp --queryformat % { name } ' '' : at line 489 thanks conver9ng rpm file to debian fle how do i mount a hibernated par99on with windows 8 in ubuntu ? i ca n't mount my other par99on with windows 8 , i have ubuntu 12.10 amd64 : error moun9ng /dev/sda1 at : command-line `mount -t `` n[s ' ' -o `` uhelper=udisks2 , nodev , nosuid , uid=1000 , gid=1000 , dmask=0077 , fmask=0177 '' `` /dev/sda1 '' `` '' ' exited with non-zero exit status 14 : windows is hibernated , refused to mount . failed to mount '/dev/sda1 ' : opera9on not permiAed the n[s par99on is hibernated . 
please resume and shutdown windows properly , or mount the volume read-only with the 'ro ' mount op9on which are typically not in the titles but very useful to identify similar questions.", "cite_spans": [ { "start": 3152, "end": 3203, "text": "' -o `` uhelper=udisks2 , nodev , nosuid , uid=1000", "ref_id": null } ], "ref_spans": [ { "start": 8, "end": 15, "text": "Table 4", "ref_id": null }, { "start": 371, "end": 379, "text": "Figure 6", "ref_id": "FIGREF6" }, { "start": 618, "end": 626, "text": "Figure 7", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Similar Text Retrieval on QA Forum", "sec_num": "5.2" }, { "text": "We proposed a novel modular neural framework to automatically generate concise yet sufficient text fragments to justify predictions made by neural networks. We demonstrated that our encoder-generator framework, trained in an end-to-end manner, gives rise to quality rationales in the absence of any explicit rationale annotations. The approach could be modified or extended in various ways to other applications or types of data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Choices of enc(\u2022) and gen(\u2022). The encoder and generator can be realized in numerous ways without changing the broader algorithm. For instance, we could use a convolutional network (Kim, 2014; Kalchbrenner et al., 2014) , deep averaging network (Iyyer et al., 2015; Joulin et al., 2016) or a boosting classifier as the encoder. When rationales can be expected to conform to repeated stereotypical patterns in the text, a simpler encoder consistent with this bias can work better. We emphasize that, in this paper, rationales are flexible explanations that may vary substantially from instance to another. On the generator side, many additional constraints could be imposed to further guide acceptable rationales.", "cite_spans": [ { "start": 180, "end": 191, "text": "(Kim, 2014;", "ref_id": "BIBREF16" }, { "start": 192, "end": 218, "text": "Kalchbrenner et al., 2014)", "ref_id": "BIBREF13" }, { "start": 244, "end": 264, "text": "(Iyyer et al., 2015;", "ref_id": "BIBREF11" }, { "start": 265, "end": 285, "text": "Joulin et al., 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Dealing with Search Space. Our training method employs a REINFORCE-style algorithm (Williams, 1992) where the gradient with respect to the parameters is estimated by sampling possible rationales.", "cite_spans": [ { "start": 83, "end": 99, "text": "(Williams, 1992)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Additional constraints on the generator output can be helpful in alleviating problems of exploring potentially a large space of possible rationales in terms of their interaction with the encoder. We could also apply variance reduction techniques to increase stability of stochastic training (cf. (Weaver and Tao, 2001; Mnih et al., 2014; ).", "cite_spans": [ { "start": 296, "end": 318, "text": "(Weaver and Tao, 2001;", "ref_id": "BIBREF29" }, { "start": 319, "end": 337, "text": "Mnih et al., 2014;", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Our code and data are available at https://github. 
com/taolei87/rcnn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "www.beeradvocate.com 3 http://snap.stanford.edu/data/ web-BeerAdvocate.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "\u03bb1 \u2208 {.008, .01, .012, .015}, \u03bb2 = {0, \u03bb1, 2\u03bb1}, dropout \u2208 {0.1, 0.2}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Prof. Julian McAuley for sharing the review dataset and annotations. We also thank MIT NLP group and the reviewers for their helpful comments. The work is supported by the Arabic Language Technologies (ALT) group at Qatar Computing Research Institute (QCRI) within the IYAS project. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "7" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Multiple object recognition with visual attention", "authors": [ { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" }, { "first": "Volodymyr", "middle": [], "last": "Mnih", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. 2015. Multiple object recognition with visual atten- tion. In Proceedings of the International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In International Con- ference on Learning Representations.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Abccnn: An attention based convolutional neural network for visual question answering", "authors": [ { "first": "Kan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jiang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Liang-Chieh", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Haoyuan", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Ram", "middle": [], "last": "Nevatia", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.05960" ] }, "num": null, "urls": [], "raw_text": "Kan Chen, Jiang Wang, Liang-Chieh Chen, Haoyuan Gao, Wei Xu, and Ram Nevatia. 2015. Abc- cnn: An attention based convolutional neural net- work for visual question answering. 
arXiv preprint arXiv:1511.05960.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Long short-term memory-networks for machine reading", "authors": [ { "first": "Jianpeng", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1601.06733" ] }, "num": null, "urls": [], "raw_text": "Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine read- ing. arXiv preprint arXiv:1601.06733.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Extracting tree-structured representations of trained networks", "authors": [ { "first": "W", "middle": [], "last": "Mark", "suffix": "" }, { "first": "Jude", "middle": [ "W" ], "last": "Craven", "suffix": "" }, { "first": "", "middle": [], "last": "Shavlik", "suffix": "" } ], "year": 1996, "venue": "Advances in neural information processing systems (NIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark W Craven and Jude W Shavlik. 1996. Extract- ing tree-structured representations of trained networks. In Advances in neural information processing systems (NIPS).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Learning hybrid representations to retrieve semantically equivalent questions", "authors": [ { "first": "Santos", "middle": [], "last": "Cicero Dos", "suffix": "" }, { "first": "Luciano", "middle": [], "last": "Barbosa", "suffix": "" }, { "first": "Dasha", "middle": [], "last": "Bogdanova", "suffix": "" }, { "first": "Bianca", "middle": [], "last": "Zadrozny", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "694--699", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cicero dos Santos, Luciano Barbosa, Dasha Bogdanova, and Bianca Zadrozny. 2015. Learning hybrid rep- resentations to retrieve semantically equivalent ques- tions. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 2: Short Papers), pages 694-699, Beijing, China, July. Association for Com- putational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Retrofitting word vectors to semantic lexicons", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "K", "middle": [], "last": "Sujay", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Jauhar", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Hovy", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015a. Retrofitting word vectors to semantic lexicons. 
In Pro- ceedings of NAACL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Sparse overcomplete word vector representations", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah A. Smith. 2015b. Sparse overcom- plete word vector representations. In Proceedings of ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Building a shared world: mapping distributional to modeltheoretic semantic spaces", "authors": [ { "first": "Aur\u00e9lie", "middle": [], "last": "Herbelot", "suffix": "" }, { "first": "Eva", "middle": [ "Maria" ], "last": "Vecchi", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aur\u00e9lie Herbelot and Eva Maria Vecchi. 2015. Build- ing a shared world: mapping distributional to model- theoretic semantic spaces. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing. Association for Computational Lin- guistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Teaching machines to read and comprehend", "authors": [ { "first": "Karl", "middle": [], "last": "Moritz Hermann", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Kocisky", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Lasse", "middle": [], "last": "Espeholt", "suffix": "" }, { "first": "Will", "middle": [], "last": "Kay", "suffix": "" }, { "first": "Mustafa", "middle": [], "last": "Suleyman", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "1684--1692", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1684-1692.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Training and analysing deep recurrent neural networks", "authors": [ { "first": "Michiel", "middle": [], "last": "Hermans", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Schrauwen", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "190--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michiel Hermans and Benjamin Schrauwen. 2013. Training and analysing deep recurrent neural net- works. 
In Advances in Neural Information Processing Systems, pages 190-198.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Deep unordered composition rivals syntactic methods for text classification", "authors": [ { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Manjunatha", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2015. Deep unordered compo- sition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the As- sociation for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.01759" ] }, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A convolutional neural network for modelling sentences", "authors": [ { "first": "Nal", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nal Kalchbrenner, Edward Grefenstette, and Phil Blun- som. 2014. A convolutional neural network for mod- elling sentences. In Proceedings of the 52th Annual Meeting of the Association for Computational Linguis- tics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Visualizing and understanding recurrent networks", "authors": [ { "first": "Andrej", "middle": [], "last": "Karpathy", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Fei-Fei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1506.02078" ] }, "num": null, "urls": [], "raw_text": "Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2015. Visualizing and understanding recurrent networks. 
arXiv preprint arXiv:1506.02078.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Mind the gap: A generative approach to interpretable feature selection and extraction", "authors": [ { "first": "B", "middle": [], "last": "Kim", "suffix": "" }, { "first": "F", "middle": [], "last": "Shah", "suffix": "" }, { "first": "", "middle": [], "last": "Doshi-Velez", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B Kim, JA Shah, and F Doshi-Velez. 2015. Mind the gap: A generative approach to interpretable feature se- lection and extraction. In Advances in Neural Infor- mation Processing Systems.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Empiricial Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sen- tence classification. In Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP 2014).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Molding cnns for text: non-linear, non-consecutive convolutions", "authors": [ { "first": "Tao", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2015. Molding cnns for text: non-linear, non-consecutive convolutions. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Process- ing (EMNLP).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Semi-supervised question retrieval with gated convolutions", "authors": [ { "first": "Tao", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Hrishikesh", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" }, { "first": "Katerina", "middle": [], "last": "Tymoshenko", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Lei, Hrishikesh Joshi, Regina Barzilay, Tommi Jaakkola, Katerina Tymoshenko, Alessandro Mos- chitti, and Llu\u00eds M\u00e0rquez. 2016. Semi-supervised question retrieval with gated convolutions. 
In Pro- ceedings of the 2016 Conference of the North Ameri- can Chapter of the Association for Computational Lin- guistics: Human Language Technologies (NAACL).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Interpretable classifiers using rules and bayesian analysis: Building a better stroke prediction model", "authors": [ { "first": "Benjamin", "middle": [], "last": "Letham", "suffix": "" }, { "first": "Cynthia", "middle": [], "last": "Rudin", "suffix": "" }, { "first": "Tyler", "middle": [ "H" ], "last": "Mccormick", "suffix": "" }, { "first": "David", "middle": [], "last": "Madigan", "suffix": "" } ], "year": 2015, "venue": "Annals of Applied Statistics", "volume": "9", "issue": "3", "pages": "1350--1371", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, and David Madigan. 2015. Interpretable classifiers using rules and bayesian analysis: Building a better stroke prediction model. Annals of Applied Statistics, 9(3):1350-1371.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Visualizing and understanding neural models in nlp", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xinlei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in nlp. In Proceedings of NAACL.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Robotreviewer: evaluation of a system for automatically assessing bias in clinical trials", "authors": [ { "first": "J", "middle": [], "last": "Iain", "suffix": "" }, { "first": "Jo\u00ebl", "middle": [], "last": "Marshall", "suffix": "" }, { "first": "Byron C", "middle": [], "last": "Kuiper", "suffix": "" }, { "first": "", "middle": [], "last": "Wallace", "suffix": "" } ], "year": 2015, "venue": "Journal of the American Medical Informatics Association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iain J Marshall, Jo\u00ebl Kuiper, and Byron C Wallace. 2015. Robotreviewer: evaluation of a system for automati- cally assessing bias in clinical trials. Journal of the American Medical Informatics Association.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "From softmax to sparsemax: A sparse model of attention and multi-label classification", "authors": [ { "first": "F", "middle": [ "T" ], "last": "Andr\u00e9", "suffix": "" }, { "first": "Ram\u00f3n", "middle": [], "last": "Martins", "suffix": "" }, { "first": "", "middle": [], "last": "Fernandez Astudillo", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andr\u00e9 F. T. Martins and Ram\u00f3n Fernandez Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. 
CoRR, abs/1602.02068.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Learning attitudes and attributes from multi-aspect reviews", "authors": [ { "first": "Julian", "middle": [], "last": "Mcauley", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2012, "venue": "Data Mining (ICDM), 2012 IEEE 12th International Conference on", "volume": "", "issue": "", "pages": "1020--1025", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julian McAuley, Jure Leskovec, and Dan Jurafsky. 2012. Learning attitudes and attributes from multi-aspect re- views. In Data Mining (ICDM), 2012 IEEE 12th In- ternational Conference on, pages 1020-1025. IEEE.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Recurrent models of visual attention", "authors": [ { "first": "Volodymyr", "middle": [], "last": "Mnih", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Heess", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" } ], "year": 2014, "venue": "Advances in Neural Information Processing Systems (NIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. 2014. Recurrent models of visual attention. In Advances in Neural Information Processing Systems (NIPS).", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "why should i trust you?\": Explaining the predictions of any classifier", "authors": [ { "first": "Sameer", "middle": [], "last": "Marco Tulio Ribeiro", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Singh", "suffix": "" }, { "first": "", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2016, "venue": "ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \" why should i trust you?\": Explaining the pre- dictions of any classifier. In ACM SIGKDD Interna- tional Conference on Knowledge Discovery and Data Mining (KDD).", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u1ef3, and Phil Blunsom", "authors": [ { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 2016, "venue": "In International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim Rockt\u00e4schel, Edward Grefenstette, Karl Moritz Her- mann, Tom\u00e1\u0161 Ko\u010disk\u1ef3, and Phil Blunsom. 2016. Rea- soning about entailment with neural attention. In In- ternational Conference on Learning Representations.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A neural attention model for abstractive sentence summarization", "authors": [ { "first": "Sumit", "middle": [], "last": "Alexander M Rush", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. 
A neural attention model for abstractive sen- tence summarization. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Extracting rules from artificial neural networks with distributed representations", "authors": [ { "first": "", "middle": [], "last": "Sebastian Thrun", "suffix": "" } ], "year": 1995, "venue": "Advances in neural information processing systems (NIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Thrun. 1995. Extracting rules from artifi- cial neural networks with distributed representations. In Advances in neural information processing systems (NIPS).", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "The optimal reward baseline for gradient-based reinforcement learning", "authors": [ { "first": "Lex", "middle": [], "last": "Weaver", "suffix": "" }, { "first": "Nigel", "middle": [], "last": "Tao", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lex Weaver and Nigel Tao. 2001. The optimal reward baseline for gradient-based reinforcement learning. In Proceedings of the Seventeenth conference on Uncer- tainty in artificial intelligence.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Simple statistical gradientfollowing algorithms for connectionist reinforcement learning", "authors": [ { "first": "J", "middle": [], "last": "Ronald", "suffix": "" }, { "first": "", "middle": [], "last": "Williams", "suffix": "" } ], "year": 1992, "venue": "Machine learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronald J Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforcement learning. Machine learning.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Ask, attend and answer: Exploring question-guided spatial attention for visual question answering", "authors": [ { "first": "Huijuan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Saenko", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.05234" ] }, "num": null, "urls": [], "raw_text": "Huijuan Xu and Kate Saenko. 2015. Ask, attend and answer: Exploring question-guided spatial atten- tion for visual question answering. 
arXiv preprint arXiv:1511.05234.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Show, attend and tell: Neural image caption generation with visual attention", "authors": [ { "first": "Kelvin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhudinov", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 32nd International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neu- ral image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning (ICML).", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Stacked attention networks for image question answering", "authors": [ { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Smola", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.02274" ] }, "num": null, "urls": [], "raw_text": "Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2015. Stacked attention net- works for image question answering. arXiv preprint arXiv:1511.02274.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Using \"annotator rationales\" to improve machine learning for text categorization", "authors": [ { "first": "Omar", "middle": [], "last": "Zaidan", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" }, { "first": "Christine", "middle": [ "D" ], "last": "Piatko", "suffix": "" } ], "year": 2007, "venue": "Proceedings of Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "260--267", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omar Zaidan, Jason Eisner, and Christine D. Piatko. 2007. Using \"annotator rationales\" to improve ma- chine learning for text categorization. In Proceedings of Human Language Technology Conference of the North American Chapter of the Association of Com- putational Linguistics, pages 260-267.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Rationale-augmented convolutional neural networks for text classification", "authors": [ { "first": "Ye", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Iain", "middle": [ "James" ], "last": "Marshall", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ye Zhang, Iain James Marshall, and Byron C. Wallace. 2016. Rationale-augmented convolutional neural net- works for text classification. 
CoRR, abs/1605.04469.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "An example of a review with ranking in two categories. The rationale for Look prediction is shown in bold.", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "Mean squared error of all aspects on the test set (yaxis) when various percentages of text are extracted as rationales (x-axis). 220k training data is used.", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "Examples of extracted rationales indicating the sentiments of various aspects. The extracted texts for appearance, smell and palate are shown in red, blue and green color respectively. The last example is shortened for space.", "num": null }, "FIGREF3": { "uris": null, "type_str": "figure", "text": "Precision (y-axis) when various percentages of text are extracted as rationales (x-axis) for the appearance aspect.", "num": null }, "FIGREF5": { "uris": null, "type_str": "figure", "text": "Learning curves of the optimized cost function on the development set and the precision of rationales on the test set.", "num": null }, "FIGREF6": { "uris": null, "type_str": "figure", "text": "Retrieval MAP on the test set when various percentages of the texts are chosen as rationales. Data points correspond to models trained with different hyper-parameters.", "num": null }, "FIGREF7": { "uris": null, "type_str": "figure", "text": "Examples of extracted rationales of questions in the AskUbuntu domain.", "num": null }, "TABREF1": { "text": "Statistics of the beer review dataset.", "content": "", "num": null, "type_str": "table", "html": null }, "TABREF3": { "text": "Precision of selected rationales for the first three aspects. The precision is evaluated based on whether the selected words are in the sentences describing the target aspect, based on the sentence-level annotations. Best training epochs are selected based on the objective value on the development set (no sentence annotation is used).", "content": "
", "num": null, "type_str": "table", "html": null }, "TABREF4": { "text": "Comparing neural encoders with the bigram SVM model. MSE is the mean squared error on the test set. D is the amount of data used for training and development, d stands for the hidden dimension, l denotes the depth of the network, and |\u03b8| denotes the number of parameters (the number of features for the SVM).", "content": "
Model  D      d    l  |\u03b8|   MSE
SVM    260k   -    -  2.5M   0.0154
SVM    1580k  -    -  7.3M   0.0100
LSTM   260k   200  2  644k   0.0094
RCNN   260k   200  2  323k   0.0087
", "num": null, "type_str": "table", "html": null } } } }