{ "paper_id": "U17-1001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:11:22.901975Z" }, "title": "Stock Market Prediction with Deep Learning: A Character-based Neural Language Model for Event-based Trading", "authors": [ { "first": "Leonardo", "middle": [], "last": "Dos", "suffix": "", "affiliation": { "laboratory": "", "institution": "Macquarie University", "location": {} }, "email": "" }, { "first": "Santos", "middle": [], "last": "Pinheiro", "suffix": "", "affiliation": { "laboratory": "", "institution": "Macquarie University", "location": {} }, "email": "lpinheiro@cmcrc.com" }, { "first": "Mark", "middle": [], "last": "Dras", "suffix": "", "affiliation": { "laboratory": "", "institution": "Macquarie University", "location": {} }, "email": "mark.dras@mq.edu.au" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In the last few years, machine learning has become a very popular tool for analyzing financial text data, with many promising results in stock price forecasting from financial news, a development with implications for the Ecient Markets Hypothesis (EMH) that underpins much economic theory. In this work, we explore recurrent neural networks with character-level language model pre-training for both intraday and interday stock market forecasting. In terms of predicting directional changes in the Standard & Poor's 500 index, both for individual companies and the overall index, we show that this technique is competitive with other state-of-the-art approaches.", "pdf_parse": { "paper_id": "U17-1001", "_pdf_hash": "", "abstract": [ { "text": "In the last few years, machine learning has become a very popular tool for analyzing financial text data, with many promising results in stock price forecasting from financial news, a development with implications for the Ecient Markets Hypothesis (EMH) that underpins much economic theory. 
In this work, we explore recurrent neural networks with character-level language model pre-training for both intraday and interday stock market forecasting. In terms of predicting directional changes in the Standard & Poor's 500 index, both for individual companies and the overall index, we show that this technique is competitive with other state-of-the-art approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Predicting stock market behavior holds strong appeal for academic researchers and industry practitioners alike, as it is both a challenging task and one that could lead to increased profits. Predicting stock market behavior from the arrival of new information is an even more interesting area, as economists frequently test it to challenge the Efficient Market Hypothesis (EMH) (Malkiel, 2003) : a strict form of the EMH holds that any news is incorporated into prices without delay, while other interpretations hold that incorporation takes place over time.", "cite_spans": [ { "start": 380, "end": 395, "text": "(Malkiel, 2003)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In practice, the analysis of text data such as news announcements and commentary on events is one major source of market information and is widely used and analyzed by investors (Oberlechner and Hocking, 2004) .", "cite_spans": [ { "start": 178, "end": 209, "text": "(Oberlechner and Hocking, 2004)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Financial news conveys novel information to broad market participants, and a fast reaction to the release of new information is an important component of trading strategies (Leinweber and Sisk, 2011) .", "cite_spans": [ { "start": 172, "end": 198, "text": "(Leinweber and Sisk, 2011)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": 
[], "section": "Introduction", "sec_num": "1" }, { "text": "But despite the great interest, attempts to forecast stock prices from unstructured text data have had limited success and there seems to be much room for improvement. This can be in great part attributed to the di culty involved in extracting the relevant information from the text. So far most approaches to analyzing financial text data are based on bag-ofwords, noun phrase and/or named entity feature extraction combined with manual feature selection, but the capacity of these methods to extract meaningful information from the data is limited as much information about the structure of text is lost in the process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In recent years, the trend for extracting features from text data has shifted away from manual feature engineering and there has been a resurgence of interest in neural networks due to their power for learning useful representations directly from data (Bengio et al., 2013) . Even though deep learning has had great success in learning representations from text data (e.g. Mikolov et al. (2013a) , Mikolov et al. (2013b) and Kiros et al. (2015) ), successful applications of deep learning in textual analysis of financial news have been few, even though it has been demonstrated that its application to event-driven stock prediction is a promising area of research (Ding et al., 2015) .", "cite_spans": [ { "start": 252, "end": 273, "text": "(Bengio et al., 2013)", "ref_id": "BIBREF0" }, { "start": 373, "end": 395, "text": "Mikolov et al. (2013a)", "ref_id": "BIBREF21" }, { "start": 398, "end": 420, "text": "Mikolov et al. (2013b)", "ref_id": "BIBREF22" }, { "start": 425, "end": 444, "text": "Kiros et al. 
(2015)", "ref_id": "BIBREF14" }, { "start": 665, "end": 684, "text": "(Ding et al., 2015)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Finding the most informative representation of the data in a text classification problem is still an open area of research. In the last few years a range of di\u21b5erent neural networks architectures have been proposed for text classification, each one with strong results on di\u21b5er-ent benchmarks (e.g. Socher et al. (2013) , Kim (2014) and Kumar et al. (2016) ), and each one proposing di\u21b5erent ways to encode the textual information.", "cite_spans": [ { "start": 299, "end": 319, "text": "Socher et al. (2013)", "ref_id": "BIBREF35" }, { "start": 322, "end": 332, "text": "Kim (2014)", "ref_id": "BIBREF11" }, { "start": 337, "end": 356, "text": "Kumar et al. (2016)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One of the most commonly used architectures for modeling text data is the Recurrent Neural Network (RNN). One technique to improve the training of RNNs, proposed by Dai and Le (2015) and widely used, is to pre-train the RNN with a language model. In this work this approach outperformed training the same model from random initialization and achieved state of the art in several benchmarks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Another strong trend in deep learning for text is the use of a word embedding layer as the main representation of the text. While this approach has notable advantages, word-level language models do not capture sub-word information, may inaccurately estimate embeddings for rare words, and can poorly represent domains with long-tailed frequency distributions. These were motivations for characterlevel language models, which Kim et al. (2016) and Radford et al. 
(2017) showed are capable of learning high-level representations despite their simplicity. These motivations seem applicable in our domain: character-level representations can for example generalise across numerical data like percentages (e.g. the terms 5% and 9%) and currency (e.g. $1.29), and can handle the large number of infrequently mentioned named entities. Character-level models are also typically much more compact.", "cite_spans": [ { "start": 425, "end": 442, "text": "Kim et al. (2016)", "ref_id": "BIBREF12" }, { "start": 447, "end": 468, "text": "Radford et al. (2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work we propose an automated trading system that, given the release of news information about a company, predicts changes in stock prices. The system is trained to predict both changes in the stock price of the company mentioned in the news article and in the corresponding stock exchange index (S&P 500). We also test this system both for intraday changes, considering a window of one hour after the release of the news, and for changes between the closing price of the current trading session and the closing price of the next day's session. This comparative analysis allows us to infer whether the incorporation of new information is instantaneous or occurs gradually over time. Our model consists of a recurrent neural network pre-trained with a character-level language model. The remainder of the paper is structured as follows: In Section 2, we describe event-driven trading and review the relevant literature. In Section 3 we describe our model, and in Section 4 the experimental setup used in this work. Section 5 presents and discusses the results. 
Finally, in Section 6 we summarize our work and suggest directions for future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In recent years, with the advances in computational power and in the ability of computers to process massive amounts of data, algorithmic trading has emerged as a strong trend in investment management (Ruta, 2014) . This, combined with the advances in the fields of machine learning and natural language processing (NLP), has also been pushing the use of unstructured text data as a source of information for investment strategies (Fisher et al., 2016) .", "cite_spans": [ { "start": 201, "end": 213, "text": "(Ruta, 2014)", "ref_id": "BIBREF33" }, { "start": 432, "end": 453, "text": "(Fisher et al., 2016)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Event-based Trading", "sec_num": "2" }, { "text": "The area of NLP with the biggest influence on stock market prediction so far has been sentiment analysis, or opinion mining (Pang et al., 2008) . Earlier work by Tetlock (2007) used sentiment analysis to analyze the correlation between sentiment in news articles and market prices, concluding that media pessimism may affect both market prices and trading volume. Similarly, Bollen et al. (2011) used a system to measure collective mood through Twitter feeds and showed it to be highly predictive of the Dow Jones Industrial Average closing values. 
Following these results, other work has also used social media information for stock market forecasting (Nguyen et al., 2015; Oliveira et al., 2017, for example) .", "cite_spans": [ { "start": 124, "end": 143, "text": "(Pang et al., 2008)", "ref_id": "BIBREF28" }, { "start": 162, "end": 176, "text": "Tetlock (2007)", "ref_id": "BIBREF36" }, { "start": 647, "end": 668, "text": "(Nguyen et al., 2015;", "ref_id": null }, { "start": 669, "end": 704, "text": "Oliveira et al., 2017, for example)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Event-based Trading", "sec_num": "2" }, { "text": "With respect to direct stock price forecasting from news articles, many systems based on feature selection have been proposed in the literature. Schumaker et al. (2012) built a system to evaluate the sentiment in financial news articles using a Support Vector Regression learner with features extracted from noun phrases and scored on a positive/negative subjectivity scale, but the results had limited success. Yu et al. (2013) achieved better accuracy with a selection mechanism based on a contextual entropy model, which expanded a set of seed words by discovering similar emotion words and their corresponding intensities from online stock market news articles. Hagenau et al. (2013) also achieved good results by applying Chi-square and Binormal separation feature selection with n-gram features. As with all sentiment analysis, scope of negation can be an issue: Pr\u00f6llochs et al. (2016) recently proposed a reinforcement learning method to predict negation scope and showed that it improved accuracy on a dataset from the financial news domain.", "cite_spans": [ { "start": 413, "end": 429, "text": "Yu et al. (2013)", "ref_id": "BIBREF38" }, { "start": 666, "end": 687, "text": "Hagenau et al. 
(2013)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Event-based Trading", "sec_num": "2" }, { "text": "A di\u21b5erent approach to incorporate news into stock trading strategies was proposed by Nuij et al. (2014) , which used an evolutionary algorithm to combine trading rules using technical indicators and events extracted from news with expert-defined impact scores. While far from using an optimal way to extract information from financial text data their results concluded that the news events were a component of optimal trading strategies.", "cite_spans": [ { "start": 86, "end": 104, "text": "Nuij et al. (2014)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Event-based Trading", "sec_num": "2" }, { "text": "As elsewhere in NLP, deep learning methods have been used to tackle financial market trading. A key approach that has informed the model and evaluation framework of this paper is that of Ding et al. (2014) , which used a two-layer feed forward neural network as well as a linear SVM, treating the question of whether stocks would rise or fall as a classification problem; they found that the deep learning model had a higher accuracy. They also compared bag-of-words as input with structured events extracted from financial news via open information extraction (open IE), with the structured input performing better. They found that prediction accuracy was better for the following day's price movement than for the following week, which was in turn better than the following year, as expected.", "cite_spans": [ { "start": 187, "end": 205, "text": "Ding et al. (2014)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Event-based Trading", "sec_num": "2" }, { "text": "In their subsequent work, Ding et al. (2015) used a neural tensor network to learn embeddings of both words and structured events as inputs to their prediction models. 
They then applied a multichannel deep convolutional network, the channels corresponding to events at different timescales, to predict changes in the Standard & Poor's 500 stock (S&P 500) index and in individual stock prices. This work was followed by Vargas et al. (2017), who combined recurrent and convolutional layers with pre-trained word vectors to also predict changes to the S&P 500 index. The architecture here was also multichannel, and incorporated a technical analysis 1 input channel. The results from both pieces of work outperformed the earlier manual feature engineering approaches.", "cite_spans": [ { "start": 26, "end": 44, "text": "Ding et al. (2015)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Event-based Trading", "sec_num": "2" }, { "text": "To the best of our knowledge, character-level sequence modeling has not been applied to stock price forecasting so far; neither has the use of language model pre-training. We note that the event models of Ding et al. (2014) and Ding et al. (2015) make use of generalization and back-off techniques to deal with data sparsity in terms of named entities etc., which, as mentioned earlier, character-level representations could help address. Also, character-level inputs are potentially complementary to other sorts such as word-level inputs or event representations, in particular with the multichannel architectures used for the work described above: research such as that of Kim (2014) has shown that multiple input representations can be usefully combined, and work using this kind of model such as Ruder et al. (2016) has specifically done this for character-level and word-level or other inputs. In this work, we aim to investigate whether this kind of character-level input may capture useful information for stock price prediction.", "cite_spans": [ { "start": 204, "end": 222, "text": "Ding et al. (2014)", "ref_id": "BIBREF3" }, { "start": 227, "end": 245, "text": "Ding et al. 
(2015)", "ref_id": "BIBREF4" }, { "start": 669, "end": 679, "text": "Kim (2014)", "ref_id": "BIBREF11" }, { "start": 793, "end": 812, "text": "Ruder et al. (2016)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Event-based Trading", "sec_num": "2" }, { "text": "Following Ding et al. 2014, we have a twopart model. The first builds a representation for the input, which for us is the characterlevel language model. The second is the recurrent neural network used for the prediction, a classifier that takes the input and predicts whether the price will rise or fall in the chosen timeframe. Both models process text as a sequence of UTF-8 encoded bytes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model design and Training Details", "sec_num": "3" }, { "text": "Existing pre-trained embeddings typically come from general domains (Google News,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural language model", "sec_num": "3.1" }, { "text": "Fundamental analysis looks at fundamental properties of companies (e.g. earnings) to predict stock price movements; the previously described work in this section could be seen as carrying out a kind of fundamental analysis based on information from news reports. Technical analysis looks at past price movements as a guide to future ones. Wikipedia, etc), but these word embeddings often fail to capture rich domain specific vocabularies. We therefore train our own embeddings on financial domain news text consisting of news articles from Reuters an Bloomberg. The data is further described in Section 4. For the language model we used a character embedding with 256 units followed by a single layer LSTM (Hochreiter and Schmidhuber, 1997 ) with 1024 units. The characters are first encoded as bytes to simplify the embedding look-up process. 
The model looks up the corresponding character embedding, then updates its hidden state and predicts a probability distribution over the next possible byte. Individual text paragraphs are prepended with <s> to simulate a starting token and appended with <\\s> to simulate an end token. Figure 1a shows a representation of this network.", "cite_spans": [ { "start": 706, "end": 739, "text": "(Hochreiter and Schmidhuber, 1997", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 1130, "end": 1139, "text": "Figure 1a", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "1", "sec_num": null }, { "text": "The model was trained for 10 epochs on mini-batches of 256 subsequences of length 256. The character embeddings and the LSTM weights are then saved and used to initialize the first two layers of the deep neural network used for classification. The model is trained with stochastic gradient descent (SGD).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1", "sec_num": null }, { "text": "The second neural network has the same two layers as the language model, but with one additional fully connected layer with 512 units using a Leaky ReLU activation (Maas et al., 2013) . Only the last output of the LSTM layer is connected to the fully connected layer, the rationale being that this final state should encode a full representation of the text.", "cite_spans": [ { "start": 164, "end": 183, "text": "(Maas et al., 2013)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "RNN for Stock Prediction", "sec_num": "3.2" }, { "text": "After the embedding look-up and hidden state update, the model passes through the fully connected layer and then predicts the probability of a positive price change for the stock. This model is trained with Adam (Kingma and Ba, 2014) for 50 epochs. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RNN for Stock Prediction", "sec_num": "3.2" }, { "text": "Data We evaluated our model on a dataset of financial news collected from Reuters and Bloomberg over the period from October 2006 to November 2013. This dataset was made available by Ding et al. (2014) . Stock price data for all S&P 500 companies and for the S&P 500 index were obtained from Thomson Reuters Tick History. 2 Following Radinsky et al. 2012and Ding et al. (2014) we focus on the news headlines instead of the full content of the news articles for prediction since they found it produced better results. With this data we tested price response to news releases and daily responses, both shortly after ('intraday') and at end of day ('interday') as described below, and both for the stocks mentioned in the news article and for the index. Summary statistics of the data are shown in Table 1 .", "cite_spans": [ { "start": 183, "end": 201, "text": "Ding et al. (2014)", "ref_id": "BIBREF3" }, { "start": 358, "end": 376, "text": "Ding et al. (2014)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 795, "end": 802, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For intraday prediction, we filtered the news that contained only the name of one company belonging to the S&P 500 index and conducted our experiments on predicting whether the last price after one hour would be higher than the first price after the news release, using the timestamp for the news release. We also tested the S&P 500 index in the same time window.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "In the interday prediction we used a setup similar to Ding et al. (2015) and Vargas et al. 
(2017) , in which we concatenated all news articles from the same company on each day and predicted whether the closing price on day t + 1 would be higher than the closing price on day t, and similarly for the S&P 500 index.", "cite_spans": [ { "start": 54, "end": 72, "text": "Ding et al. (2015)", "ref_id": "BIBREF4" }, { "start": 77, "end": 97, "text": "Vargas et al. (2017)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Models We compare the model described in Section 3 with several baselines. For the other work using the same dataset (Ding et al., 2015; Vargas et al., 2017) , we give the results from the respective papers; we use the same experimental setup as they did. These models do not have intraday results, as the authors of those papers did not have stock data at more fine-grained intervals than daily. Only the models presented in Ding et al. (2015) have results for individual stocks, in addition to the S&P 500 Index.", "cite_spans": [ { "start": 117, "end": 136, "text": "(Ding et al., 2015;", "ref_id": "BIBREF4" }, { "start": 137, "end": 157, "text": "Vargas et al., 2017)", "ref_id": "BIBREF37" }, { "start": 425, "end": 443, "text": "Ding et al. (2015)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We also reimplemented the model used by Luss and d'Aspremont (2015) , which was a competitive baseline for Ding et al. (2015) . In this model bags-of-words are used to represent the news documents and Support Vector Machines (SVMs) are used for prediction. We thus have both interday and intraday results for this model.", "cite_spans": [ { "start": 40, "end": 67, "text": "Luss and d'Aspremont (2015)", "ref_id": "BIBREF18" }, { "start": 107, "end": 125, "text": "Ding et al. 
(2015)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "In the results, the following notation identifies each model: \u2022 BW-SVM: bag-of-words and support machines (SVMs) prediction model (Luss and d'Aspremont, 2015) .", "cite_spans": [ { "start": 130, "end": 158, "text": "(Luss and d'Aspremont, 2015)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "\u2022 E-NN: structured events tuple input and standard neural network prediction model (Ding et al., 2014) .", "cite_spans": [ { "start": 83, "end": 102, "text": "(Ding et al., 2014)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "\u2022 WB-CNN: sum of each word in a document as input and CNN prediction model (Ding et al., 2015) .", "cite_spans": [ { "start": 75, "end": 94, "text": "(Ding et al., 2015)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "\u2022 EB-CNN: event embedding prediction model (Ding et al., 2015) .", "cite_spans": [ { "start": 43, "end": 62, "text": "(Ding et al., 2015)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Following Lavrenko et al. (2000) and Ding et al. (2015) we also test the profitability of our proposed model. We follow a slightly di\u21b5erent strategy, though. As in Ding et al. (2015) we perform a market simulation considering the behavior of a fictitious trader. This trader will use the predictions of the model to invest $10,000 worth in a stock if the model indicates the price will rise and will hold the position until the end of the current session, selling at the closing price. The same strategy is used for short-selling if the model indicates that an individual stock price will fall. Di\u21b5erently from Ding et al. 
(2015) we do not consider profit-taking behavior in our simulation. Rather, we plot the behavior to visualize what happens over time, instead of presenting a single aggregate number. For this simulation we considered only the predictions on a portfolio consisting of the S&P 500 Index constituent companies. We compare the results of this strategy with the S&P 500 Index performance over the same period.", "cite_spans": [ { "start": 10, "end": 32, "text": "Lavrenko et al. (2000)", "ref_id": "BIBREF16" }, { "start": 37, "end": 55, "text": "Ding et al. (2015)", "ref_id": "BIBREF4" }, { "start": 164, "end": 182, "text": "Ding et al. (2015)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Language Model We first look at the quality of the representations learned by the character language model. Considering it was trained exclusively on a dataset of financial news, we wondered how well this model would be able to reproduce the dependencies present in the data, such as the timing of events and currency information. The language model seemed capable of reproducing these dependencies. In Table 2 we show some sample text generated by the language model. While the generated text is semantically incoherent, the model appears to reproduce to some extent the grammatical structure of the language, as well as to capture the entities present in the training dataset and the structure of numerical data. Table 3 shows the results of S&P 500 Index prediction on the test dataset, in terms of accuracy of predicting stock price movement, while Table 4 shows the test results of individual company predictions. Similarly to Ding et al. (2015) , individual stock prediction performs better than index prediction. 
Overall, our character-level language model pre-training performs at least as well as all of the other models with the exception of EB-CNN, but with the advantage of being substantially simpler to implement, as it does not require a module for modelling events. In general, while the other proposed models use more complex architectures and external features such as technical indicators and structured event detection, our approach relies only on language model pre-training.", "cite_spans": [ { "start": 977, "end": 995, "text": "Ding et al. (2015)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 415, "end": 422, "text": "Table 2", "ref_id": null }, { "start": 737, "end": 744, "text": "Table 3", "ref_id": "TABREF1" }, { "start": 898, "end": 905, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5" }, { "text": "The higher performance of the EB-CNN architecture of Ding et al. (2015) is likely to be due to the neural tensor network component that takes as input word embeddings and returns a representation of events; this provides a boost of 3+% over their comparable CNN model that does not explicitly incorporate events (WB-CNN in our table). This event component is potentially compatible with our approach; they could be combined, for example, by feeding our character-based input to an event component, or as noted earlier via multi-channel inputs, along the lines of Kim (2014) . In Figure 2 we report the results of the market simulation. Overall, the model is able to outperform the index consistently, despite having greater variance. While the simulation does not consider trading frictions such as transaction costs and market impact, we believe these results highlight the viability of the strategy. Exploration of advanced market microstructure implications is beyond the scope of this paper. 
Efficient Markets Hypothesis One interesting aspect of these results is the superior performance of the daily prediction over intraday prediction.", "cite_spans": [ { "start": 53, "end": 71, "text": "Ding et al. (2015)", "ref_id": "BIBREF4" }, { "start": 563, "end": 573, "text": "Kim (2014)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 576, "end": 584, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Model Accuracy", "sec_num": null }, { "text": "In terms of what this might suggest for the EMH, Malkiel (2003) notes:", "cite_spans": [ { "start": 44, "end": 63, "text": "EMH, Malkiel (2003)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Market Simulation", "sec_num": null }, { "text": "It was generally believed that securities markets were extremely efficient in reflecting information about individual stocks and about the stock market as a whole. The accepted view was that when information arises, the news spreads very quickly and is incorporated into the prices of securities without delay.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Market Simulation", "sec_num": null }, { "text": "In fact, in the original formulation of the EMH, Fama (1970) remarks that \"at any time prices fully reflect all available information\" [italics added], implying instantaneous incorporation of information into prices. Other work such as Grossman and Stiglitz (1980) has argued that there are informational inefficiencies in the market that lead to delays in that information being incorporated into prices. 
Malkiel (2003) reviews some of the reasons for markets underreacting to new information, which include judgement biases by traders, such as conservatism (Edwards, 1968) , \"the Table 2 : Random samples from the character language model Copper for the region will increase the economy as rising inflation to recover from the property demand proposals for the region's largest economy and a share price of 10 percent of the nation's bonds to contain the company to spend as much as $1.3 billion to $1.2 billion in the same period a year earlier.<\\s> (Reuters) -The Bank of America Corp ( NBA.N ) said on Wednesday as proposals are seeking to be completed by the end of the year, the biggest shareholder of the stock of a statement to buy the company's casino and the country's biggest economy. \"The U.S. is a way that the credit crisis will be a proposal to get a strong results of the budget deficit in the next month,\" said Toyota Motor Chief Executive Officer Tom Berry said in a telephone interview.<\\s> The U.S. is considering a second straight month in the U.S. and Europe to report the stock of the nation's currency and the previous consecutive month. The company will sell 4.5 billion euros ($3.6 billion) of bonds in the first quarter of 2012, according to the median estimate of analysts surveyed by Bloomberg.<\\s> Figure 2 : Equity plot of trading using the proposed event-based strategy. In terms of looking at the effect of news announcements, there historically haven't been the tools to analyse vast quantities of text to evaluate effects on stock prices. With deep learning, and the online availability of stock prices at fine-grained intervals, it is now possible to look empirically at how long it takes information to be incorporated by assessing how predictable stock prices are as a function of news announcements. 
Previous work discussed in Section 2 observed that prediction at better-than-chance levels remains possible for horizons of up to a year, though most strongly at shorter horizons. However, preliminary intraday results from our two models show that predictability does not decrease monotonically: information is typically incorporated after the first hour rather than within it.", "cite_spans": [ { "start": 49, "end": 60, "text": "Fama (1970)", "ref_id": "BIBREF6" }, { "start": 236, "end": 264, "text": "Grossman and Stiglitz (1980)", "ref_id": "BIBREF8" }, { "start": 404, "end": 418, "text": "Malkiel (2003)", "ref_id": "BIBREF20" }, { "start": 549, "end": 564, "text": "(Edwards, 1968)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 572, "end": 579, "text": "Table 2", "ref_id": null }, { "start": 1726, "end": 1734, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Market Simulation In", "sec_num": null }, { "text": "This paper presented the use of a simple LSTM neural network with character-level embeddings for stock market forecasting using only financial news as predictors. Our results suggest that character-level embeddings are promising and competitive with more complex models which use technical indicators and event extraction methods in addition to the news articles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Character embedding models are simpler and more memory efficient than word embedding models, and they also retain sub-word information. 
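This difference can be illustrated with a toy comparison (the two-sentence corpus below is invented for illustration): the character vocabulary is small and effectively closed, so test-time symbols are rarely unseen, while the word vocabulary grows with the corpus and leaves gaps at test time.

```python
# Toy illustration (invented corpus): character vocabularies are small and
# closed, so unseen test tokens are rare; word vocabularies leave gaps.

train = "the bank said quarterly profit and revenue rose"
test = "the bank said profit fell"

word_vocab = set(train.split())
char_vocab = set(train)

# Out-of-vocabulary items in the test text under each tokenization.
oov_words = {w for w in test.split() if w not in word_vocab}  # {'fell'}
oov_chars = {c for c in test if c not in char_vocab}          # set()
```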
With character embeddings, the risk of encountering unknown tokens in the test set is reduced, since data sparsity is much lower than with word embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In future work we plan to test character embeddings with more complex architectures, and possibly to add other sources of information to create richer feature sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In addition, while previous work has found that including the body text of the news performs worse than using just the headline, there may be useful information to extract from the body text, perhaps along the lines of Pang and Lee (2004), which improves sentiment analysis results by snipping out irrelevant text using a graph-theoretic minimum cut approach.", "cite_spans": [ { "start": 213, "end": 232, "text": "Pang and Lee (2004)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Other directions include predicting price movements at a range of time horizons, in order to gauge empirically how quickly information is absorbed in the market, and to relate this to the finance literature on the topic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "https://github.com/philipperemy/financial-news-dataset", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the Capital Markets CRC for providing financial support for this research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Representation learning: A review and new perspectives", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { 
"first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" } ], "year": 2013, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "35", "issue": "8", "pages": "1798--1828", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(8):1798-1828.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Twitter mood predicts the stock market", "authors": [ { "first": "Johan", "middle": [], "last": "Bollen", "suffix": "" }, { "first": "Huina", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Zeng", "suffix": "" } ], "year": 2011, "venue": "Journal of Computational Science", "volume": "2", "issue": "1", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011. Twitter mood predicts the stock market. Journal of Computational Science 2(1):1-8.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Semi-supervised sequence learning", "authors": [ { "first": "M", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Dai", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "28", "issue": "", "pages": "3079--3087", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, Curran Associates, Inc., pages 3079-3087. 
http://papers.nips.cc/paper/5949-semi-supervised-sequence-learning.pdf.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Using structured events to predict stock price movement: An empirical investigation", "authors": [ { "first": "Xiao", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Junwen", "middle": [], "last": "Duan", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1415--1425", "other_ids": { "DOI": [ "10.3115/v1/D14-1148" ] }, "num": null, "urls": [], "raw_text": "Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2014. Using structured events to predict stock price movement: An empirical investigation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 1415-1425. https://doi.org/10.3115/v1/D14-1148.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Deep learning for event-driven stock prediction", "authors": [ { "first": "Xiao", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Junwen", "middle": [], "last": "Duan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI)", "volume": "", "issue": "", "pages": "2327--2333", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2015. Deep learning for event-driven stock prediction. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI). 
pages 2327-2333.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Conservatism in human information processing", "authors": [ { "first": "W", "middle": [], "last": "Edwards", "suffix": "" } ], "year": 1968, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Edwards. 1968. Conservatism in human information processing. In B. Kleinmutz, editor, Formal Representation of Human Judgement, Wiley, New York.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Efficient Capital Markets: A Review of Theory and Empirical Work", "authors": [ { "first": "Eugene", "middle": [], "last": "Fama", "suffix": "" } ], "year": 1970, "venue": "Journal of Finance", "volume": "25", "issue": "", "pages": "383--417", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Fama. 1970. Efficient Capital Markets: A Review of Theory and Empirical Work. Journal of Finance 25:383-417.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Natural language processing in accounting, auditing and finance: a synthesis of the literature with a roadmap for future research. Intelligent Systems in Accounting", "authors": [ { "first": "E", "middle": [], "last": "Ingrid", "suffix": "" }, { "first": "Margaret", "middle": [ "R" ], "last": "Fisher", "suffix": "" }, { "first": "Mark", "middle": [ "E" ], "last": "Garnsey", "suffix": "" }, { "first": "", "middle": [], "last": "Hughes", "suffix": "" } ], "year": 2016, "venue": "Management", "volume": "23", "issue": "3", "pages": "157--214", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ingrid E Fisher, Margaret R Garnsey, and Mark E Hughes. 2016. Natural language processing in accounting, auditing and finance: a synthesis of the literature with a roadmap for future research. 
Intelligent Systems in Accounting, Finance and Management 23(3):157-214.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "On the Impossibility of Informationally Efficient Markets", "authors": [ { "first": "J", "middle": [], "last": "Sanford", "suffix": "" }, { "first": "Joseph", "middle": [ "E" ], "last": "Grossman", "suffix": "" }, { "first": "", "middle": [], "last": "Stiglitz", "suffix": "" } ], "year": 1980, "venue": "American Economic Review", "volume": "70", "issue": "", "pages": "393--408", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanford J. Grossman and Joseph E. Stiglitz. 1980. On the Impossibility of Informationally Efficient Markets. American Economic Review 70:393-408.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Automated news reading: Stock price prediction based on financial news using context-capturing features", "authors": [ { "first": "Michael", "middle": [], "last": "Hagenau", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Liebmann", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Neumann", "suffix": "" } ], "year": 2013, "venue": "Decision Support Systems", "volume": "55", "issue": "3", "pages": "685--697", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Hagenau, Michael Liebmann, and Dirk Neumann. 2013. Automated news reading: Stock price prediction based on financial news using context-capturing features. Decision Support Systems 55(3):685-697.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. 
Neural Computation 9(8):1735-1780.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": { "DOI": [ "10.3115/v1/D14-1181" ] }, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751. https://doi.org/10.3115/v1/D14-1181.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Character-aware neural language models", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "David", "middle": [], "last": "Sontag", "suffix": "" }, { "first": "Alexander M", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "2741--2749", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 2741-2749.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 3rd International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. 
In Proceedings of the 3rd International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Skip-thought vectors", "authors": [ { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Yukun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "R", "middle": [], "last": "Ruslan", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Urtasun", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "", "middle": [], "last": "Fidler", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems 28", "volume": "", "issue": "", "pages": "3294--3302", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28. Curran Associates, Inc., pages 3294-3302. 
http://papers.nips.cc/paper/5950-skip-thought-vectors.pdf.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Ask me anything: Dynamic memory networks for natural language processing", "authors": [ { "first": "Ankit", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Ozan", "middle": [], "last": "Irsoy", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Ondruska", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Ishaan", "middle": [], "last": "Gulrajani", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Romain", "middle": [], "last": "Paulus", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2016, "venue": "Proceedings of The 33rd International Conference on Machine Learning", "volume": "48", "issue": "", "pages": "1378--1387", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In Maria Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning. PMLR, New York, New York, USA, volume 48 of Proceedings of Machine Learning Research, pages 1378-1387. 
http://proceedings.mlr.press/v48/kumar16.html.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Language models for financial news recommendation", "authors": [ { "first": "Victor", "middle": [], "last": "Lavrenko", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Schmill", "suffix": "" }, { "first": "Dawn", "middle": [], "last": "Lawrie", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Ogilvie", "suffix": "" }, { "first": "David", "middle": [], "last": "Jensen", "suffix": "" }, { "first": "James", "middle": [], "last": "Allan", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Ninth International Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "389--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor Lavrenko, Matt Schmill, Dawn Lawrie, Paul Ogilvie, David Jensen, and James Allan. 2000. Language models for financial news recommendation. In Proceedings of the Ninth International Conference on Information and Knowledge Management. ACM, pages 389-396.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Event-driven trading and the \"new news\"", "authors": [ { "first": "David", "middle": [], "last": "Leinweber", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Sisk", "suffix": "" } ], "year": 2011, "venue": "The Journal of Portfolio Management", "volume": "38", "issue": "1", "pages": "110--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Leinweber and Jacob Sisk. 2011. Event-driven trading and the \"new news\". 
The Journal of Portfolio Management 38(1):110-124.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Predicting abnormal returns from news using text classification", "authors": [ { "first": "Ronny", "middle": [], "last": "Luss", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "", "suffix": "" } ], "year": 2015, "venue": "Quantitative Finance", "volume": "15", "issue": "6", "pages": "999--1012", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronny Luss and Alexandre d'Aspremont. 2015. Predicting abnormal returns from news using text classification. Quantitative Finance 15(6):999-1012.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Rectifier nonlinearities improve neural network acoustic models", "authors": [ { "first": "L", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "", "middle": [], "last": "Maas", "suffix": "" }, { "first": "Y", "middle": [], "last": "Awni", "suffix": "" }, { "first": "Andrew Y", "middle": [], "last": "Hannun", "suffix": "" }, { "first": "", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the ICML Workshop on Deep Learning for Audio, Speech, and Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. 2013. Rectifier nonlinearities improve neural network acoustic models. In Proceedings of the ICML Workshop on Deep Learning for Audio, Speech, and Language Processing.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The efficient market hypothesis and its critics", "authors": [ { "first": "G", "middle": [], "last": "Burton", "suffix": "" }, { "first": "", "middle": [], "last": "Malkiel", "suffix": "" } ], "year": 2003, "venue": "The Journal of Economic Perspectives", "volume": "17", "issue": "1", "pages": "59--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burton G Malkiel. 2003. 
The efficient market hypothesis and its critics. The Journal of Economic Perspectives 17(1):59-82.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR abs/1301.3781.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems 26", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26. Curran Associates, Inc., pages 3111-3119. 
http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Sentiment analysis on social media for stock movement prediction", "authors": [], "year": 2015, "venue": "Expert Systems with Applications", "volume": "42", "issue": "24", "pages": "9603--9611", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thien Hai Nguyen, Kiyoaki Shirai, and Julien Velcin. 2015. Sentiment analysis on social media for stock movement prediction. Expert Systems with Applications 42(24):9603-9611.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "An automated framework for incorporating news into stock trading strategies", "authors": [ { "first": "Wijnand", "middle": [], "last": "Nuij", "suffix": "" }, { "first": "Viorel", "middle": [], "last": "Milea", "suffix": "" }, { "first": "Frederik", "middle": [], "last": "Hogenboom", "suffix": "" }, { "first": "Flavius", "middle": [], "last": "Frasincar", "suffix": "" }, { "first": "Uzay", "middle": [], "last": "Kaymak", "suffix": "" } ], "year": 2014, "venue": "IEEE Transactions on Knowledge and Data Engineering", "volume": "26", "issue": "4", "pages": "823--835", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wijnand Nuij, Viorel Milea, Frederik Hogenboom, Flavius Frasincar, and Uzay Kaymak. 2014. An automated framework for incorporating news into stock trading strategies. 
IEEE Transactions on Knowledge and Data Engineering 26(4):823-835.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Information sources, news, and rumors in financial markets: Insights into the foreign exchange market", "authors": [ { "first": "Thomas", "middle": [], "last": "Oberlechner", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Hocking", "suffix": "" } ], "year": 2004, "venue": "Journal of Economic Psychology", "volume": "25", "issue": "3", "pages": "407--424", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Oberlechner and Sam Hocking. 2004. Information sources, news, and rumors in financial markets: Insights into the foreign exchange market. Journal of Economic Psychology 25(3):407-424.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The impact of microblogging data for stock market prediction: Using twitter to predict returns, volatility, trading volume and survey sentiment indices", "authors": [ { "first": "Nuno", "middle": [], "last": "Oliveira", "suffix": "" }, { "first": "Paulo", "middle": [], "last": "Cortez", "suffix": "" }, { "first": "Nelson", "middle": [], "last": "Areal", "suffix": "" } ], "year": 2017, "venue": "Expert Systems with Applications", "volume": "73", "issue": "", "pages": "125--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nuno Oliveira, Paulo Cortez, and Nelson Areal. 2017. The impact of microblogging data for stock market prediction: Using twitter to predict returns, volatility, trading volume and survey sentiment indices. 
Expert Systems with Applications 73:125-144.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume", "volume": "", "issue": "", "pages": "271--278", "other_ids": { "DOI": [ "10.3115/1218955.1218990" ] }, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2004. A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume. Barcelona, Spain, pages 271-278. https://doi.org/10.3115/1218955.1218990.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Opinion mining and sentiment analysis", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2008, "venue": "Foundations and Trends in Information Retrieval", "volume": "2", "issue": "1-2", "pages": "1--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang, Lillian Lee, et al. 2008. Opinion mining and sentiment analysis. 
Foundations and Trends in Information Retrieval 2(1-2):1-135.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Negation scope detection in sentiment analysis: Decision support for news-driven trading", "authors": [ { "first": "Nicolas", "middle": [], "last": "Pr\u00f6llochs", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Feuerriegel", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Neumann", "suffix": "" } ], "year": 2016, "venue": "Decision Support Systems", "volume": "88", "issue": "", "pages": "67--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicolas Pr\u00f6llochs, Stefan Feuerriegel, and Dirk Neumann. 2016. Negation scope detection in sentiment analysis: Decision support for news-driven trading. Decision Support Systems 88:67-75.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Learning to generate reviews and discovering sentiment", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "J\u00f3zefowicz", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Rafal J\u00f3zefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. CoRR abs/1704.01444. 
http://arxiv.org/abs/1704.01444.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Learning causality for news events prediction", "authors": [ { "first": "Kira", "middle": [], "last": "Radinsky", "suffix": "" }, { "first": "Sagie", "middle": [], "last": "Davidovich", "suffix": "" }, { "first": "Shaul", "middle": [], "last": "Markovitch", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 21st International Conference on World Wide Web", "volume": "", "issue": "", "pages": "909--918", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kira Radinsky, Sagie Davidovich, and Shaul Markovitch. 2012. Learning causality for news events prediction. In Proceedings of the 21st International Conference on World Wide Web. ACM, pages 909-918.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Character-level and Multi-channel Convolutional Neural Networks for Large-scale Authorship Attribution", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Parsa", "middle": [], "last": "Ghaffari", "suffix": "" }, { "first": "John", "middle": [ "G" ], "last": "Breslin", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder, Parsa Ghaffari, and John G. Breslin. 2016. Character-level and Multi-channel Convolutional Neural Networks for Large-scale Authorship Attribution. CoRR abs/1609.06686.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Automated trading with machine learning on big data", "authors": [ { "first": "Dymitr", "middle": [], "last": "Ruta", "suffix": "" } ], "year": 2014, "venue": "IEEE International Congress on Big Data. IEEE", "volume": "", "issue": "", "pages": "824--830", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dymitr Ruta. 2014. Automated trading with machine learning on big data. In IEEE International Congress on Big Data. 
IEEE, pages 824-830.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Evaluating sentiment in financial news articles", "authors": [ { "first": "P", "middle": [], "last": "Robert", "suffix": "" }, { "first": "Yulei", "middle": [], "last": "Schumaker", "suffix": "" }, { "first": "Chun-Neng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hsinchun", "middle": [], "last": "Huang", "suffix": "" }, { "first": "", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2012, "venue": "Decision Support Systems", "volume": "53", "issue": "3", "pages": "458--464", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert P Schumaker, Yulei Zhang, Chun-Neng Huang, and Hsinchun Chen. 2012. Evaluating sentiment in financial news articles. Decision Support Systems 53(3):458-464.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1631--1642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. 
Association for Computational Linguistics, pages 1631-1642. https://aclanthology.info/pdf/D/D13/D13-1170.pdf.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Giving content to investor sentiment: The role of media in the stock market", "authors": [ { "first": "C", "middle": [], "last": "Paul", "suffix": "" }, { "first": "", "middle": [], "last": "Tetlock", "suffix": "" } ], "year": 2007, "venue": "The Journal of Finance", "volume": "62", "issue": "3", "pages": "1139--1168", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul C Tetlock. 2007. Giving content to investor sentiment: The role of media in the stock market. The Journal of Finance 62(3):1139-1168.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Deep learning for stock market prediction from financial news articles", "authors": [ { "first": "R", "middle": [], "last": "Manuel", "suffix": "" }, { "first": "Beatriz", "middle": [], "last": "Vargas", "suffix": "" }, { "first": "Alexandre G", "middle": [], "last": "Slp De Lima", "suffix": "" }, { "first": "", "middle": [], "last": "Evsukoff", "suffix": "" } ], "year": 2017, "venue": "IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)", "volume": "", "issue": "", "pages": "60--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manuel R Vargas, Beatriz SLP de Lima, and Alexandre G Evsukoff. 2017. Deep learning for stock market prediction from financial news articles. In IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA). 
IEEE, pages 60-65.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Using a contextual entropy model to expand emotion words and their intensity for the sentiment classification of stock market news", "authors": [ { "first": "Liang-Chih", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jheng-Long", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Pei-Chann", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Hsuan-Shou", "middle": [], "last": "Chu", "suffix": "" } ], "year": 2013, "venue": "Knowledge-Based Systems", "volume": "41", "issue": "", "pages": "89--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang-Chih Yu, Jheng-Long Wu, Pei-Chann Chang, and Hsuan-Shou Chu. 2013. Using a contextual entropy model to expand emotion words and their intensity for the sentiment classification of stock market news. Knowledge-Based Systems 41:89-97.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": "Figure 1b displays this architecture.", "type_str": "figure" }, "FIGREF2": { "uris": null, "num": null, "text": "(a) Network architecture for the language model. At each step, the output of the LSTM layer predicts the probability distribution of the next character. (b) Network architecture for the stock prediction network. Only after the final character of the text is processed is the output of the LSTM used to predict the direction of the stock price.", "type_str": "figure" }, "FIGREF3": { "uris": null, "num": null, "text": "CharB-LSTM (ours): character embedding input and LSTM followed by a fully connected prediction model. \u2022 WI-RCNN: word embedding and technical indicator input with an RCNN prediction model (Vargas et al., 2017). \u2022 SI-RCNN: sentence embedding and technical indicator input with an RCNN prediction model (Vargas et al., 2017).", "type_str": "figure" }, "TABREF0": { "content": "
Data | Training | Validation | Test
Time Interval | 02/10/2006 - 18/06/2012 | 19/06/2012 - 21/02/2013 | 22/02/2013 - 21/11/2013
Documents | 157,033 | 52,344 | 51,476
Total bytes | 736,427,755 | 232,440,500 | 245,771,999
News average per day | 126 | 124 | 124
News average per company | 911 | 878 | 897
", "text": "Statistics of Dataset", "html": null, "num": null, "type_str": "table" }, "TABREF1": { "content": "
Model | Interday | Intraday
BW-SVM | 56.42% | 53.22%
CharB-LSTM (ours) | 63.34% | 59.86%
WI-RCNN | 61.29% | *
SI-RCNN | 63.09% | *
WB-CNN | 61.73% | *
E-NN | 58.94% | *
EB-CNN | 64.21% | *
", "text": "Results of S&P 500 Index prediction", "html": null, "num": null, "type_str": "table" }, "TABREF2": { "content": "
Model | Interday | Intraday
BW-SVM | 58.74% | 54.22%
CharB-LSTM (ours) | 64.74% | 61.68%
WB-CNN | 61.47% | *
EB-CNN | 65.48% | *
", "text": "Results of Individual stock prediction", "html": null, "num": null, "type_str": "table" } } } }
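The CharB-LSTM rows in the tables above refer to the model described in Figures 1a/1b: a character embedding layer feeding an LSTM, with only the final hidden state passed to a fully connected head that predicts the price direction. The following is a minimal numpy sketch of that forward pass; all dimensions, weight initialisations, and the sample headline are illustrative assumptions, not the authors' published configuration, and the language-model pre-training step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical character vocabulary built from a sample headline.
sample = "S&P 500 rallies on earnings news"
vocab = sorted(set(sample))
char_to_id = {c: i for i, c in enumerate(vocab)}

emb_dim, hidden_dim = 8, 16                            # illustrative sizes
E = rng.normal(0, 0.1, (len(vocab), emb_dim))          # character embeddings
W = rng.normal(0, 0.1, (4 * hidden_dim, emb_dim))      # input weights (i, f, g, o stacked)
U = rng.normal(0, 0.1, (4 * hidden_dim, hidden_dim))   # recurrent weights
b = np.zeros(4 * hidden_dim)
w_out = rng.normal(0, 0.1, hidden_dim)                 # fully connected prediction head
b_out = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    """One LSTM cell update for a single character embedding x."""
    z = W @ x + U @ h + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def predict_direction(text):
    """Run all characters through the LSTM; only the final hidden
    state feeds the classifier, as in the Figure 1b description."""
    h = np.zeros(hidden_dim)
    c = np.zeros(hidden_dim)
    for ch in text:
        h, c = lstm_step(E[char_to_id[ch]], h, c)
    return sigmoid(w_out @ h + b_out)   # estimated P(price moves up)

p_up = predict_direction(sample)
print(float(p_up))
```

In the paper's setup, the LSTM weights would first be pre-trained as a next-character language model on the news corpus before the prediction head is attached; here the untrained weights simply demonstrate the data flow.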