{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:53:53.495672Z"
},
"title": "From Stock Prediction to Financial Relevance: Repurposing Attention Weights to Assess News Relevance Without Manual Annotations",
"authors": [
{
"first": "Luciano",
"middle": [
"Del"
],
"last": "Corro",
"suffix": "",
"affiliation": {},
"email": "corrogg@mpi-inf.mpg.de"
},
{
"first": "Johannes",
"middle": [],
"last": "Hoffart",
"suffix": "",
"affiliation": {},
"email": "jhoffart@mpi-inf.mpg.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a method to automatically identify financially relevant news using stock price movements and news headlines as input. The method repurposes the attention weights of a neural network initially trained to predict stock prices to assign a relevance score to each headline, eliminating the need for manually labeled training data. Our experiments on the four most relevant US stock indices and 1.5M news headlines show that the method ranks relevant news highly, positively correlated with the accuracy of the initial stock price prediction task.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a method to automatically identify financially relevant news using stock price movements and news headlines as input. The method repurposes the attention weights of a neural network initially trained to predict stock prices to assign a relevance score to each headline, eliminating the need for manually labeled training data. Our experiments on the four most relevant US stock indices and 1.5M news headlines show that the method ranks relevant news highly, positively correlated with the accuracy of the initial stock price prediction task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Events such as lawsuits, the unveiling of a newly discovered technology, the introduction of new legislation, or previous market movements can have a significant impact on stock prices. A quick and informed reaction to such an event is crucial for financial analysts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Information overload is pervasive in the financial industry, hindering the analysts' ability to incorporate the most relevant events into their decision process. One of the main natural language understanding challenges across many industries is to prioritize incoming information, reducing the risk of missing important events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a novel method to identify relevant financial news. The key insight is that this can be achieved without relying on manually created relevance judgments, instead leveraging the correlation of news events and stock prices. The core idea is to train an attention-based neural network on the stock prediction task, using the price movement as label. The input of the network is a set of events in the form of news headlines (embedded using BERT (Devlin et al., 2019) ), and the output is the price movement of a specific stock index with respect to the previous day (i.e., DOWN, * equal contribution STAY, UP), mediated by an attention layer (Bahdanau et al., 2015) . The layer acts as an input selector, computing the weight for each headline on a given day. These weights are repurposed to score and thus rank news headlines according to financial relevance. As each weight solely depends on the headline itself, we can use it to compare headlines across the entire dataset.",
"cite_spans": [
{
"start": 468,
"end": 489,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 665,
"end": 688,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluated our method on the most prominent US stock indices (S&P500, Dow Jones, Russell 1000, and Nasdaq) and 1.5 million headlines (1994-2010) from Gigaword (Graff et al., 2003) . A first automatic evaluation confirmed a positive correlation between stock prediction accuracy and relevance scores (via the attention weight). In a second, manual evaluation, we labeled 1000 headlines and found that the method ranks relevant events highly: A network trained on the Dow Jones stock index prices, for example, resulted in 89% relevant events in the top 200 ranks, compared to only 19% relevant events in a uniform sample of headlines.",
"cite_spans": [
{
"start": 161,
"end": 181,
"text": "(Graff et al., 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Stock price prediction from news. The feasibility of predicting stock prices from news has been debated (Merello et al., 2018) as the news can affect the price before it is published. In our case this is not an issue as it only matters that the price change is reflected in the news, not the timing; news about price movements are indeed relevant. Multiple approaches explored alternatives to extract price signals from news such as sentiment analysis, semantic parsing, etc (Gid\u00f3falvi, 2001; Schumaker and Chen, 2009; Xie et al., 2013; Li et al., 2014; Peng and Jiang, 2016 ). Here we use contextualized embeddings plus attention to extract those signals. Extraction of financial events. Previous work focused on the explicit representation of events, either in a canonical or semi-canonical way (Ding et al., 2014 (Ding et al., , 2015 (Ding et al., , 2016 Peng and Jiang, 2016; Jacobs et al., 2018) , or non-canonicalized (Ein-Dor et al., 2019; Shi et al., 2019) . Events are represented as structured facts (Ding et al., 2014) , via embeddings (Ding et al., 2015) or keywords (Shi et al., 2019) , and are usually pre-selected based on explicit company mentions (Shi et al., 2019) . Most approaches differ from ours in that events are input for stock prediction as ultimate goal, while we use stock prediction to identify relevant events. Our end-toend approach allows us to automatically select the relevant events avoiding involved preprocessing, compromise on the representation of the event, or the use of an underlying extraction system. Attention as explanation. A debate has erupted around the idea of using the attention mechanism (Bahdanau et al., 2015) to explain output. Serrano and Smith (2019) and Jain and Wallace (2019) concluded that attention weights should not be used to explain a decision, and Wiegreffe and Pinter (2019) developed a set of tests to determine weight consistency. 
In our specific case, we found that the results on the stock price prediction are related to the attention weights performance as relevance scores, and have merit when used for ranking. However, we acknowledge the need to go into deeper analysis to understand score stability in future work.",
"cite_spans": [
{
"start": 104,
"end": 126,
"text": "(Merello et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 475,
"end": 492,
"text": "(Gid\u00f3falvi, 2001;",
"ref_id": "BIBREF6"
},
{
"start": 493,
"end": 518,
"text": "Schumaker and Chen, 2009;",
"ref_id": "BIBREF13"
},
{
"start": 519,
"end": 536,
"text": "Xie et al., 2013;",
"ref_id": "BIBREF18"
},
{
"start": 537,
"end": 553,
"text": "Li et al., 2014;",
"ref_id": "BIBREF10"
},
{
"start": 554,
"end": 574,
"text": "Peng and Jiang, 2016",
"ref_id": "BIBREF12"
},
{
"start": 797,
"end": 815,
"text": "(Ding et al., 2014",
"ref_id": "BIBREF2"
},
{
"start": 816,
"end": 836,
"text": "(Ding et al., , 2015",
"ref_id": "BIBREF3"
},
{
"start": 837,
"end": 857,
"text": "(Ding et al., , 2016",
"ref_id": "BIBREF4"
},
{
"start": 858,
"end": 879,
"text": "Peng and Jiang, 2016;",
"ref_id": "BIBREF12"
},
{
"start": 880,
"end": 900,
"text": "Jacobs et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 924,
"end": 946,
"text": "(Ein-Dor et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 947,
"end": 964,
"text": "Shi et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 1010,
"end": 1029,
"text": "(Ding et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 1047,
"end": 1066,
"text": "(Ding et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 1079,
"end": 1097,
"text": "(Shi et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 1164,
"end": 1182,
"text": "(Shi et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 1641,
"end": 1664,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 1684,
"end": 1708,
"text": "Serrano and Smith (2019)",
"ref_id": "BIBREF14"
},
{
"start": 1816,
"end": 1843,
"text": "Wiegreffe and Pinter (2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The idea is to make use of universal key attention mechanism as in Yang et al. (2016) to learn a headline relevance weight by predicting the stock price movement. Once the network is trained we can use the unnormalized weights of the attention layer as a global relevance score for the news headlines.",
"cite_spans": [
{
"start": 67,
"end": 85,
"text": "Yang et al. (2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
{
"text": "As input we use all daily headlines and their categories. The output is DOWN, STAY, UP with respect to the next trading session open price. The full network is displayed in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 173,
"end": 181,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
{
"text": "Each headline hl 1 , . . . , hl k consisting of a (padded) sequence of N tokens {w i } i=1,...,N , is encoded into vectors {hhl i } i=1,.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
{
"text": "..,N of length 768 via the BERT-base-uncased model pooled output:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
{
"text": "hhl i = BERT(hl i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
{
"text": "Each headline comes with a category label hc 1 , hc 2 , . . . (details in Section 4) embedded into a randomly initialized vector of length 30",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
{
"text": "hhc i = embed(hc i ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
{
"text": "Both vectors are concatenated ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
{
"text": "h i = hhl i \u2295 hhc i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
{
"text": "h p i = FF ELU (h i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
{
"text": "Following Yang et al. (2016) , an attention layer computes normalized weights for each headline in the input day via a universal key,",
"cite_spans": [
{
"start": 10,
"end": 28,
"text": "Yang et al. (2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
{
"text": "H p d := {h p i } i=1,.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
{
"text": "..,k , and aggregates them according to those weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
{
"text": "h a d = Attention(H p d )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
{
"text": "The final label l i (DOWN, STAY, UP) is computed using a feed forward layer with softmax activation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
{
"text": "l i = FF softmax (h a d )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
{
"text": "Every input layer is normalized, and weights are initialized using He. The dropout rate is 0.25. All weights are fine-tuned on the task. We ran on 3 Tesla V100 with a total batch size of 15. The model has a total of \u223c110M parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the attention layer for event scoring",
"sec_num": "3"
},
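The universal-key attention pooling described in this section can be sketched compactly. The following is a minimal NumPy sketch in the style of Yang et al. (2016); the dimensions, variable names, and random toy inputs are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(H, W, b, q):
    """Universal-key attention pooling (Yang et al., 2016 style).
    H: (k, d) projected headline vectors for one day.
    Returns the aggregated day vector, the *unnormalized* per-headline
    scores (repurposed here as relevance scores), and the softmax weights."""
    U = np.tanh(H @ W + b)            # (k, d_a) hidden representation
    scores = U @ q                    # (k,) unnormalized relevance scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()              # softmax over the day's headlines
    day_vec = alpha @ H               # weighted aggregate fed to the classifier
    return day_vec, scores, alpha

d, d_a, k = 8, 6, 5                   # toy sizes; the paper uses 768+30-dim inputs
H = rng.normal(size=(k, d))
W = rng.normal(size=(d, d_a))
b = np.zeros(d_a)
q = rng.normal(size=d_a)
day_vec, scores, alpha = attention_pool(H, W, b, q)
```

Because each unnormalized score depends only on the headline itself (not the other headlines of the day), the scores are comparable across the whole dataset, which is what makes the global ranking possible.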
{
"text": "Dataset. AP headlines of English Gigaword (Graff et al., 2003) and the most prominent US stock indices: S&P500, Dow Jones, Nasdaq, and Russell 1 , totaling 3777 trading sessions (1994-2010) and 1,532,260 headlines, with a daily average of 405.68, a standard deviation of 134.49, a minimum of 1 and a maximum of 1213. News Classification. We trained a classifier on TagMyNews (Vitale et al., 2012) to classify the headlines into 6 categories: 'business', 'entertainment', 'health', 'sci-tech', 'sport', 'us' and 'world'. The input of the BERT based model is a single headline and the output a class score (dropout=0.25, batch size=120, maximum headline length=15 tokens). The best model F1 was 0.85 (20% test size), in line with state-of-the-art (Zeng et al., 2018) . We assigned a single class to each headline (0.5 threshold). Preprocessing. Given resource constraints, we are limited to 115 headlines per day, with a maximum length of 15 tokens. To account for more than 115 headlines, we created stratified subsets on headline categories to generate several data points per day. We discarded days with less than 25 headlines for the four most prominent categories, dropping 511 data points (13.53%), and removed headlines with less than 20 characters. To assign price movement labels, we set thresholds that minimize the distance between the majority and minority class to balance the distributions. We searched for a symmetric threshold between [0.1%, 1%] at 0.1 intervals, see Table 2 for the final values. ",
"cite_spans": [
{
"start": 42,
"end": 62,
"text": "(Graff et al., 2003)",
"ref_id": "BIBREF7"
},
{
"start": 375,
"end": 396,
"text": "(Vitale et al., 2012)",
"ref_id": "BIBREF16"
},
{
"start": 745,
"end": 764,
"text": "(Zeng et al., 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 1482,
"end": 1489,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
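The symmetric-threshold search in the preprocessing step can be sketched as follows. This is an illustrative reconstruction of the stated procedure (pick the threshold that minimizes the gap between the majority and minority class); the function names and toy returns are hypothetical.

```python
from collections import Counter

def label_moves(returns, t):
    # Assign DOWN / STAY / UP with a symmetric threshold t
    # (returns and t as fractions, e.g. 0.003 = 0.3%).
    return ["UP" if r > t else "DOWN" if r < -t else "STAY" for r in returns]

def best_threshold(returns, grid):
    # Pick the t minimizing the gap between the largest and
    # smallest class, balancing the label distribution.
    def gap(t):
        c = Counter(label_moves(returns, t))
        counts = [c.get(lbl, 0) for lbl in ("DOWN", "STAY", "UP")]
        return max(counts) - min(counts)
    return min(grid, key=gap)

# Candidate thresholds 0.1% .. 1.0% in 0.1% steps, as in the paper.
grid = [x / 1000 for x in range(1, 11)]
daily_returns = [0.012, -0.004, 0.001, -0.020, 0.0, 0.006, -0.001, 0.003]
t_star = best_threshold(daily_returns, grid)
```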
{
"text": "The goal is to understand if the attention layer's unnormalized weights can be used to generate meaningful global news relevance scores; we understand meaningful as a score that favors news reflecting stronger price movement signals, either ex-ante or ex-post the price change, as in both cases the news would be relevant; we do not control for endogeneity. We ran two experiments: one to understand which news categories provide better signals, and the second to check if they effectively receive higher scores. We also performed a manual evaluation over the top 200 headlines for each index plus a uniform random sample, totaling 1000 headlines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event relevance",
"sec_num": "4.1"
},
{
"text": "Categories with stronger signals. We ran the network on each news category separately to predict the price movement. We selected the model with the maximum accuracy over 20 epochs. Table 3 shows the results for all categories. 'business' headlines are more informative, achieving the highest accuracy on most of the stock indices. Interestingly, 'sci-tech' news is the best category for Nasdaq, which specializes in technology. However, the top accuracy for this index lags well behind the others. This fact will be reflected in the relevance scores in the following experiment. Table 3 ; we expect 'business' headlines to have a relatively higher score. We trained the network on the entire set of news and used the attention layer to score 271,520 headlines across all categories in the test set. We selected the model with the minimum loss, patience of two epochs. Table 4 shows the results for the top 10, 100, 1,000, and 2,500 headlines. It shows the fraction of business headlines up to that rank, and the increase compared to the fraction of business news in the whole set. For the indices with higher accuracy in the previous experiment (S&P500, Dow Jones, and Russell 1000), the scores significantly skew the distribution at the top ranks towards business news by between 534.65%-471.09% at Rank 10 and between 419.31%-84.14% at Rank 2500. For Nasdaq, with a previously lower performance, the scores do not seem to provide a clear pattern, indicating that the stock prediction performance might reflect the quality of scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 188,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 579,
"end": 586,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 868,
"end": 875,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Event relevance",
"sec_num": "4.1"
},
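The rank-based evaluation above reduces to a precision-at-rank computation over (score, category) pairs. A minimal sketch with an invented toy list, not the paper's data:

```python
def business_at_rank(scored, n):
    # scored: list of (relevance_score, category) pairs.
    # Returns the fraction of 'business' headlines among the top-n
    # by score, and the relative increase over the base rate (in %).
    top = sorted(scored, key=lambda x: -x[0])[:n]
    frac = sum(1 for _, cat in top if cat == "business") / n
    base = sum(1 for _, cat in scored if cat == "business") / len(scored)
    return frac, (frac - base) / base * 100

toy = [(0.9, "business"), (0.8, "business"), (0.7, "world"),
       (0.2, "sport"), (0.1, "business")]
frac, inc = business_at_rank(toy, 2)
```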
{
"text": "Stock Index @Rank 10 @Rank 100 @Rank 1000 @Rank 2500 S&P500 90.00% / +471.09% 91.82% / +482.62% 87.50% / +455.22% 81.30% / +415.88% Dow Jones 85.00% / +439.36% 63.00% / +299.76% 40.95% / +159.84% 29.02% / +84.14% Russell 100 100.00% / +534.54% 92.33% / +485.90% 87.40% / +454.59% 81.84% / +419.31% Nasdaq 0.06% / -61.93% 14.20% / -9.90% 16.46% / +4.45% 16.512% / +4.78% 4.2 Anecdotal data Table 5 shows the top 26 headlines over the whole timespan, ranked using the unnormalized attention layer weights of the model trained for S&P500 with all news categories. The examples show that the model scores market-relevant headlines highly. We see mostly headlines reflecting general market trends. Results for an iconic date, October 3, 2008, in which the US House passed the 2008 bailout show the same trend (Table 6 ). As for a single day, specific news about stock movements are not many, top-ranking has space for other relevant economic or political events.",
"cite_spans": [],
"ref_spans": [
{
"start": 389,
"end": 396,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 804,
"end": 812,
"text": "(Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Event relevance",
"sec_num": "4.1"
},
{
"text": "We labeled the top 200 test set headlines for each index plus 200 uniformly sampled. Two annotators classified them as relevant or non-relevant. In total, there were only 19 (1.9%) discrepancies that were resolved via mutual agreement. Table 7 shows the relevance results and the high inter-annotator agreement. As before, financially relevant news score higher for the best performing indices compared to Nasdaq and the uniform sample. ",
"cite_spans": [],
"ref_spans": [
{
"start": 236,
"end": 243,
"text": "Table 7",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Manual evaluation",
"sec_num": "4.3"
},
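Inter-annotator agreement of the kind reported in Table 7 is commonly quantified with Cohen's kappa; the paper does not specify its agreement statistic, so the following is an assumption, with toy labels from two hypothetical annotators.

```python
def cohens_kappa(a, b):
    # Two-annotator Cohen's kappa over parallel label lists.
    assert len(a) == len(b) and a
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n                       # observed agreement
    labels = set(a) | set(b)
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)    # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Toy binary relevance labels (1 = relevant) from two hypothetical annotators.
ann1 = [1, 0, 1, 1, 0, 1, 0, 0]
ann2 = [1, 0, 1, 0, 0, 1, 0, 0]
kappa = cohens_kappa(ann1, ann2)
```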
{
"text": "We presented an exploratory analysis to rank financially relevant events without manually labeled data. We showed that when a simple neural network is able to extract informative signals from news, the attention layer was able to score higher the most relevant news. Future work needs to focus on a more fine-grained analysis of the data and understanding the stability of the scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "5"
},
{
"text": "https://finance.yahoo.com/quote/ %5EGSPC/history?p=%5EGSPC",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In NAACL-HLT, pages 4171-4186.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Using structured events to predict stock price movement: An empirical investigation",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Junwen",
"middle": [],
"last": "Duan",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1415--1425",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2014. Using structured events to predict stock price movement: An empirical investigation. In EMNLP, pages 1415-1425.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Deep learning for event-driven stock prediction",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Junwen",
"middle": [],
"last": "Duan",
"suffix": ""
}
],
"year": 2015,
"venue": "IJCAI, IJCAI'15",
"volume": "",
"issue": "",
"pages": "2327--2333",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2015. Deep learning for event-driven stock predic- tion. In IJCAI, IJCAI'15, page 2327-2333.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Knowledge-driven event embedding for stock prediction",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Junwen",
"middle": [],
"last": "Duan",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "2133--2142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2016. Knowledge-driven event embedding for stock prediction. In COLING, pages 2133-2142.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Financial event extraction using Wikipedia-based weak supervision",
"authors": [
{
"first": "Liat",
"middle": [],
"last": "Ein-Dor",
"suffix": ""
},
{
"first": "Ariel",
"middle": [],
"last": "Gera",
"suffix": ""
},
{
"first": "Orith",
"middle": [],
"last": "Toledo-Ronen",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Halfon",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Sznajder",
"suffix": ""
},
{
"first": "Lena",
"middle": [],
"last": "Dankin",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bilu",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Katz",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2019,
"venue": "Second Workshop on Economics and Natural Language Processing",
"volume": "",
"issue": "",
"pages": "10--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liat Ein-Dor, Ariel Gera, Orith Toledo-Ronen, Alon Halfon, Benjamin Sznajder, Lena Dankin, Yonatan Bilu, Yoav Katz, and Noam Slonim. 2019. Financial event extraction using Wikipedia-based weak super- vision. In Second Workshop on Economics and Nat- ural Language Processing, pages 10-15.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Using news articles to predict stock price movements",
"authors": [
{
"first": "Gy\u0151z\u0151",
"middle": [],
"last": "Gid\u00f3falvi",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gy\u0151z\u0151 Gid\u00f3falvi. 2001. Using news articles to predict stock price movements.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "English gigaword. Linguistic Data Consortium",
"authors": [
{
"first": "David",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kazuaki",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2003. English gigaword. Linguistic Data Consortium, Philadelphia, 4(1):34.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Economic event detection in company-specific news text",
"authors": [
{
"first": "Gilles",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Els",
"middle": [],
"last": "Lefever",
"suffix": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Hoste",
"suffix": ""
}
],
"year": 2018,
"venue": "First Workshop on Economics and Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gilles Jacobs, Els Lefever, and V\u00e9ronique Hoste. 2018. Economic event detection in company-specific news text. In First Workshop on Economics and Natural Language Processing, pages 1-10.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Attention is not explanation",
"authors": [
{
"first": "Sarthak",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "3543--3556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In NAACL-HLT, pages 3543-3556.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "News impact on stock price return via sentiment analysis. Know.-Based Syst",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Haoran",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jianping",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiaotie",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "69",
"issue": "",
"pages": "14--23",
"other_ids": {
"DOI": [
"10.1016/j.knosys.2014.04.022"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaodong Li, Haoran Xie, Li Chen, Jianping Wang, and Xiaotie Deng. 2014. News impact on stock price return via sentiment analysis. Know.-Based Syst., 69(1):14-23.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Investigating timing and impact of news on the stock market",
"authors": [
{
"first": "S",
"middle": [],
"last": "Merello",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ratto",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Luca",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Cambria",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE International Conference on Data Mining Workshops (ICDMW)",
"volume": "",
"issue": "",
"pages": "1348--1354",
"other_ids": {
"DOI": [
"10.1109/ICDMW.2018.00191"
]
},
"num": null,
"urls": [],
"raw_text": "S. Merello, A. Picasso Ratto, Y. Ma, O. Luca, and E. Cambria. 2018. Investigating timing and impact of news on the stock market. In 2018 IEEE In- ternational Conference on Data Mining Workshops (ICDMW), pages 1348-1354.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Leverage financial news to predict stock price movements using word embeddings and deep neural networks",
"authors": [
{
"first": "Yangtuo",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "374--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangtuo Peng and Hui Jiang. 2016. Leverage financial news to predict stock price movements using word embeddings and deep neural networks. In NAACL- HLT, pages 374-379.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Textual analysis of stock market prediction using breaking financial news: The azfin text system",
"authors": [
{
"first": "Robert",
"middle": [
"P."
],
"last": "Schumaker",
"suffix": ""
},
{
"first": "Hsinchun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM Trans. Inf. Syst",
"volume": "27",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert P. Schumaker and Hsinchun Chen. 2009. Tex- tual analysis of stock market prediction using break- ing financial news: The azfin text system. ACM Trans. Inf. Syst., 27(2).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Is attention interpretable? In ACL",
"authors": [
{
"first": "Sofia",
"middle": [],
"last": "Serrano",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "2931--2951",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In ACL, pages 2931-2951.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deepclue: Visual interpretation of text-based deep stock prediction",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Zhiyang",
"middle": [],
"last": "Teng",
"suffix": ""
},
{
"first": "Le",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Binder",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE TKDE",
"volume": "31",
"issue": "",
"pages": "1094--1108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Shi, Zhiyang Teng, Le Wang, Yue Zhang, and Alexander Binder. 2019. Deepclue: Visual inter- pretation of text-based deep stock prediction. IEEE TKDE, 31:1094-1108.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Classification of short texts by deploying topical annotations",
"authors": [
{
"first": "Daniele",
"middle": [],
"last": "Vitale",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Ferragina",
"suffix": ""
},
{
"first": "Ugo",
"middle": [],
"last": "Scaiella",
"suffix": ""
}
],
"year": 2012,
"venue": "ECIR",
"volume": "",
"issue": "",
"pages": "376--387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniele Vitale, Paolo Ferragina, and Ugo Scaiella. 2012. Classification of short texts by deploying top- ical annotations. In ECIR, page 376-387.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Attention is not not explanation",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Pinter",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "11--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In EMNLP-IJCNLP, pages 11- 20.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semantic frames to predict stock price movement",
"authors": [
{
"first": "Boyi",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [
"J"
],
"last": "Passonneau",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [
"G"
],
"last": "Creamer",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "873--883",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boyi Xie, Rebecca J. Passonneau, Leon Wu, and Ger- m\u00e1n G. Creamer. 2013. Semantic frames to predict stock price movement. In ACL, pages 873-883.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Hierarchical attention networks for document classification",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "1480--1489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In NAACL-HLT, pages 1480-1489.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Topic memory networks for short text classification",
"authors": [
{
"first": "Jichuan",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Cuiyun",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"R"
],
"last": "Lyu",
"suffix": ""
},
{
"first": "Irwin",
"middle": [],
"last": "King",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "3120--3131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jichuan Zeng, Jing Li, Yan Song, Cuiyun Gao, Michael R. Lyu, and Irwin King. 2018. Topic memory networks for short text classification. In EMNLP, pages 3120-3131.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Headlines (hl 1 , hl 2 , \u2026) of a single day, grouped in batches hhl 1 hhl 2 BERT Automatically identified category of hl 1 , hl 2 , \u2026: hc 1 , hc 2 Neural Network Layout and projected to a vector h p i of length 100 by a fully connected feed forward with ELU activation",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"text": "shows the distribution per category.",
"num": null,
"type_str": "table",
"content": "<table><tr><td>Category</td><td>Number of articles</td><td>%</td></tr><tr><td>world</td><td>596,899</td><td>38,96</td></tr><tr><td>sport</td><td>275,585</td><td>17.99</td></tr><tr><td>business</td><td>231,083</td><td>15.08</td></tr><tr><td>us</td><td>211,570</td><td>13.81</td></tr><tr><td>unclassified</td><td>66,891</td><td>4.37</td></tr><tr><td>entertainment</td><td>54,607</td><td>3.56</td></tr><tr><td>sci-tech</td><td>54,057</td><td>3.53</td></tr><tr><td>health</td><td>41,568</td><td>2.70</td></tr></table>",
"html": null
},
"TABREF1": {
"text": "",
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null
},
"TABREF3": {
"text": "Thresholds and class distributions",
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null
},
"TABREF4": {
"text": "News Category S&P500 Dow Jon. Russ. 1000 Nasd.",
"num": null,
"type_str": "table",
"content": "<table><tr><td>business</td><td>57.88</td><td>61.97</td><td>55.92</td><td>43.64</td></tr><tr><td>us</td><td>40.13</td><td>42.02</td><td>39.59</td><td>38.45</td></tr><tr><td>world</td><td>41.83</td><td>39.66</td><td>38.73</td><td>44.89</td></tr><tr><td>sports</td><td>38.94</td><td>36.36</td><td>38.94</td><td>44.09</td></tr><tr><td>sci-tech</td><td>36.74</td><td>37.05</td><td>36.90</td><td>44.96</td></tr><tr><td>entertainment</td><td>34.57</td><td>37.98</td><td>38.14</td><td>42.33</td></tr><tr><td>health</td><td>34.57</td><td>34.26</td><td>35.81</td><td>40.78</td></tr><tr><td>all</td><td>52.92</td><td>54.99</td><td>54.49</td><td>45.22</td></tr></table>",
"html": null
},
"TABREF5": {
"text": "Max stock prediction accuracy per categoryScoring news headlines. Now we need to understand if the attention weights are consistent with results in",
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null
},
"TABREF6": {
"text": "Percentage of 'business' news at rank / Percentage increase compared to base distribution of 15.76%",
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Rank Headline</td><td colspan=\"2\">Rank Headline</td></tr><tr><td>1</td><td>Dow Drops 176; Nasdaq Tumbles 179</td><td>14</td><td>Stocks, Dollar Lower on Strong Economic Report ...</td></tr><tr><td>2</td><td colspan=\"2\">U.S. stocks drop as bond market signals slowdown; Dow ... 15</td><td>Dow Down 60.50; Nasdaq Off 8.78</td></tr><tr><td>3</td><td>U.S. stocks drop on profit-taking, poor Time Warner ...</td><td>16</td><td>Stocks dip as traders await Fed meeting details</td></tr><tr><td>4</td><td>Dollar Lower, Stocks Fall in Early Trading</td><td>17</td><td>Dow Drops 17; Nasdaq Down Fraction</td></tr><tr><td>5</td><td>Stocks fall despite manufacturing pickup ...</td><td>18</td><td>Dollar Weaker, Stocks Fall ...</td></tr><tr><td>6</td><td>Nasdaq Falls 95; Dow Up 8</td><td>19</td><td>Dollar, Stocks Traded Lower Eds ...</td></tr><tr><td>7</td><td>Stocks Fall, Dollar Traded Higher ...</td><td>20</td><td>Stocks Fall Back, Dollar Lower ...</td></tr><tr><td>8</td><td>Stocks fall in early trading</td><td>21</td><td>Stocks fall on concerns over Wall Street and local ec...</td></tr><tr><td>9</td><td>U.S. stocks turn lower as investors take profits from ...</td><td>22</td><td>Dollar, Stocks Lower in Early Tokyo Trading ...</td></tr><tr><td>10</td><td>Dow closes below 10,000; Nasdaq at lowest level ...</td><td>23</td><td>Nasdaq Ends Down 147; Dow Up 25</td></tr><tr><td>11</td><td>Stocks lower on Wall Street amid mixed global picture ...</td><td>24</td><td>U.S. stocks end mostly lower after GDP report ...</td></tr><tr><td>12</td><td>Financial shares fall on delay, restatement of results</td><td>25</td><td>Dow Up 70.20; Nasdaq Falls 71.28</td></tr><tr><td>13</td><td>Stocks Plunge on Profit-Taking, Dollar Inches Higher ...</td><td>26</td><td>London Shares Lower ...</td></tr></table>",
"html": null
},
"TABREF7": {
"text": "Top results for S&P500",
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Rank Headline</td></tr><tr><td>1</td><td>Latam stocks lower on slowdown concerns</td></tr><tr><td>2</td><td>Stocks end lower amid worries after House OKs plan</td></tr><tr><td>3</td><td>Latam stocks plunge on slowdown concerns</td></tr><tr><td>4</td><td>World markets drop on worries of US-led slowdown</td></tr><tr><td>5</td><td>French economy enters recession</td></tr><tr><td>6</td><td>US economy sheds most jobs since 2003</td></tr><tr><td>7</td><td>Manhattan apartment sales drop further</td></tr><tr><td>8</td><td>India ' s key stock index drops 4 percent</td></tr><tr><td>9</td><td>Japan stocks slide on worries about US economy</td></tr><tr><td>10</td><td>Hon Kong stocks drop on US worries</td></tr><tr><td>11</td><td>Credit markets still tight after bailout approval</td></tr><tr><td>12</td><td>US cuts off family planning group in Africa</td></tr><tr><td>13</td><td>Employers cut 159,000 jobs, most in over 5 years</td></tr><tr><td>14</td><td>Russian shares fall sharply</td></tr><tr><td>15</td><td>US Congress OKs bailout bill and Bush signs its</td></tr></table>",
"html": null
},
"TABREF8": {
"text": "",
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Top 15/219 -S&amp;P500 -October 3, 2008</td></tr></table>",
"html": null
},
"TABREF10": {
"text": "Manual evaluation on top 200 headlines",
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null
}
}
}
}