{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:24:04.643131Z" }, "title": "Fraunhofer IAIS at FinCausal 2020, Tasks 1 & 2: Using Ensemble Methods and Sequence Tagging to Detect Causality in Financial Documents", "authors": [ { "first": "Maren", "middle": [], "last": "Pielka", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fraunhofer IAIS", "location": { "postCode": "53757", "settlement": "Sankt Augustin", "country": "Germany, Germany" } }, "email": "maren.pielka@iais.fraunhofer.de" }, { "first": "Anna", "middle": [], "last": "Ladi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fraunhofer IAIS", "location": { "postCode": "53757", "settlement": "Sankt Augustin", "country": "Germany, Germany" } }, "email": "" }, { "first": "Clayton", "middle": [], "last": "Chapman", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fraunhofer IAIS", "location": { "postCode": "53757", "settlement": "Sankt Augustin", "country": "Germany, Germany" } }, "email": "" }, { "first": "Eduardo", "middle": [], "last": "Brito", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fraunhofer IAIS", "location": { "postCode": "53757", "settlement": "Sankt Augustin", "country": "Germany, Germany" } }, "email": "" }, { "first": "Rajkumar", "middle": [], "last": "Ramamurthy", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fraunhofer IAIS", "location": { "postCode": "53757", "settlement": "Sankt Augustin", "country": "Germany, Germany" } }, "email": "" }, { "first": "Paul", "middle": [], "last": "Mayer", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fraunhofer IAIS", "location": { "postCode": "53757", "settlement": "Sankt Augustin", "country": "Germany, Germany" } }, "email": "" }, { "first": "Abdul", "middle": [], "last": "Wahab", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fraunhofer IAIS", "location": { "postCode": "53757", "settlement": "Sankt Augustin", "country": "Germany, Germany" } }, "email": "" }, { "first": "Rafet", "middle": [], "last": "Sifa", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fraunhofer IAIS", "location": { "postCode": "53757", "settlement": "Sankt Augustin", "country": "Germany, Germany" } }, "email": "" }, { "first": "Christian", "middle": [], "last": "Bauckhage", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fraunhofer IAIS", "location": { "postCode": "53757", "settlement": "Sankt Augustin", "country": "Germany, Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The FinCausal 2020 shared task aims to detect causality on financial news and identify those parts of the causal sentences related to the underlying cause and effect. We apply ensemble-based and sequence tagging methods for identifying causality, and extracting causal subsequences. Our models yield promising results on both sub-tasks, with the prospect of further improvement given more time and computing resources. With respect to task 1, we achieved an F1 score of 0.9429 on the evaluation data, and a corresponding ranking of 12/14. For task 2, we were ranked 6/10, with an F1 score of 0.76 and an ExactMatch score of 0.1912. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/. 1 The data set both for task 1 and task 2 is split into a \"trial\" and \"practice\" test. 
", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "The FinCausal 2020 shared task aims to detect causality in financial news and to identify those parts of the causal sentences related to the underlying cause and effect. We apply ensemble-based and sequence tagging methods for identifying causality and extracting causal subsequences. Our models yield promising results on both sub-tasks, with the prospect of further improvement given more time and computing resources. With respect to task 1, we achieved an F1 score of 0.9429 on the evaluation data, and a corresponding ranking of 12/14. For task 2, we were ranked 6/10, with an F1 score of 0.76 and an ExactMatch score of 0.1912.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Despite the many advances in the field of Natural Language Processing (NLP) in recent years, many challenges involving contextual relationships still persist, one of which is causality. Causality, as conveyed in written English, relates a textual description of a cause to a description of its effect(s). While the identification of subordinating conjunctions -such as \"because\", \"since\" and \"if\" -can help in identifying relevant pieces of text forming causal relationships, richer contextual information is often needed. The goal of the FinCausal 2020 shared task (Mariko et al., 2020) is to explore causal relationships in financial and economic texts: to determine whether a given text contains a causal relationship and, if so, to detect the cause and effect parts of the sentence.", "cite_spans": [ { "start": 572, "end": 593, "text": "(Mariko et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The data sets are composed of 23808 paragraphs from financial news, with an average paragraph size of 214 (+/-161) characters and annotations relevant to the corresponding task. The data for both tasks is released in a \"trial\" and a \"practice\" subset; to account for any differences in the distribution of the data between the two subsets, we joined them for our experiments. For task 1, the complete data comprises 22058 paragraphs, each of which is annotated with a \"1\" (a causal relationship exists in this paragraph -7.2% of the paragraphs) or a \"0\" (no causal relationship in the paragraph -92.8% of the paragraphs). The data set for task 2 comprises 1750 paragraphs, with every paragraph containing exactly one cause and one effect character sequence. These are annotated as character ranges. Since we treat task 2 as a sequence tagging task, we transform these annotations into token-level annotations, so that every token in the paragraph gets the label CAUSE, EFFECT, or 0 (for all tokens that belong neither to the cause nor to the effect part). This yields a label distribution of 40.8% CAUSE, 40.8% EFFECT, and 18.4% \"0\".
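As an illustration of this conversion, the following minimal sketch maps a paragraph and its two character ranges to token-level labels; the function name, the whitespace tokenization, and the half-open [start, end) span convention are illustrative assumptions on our side, not part of the shared task tooling.

```python
from typing import List, Tuple

def char_spans_to_token_labels(text: str,
                               cause_span: Tuple[int, int],
                               effect_span: Tuple[int, int]) -> List[Tuple[str, str]]:
    # Convert character-range cause/effect annotations to per-token labels.
    labels = []
    cursor = 0
    for token in text.split():                # whitespace tokenization (assumption)
        start = text.index(token, cursor)     # character offset of this token
        cursor = start + len(token)
        if cause_span[0] <= start < cause_span[1]:
            labels.append((token, 'CAUSE'))
        elif effect_span[0] <= start < effect_span[1]:
            labels.append((token, 'EFFECT'))
        else:
            labels.append((token, '0'))       # neither cause nor effect
    return labels
```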
For the experimentation and the development of the models for task 1 and task 2, a 70% training / 30% validation split of the respective data set (joined practice and trial) was used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "Tasks 1 and 2 were treated independently. The former was treated as a document classification task (a document being a paragraph in this case) and the latter as a sequence tagging task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System", "sec_num": "3" }, { "text": "The goal of task 1 is to identify whether a paragraph contains a causal relation or not. The labels are binary, with a 1 indicating the presence of (a) causal statement(s), and a 0 otherwise. For the best-performing models, no data pre-processing was used other than ignoring punctuation and single-character tokens. We explore two approaches: the first pairs a paragraph embedding with shallow Machine Learning (ML) models, while the second embeds the individual tokens and uses a one-dimensional Convolutional Neural Network (CNN) as a classifier. For the first strategy, the text data is transformed into a feature matrix using the scikit-learn (Pedregosa et al., 2011) , version 0.23.1, implementation of TF-IDF. This matrix is then used by several classical ML models, including SVMs, Logistic Regression and Random Forests. The best performers were found to be an SVM Classifier 2 and an XGBoost model. XGBoost, or eXtreme Gradient Boosting, is a fairly recent algorithm based on gradient boosting techniques (Chen and Guestrin, 2016) . Additionally, a voting classifier was built from these two models, which predicts the class label based on the argmax of the sums of the prediction confidence values from the SVM classifier and the XGBoost model. We adjust the parameters of our models using the RandomizedSearchCV class from scikit-learn 3 . These models were trained on an Intel i7-8750H CPU. After training, this voting classifier was realized as a scikit-learn Voting Ensemble classifier that takes the trained models as parameters. The ensemble used \"soft\" voting, that is, the probabilities output by the SVM classifier and the XGBoost model were averaged and the result was the output of the ensemble.", "cite_spans": [ { "start": 660, "end": 684, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF9" }, { "start": 1026, "end": 1051, "text": "(Chen and Guestrin, 2016)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Task 1: Causality Detection", "sec_num": "3.1" }, { "text": "The second strategy is based on the idea of using CNNs for NLP (Yin et al., 2017) . Adjusting the filter size of a one-dimensional CNN acts similarly to choosing an N-gram size, and CNNs are generally very fast at processing large feature matrices. This model was implemented using TensorFlow (Abadi et al., 2015) and Keras (Chollet and others, 2015), versions 2.1.0 and 2.3.1 respectively. The data was processed using the Keras tokenizer and then fed into a one-dimensional CNN. The network consists of an embedding layer that uses the 200-dimensional Word2Vec (Le and Mikolov, 2014) embedding from GoogleNews, a convolutional layer, and two dense layers. The models were optimized using manual parameter tuning 4 . 
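Footnote 4 lists the best configuration found by this tuning; as a rough illustration, such a network could be set up in Keras as follows. The pretrained embedding matrix, the sequence length, and the choice of optimizer and loss are assumptions on our side, since they are not stated in the text.

```python
import numpy as np
from tensorflow.keras import layers, models

def build_cnn(vocab_size: int, embedding_matrix: np.ndarray, max_len: int):
    # 1D-CNN classifier following the layer configuration reported in footnote 4.
    model = models.Sequential([
        # Embedding layer initialized with pretrained word vectors
        # (assumption: embedding_matrix has shape (vocab_size, embedding_dim)).
        layers.Embedding(vocab_size, embedding_matrix.shape[1],
                         weights=[embedding_matrix], input_length=max_len,
                         trainable=False),
        layers.Conv1D(64, 3, activation='relu'),   # 64 filters of width 3
        layers.Dropout(0.1),
        layers.GlobalMaxPooling1D(),
        layers.Dense(16, activation='relu'),
        layers.Dropout(0.1),
        layers.Dense(1, activation='sigmoid'),     # causal vs. non-causal
    ])
    # Optimizer and loss are not reported; binary cross-entropy is a natural choice here.
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model
```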
This model was trained using an Nvidia GTX 1060 Max-Q with 6GB of GPU memory.", "cite_spans": [ { "start": 63, "end": 81, "text": "(Yin et al., 2017)", "ref_id": "BIBREF14" }, { "start": 297, "end": 317, "text": "(Abadi et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Task 1: Causality Detection", "sec_num": "3.1" }, { "text": "The goal of task 2 is to identify the parts of a sentence that correspond to cause and effect, respectively. It is thus a sequence tagging task in which the tokens of a sentence must be assigned either a CAUSE, EFFECT, or 0 tag. One approach to tackling sequence tagging tasks is the use of sequential models such as a Recurrent Neural Network (RNN). We employ the Flair sequence tagger by (Akbik et al., 2018) , using ElMo (Peters et al., 2018) and fine-tuned BERT embeddings. For BERT, the transformers library by (Wolf et al., 2019) , version 3.0.2, was used.", "cite_spans": [ { "start": 389, "end": 409, "text": "(Akbik et al., 2018)", "ref_id": "BIBREF1" }, { "start": 423, "end": 444, "text": "(Peters et al., 2018)", "ref_id": "BIBREF10" }, { "start": 515, "end": 534, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Task 2: Causality Extraction", "sec_num": "3.2" }, { "text": "Using the Flair Sequence Tagger We utilize the Flair sequence tagger (Akbik et al., 2018) to produce token-level predictions for the causality extraction task. The framework consists of a recurrent neural model with a Conditional Random Field (CRF) and a Long Short Term Memory (LSTM) layer trained for token classification. Learning rate scheduling is applied during training, meaning that the learning rate is reduced whenever the validation loss does not decrease for 3 epochs. Once the learning rate falls below 0.0001 following this policy, training is stopped.", "cite_spans": [ { "start": 69, "end": 89, "text": "(Akbik et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Task 2: Causality Extraction", "sec_num": "3.2" }, { "text": "As a first fine-tuning step, we evaluate different pre-trained word embeddings (ElMo, BERT, Flair, GloVe, and FastText) that are integrated in the Flair framework. We find that ElMo embeddings obtained from the full-sized model (see (Peters et al., 2018) for implementation details) yield the best results on our data, so we use them as word embeddings in the following evaluation steps. The model is further improved by optimizing the hyperparameters, yielding a batch size of 32, an initial learning rate of 0.1, and a hidden size of 500 neurons. In addition, we adjust the class weights of the model to compensate for the imbalanced label distribution (see section 2). Thus, the weights for the CAUSE and EFFECT classes are decreased to 0.25, and the weight for the 0 class is increased to 0.5. As an alternative embedding method, we also incorporate the fine-tuned BERT model delivered as a baseline for task 1 5 into the Flair sequence tagger. The models were trained on an Nvidia Tesla V100-SXM2 with 32 GB of GPU memory.", "cite_spans": [ { "start": 233, "end": 254, "text": "(Peters et al., 2018)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Task 2: Causality Extraction", "sec_num": "3.2" }, { "text": "Post-processing In addition to the token classification by the Flair sequence tagger, we apply some post-processing to the output to further improve the results. 
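Before turning to this post-processing, the tagging setup described above can be summarized in a short sketch. It assumes the token-level labels have been written to CoNLL-style column files; the file paths, column layout, and the use of the loss_weights argument for the class weights are our assumptions, while the hyperparameter values are the ones reported above.

```python
from flair.datasets import ColumnCorpus
from flair.embeddings import ELMoEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Token-level data in two columns: the token and its CAUSE / EFFECT / 0 label.
columns = {0: 'text', 1: 'causality'}
corpus = ColumnCorpus('data/', columns, train_file='train.txt', dev_file='dev.txt')
tag_dictionary = corpus.make_tag_dictionary(tag_type='causality')

tagger = SequenceTagger(
    hidden_size=500,                        # LSTM hidden size found by tuning
    embeddings=ELMoEmbeddings('original'),  # full-sized ElMo model
    tag_dictionary=tag_dictionary,
    tag_type='causality',
    use_crf=True,                           # CRF layer on top of the BiLSTM
    loss_weights={'CAUSE': 0.25, 'EFFECT': 0.25, '0': 0.5},  # class weights as reported
)

trainer = ModelTrainer(tagger, corpus)
trainer.train(
    'models/fincausal-task2',
    learning_rate=0.1,         # initial learning rate
    mini_batch_size=32,
    patience=3,                # anneal the LR if the dev loss stalls for 3 epochs
    min_learning_rate=0.0001,  # stop once the LR drops below this threshold
)
```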
Our first approach is based on the observation that the classifier sometimes correctly recognizes large parts of a CAUSE or EFFECT sequence, but still predicts 0 for some single tokens in between. Since in most cases every CAUSE or EFFECT is a coherent sequence, it makes sense to account for that using rule-based post-processing. This is done by filling in every \"hole\" of up to three tokens in a consecutive sequence of CAUSE or EFFECT predictions with the surrounding label. An alternative post-processing approach that we tested was to smooth the output class probabilities over consecutive tokens, using an average filter with a window size of 3 tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task 2: Causality Extraction", "sec_num": "3.2" }, { "text": "The reported results were evaluated on the holdout test set, which was not included in the training or validation data. The evaluation metrics are reported as defined by the shared task organizers. 6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "For task 1, the best results in terms of F1 score were achieved by the SVM classifier, while the Ensemble of the SVM classifier and the XGBoost classifier performed best in terms of precision and recall (see Table 1 ). The CNN model did not perform significantly differently from the less complex ML models.", "cite_spans": [], "ref_spans": [ { "start": 208, "end": 215, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Task 1", "sec_num": "4.1" }, { "text": "For task 2, the best scores with respect to the three common metrics are achieved by the Flair sequence tagger model using fine-tuned BERT embeddings, balanced class weights, and applying the first post-processing approach (filling in \"holes\" of predicted sequences). Interestingly, our baseline model (ElMo) outperforms the tuned models on ExactMatch. In particular, adding balanced class weights seems to cause a drop in the ExactMatch metric. This could potentially indicate some overfitting effect of the simpler model. Table 2 shows two examples that illustrate this behavior. In example 1 (Table 2a) , while the simpler model predicts the sequence exactly right, the model with fine-tuning added makes one mistake in between. Example 2 (Table 2b) , however, shows that the second model is better at separating cause and effect in a sentence with a less straightforward formulation, even though it does not get the labels exactly right. Generally, the fine-tuned model tends to leave larger \"holes\" between \"cause\" and \"effect\" sequences than the simpler model does.", "cite_spans": [], "ref_spans": [ { "start": 519, "end": 526, "text": "Table 2", "ref_id": null }, { "start": 590, "end": 600, "text": "(Table 2a)", "ref_id": null }, { "start": 737, "end": 747, "text": "(Table 2b)", "ref_id": null } ], "eq_spans": [], "section": "Task 2", "sec_num": "4.2" }, { "text": "Sentence They fell (...) to 4p on Wednesday as analysts lowered price targets and cut forecasts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task 2", "sec_num": "4.2" }, { "text": "Model 1 prediction E E (...) E E E E 0 C C C C C C C Model 2 prediction E E (...) E E E 0 0 C C C C C C C True label E E (...) E E E E 0 C C C C C C C (a) Example 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task 2", "sec_num": "4.2" }, { "text": "Sentence Suppose (...) they could borrow at -0.25% NLY would be getting paid $262 million. 
Table 2 : Example predictions of model 1 (ElMo) and model 2 (fine-tuned BERT, balanced class weights, post-processing) on two sentences from our validation split (part of the practice data set). \"C\" stands for \"cause\", \"E\" for \"effect\". Due to space constraints, only the relevant part of the sentence is displayed. Table 3 : Task 2 results. The Flair sequence tagger was used to produce all of these results.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 98, "text": "Table 2", "ref_id": null }, { "start": 393, "end": 400, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Task 2", "sec_num": "4.2" }, { "text": "Model 1 prediction E (...) E E E E E E E E E E E E Model 2 prediction C (...) 0 C C C E E E E E E E E True label C (...) C C C C 0 E E E E E E E (b) Example 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task 2", "sec_num": "4.2" }, { "text": "Our present system relies heavily on the choice of embeddings, as well as on large pre-trained language models. This could possibly be changed if we had a more extensive and consistent data set for training. One of our observations in this regard was that the provided data included a number of irregularly formatted paragraphs (for example headlines or bullet points). We believe that our algorithms would benefit from further analysis and possible cleaning of such instances. Additionally, the current data does not allow us to explore other approaches that are based on syntactic features. Regarding task 1, the results of the CNN could potentially be improved by additional hyperparameter tuning. The CNN model already showed good results in early epochs, but was not able to outperform the less complex ML models, which were extensively tuned. With respect to task 2, it would have been interesting to test more combinations of embeddings, data augmentation, and post-processing, which was not possible due to limited time and computing resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Identifying causality in text can be a helpful tool for many empirical use cases. In the context of financial document analysis, it could be used to identify important facts and developments in the annual report of a company. We can potentially extend our previous work on various downstream tasks on financial reports by incorporating causality, for instance for key performance indicator extraction (Brito et al., 2019b) , contradiction detection (Sifa et al., 2019b), or content-based text classification and consistency checks (Sifa et al., 2019a) . We can also exploit causality detection in the context of text summarization. Causal sentences may indicate content richness, which is useful not only to extract the most relevant sentences of an original text within an extractive summarization setting (as presented in (Brito et al., 2019a) ) but also when we evaluate the quality of a generated summary from a wide range of features (Biesner et al., 2020) . 
In our future work, we are planning to address such applications, building on our experiments and insights from this challenge.", "cite_spans": [ { "start": 401, "end": 422, "text": "(Brito et al., 2019b)", "ref_id": "BIBREF5" }, { "start": 511, "end": 531, "text": "(Sifa et al., 2019a)", "ref_id": null }, { "start": 804, "end": 825, "text": "(Brito et al., 2019a)", "ref_id": "BIBREF4" }, { "start": 919, "end": 941, "text": "(Biesner et al., 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Outlook", "sec_num": "6" }, { "text": "For the SVM classifier, the scikit-learn implementation was used.3 For the TFIDF-SVM the adjusted parameters are: ngram range = (1, 2) and smooth idf = f alse for TFIDF, and alpha = 1e\u22126, loss = log for SVM. For TFIDF-XGBoost: ngram range = (1, 1) for TFIDF, and booster = gbtree, learning rate = 0.3, max depth = 6, and n estimators = 100 for XGBoost.4 The best configuration was: a tokenizer that ignored all punctuation and symbols, and set all words to lowercase, and for the CNN: a 1D convolutional layer of 64x3 with relu, a dropout layer with rate = 0.1, a global 1D maxpooling layer, a dense layer with an output of 16 with relu, another dropout layer with rate = 0.1, and a final dense layer with an output of 1 with sigmoid activation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/yseop/YseopLab/tree/develop/FNP_2020_FinCausal/baseline/task1 6 See website of shared task: https://competitions.codalab.org/competitions/23748", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "TensorFlow: Largescale machine learning on heterogeneous systems", "authors": [ { "first": "Mart\u00edn", "middle": [], "last": "Abadi", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Barham", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Brevdo", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Citro", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Devin", "suffix": "" }, { "first": "Sanjay", "middle": [], "last": "Ghemawat", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Harp", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Irving", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Isard", "suffix": "" }, { "first": "Yangqing", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Jozefowicz", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Manjunath", "middle": [], "last": "Kudlur", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Levenberg", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. 
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi\u00e9gas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large- scale machine learning on heterogeneous systems. Software available from tensorflow.org.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Contextual String Embeddings for Sequence Labeling", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Duncan", "middle": [], "last": "Blythe", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2018, "venue": "COLING 2018, 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1638--1649", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual String Embeddings for Sequence Labeling. In COLING 2018, 27th International Conference on Computational Linguistics, pages 1638-1649.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Hybrid ensemble predictor as quality metric for German text summarization: Fraunhofer IAIS at GermEval 2020 task 3", "authors": [ { "first": "David", "middle": [], "last": "Biesner", "suffix": "" }, { "first": "Eduardo", "middle": [], "last": "Brito", "suffix": "" }, { "first": "Lars", "middle": [ "Patrick" ], "last": "Hillebrand", "suffix": "" }, { "first": "Rafet", "middle": [], "last": "Sifa", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 5th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Biesner, Eduardo Brito, Lars Patrick Hillebrand, and Rafet Sifa. 2020. Hybrid ensemble predictor as quality metric for German text summarization: Fraunhofer IAIS at GermEval 2020 task 3. In Proceedings of the 5th", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Swiss Text Analytics Conference (SwissText) & 16th Conference on Natural Language Processing (KONVENS)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Swiss Text Analytics Conference (SwissText) & 16th Conference on Natural Language Processing (KONVENS).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Towards supervised extractive text summarization via RNN-based sequence classification", "authors": [ { "first": "Eduardo", "middle": [], "last": "Brito", "suffix": "" }, { "first": "Max", "middle": [], "last": "L\u00fcbbering", "suffix": "" }, { "first": "David", "middle": [], "last": "Biesner", "suffix": "" }, { "first": "Lars", "middle": [ "Patrick" ], "last": "Hillebrand", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Bauckhage", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.06121" ] }, "num": null, "urls": [], "raw_text": "Eduardo Brito, Max L\u00fcbbering, David Biesner, Lars Patrick Hillebrand, and Christian Bauckhage. 2019a. Towards supervised extractive text summarization via RNN-based sequence classification. 
arXiv preprint arXiv:1911.06121.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A hybrid AI tool to extract key performance indicators from financial reports for benchmarking", "authors": [ { "first": "Eduardo", "middle": [], "last": "Brito", "suffix": "" }, { "first": "Rafet", "middle": [], "last": "Sifa", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Bauckhage", "suffix": "" }, { "first": "R\u00fcdiger", "middle": [], "last": "Loitz", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the ACM Symposium on Document Engineering", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eduardo Brito, Rafet Sifa, Christian Bauckhage, R\u00fcdiger Loitz, Uwe Lohmeier, and Christin P\u00fcnt. 2019b. A hybrid AI tool to extract key performance indicators from financial reports for benchmarking. In Proceedings of the ACM Symposium on Document Engineering 2019.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "XGBoost: A Scalable Tree Boosting System. CoRR, abs/1603.02754. Francois Chollet et al. 2015. Keras", "authors": [ { "first": "Tianqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianqi Chen and Carlos Guestrin. 2016. XGBoost: A Scalable Tree Boosting System. CoRR, abs/1603.02754. Francois Chollet et al. 2015. Keras.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Distributed representations of sentences and documents", "authors": [ { "first": "Q", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Q. V. Le and T. Mikolov. 2014. Distributed representations of sentences and documents. CoRR, abs/1405.4053.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Hugues de Mazancourt, and Mahmoud El-Haj. 2020. The Financial Document Causality Detection Shared Task (FinCausal 2020)", "authors": [ { "first": "Dominique", "middle": [], "last": "Mariko", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Abi Akl", "suffix": "" }, { "first": "Estelle", "middle": [], "last": "Labidurie", "suffix": "" }, { "first": "Stephane", "middle": [], "last": "Durfort", "suffix": "" } ], "year": null, "venue": "The 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (FNP-FNS 2020", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dominique Mariko, Hanna Abi Akl, Estelle Labidurie, Stephane Durfort, Hugues de Mazancourt, and Mah- moud El-Haj. 2020. The Financial Document Causality Detection Shared Task (FinCausal 2020). 
In The 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (FNP-FNS 2020, Barcelona, Spain.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Scikitlearn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit- learn: Machine learning in Python. Journal of Machine Learning Research.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettle- moyer. 2018. Deep contextualized word representations. In Proc. of NAACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Rajkumar Ramamurthy, Lars Hillebrand, Birgit Kirsch, and Thiago Bell. 2019a. 
Towards automated auditing with machine learning", "authors": [ { "first": "Rafet", "middle": [], "last": "Sifa", "suffix": "" }, { "first": "Max", "middle": [], "last": "L\u00fcbbering", "suffix": "" }, { "first": "Ulrich", "middle": [], "last": "N\u00fctten", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Bauckhage", "suffix": "" }, { "first": "Ulrich", "middle": [], "last": "Warning", "suffix": "" }, { "first": "Benedikt", "middle": [], "last": "F\u00fcrst", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Khameneh", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Thom", "suffix": "" }, { "first": "Ilgar", "middle": [], "last": "Huseynov", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Kahlert", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Schlums", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Ladi", "suffix": "" }, { "first": "Hisham", "middle": [], "last": "Ismail", "suffix": "" }, { "first": "Bernd", "middle": [], "last": "Kliem", "suffix": "" }, { "first": "R\u00fcdiger", "middle": [], "last": "Loitz", "suffix": "" }, { "first": "Maren", "middle": [], "last": "Pielka", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the ACM Symposium on Document Engineering", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rafet Sifa, Max L\u00fcbbering, Ulrich N\u00fctten, Christian Bauckhage, Ulrich Warning, Benedikt F\u00fcrst, Tim Khameneh, Daniel Thom, Ilgar Huseynov, Roland Kahlert, Jennifer Schlums, Anna Ladi, Hisham Ismail, Bernd Kliem, R\u00fcdiger Loitz, Maren Pielka, Rajkumar Ramamurthy, Lars Hillebrand, Birgit Kirsch, and Thiago Bell. 2019a. Towards automated auditing with machine learning. In Proceedings of the ACM Symposium on Document Engineering 2019. ACM.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Towards contradiction detection in German: A translation-driven approach", "authors": [ { "first": "Rafet", "middle": [], "last": "Sifa", "suffix": "" }, { "first": "Maren", "middle": [], "last": "Pielka", "suffix": "" }, { "first": "Rajkumar", "middle": [], "last": "Ramamurthy", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Ladi", "suffix": "" }, { "first": "Lars", "middle": [], "last": "Hillebrand", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Bauckhage", "suffix": "" } ], "year": 2019, "venue": "Proc. of IEEE SSCI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rafet Sifa, Maren Pielka, Rajkumar Ramamurthy, Anna Ladi, Lars Hillebrand, and Christian Bauckhage. 2019b. Towards contradiction detection in German: A translation-driven approach. In Proc. 
of IEEE SSCI 2019.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Lhoest", "suffix": "" }, { "first": "", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cis- tac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Comparative Study of CNN and RNN for Natural Language Processing", "authors": [ { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Kann", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenpeng Yin, Katharina Kann, Mo Yu, and Hinrich Sch\u00fctze. 2017. Comparative Study of CNN and RNN for Natural Language Processing. CoRR, abs/1702.01923.", "links": null } }, "ref_entries": { "TABREF0": { "html": null, "content": "
Model           F1 score  Recall    Precision
XGBoost         0.942374  0.948957  0.943619
SVM classifier  0.942909  0.947604  0.942031
Ensemble        0.937722  0.949093  0.950175
CNN             0.942066  0.945979  0.940644
Table 1: Task 1 results.
", "num": null, "type_str": "table", "text": "for implementation details) yield the best results on" } } } }