{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:24:01.344258Z" }, "title": "NTUNLPL at FinCausal 2020, Task 2: Improving Causality Detection Using Viterbi Decoder", "authors": [ { "first": "Pei-Wei", "middle": [], "last": "Kao", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan University", "location": { "country": "Taiwan" } }, "email": "" }, { "first": "Chung-Chi", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan University", "location": { "country": "Taiwan" } }, "email": "cjchen@nlg.csie.ntu.edu.tw" }, { "first": "Hen-Hsen", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Chengchi University", "location": { "country": "Taiwan" } }, "email": "hhhuang@nccu.edu.tw" }, { "first": "Hsin-Hsi", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan University", "location": { "country": "Taiwan" } }, "email": "hhchen@ntu.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In order to provide an explanation of machine learning models, causality detection attracts lots of attention in the artificial intelligence research community. In this paper, we explore the causeeffect detection in financial news and propose an approach, which combines the BIO scheme with the Viterbi decoder for addressing this challenge. Our approach is ranked the first in the official run of cause-effect detection (Task 2) of the FinCausal-2020 shared task. We not only report the implementation details and ablation analysis in this paper, but also publish our code for academic usage.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In order to provide an explanation of machine learning models, causality detection attracts lots of attention in the artificial intelligence research community. In this paper, we explore the causeeffect detection in financial news and propose an approach, which combines the BIO scheme with the Viterbi decoder for addressing this challenge. Our approach is ranked the first in the official run of cause-effect detection (Task 2) of the FinCausal-2020 shared task. We not only report the implementation details and ablation analysis in this paper, but also publish our code for academic usage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Adopting causality information as features can benefit lots of applications such as question answering (Sharp et al., 2016) , event prediction (Balashankar et al., 2019) , and medical text mining (Khoo et al., 2000) . In the financial domain, causality detection can be applied to stock movement prediction (Balashankar et al., 2019) and supporting financial services (Izumi and Sakaji, 2019) . 
To better explain the causality between financial events, cause-effect detection is a fundamental research issue.

Taking a close look at financial documents, we find that a paragraph may contain multiple causal events and multiple causal chains. In such cases, traditional extraction methods such as discourse parsing are not feasible. To deal with this issue, we formulate cause-effect detection as a sequence labeling problem and propose an approach that uses the BIO scheme and a Viterbi decoder. The proposed approach can identify multiple causal events and multiple causal chains in a given short paragraph, and it also yields better event span boundaries.

Our contributions are two-fold:

1. We propose an approach to cause-effect detection for financial news that better identifies multiple causal events and event spans.

2. We release the code of the best-performing model for future research.

2 Pre-processing

We experiment on the FinCausal-2020 dataset (Mariko et al., 2020), which consists of two subtasks: causal meaning detection (Task 1) and cause-effect detection (Task 2). The numbers of training instances are 22,058 and 1,750 for Task 1 and Task 2, respectively. This work focuses only on Task 2.

We use the Stanford CoreNLP Stanza toolkit (Manning et al., 2014; Qi et al., 2020) to tokenize each sentence and to generate a part-of-speech (POS) tag for each token. For examples with multiple causal events, we distinguish the events by their indices and prepend a special number token to each example so that they are treated as different model inputs. For causal relation tagging, we combine the "B, I, O" (Begin, Inside, Outside) labels with the "C, E" (Cause, Effect) labels to represent both the positional information of the words and the semantic roles of the causal events.
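To make the combined tagging scheme concrete, the sketch below labels a short sentence under both the baseline target and the BIO-scheme target. The sentence and its spans are our own illustration, not an instance from the dataset.

```python
# Hypothetical example of the two label targets (not from the dataset).
tokens = ["Profits", "fell", "because", "demand", "weakened", "."]

# Baseline target: each token is Outside, part of a Cause, or part of an Effect.
baseline_labels = ["E", "E", "O", "C", "C", "O"]

# BIO-scheme target: B-* marks the beginning of a span, I-* its continuation,
# so adjacent spans of the same type remain distinguishable.
bio_labels = ["B-E", "I-E", "O", "B-C", "I-C", "O"]
```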
3 Methods

3.1 Baseline Models

• Conditional Random Field (CRF): The CRF (Lafferty et al., 2001) is a popular model for sequence labeling that takes neighboring labels into account during inference. We use the default parameter settings provided by Mariko et al. (2020) to train a baseline model for comparison.

• Bidirectional Encoder Representations from Transformers (BERT): The pre-trained text encoder BERT performs well on many NLP tasks (Devlin et al., 2018). In this paper, we use the BERT-base model, which consists of 12 Transformer layers with a hidden dimension of 768. It is pre-trained on the masked language modeling and next sentence prediction tasks over a large cross-domain corpus, and it is well known for how easily it can be fine-tuned for downstream tasks. We implement this baseline with the package provided by HuggingFace (https://github.com/huggingface/transformers; Wolf et al., 2019).

For clarity of the model structure, we add a linear layer on top of the BERT-base model and fine-tune it as a token classifier, which serves as our baseline in the experiments. Given a sequence of tokens x = [x_1, x_2, ..., x_n] from the source documents, the classifier generates the target label sequence y = [y_1, y_2, ..., y_n], where y_i ∈ {O, C, E} for the baseline target and y_i ∈ {O, B-C, I-C, B-E, I-E} for the BIO-scheme target.

The maximum sequence length, batch size, and number of epochs are set to 350, 4, and 4, respectively. The initial learning rate is set to 5e-05, and we use cross-entropy as the loss function. Training with one GTX TITAN X and a Core i7-6700 takes approximately 15 minutes and requires 4 GB of GPU RAM.
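A minimal sketch of this baseline, assuming the HuggingFace transformers package and an uncased BERT-base checkpoint; the class and variable names are ours, not those of the released code.

```python
import torch
from transformers import BertModel, BertTokenizerFast

# The five BIO-scheme labels used for the cause-effect tags.
LABELS = ["O", "B-C", "I-C", "B-E", "I-E"]

class BertTokenClassifier(torch.nn.Module):
    """BERT-base with a single linear layer mapping each token's
    last hidden state to label scores (the emission scores)."""

    def __init__(self, num_labels=len(LABELS)):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.classifier = torch.nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden)  # shape: (batch, seq_len, num_labels)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertTokenClassifier()
batch = tokenizer(["Profits fell because demand weakened."],
                  max_length=350, truncation=True, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
```

Fine-tuning then minimizes the token-level cross-entropy between these logits and the gold labels, with the hyperparameters reported above.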
3.2 POS Feature

With the POS tags produced by the Stanford CoreNLP Stanza toolkit (Manning et al., 2014; Qi et al., 2020), we first represent the POS information as a one-hot vector and concatenate it with the output of BERT's last hidden state, and then send the result to the final linear layer. The concatenated tensor is used to predict the final label of each token.
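A sketch of this variant under our own assumptions about the wiring: the tag inventory size and the alignment of word-level POS tags to subword positions are ours, not specified by the paper.

```python
import torch
import torch.nn.functional as F
from transformers import BertModel

NUM_POS_TAGS = 17  # assuming the universal POS tag set produced by Stanza

class BertPosTokenClassifier(torch.nn.Module):
    """BERT token classifier whose final linear layer sees the last hidden
    state concatenated with a one-hot POS vector for each token."""

    def __init__(self, num_labels=5):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.classifier = torch.nn.Linear(
            self.bert.config.hidden_size + NUM_POS_TAGS, num_labels)

    def forward(self, input_ids, attention_mask, pos_ids):
        # pos_ids: (batch, seq_len) integer POS tag ids aligned to subwords.
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        pos_onehot = F.one_hot(pos_ids, NUM_POS_TAGS).float()
        return self.classifier(torch.cat([hidden, pos_onehot], dim=-1))
```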
3.3 Viterbi Decoder

Viterbi decoding is a dynamic programming algorithm that finds the path with the globally optimal probability. The probability of the most probable path ending in state t at position x with observation i is

p_t(i, x) = e_t(i) · max_k ( p_k(j, x-1) × p_kt ),    (1)

where e_t(i) is the emission probability of observing element i in state t, p_k(j, x-1) is the probability of the most probable path ending at position x-1 in state k with element j, and p_kt is the transition probability from state k to state t.

Our proposed Viterbi decoder is added on top of the fine-tuned BERT classifier only during evaluation. We treat the final output of the linear classifier as the emission matrix and pre-define the transition matrix based on the BIO scheme. The Viterbi decoder then computes recursively to find the most probable label sequence.
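The following sketch implements Equation (1) in log space over the classifier's per-token log-probabilities. Encoding the BIO constraints as hard -inf transitions (I-X may only follow B-X or I-X, and a sequence may not start inside a span) is our assumption about how the pre-defined transition matrix is built.

```python
import numpy as np

LABELS = ["O", "B-C", "I-C", "B-E", "I-E"]

def make_transition_matrix():
    """Log-domain transitions: 0 for allowed moves, -inf for BIO-invalid
    ones, i.e., I-X may only follow B-X or I-X of the same type."""
    n = len(LABELS)
    trans = np.zeros((n, n))
    for i, prev in enumerate(LABELS):
        for j, curr in enumerate(LABELS):
            if curr.startswith("I-") and prev[2:] != curr[2:]:
                trans[i, j] = -np.inf
    return trans

def viterbi_decode(emissions, trans):
    """emissions: (seq_len, num_labels) per-token log-probabilities.
    Returns the globally most probable label sequence per Equation (1)."""
    seq_len, n = emissions.shape
    score = emissions[0].copy()
    # A sequence may not start inside a span.
    score[[t for t, lab in enumerate(LABELS) if lab.startswith("I-")]] = -np.inf
    backptr = np.zeros((seq_len, n), dtype=int)
    for x in range(1, seq_len):
        # cand[k, t] = p_k(x-1) + log p_kt; the max over k realizes Eq. (1).
        cand = score[:, None] + trans
        backptr[x] = cand.argmax(axis=0)
        score = emissions[x] + cand.max(axis=0)
    # Trace the best path back from the highest-scoring final state.
    path = [int(score.argmax())]
    for x in range(seq_len - 1, 0, -1):
        path.append(int(backptr[x, path[-1]]))
    return [LABELS[t] for t in reversed(path)]
```

At evaluation time, running viterbi_decode over the log-softmaxed classifier outputs replaces the per-token argmax, which is what enforces well-formed spans.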
4 Results

We perform five-fold cross-validation to verify our experimental results during the self-evaluation period. Table 1 shows the results of the self-evaluation, where exact match stands for the proportion of instances whose predicted cause and predicted effect both exactly match the gold labels.

Table 2 shows the results of the blind test round. The baseline BERT-base model outperforms the CRF model by approximately 0.2 in terms of F1 score and 0.4 in terms of exact match ratio, showing the advantage of the pre-trained model. Compared with the CRF, we notice that BERT better distinguishes multiple causal events and causal chains given the same input text. We also find that adding POS features does not significantly improve the performance, and that using the BIO tagging scheme alone decreases it. However, combining the BIO tagging scheme with the Viterbi decoder achieves a clear improvement, raising the exact match ratio by 5%.

Table 2: Blind test results.

Table 3 shows the performance of the top-3 teams in the official round. The F1 score, precision, and recall of the runner-up models are similar to those of the proposed approach. However, the exact match ratio of our approach outperforms those of the other participants; it is 8.7% higher than that of the second-place team.
5 Error Analysis

Figure 1 shows an instance, taken from the self-evaluation experiment, on which our approach outperforms the baseline models; here we use the last partition of the training data as the validation set. The instance contains two causal chains. The CRF model does not find any causal event. The BERT model succeeds in tagging one of the causal chains but fails to tag the other effect span ("That stake..."). In contrast, our approach labels the causal events correctly, which shows that it better identifies the proper event spans and thus achieves a higher exact match ratio.

Figure 1: Instance for error analysis on which our approach succeeds.

Figure 2 shows an instance on which our best model fails to identify the causal events correctly. In this example, the model could not extract the two correct causal events within one sentence, suggesting that our system still has room for improvement in detecting multiple causal events in a single sentence.

Figure 2: Instance for error analysis on which our approach fails.

6 Conclusion

This paper presents our approach to causality detection, which ranked first in Task 2 of FinCausal-2020. The ablation analysis shows the effectiveness of the proposed approach, and the error analysis supports that the approach performs better on cases with multiple causal events and multiple causal chains.

In the future, we plan to apply the proposed approach to documents in other domains. We also plan to use the causality extracted from financial documents to improve the performance of downstream tasks such as stock movement prediction and financial argument mining.

Acknowledgements

This research was partially supported by the Ministry of Science and Technology, Taiwan, under grants MOST 109-2218-E-009-014, MOST 109-2634-F-002-040, and MOST 109-2634-F-002-034, and by Academia Sinica, Taiwan, under grant AS-TP-107-M05.
References

Ananth Balashankar, Sunandan Chakraborty, Samuel Fraiberger, and Lakshminarayanan Subramanian. 2019. Identifying predictive causal factors from news streams. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2338-2348, Hong Kong, China. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Kiyoshi Izumi and Hiroki Sakaji. 2019. Economic causal-chain search using text mining technology. In Proceedings of the First Workshop on Financial Technology and Natural Language Processing, pages 61-65, Macao, China.

Christopher S. G. Khoo, Syin Chan, and Yun Niu. 2000. Extracting causal knowledge from a medical database using graphical patterns. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 336-343, Hong Kong. Association for Computational Linguistics.

John Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data.

Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60.

Dominique Mariko, Hanna Abi Akl, Estelle Labidurie, Stephane Durfort, Hugues de Mazancourt, and Mahmoud El-Haj. 2020. The financial document causality detection shared task (FinCausal 2020). In The 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (FNP-FNS 2020), Barcelona, Spain.

Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.

Rebecca Sharp, Mihai Surdeanu, Peter Jansen, Peter Clark, and Michael Hammond. 2016. Creating causal embeddings for question answering with minimal supervision. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 138-148, Austin, Texas. Association for Computational Linguistics.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.