{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:53:47.787786Z" }, "title": "Corporate Bankruptcy Prediction with Domain-Adapted BERT", "authors": [ { "first": "Alex", "middle": [ "Gunwoo" ], "last": "Kim", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Sangwon", "middle": [], "last": "Yoon", "suffix": "", "affiliation": {}, "email": "swyoon@artificial.sc" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This study performs BERT-based analysis, which is a representative contextualized language model, on corporate disclosure data to predict impending bankruptcies. Prior literature on bankruptcy prediction mainly focuses on developing more sophisticated prediction methodologies with financial variables. However, in our study, we focus on improving the quality of input dataset. Specifically, we employ BERT model to perform sentiment analysis on MD&A disclosures. We show that BERT outperforms dictionary-based predictions and Word2Vec-based predictions under time-discrete logistic hazard model, k-nearest neighbor (kNN-5), and linear kernel support vector machine (SVM). Further, instead of pretraining the BERT model from scratch, we apply self-learning with confidence-based filtering to corporate disclosure data. We achieve the accuracy rate of 91.56% and demonstrate that the domain adaptation procedure brings a significant improvement in prediction accuracy.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This study performs BERT-based analysis, which is a representative contextualized language model, on corporate disclosure data to predict impending bankruptcies. Prior literature on bankruptcy prediction mainly focuses on developing more sophisticated prediction methodologies with financial variables. However, in our study, we focus on improving the quality of input dataset. Specifically, we employ BERT model to perform sentiment analysis on MD&A disclosures. We show that BERT outperforms dictionary-based predictions and Word2Vec-based predictions under time-discrete logistic hazard model, k-nearest neighbor (kNN-5), and linear kernel support vector machine (SVM). Further, instead of pretraining the BERT model from scratch, we apply self-learning with confidence-based filtering to corporate disclosure data. We achieve the accuracy rate of 91.56% and demonstrate that the domain adaptation procedure brings a significant improvement in prediction accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Predicting imminent corporate bankruptcies has been of great importance both in academia and in industry. Early studies on bankruptcy prediction focuses on identifying financial variables that precede impending insolvencies. Altman (1968) finds out that z-score, a composite measure of several financial variables, predicts imminent insolvencies. Since then, numerous papers document additional financial variables that seem to predict bankruptcies (Ding et al., 2012; Bharath and Shumway, 2008; Dwyer et al., 2004) . Among 39 distinct financial variables, Tian et al. (2015) choose seven key variables that effectively predict bankruptcies within 12 months by LASSO. However, in contrast to the fact that the majority of * Equal contribution. corporate disclosures contain non-financial information, textual disclosures have received relatively less attention. 
Following Li (2008)'s call for research on textual corporate disclosures, there have been numerous attempts (Tetlock et al., 2008; Li, 2010; Mayew et al., 2015) to analyze the textual sentiments of corporate disclosures. They commonly find that textual non-financial information has informational value orthogonal to existing financial information. However, the majority of the analyses are based on the dictionary-based approach suggested by Loughran and McDonald (2011). In our study, we perform a BERT-based analysis on corporate disclosure data. BERT (Devlin et al., 2018) is a pre-trained language model based on the self-attention mechanism of the Transformer (Vaswani et al., 2017). BERT and its improved versions have achieved state-of-the-art results on several NLP benchmarks such as GLUE, SQuAD (Rajpurkar et al., 2016), and RACE (Lai et al., 2017). In this research, we analyze the management's discussion and analysis (MD&A) section of corporate disclosures and extract its context-specific sentiment. We then predict bankruptcies that occur within 12 months from the issuance of annual reports using the sentiment variables produced by the BERT-based model. We choose MD&A sections as the target of our BERT-based analysis for the following reasons. First, managers are obliged to express their opinions regarding the future performance of their firms in MD&A sections. Therefore, MD&A is a rich source of information for analyzing managerial assessments of a firm's ability to operate as a going concern. Second, negative future predictions are likely to be accompanied by other positive explanations (see Appendix A; Jung and Kwon, 1988). Therefore, even though humans can interpret the implicit negative nuance in written disclosures, the traditional dictionary-based approach is likely to lead to an erroneous conclusion. Lastly, MD&A sections are required in 10-K filings for all firms. Therefore, we mitigate sample selection bias by confining our analysis to observations archived in SEC filings. Our paper makes several contributions to the existing line of literature. To the best of our knowledge, this is the first study to predict corporate outcomes other than stock market returns with BERT-based sentiment analysis. Unsophisticated investors have difficulty understanding corporate disclosures since the disclosures are complex in nature (Bartov et al., 2000). Therefore, the dictionary-based approach has a clear limitation in analyzing disclosure texts. We expect that context-specific linguistic analysis will more accurately capture the contextual sentiment of corporate disclosures. Specifically, by comparing the ability to predict impending bankruptcies, we show that BERT-based analysis outperforms analyses based on dictionaries (keyword lists) and word-level embeddings. Second, there is no BERT model trained on corporate disclosures, and the open-source BERT model trained on the closest domain is Fin-BERT (Araci, 2019), which is trained on financial news data. However, since corporate disclosures and financial news texts are in similar but distinct domains, Fin-BERT is not perfectly suitable for interpreting corporate disclosures. To improve the performance of machine learning models, we need to ensure that the data distributions of the training domain and the test domain are the same. Violation of this requirement, which is known as domain shift (Shimodaira, 2000), leads to underperformance of models (Glorot et al., 2011).
Language models that are trained in two stages of pre-training and fine-tuning, such as BERT, satisfy this assumption only when they are pre-trained and fine-tuned on a subset of the same domain. Domain shift harms BERT model performance substantially (Lee et al., 2020; Beltagy et al., 2019). The most straightforward way to overcome this problem is to pre-train the BERT language model from scratch. However, language model pre-training is highly time- and resource-consuming, and it is inefficient to pre-train language models for only specific tasks. Another way to deal with domain shift in BERT applications is to fine-tune the model with labeled data from the target domain. However, in reality, labeled data for fine-tuning is often not available. In such cases, unsupervised domain adaptation is a good alternative (Kundu et al., 2020). In this paper, we apply self-learning, one of the key methodologies of unsupervised domain adaptation. We show that if the distance between the source and target domains is close enough, supervising a BERT-based classification model with self-generated pseudo-labels filtered by confidence level leads to a significant improvement in performance.", "cite_spans": [ { "start": 449, "end": 468, "text": "(Ding et al., 2012;", "ref_id": "BIBREF10" }, { "start": 469, "end": 495, "text": "Bharath and Shumway, 2008;", "ref_id": "BIBREF5" }, { "start": 496, "end": 515, "text": "Dwyer et al., 2004)", "ref_id": "BIBREF11" }, { "start": 557, "end": 575, "text": "Tian et al. (2015)", "ref_id": "BIBREF47" }, { "start": 872, "end": 881, "text": "Li (2008)", "ref_id": "BIBREF24" }, { "start": 971, "end": 993, "text": "(Tetlock et al., 2008;", "ref_id": "BIBREF46" }, { "start": 994, "end": 1003, "text": "Li, 2010;", "ref_id": "BIBREF25" }, { "start": 1004, "end": 1023, "text": "Mayew et al., 2015)", "ref_id": "BIBREF30" }, { "start": 1310, "end": 1338, "text": "Loughran and McDonald (2011)", "ref_id": "BIBREF28" }, { "start": 1423, "end": 1444, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF9" }, { "start": 1533, "end": 1555, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF49" }, { "start": 1610, "end": 1634, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF36" }, { "start": 1646, "end": 1664, "text": "(Lai et al., 2017)", "ref_id": "BIBREF22" }, { "start": 2508, "end": 2528, "text": "Jung and Kwon, 1988)", "ref_id": "BIBREF17" }, { "start": 3256, "end": 3277, "text": "(Bartov et al., 2000)", "ref_id": "BIBREF3" }, { "start": 3846, "end": 3859, "text": "(Araci, 2019)", "ref_id": "BIBREF1" }, { "start": 4312, "end": 4330, "text": "(Shimodaira, 2000)", "ref_id": "BIBREF41" }, { "start": 4369, "end": 4390, "text": "(Glorot et al., 2011)", "ref_id": "BIBREF14" }, { "start": 4643, "end": 4661, "text": "(Lee et al., 2020;", "ref_id": "BIBREF23" }, { "start": 4662, "end": 4683, "text": "Beltagy et al., 2019)", "ref_id": "BIBREF4" }, { "start": 5199, "end": 5218, "text": "Kundu et al., 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In his seminal study, Altman (1968) finds that financial variables disclosed in annual reports predict bankruptcies. Shumway (2001) shows that in addition to financial statement-related variables, stock market-related variables such as market capitalization and stock price are also associated with future bankruptcies. However, considering that financial variables convey imperfect corporate information (Tennyson et al., 1990), prior literature extracts information from narrative disclosures.
Cecchini et al. (2010) employ a complex vector space model to predict bankruptcies with MD&A disclosures. However, they remain silent on whether textual information has predictive ability incremental to financial variables. Mayew et al. (2015) find that narrative disclosures indeed contain information that is orthogonal to the information provided by financial variables. They utilize word lists provided by Loughran and McDonald (2011) to analyze the general tone of MD&A disclosures. Related to prediction methodology, Wilson and Sharda (1994) use a neural network with financial variables to predict bankruptcies. Premachandra et al. (2011) introduce data envelopment analysis (DEA) and show that bankrupt firms exhibit relatively lower operating efficiency. Shin et al. (2005) find that SVM is effective in predicting notable corporate events including bankruptcies, and later work develops an adaptive fuzzy k-nearest neighbor method for insolvency prediction. Overall, prior literature has been successful in developing machine learning models that predict bankruptcies with considerable accuracy. However, little research focuses on improving the quality of the input variables. Specifically, less effort has been made to produce precise semantic tone analysis of narrative disclosures.", "cite_spans": [ { "start": 117, "end": 131, "text": "Shumway (2001)", "ref_id": "BIBREF43" }, { "start": 405, "end": 428, "text": "(Tennyson et al., 1990)", "ref_id": "BIBREF45" }, { "start": 497, "end": 519, "text": "Cecchini et al. (2010)", "ref_id": "BIBREF6" }, { "start": 908, "end": 936, "text": "Loughran and McDonald (2011)", "ref_id": "BIBREF28" }, { "start": 1017, "end": 1041, "text": "Wilson and Sharda (1994)", "ref_id": "BIBREF51" }, { "start": 1256, "end": 1274, "text": "Shin et al. (2005)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Related Studies 2.1 Bankruptcy predictions", "sec_num": "2" }, { "text": "The most traditional method of text classification is the dictionary-based approach. The Harvard Psychological Dictionary is the most commonly used source in open-domain text classification. Loughran and McDonald (2011) propose a dictionary specialized for the finance domain. However, the dictionary-based approach has two limitations: it is difficult to create a dictionary that covers all the keywords needed for text classification, and the frequency of certain keywords does not necessarily contain sufficient information to classify sentences. Therefore, methods based on word embedding are suggested as alternatives. Word embedding assigns to each word a vector that encodes its meaning. Text classification methods based on word embedding include frequency-based methods such as TF-IDF (Salton and Buckley, 1988) and prediction-based embedding methods such as Word2Vec (Mikolov et al., 2013). Word2Vec, in particular, places each word in a vector space that approximates its semantic space. This algebraic representation allows vector operations among words. Therefore, we may set word vectors as the initial values of a neural network and further classify sentences by exploring their contextual information. Kim et al. (2014) show that a CNN structure, combined with Word2Vec embeddings, can be used to classify sentences. Language models based on recurrent neural networks (RNN) (Liu et al., 2016) and their variations (Zhou et al., 2015) are also used for text classification.
Recently, however, Transformer-based language models such as BERT (Devlin et al., 2018) and GPT-2 (Radford et al., 2019) have outperformed RNN-based methods and drawn attention with their performance on generic benchmarks. These models apply self-attention to generate contextualized embeddings. In particular BERT, the origin of many SOTA (state-of-the-art) models, pre-trains a contextual embedding model with masked language modeling and next-sentence prediction tasks, and is then fine-tuned to be applied to downstream tasks.", "cite_spans": [ { "start": 191, "end": 219, "text": "Loughran and McDonald (2011)", "ref_id": "BIBREF28" }, { "start": 802, "end": 828, "text": "(Salton and Buckley, 1988)", "ref_id": "BIBREF39" }, { "start": 880, "end": 902, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF31" }, { "start": 1225, "end": 1242, "text": "Kim et al. (2014)", "ref_id": "BIBREF18" }, { "start": 1393, "end": 1411, "text": "(Liu et al., 2016)", "ref_id": "BIBREF26" }, { "start": 1431, "end": 1450, "text": "(Zhou et al., 2015;", "ref_id": "BIBREF54" }, { "start": 1550, "end": 1571, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF9" }, { "start": 1576, "end": 1603, "text": "GPT-2(Radford et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Text classification", "sec_num": "2.2" }, { "text": "In its early stages, domain adaptation took the form of semi-supervised learning. Semi-supervised domain adaptation is used when labeled data exists in the target domain but its amount is not sufficient. For instance, Saenko et al. (2010) and Kulis et al. (2011) use metric learning to address domain shift. Specifically, they adopt methods that learn task-specific distance metrics with labeled data and assign labels to unlabeled data based on the learned distance. However, in reality, we may not be able to find domains with labeled data. In such a case, unsupervised domain adaptation (UDA) can be an attractive alternative. Subspace-based methods consider both the source and target domains subspaces of a single domain space. On the other hand, a more popular approach in UDA is to consider the source and target domains separate spaces and try to align their distributions. Some works compare the means of samples from each domain in a Hilbert space and assign a weight to each sample of the source domain (Gretton et al., 2012), or select samples in the source domain that minimize the maximum mean discrepancy of the two domains (Gong et al., 2013). But when the source and target domains are significantly different, we may not expect these methods to perform well. To deal with this problem, other studies (Pan et al., 2010; Baktashmotlagh et al., 2013; Sun et al., 2016) map data from both domains to a latent space. Recently, with the advent of deep learning, feature extraction from raw data has become an important process in every task, and models that learn domain-invariant features have become the mainstream in UDA (Ganin et al., 2016; Saito et al., 2018; Long et al., 2017). However, these methods require the data from the source domain to extract domain-invariant features. Therefore, self-learning can be an alternative since the source domain data is not required in this setting. The most important consideration in self-learning is how to generate or filter accurate pseudo-labels. Prior work proposes confidence-based filtering and similarity-based pseudo-labeling methods in the image classification task.
However, their methodology cannot be directly applied to NLP tasks since word embeddings are more implicit and multidimensional than image features. Recently, Yoon et al. (2021) show that fine-tuning the original model with pseudo-labels that are filtered based on confidence level increases target-domain accuracy in the token classification task. To the best of our knowledge, our research is the first to show that self-learning without using samples from the source domain significantly improves model performance in sentence classification tasks.", "cite_spans": [ { "start": 222, "end": 242, "text": "Saenko et al. (2010)", "ref_id": "BIBREF37" }, { "start": 247, "end": 266, "text": "Kulis et al. (2011)", "ref_id": "BIBREF20" }, { "start": 1014, "end": 1036, "text": "(Gretton et al., 2012)", "ref_id": "BIBREF16" }, { "start": 1136, "end": 1155, "text": "(Gong et al., 2013)", "ref_id": "BIBREF15" }, { "start": 1312, "end": 1330, "text": "(Pan et al., 2010;", "ref_id": "BIBREF33" }, { "start": 1331, "end": 1359, "text": "Baktashmotlagh et al., 2013;", "ref_id": "BIBREF2" }, { "start": 1360, "end": 1377, "text": "Sun et al., 2016)", "ref_id": "BIBREF44" }, { "start": 1657, "end": 1677, "text": "(Ganin et al., 2016;", "ref_id": "BIBREF13" }, { "start": 1678, "end": 1697, "text": "Saito et al., 2018;", "ref_id": "BIBREF38" }, { "start": 1698, "end": 1716, "text": "Long et al., 2017)", "ref_id": null }, { "start": 2300, "end": 2318, "text": "Yoon et al. (2021)", "ref_id": "BIBREF53" } ], "ref_spans": [], "eq_spans": [], "section": "Domain adaptation", "sec_num": "2.3" }, { "text": "3.1 Sentiment analysis 1 3.1.1 Dictionary-based approach Loughran and McDonald (2011) develop word lists specifically suited for 10-K filings. They provide word lists that contain negative words and positive words, respectively. Following their methodology to calculate the tone of textual disclosures, we count the numbers of positive and negative words in each MD&A section and scale them by the total number of words in each section (DICTPOS and DICTNEG). Although the analysis provides value-relevant information, the measures are comparatively less accurate in that they do not consider the context-specific tone of the texts. We calculate the tone variables with Python.", "cite_spans": [ { "start": 57, "end": 85, "text": "Loughran and McDonald (2011)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" },
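To make the dictionary-based tone variables above concrete, the following is a minimal sketch of the DICTPOS and DICTNEG computation. The file names and the simple tokenizer are assumptions; the paper only states that the Loughran and McDonald (2011) word lists are applied with Python.

```python
# Minimal sketch of the dictionary-based tone variables (Section 3.1.1).
# "lm_positive.txt" / "lm_negative.txt" are hypothetical file names for
# the Loughran-McDonald word lists, one word per line.
import re

def load_word_list(path):
    """Load one word per line, lower-cased."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

positive_words = load_word_list("lm_positive.txt")
negative_words = load_word_list("lm_negative.txt")

def dictionary_tone(mdna_text):
    """Return (DICTPOS, DICTNEG): counts of positive/negative words
    scaled by the total number of words in the MD&A section."""
    tokens = re.findall(r"[a-z']+", mdna_text.lower())
    if not tokens:
        return 0.0, 0.0
    pos = sum(t in positive_words for t in tokens)
    neg = sum(t in negative_words for t in tokens)
    return pos / len(tokens), neg / len(tokens)
```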
{ "text": "Word2Vec is a prediction-based word embedding method which is trained by predicting center words from context words (CBOW) or vice versa (Skip-Gram). After training, each word in a corpus corresponds one-to-one to a vector that contains its semantic information. Kim et al. (2014) achieve remarkable performance on text classification by employing a structure with a one-layer convolutional neural network (CNN) and a fully connected output layer to classify sentences. This model takes pre-trained Word2Vec embeddings as its input, and the width of the filter in the CNN equals the dimension of the word embedding. In our research, we replicate the CNN-static model of Kim et al. (2014), in which the Word2Vec embeddings are frozen during training. We use Word2Vec weights trained on the 10-K corpus of 1996-2013 (Tsai et al., 2016), and train the network with the financial sentiment analysis dataset provided by Malo et al. (2014), which consists of 4,846 sentences. The model takes each sentence as input and assigns a probability to each of three classes: positive, negative, and neutral. We sum the probabilities of all sentences in a document and normalize them to calculate the sentiment score of each document (W2VPOS and W2VNEG). We use the nltk sentence tokenizer 2 to split each document into sentences, the gensim package 3 to load the Word2Vec embeddings, and PyTorch to implement the CNN-based classifier. We use the cross-entropy loss function and the Adam optimizer. We train the model for 60 epochs with batch size 50. We set the sentence length to 50 words in both the training phase and the inference phase.", "cite_spans": [ { "start": 259, "end": 276, "text": "Kim et al. (2014)", "ref_id": "BIBREF18" }, { "start": 658, "end": 675, "text": "Kim et al. (2014)", "ref_id": "BIBREF18" }, { "start": 796, "end": 815, "text": "(Tsai et al., 2016)", "ref_id": "BIBREF48" }, { "start": 898, "end": 916, "text": "Malo et al. (2014)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Word2Vec", "sec_num": "3.1.2" },
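The following is a minimal PyTorch sketch of a CNN-static sentence classifier in the spirit of the setup above. The filter sizes and number of filters are assumptions; the paper fixes only the frozen Word2Vec embeddings, the 50-word sentence length, and the three output classes.

```python
# Sketch of a CNN-static sentence classifier (Section 3.1.2).
# Filter sizes (3, 4, 5) and 100 filters per size are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNStatic(nn.Module):
    def __init__(self, embedding_matrix, num_classes=3, n_filters=100,
                 filter_sizes=(3, 4, 5)):
        super().__init__()
        # freeze=True keeps the Word2Vec weights fixed during training
        self.embedding = nn.Embedding.from_pretrained(embedding_matrix,
                                                      freeze=True)
        dim = embedding_matrix.size(1)
        # the filter width equals the embedding dimension
        self.convs = nn.ModuleList(
            [nn.Conv2d(1, n_filters, (fs, dim)) for fs in filter_sizes])
        self.fc = nn.Linear(n_filters * len(filter_sizes), num_classes)

    def forward(self, token_ids):            # (batch, 50)
        x = self.embedding(token_ids)        # (batch, 50, dim)
        x = x.unsqueeze(1)                   # (batch, 1, 50, dim)
        feats = [F.relu(conv(x)).squeeze(3).max(dim=2).values
                 for conv in self.convs]     # max-over-time pooling
        return self.fc(torch.cat(feats, dim=1))  # class logits
```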
{ "text": "BERT is a pre-trained language model with bidirectional transformers, which can be applied to downstream tasks after supervised fine-tuning with relatively low resources. We utilize the model structure of the original BERT model (Devlin et al., 2018) and the fine-tuned weights of Fin-BERT (Araci, 2019) trained for financial sentiment analysis. Fin-BERT is pre-trained on a subset of the Reuters TRC2 dataset, which includes financial press articles, and fine-tuned on the financial sentiment analysis dataset provided by Malo et al. (2014), which is identical to the dataset that we use to train the network of the Word2Vec model. Similarly, the model takes each sentence as its input and assigns a probability to each of three classes: positive, negative, and neutral. We sum the probabilities of all sentences in a document and normalize them to calculate the sentiment score of each document (BERTPOS and BERTNEG). Similarly, we use the nltk sentence tokenizer and the Huggingface Transformers package 4 with PyTorch to implement BERT. We set the max sentence length to 512, which is the maximum input length of BERT.", "cite_spans": [ { "start": 235, "end": 256, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF9" }, { "start": 295, "end": 308, "text": "(Araci, 2019)", "ref_id": "BIBREF1" }, { "start": 524, "end": 542, "text": "Malo et al. (2014)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "BERT", "sec_num": "3.1.3" },
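As an illustration of the sentence-level inference and document-level aggregation above, here is a hedged sketch using the Hugging Face Transformers API. The checkpoint name "ProsusAI/finbert" and its label order are assumptions; the paper only says it uses Araci's Fin-BERT weights with the Transformers package and PyTorch.

```python
# Sketch of the BERTPOS/BERTNEG document scores (Section 3.1.3).
# Checkpoint name and label order are assumptions.
import torch
import nltk  # nltk.download("punkt") may be required once
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ProsusAI/finbert")
model = AutoModelForSequenceClassification.from_pretrained("ProsusAI/finbert")
model.eval()

def document_sentiment(document):
    """Sum the 3-class probabilities over all sentences, then normalize."""
    sentences = nltk.sent_tokenize(document)
    totals = torch.zeros(model.config.num_labels)
    with torch.no_grad():
        for sent in sentences:
            inputs = tokenizer(sent, truncation=True, max_length=512,
                               return_tensors="pt")
            totals += model(**inputs).logits.softmax(dim=-1).squeeze(0)
    scores = totals / totals.sum()   # normalize to a distribution
    # assumed label order for this checkpoint: positive, negative, neutral
    return {"BERTPOS": scores[0].item(), "BERTNEG": scores[1].item()}
```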
{ "text": "We apply self-learning, one of the unsupervised domain-adaptation methods, to our BERT-based model. The self-learning procedure follows a three-step approach. First, we generate pseudo-labels with sentences from MD&A sections. Then we select \"reliable\" samples based on the self-confidence of the sentences. Since it is well known that erroneous labels may deteriorate the performance of the models, we keep only the samples with high self-confidence. Specifically, we proxy for self-confidence with self-entropy (Zou et al., 2018; Saporta et al., 2020). Lastly, we perform supervised learning using the newly obtained pseudo-labels. Refer to Figure 1 for a visual representation of the algorithm. We use the following equation to calculate self-entropy: Figure 1: This figure portrays the pipeline of our domain adaptation method. First, we randomly sample 1,200 documents from the corporate filings from 1995 to 2020. We label this set of narrative disclosures X. Then we generate pseudo-labels Y by applying a BERT-based classifier to every sentence in X (denoted as s_i). To prevent noisy pseudo-labels from harming the model performance, we keep only the 'reliable' samples (X, Y) with low normalized self-entropy. Then we supervise the BERT-based classifier with the reliable samples.", "cite_spans": [ { "start": 514, "end": 532, "text": "(Zou et al., 2018;", "ref_id": "BIBREF56" }, { "start": 533, "end": 554, "text": "Saporta et al., 2020)", "ref_id": "BIBREF40" } ], "ref_spans": [ { "start": 645, "end": 653, "text": "Figure 1", "ref_id": null }, { "start": 755, "end": 763, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Unsupervised domain adaptation", "sec_num": "3.1.4" }, { "text": "H(s_i) = -\\frac{1}{\\log M} \\sum_{n=0}^{2} l_n(s_i) \\log l_n(s_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised domain adaptation", "sec_num": "3.1.4" }, { "text": "where s_i denotes each sentence and l_n(s_i) denotes the probability that s_i belongs to class n (n = 0, 1, 2). Here, we calculate l_n(s_i) with the BERT-based classification model that we use in Section 3.1.3. Then we normalize the self-entropy by scaling the value with log M, the natural logarithm of the number of labels (here, M = 3). We define three classes 0, 1, and 2, representing positive, negative, and neutral, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised domain adaptation", "sec_num": "3.1.4" }, { "text": "We obtain 59,389 cleansed MD&A filings merged with financial variables. Then we generate pseudo-labels by randomly picking 1,200 documents and running inference on all sentences in those documents. Specifically, we collect and analyze 589,858 distinct sentences. Next, we filter the results with a self-entropy threshold of 0.2 and discard the observations with self-entropy over 0.2. We obtain 38,703 reliable sentences through this process (6.56% of the sentence domain). We train the model for 2 epochs with batch size 32 and set the learning rate to 5e-5. We use the cross-entropy loss and the Adam optimizer. After training the model, the inference follows the same procedures as the previous two models, and we generate the sentiment variables DAPTPOS and DAPTNEG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised domain adaptation", "sec_num": "3.1.4" },
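A minimal sketch of the confidence-based filtering step above, assuming a `predict_probs` helper (hypothetical) that wraps the fine-tuned Fin-BERT classifier; the formula, the 0.2 threshold, and M = 3 follow the text.

```python
# Sketch of the normalized self-entropy filter (Section 3.1.4).
# `predict_probs` is a hypothetical callable mapping a sentence to a
# length-M probability tensor.
import math
import torch

M = 3                # positive, negative, neutral
THRESHOLD = 0.2      # normalized self-entropy cutoff from the paper

def normalized_self_entropy(probs):
    """H(s) = -(1/log M) * sum_n p_n log p_n, bounded in [0, 1]."""
    probs = probs.clamp_min(1e-12)           # avoid log(0)
    return float(-(probs * probs.log()).sum() / math.log(M))

def make_pseudo_labels(sentences, predict_probs):
    reliable = []
    for sent in sentences:
        probs = predict_probs(sent)
        if normalized_self_entropy(probs) <= THRESHOLD:
            reliable.append((sent, int(probs.argmax())))
    return reliable   # (sentence, pseudo-label) pairs
```

The retained (sentence, pseudo-label) pairs are then used for the 2-epoch supervised fine-tuning described above.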
{ "text": "We implement three basic machine-learning classifiers (a time-discrete logistic hazard model, SVM, and kNN) to evaluate the additional informativeness of textual data. Since models such as DNNs or RNNs achieve state-of-the-art prediction accuracy with financial variables alone, it is difficult to show the effect of adding textual variables with them. Therefore, we compare the relative performance of the baseline classifiers to highlight the incremental prediction accuracy from adding BERT-based sentiment variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification model", "sec_num": "3.2" }, { "text": "We use the proportional hazards model (Fine and Gray, 1999) to calculate prediction accuracy. Shumway (2001) finds that maximum log-likelihood estimation of discrete-time logistic regression yields consistent estimates. Specifically, we estimate the following discrete-time logistic regression with maximum-likelihood estimation:", "cite_spans": [ { "start": 90, "end": 104, "text": "Shumway (2001)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Hazard model", "sec_num": "3.2.1" }, { "text": "\\log h_i(t) = \\beta X_i(t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hazard model", "sec_num": "3.2.1" }, { "text": "h_i(t) refers to the risk of bankruptcy for firm i at time t. X_i(t) refers to a vector for firm i at time t that consists of variables that are known to precede bankruptcies. In our study, X includes financial variables and MD&A sentiments. Specifically, we first run the regression with only financial variables (FIN). Then we add dictionary-based variables (FIN, DICTPOS, and DICTNEG), Word2Vec-based variables (FIN, W2VPOS, and W2VNEG), BERT-based variables (FIN, BERTPOS, and BERTNEG), and domain-adapted BERT-based variables (FIN, DAPTPOS, and DAPTNEG), respectively. Using the obtained coefficients, we calculate the fitted values \\widehat{\\log h_i(t)} and classify an observation as bankrupt if \\widehat{\\log h_i(t)} > 0.5 and as non-bankrupt otherwise. Continuous variables are winsorized at the 1% level to minimize the effect of outliers on the regression results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hazard model", "sec_num": "3.2.1" },
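A sketch of the estimation and classification step, assuming statsmodels' Logit as the maximum-likelihood estimator (the paper does not name its implementation) and hypothetical column names. Note the paper states its 0.5 cutoff on the fitted log-hazard; the sketch applies it to the fitted bankruptcy risk.

```python
# Sketch of the discrete-time logistic hazard estimation (Section 3.2.1).
# statsmodels as the MLE routine and the column names are assumptions.
import statsmodels.api as sm

FIN = ["WC", "RE", "EBITDA", "MVE", "SALE"]

def fit_and_classify(train, test, sentiment=("DAPTPOS", "DAPTNEG")):
    cols = FIN + list(sentiment)
    result = sm.Logit(train["BRUPT"],
                      sm.add_constant(train[cols])).fit(disp=0)
    fitted = result.predict(sm.add_constant(test[cols],
                                            has_constant="add"))
    # the paper's 0.5 cutoff, applied here to the fitted risk
    return result, (fitted > 0.5).astype(int)
```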
{ "text": "To further enhance the classification performance, we employ the k-nearest neighbors (kNN) and support vector machine (SVM) algorithms, following prior literature. kNN is a simple non-parametric classification method. First, we calculate the Euclidean distance between any pair of observations. That is, for observation vectors X_1 and X_2, we compute", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "k-Nearest Neighbors and Support Vector Machine", "sec_num": "3.2.2" }, { "text": "d(X_1, X_2) = \\sqrt{(X_1 - X_2) \\cdot (X_1 - X_2)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "k-Nearest Neighbors and Support Vector Machine", "sec_num": "3.2.2" }, { "text": "where \\cdot denotes the inner product of two vectors. Specifically, in our research, the vector X_i includes variables that precede insolvencies. We start from five financial variables and sequentially include sentiment variables calculated using the different sentiment analysis models. Then, the algorithm computes the distances between an observation X_i that belongs to the test set and all other observations that belong to the train set. Next, it chooses the k smallest distances from the observation X_i and collects the corresponding labels. In our research, we set k = 5. The algorithm classifies X_i as bankrupt if the number of bankrupt labels is greater than the number of non-bankrupt labels. Next, SVM aims at finding a hyperplane that divides the dataset into distinct categories with the largest margin. Let X_i be a training observation whose class label y_i is either +1 or -1. We solve the following minimization problem with respect to the hyperplane w:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "k-Nearest Neighbors and Support Vector Machine", "sec_num": "3.2.2" }, { "text": "\\min \\frac{1}{2} \\|w\\|^2 + C \\sum_{i=1}^{M} \\xi_i, \\text{ where } y_i(w X_i + b) \\geq 1 - \\xi_i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "k-Nearest Neighbors and Support Vector Machine", "sec_num": "3.2.2" }, { "text": "Here, \\xi_i denotes a slack variable and C is a regularization parameter. In our setting, we set C = 1e-5. Further, we choose a linear kernel for the SVM classification. A linear kernel reduces the cost of computation but may yield comparatively less accurate results. In our sample, the univariate analysis results in Section 4.1 indicate that a linear kernel is acceptable for classifying the dataset. We choose the most basic kNN and SVM models since the primary purpose of our research is to compare the relative performance of the text sentiment classification models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "k-Nearest Neighbors and Support Vector Machine", "sec_num": "3.2.2" },
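A minimal sketch of the two baselines, assuming scikit-learn (the paper does not name an implementation library); only k = 5, the linear kernel, and C = 1e-5 come from the text, and the feature scaling is our addition.

```python
# Sketch of the kNN-5 and linear-SVM baselines (Section 3.2.2).
# scikit-learn and the StandardScaler step are assumptions.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_classifiers():
    knn = KNeighborsClassifier(n_neighbors=5)   # Euclidean by default
    svm = make_pipeline(StandardScaler(),
                        SVC(kernel="linear", C=1e-5))
    return {"kNN-5": knn, "SVM": svm}

# usage: clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
```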
{ "text": "We report two distinct accuracy measures to compare the performance of our models. A1 is the ratio of the number of non-bankrupt observations that are correctly classified as non-bankrupt (CNB) under each model to the total number of non-bankrupt (NB) observations (A1 = CNB/NB). On the other hand, A2 is the ratio of the number of bankrupt observations that are correctly classified as bankrupt (CB) under each model to the total number of bankrupt observations (B) (A2 = CB/B). For the hazard model, we also report an adjusted R-square (Nagelkerke et al., 1991; Cox and Snell, 2018).", "cite_spans": [ { "start": 96, "end": 121, "text": "(Nagelkerke et al., 1991;", "ref_id": "BIBREF32" }, { "start": 122, "end": 142, "text": "Cox and Snell, 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Accuracy calculation", "sec_num": "3.3" }, { "text": "Table 1 (caption): Column (1) reports the mean of the variables when BRUPT=1. Column (2) reports the mean of the variables when BRUPT=0. The last column reports the differences in mean values (Column (1) - Column (2)). We also report t-statistics that examine the statistical significance of the differences in parentheses. *, **, and *** indicate that the difference is statistically significant at the 10%, 5%, and 1% levels, respectively. [Only the header row of the table body survives extraction: BRUPT=1, BRUPT=0, Difference (t-stat).]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accuracy calculation", "sec_num": "3.3" }, { "text": "R^2 = 1 - \\exp\\left( -\\frac{2 (\\log l(Fit) - \\log l(Null))}{n} \\right)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accuracy calculation", "sec_num": "3.3" }, { "text": "log l(Fit) and log l(Null) refer to the maximum log-likelihoods of the fitted model and the null model containing only the intercept term, respectively. Then, the equation can be rewritten as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accuracy calculation", "sec_num": "3.3" }, { "text": "-\\log(1 - R^2) = \\frac{2 (\\log l(Fit) - \\log l(Null))}{n}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accuracy calculation", "sec_num": "3.3" }, { "text": "4 Empirical Experiments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Experiments", "sec_num": "4" }, { "text": "We first identify bankrupt firm-years from Compustat. The dataset provides us with the dates when firms file for bankruptcy and the dates when the bankruptcy procedure is complete. In our analysis, we use the dates when firms first file for bankruptcy as bankruptcy years. Then, following Altman (1968) and Mayew et al. (2015), we compute five key financial variables that are known to precede bankruptcies (WC, RE, EBITDA, MVE, and SALE). WC refers to the ratio of working capital to total assets. RE refers to the ratio of retained earnings to total liabilities. EBITDA refers to earnings before interest, tax, depreciation, and amortization scaled by total assets. MVE is the market value of equity scaled by total liabilities. SALE is the ratio of sales revenue to total assets. Next, we construct our main variables by extracting MD&A sections from annual reports. Specifically, we inspect 10-K, 10-KSB, 10-K405, and 10KSB40 filings and search for MD&A sections (Item 6 or Item 7). During the collection process, we exclude HTML notations, tables, and page numbers. This process ensures that we analyze only the textual components of the MD&A sections. Our sample period spans from 1995 to 2020 since the SEC started to require firms to disclose electronic (machine-readable) filings in 1995. BRUPT is an indicator variable that equals one for observations that face bankruptcy within 365 days from the issuance of their annual report, and zero otherwise. We require all financial variables and MD&A section texts for each observation and obtain 59,389 distinct observations. Among the sample, we identify 520 bankrupt firm-year observations (BRUPT=1). We acquire financial data from the Compustat database and filing texts from the SEC archive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.1" }, { "text": "To evaluate the performance of the SVM and kNN models, we split the sample into three subsets: 60% assigned to the train set, 20% to the validation set, and 20% to the test set. However, since the data that we use are panel data, randomly assigning 20% of the sample to the test set may bias our results. That is, the model may learn from future information and use it to predict the same future. To mitigate this concern, we implement a time-based split. That is, we choose the 104 latest bankruptcy observations from 2018 to 2020 and randomly choose 104 non-bankruptcy observations from the same time period. To further ensure that our results are not driven by random sample selection, we repeat the selection procedure 100 times and report the average accuracy with its standard deviation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.1" }, { "text": "In SVM, we test 10 different hyperparameters ranging from 0 to 1 and compare their relative performances. Next, for the kNN model, we experiment with five different hyperparameters (k). Since the model follows the majority rule, we examine the odd parameters 3, 5, 7, 9, and 11. We then set the regularization parameter C in SVM to 1e-5 and the number of nearest neighbors k to 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.1" }, { "text": "Table 2 (caption): We input each set of variables (FIN, DICT, W2V, BERT, and DAPT) into each of the three classification models. A1 equals (1 - Type I error rate) and A2 equals (1 - Type II error rate). We repeat the sampling experiment 100 times for each model and report the standard deviation of the accuracy rate in parentheses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.1" },
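To illustrate the repeated time-based evaluation above together with the A1/A2 measures of Section 3.3, here is a hedged sketch; the data-frame layout (`year` and `BRUPT` columns) and the pre-2018 training window are our assumptions, while the 104 bankrupt test observations and the 100 repetitions follow the text.

```python
# Sketch of the repeated time-based evaluation (Section 4.1) with the
# A1/A2 accuracy measures (Section 3.3). Column names are hypothetical.
import numpy as np
import pandas as pd

def a1_a2(y_true, y_pred):
    """A1: share of non-bankrupt (0) observations classified correctly;
    A2: share of bankrupt (1) observations classified correctly."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return ((y_pred[y_true == 0] == 0).mean(),
            (y_pred[y_true == 1] == 1).mean())

def repeated_time_split_eval(df, clf, cols, n_rounds=100, seed=0):
    rng = np.random.default_rng(seed)
    train = df[df.year < 2018]                        # assumed window
    bankrupt = df[(df.year >= 2018) & (df.BRUPT == 1)]
    nonbank = df[(df.year >= 2018) & (df.BRUPT == 0)]
    clf.fit(train[cols], train.BRUPT)                 # fit once on the past
    scores = []
    for _ in range(n_rounds):
        sampled = nonbank.sample(n=len(bankrupt),
                                 random_state=int(rng.integers(1 << 31)))
        test = pd.concat([bankrupt, sampled])
        scores.append(a1_a2(test.BRUPT, clf.predict(test[cols])))
    scores = np.array(scores)
    return scores.mean(axis=0), scores.std(axis=0)    # (A1, A2) mean, std
```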
{ "text": "To ensure that the selected variables move in accordance with BRUPT, we report univariate analysis results conditional on the variable BRUPT (see Table 1). As evidenced by Altman (1968) and other prior studies, we find higher WC, RE, MVE, and EBITDA, and lower SALE, for non-bankrupt firms. Further, we demonstrate that DICTNEG, W2VNEG, BERTNEG, and DAPTNEG are higher for bankrupt firms and DICTPOS, W2VPOS, BERTPOS, and DAPTPOS are higher for non-bankrupt firms. This confirms that managers are likely to disclose negative-tone MD&A sections before imminent bankruptcies. More importantly, untabulated tests including quadratic terms find no evidence of a non-linear relationship between BRUPT and the other independent variables. Taken together, the univariate analysis results imply that we may choose a linear kernel for the SVM classification. Table 2 and Figure 2 report the relative performance of the models. Consistent with the prior literature (Zhou et al., 2012; Wu et al., 2007), SVM generally performs the best among the three classifiers. Also, A1 is generally higher than A2 in all model specifications, implying that the models generally predict non-bankrupt firms more accurately than bankrupt firms. Our main finding is that BERT-based analysis outperforms dictionary-based analysis and Word2Vec-based analysis. This indicates that context-specific sentiment analysis produces a more accurate tone of the texts than non-context-specific methods. Specifically, SVM with BERT-based sentiment variables displays a bankruptcy prediction accuracy (A2) of 85.20%. Further, we observe that the R-square increases as we proceed from analyzing only financial variables (16.23%) to including domain-adapted BERT-based sentiment variables (26.38%).", "cite_spans": [ { "start": 963, "end": 982, "text": "(Zhou et al., 2012;", "ref_id": "BIBREF55" }, { "start": 983, "end": 999, "text": "Wu et al., 2007)", "ref_id": "BIBREF52" } ], "ref_spans": [ { "start": 148, "end": 155, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 858, "end": 865, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 870, "end": 878, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Comparing this result with prior literature that utilizes SVM to predict corporate bankruptcies, we obtain relatively high accuracy. For instance, Zhou et al. (2012) obtain an accuracy rate of approximately 75% by analyzing financial variables with DSSVM and GASVM models. Taken together, our results imply that textual information has predictive ability that is orthogonal to the existing set of financial variables and that adding high-quality textual information to the classifiers significantly improves the prediction accuracy. Next, we also find that domain adaptation further improves prediction accuracy. Domain-adapted BERT-based analysis yields the best accuracy rate (A2) among the models, 91.56% with the linear SVM classifier. These results strongly indicate that context-specific sentiment analysis of corporate disclosure texts provides more value-relevant information.", "cite_spans": [ { "start": 147, "end": 165, "text": "Zhou et al. (2012)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "In our study, we examine whether context-specific textual sentiment analysis (BERT) improves the accuracy of corporate bankruptcy prediction. We utilize five financial variables calculated from the stock market and annual reports that are known to precede impending insolvencies. Further, we collect and examine a large sample of MD&A narrative disclosures from 1995 to 2020 to test whether textual sentiment is helpful in predicting financial distress. We find that textual sentiment has predictive ability incremental to the well-known financial variables. Most importantly, we show that BERT-based analysis outperforms the dictionary-based analysis suggested by Loughran and McDonald (2011) and Word2Vec-based analysis combined with a convolutional neural network. Further, we acknowledge the domain-shift problem of the current BERT model. To mitigate this limitation, we apply domain adaptation to the existing financial BERT model. This approach reduces computational costs compared with pre-training the BERT model on a new corpus and, at the same time, significantly improves the prediction accuracy.", "cite_spans": [ { "start": 655, "end": 683, "text": "Loughran and McDonald (2011)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "A Sample MD&A", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "The following is an excerpt from the MD&A section of the 10-K report of Learning Tree International, disclosed on September 30, 1996. Learning Tree International filed for bankruptcy in 1997.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In response to the continued strength in enrollments, the Company has further accelerated its development of new course titles, expanded its future direct mailing plans to capture additional market share and has taken steps to expand the number of classrooms in its education centers. However, there can be no assurance that the Company will be able to achieve an increase in market share after making such expenditures or will maintain its growth in revenues, profitability or market share in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Positive words are colored in RED and negative words are colored in BLUE. Humans can interpret that this document conveys a negative implication. However, there is 1 negative word and there are 4 positive words according to the Loughran and McDonald (2011) word lists. In contrast, the BERT-based sentiment vector of the paragraph equals (1.0365, 2.2161, 1.1704). Normalization yields BERTPOS = 0.2343 and BERTNEG = 0.5000. Therefore, BERT-based analysis outperforms the traditional dictionary-based approach.", "cite_spans": [ { "start": 213, "end": 241, "text": "Loughran and McDonald (2011)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "For the dictionary-based approach, we primarily utilize the following: https://github.com/rflugum/10K-MDA-Section. For the remainder, refer to our anonymized github: https://anonymous.4open.science/r/BankruptcyBert-CC19/ 2 https://www.nltk.org 3 https://radimrehurek.com/gensim/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://huggingface.co", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors deeply appreciate helpful comments from Bok Baik and Yang Hoon Kim. Further, the authors appreciate the GPU support from Artificial Intelligence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": "6" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Financial ratios, discriminant analysis and the prediction of corporate bankruptcy. The journal of finance", "authors": [ { "first": "", "middle": [], "last": "Edward I Altman", "suffix": "" } ], "year": 1968, "venue": "", "volume": "23", "issue": "", "pages": "589--609", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward I Altman. 1968. Financial ratios, discriminant analysis and the prediction of corporate bankruptcy. 
The journal of finance, 23(4):589-609.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Finbert: Financial sentiment analysis with pre-trained language models", "authors": [ { "first": "Dogu", "middle": [], "last": "Araci", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.10063" ] }, "num": null, "urls": [], "raw_text": "Dogu Araci. 2019. Finbert: Financial sentiment analy- sis with pre-trained language models. arXiv preprint arXiv:1908.10063.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Unsupervised domain adaptation by domain invariant projection", "authors": [ { "first": "Mahsa", "middle": [], "last": "Baktashmotlagh", "suffix": "" }, { "first": "T", "middle": [], "last": "Mehrtash", "suffix": "" }, { "first": "", "middle": [], "last": "Harandi", "suffix": "" }, { "first": "C", "middle": [], "last": "Brian", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Lovell", "suffix": "" }, { "first": "", "middle": [], "last": "Salzmann", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the IEEE International Conference on Computer Vision", "volume": "", "issue": "", "pages": "769--776", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mahsa Baktashmotlagh, Mehrtash T Harandi, Brian C Lovell, and Mathieu Salzmann. 2013. Unsupervised domain adaptation by domain invariant projection. In Proceedings of the IEEE International Confer- ence on Computer Vision, pages 769-776.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Investor sophistication and patterns in stock returns after earnings announcements. The Accounting Review", "authors": [ { "first": "Eli", "middle": [], "last": "Bartov", "suffix": "" }, { "first": "Itzhak", "middle": [], "last": "Suresh Radhakrishnan", "suffix": "" }, { "first": "", "middle": [], "last": "Krinsky", "suffix": "" } ], "year": 2000, "venue": "", "volume": "75", "issue": "", "pages": "43--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eli Bartov, Suresh Radhakrishnan, and Itzhak Krinsky. 2000. Investor sophistication and patterns in stock returns after earnings announcements. The Account- ing Review, 75(1):43-63.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Scibert: A pretrained language model for scientific text", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.10676" ] }, "num": null, "urls": [], "raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scib- ert: A pretrained language model for scientific text. arXiv preprint arXiv:1903.10676.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Forecasting default with the merton distance to default model", "authors": [ { "first": "T", "middle": [], "last": "Sreedhar", "suffix": "" }, { "first": "Tyler", "middle": [], "last": "Bharath", "suffix": "" }, { "first": "", "middle": [], "last": "Shumway", "suffix": "" } ], "year": 2008, "venue": "The Review of Financial Studies", "volume": "21", "issue": "3", "pages": "1339--1369", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sreedhar T Bharath and Tyler Shumway. 2008. Fore- casting default with the merton distance to de- fault model. 
The Review of Financial Studies, 21(3):1339-1369.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Detecting management fraud in public companies", "authors": [ { "first": "Mark", "middle": [], "last": "Cecchini", "suffix": "" }, { "first": "Haldun", "middle": [], "last": "Aytug", "suffix": "" }, { "first": "J", "middle": [], "last": "Gary", "suffix": "" }, { "first": "Praveen", "middle": [], "last": "Koehler", "suffix": "" }, { "first": "", "middle": [], "last": "Pathak", "suffix": "" } ], "year": 2010, "venue": "Management Science", "volume": "56", "issue": "7", "pages": "1146--1160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Cecchini, Haldun Aytug, Gary J Koehler, and Praveen Pathak. 2010. Detecting management fraud in public companies. Management Science, 56(7):1146-1160.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An efficient diagnosis system for detection of parkinson's disease using fuzzy k-nearest neighbor approach. Expert systems with applications", "authors": [ { "first": "Hui-Ling", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chang-Cheng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xin-Gang", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Gang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Su-Jing", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2013, "venue": "", "volume": "40", "issue": "", "pages": "263--271", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hui-Ling Chen, Chang-Cheng Huang, Xin-Gang Yu, Xin Xu, Xin Sun, Gang Wang, and Su-Jing Wang. 2013. An efficient diagnosis system for detection of parkinson's disease using fuzzy k-nearest neigh- bor approach. Expert systems with applications, 40(1):263-271.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Analysis of binary data", "authors": [ { "first": "E Joyce", "middle": [], "last": "David Roxbee Cox", "suffix": "" }, { "first": "", "middle": [], "last": "Snell", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Roxbee Cox and E Joyce Snell. 2018. Analysis of binary data. Routledge.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. 
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A class of discrete transformation survival models with application to default probability prediction", "authors": [ { "first": "Shaonan", "middle": [], "last": "Adam Ding", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Yu", "suffix": "" }, { "first": "", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2012, "venue": "Journal of the American Statistical Association", "volume": "107", "issue": "499", "pages": "990--1003", "other_ids": {}, "num": null, "urls": [], "raw_text": "A Adam Ding, Shaonan Tian, Yan Yu, and Hui Guo. 2012. A class of discrete transformation survival models with application to default probability pre- diction. Journal of the American Statistical Associa- tion, 107(499):990-1003.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Moody's kmv riskcalc v3. 1 model. White paper, Moody's", "authors": [ { "first": "W", "middle": [], "last": "Douglas", "suffix": "" }, { "first": "Ahmet", "middle": [ "E" ], "last": "Dwyer", "suffix": "" }, { "first": "Roger M", "middle": [], "last": "Kocagil", "suffix": "" }, { "first": "", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas W Dwyer, Ahmet E Kocagil, and Roger M Stein. 2004. Moody's kmv riskcalc v3. 1 model. White paper, Moody's, https://www. moodys. com/sites/products/ProductAttachments/RiskCalc, 203.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A proportional hazards model for the subdistribution of a competing risk", "authors": [ { "first": "P", "middle": [], "last": "Jason", "suffix": "" }, { "first": "Robert J", "middle": [], "last": "Fine", "suffix": "" }, { "first": "", "middle": [], "last": "Gray", "suffix": "" } ], "year": 1999, "venue": "Journal of the American statistical association", "volume": "94", "issue": "446", "pages": "496--509", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason P Fine and Robert J Gray. 1999. A proportional hazards model for the subdistribution of a competing risk. Journal of the American statistical association, 94(446):496-509.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Domain-adversarial training of neural networks. The journal of machine learning research", "authors": [ { "first": "Yaroslav", "middle": [], "last": "Ganin", "suffix": "" }, { "first": "Evgeniya", "middle": [], "last": "Ustinova", "suffix": "" }, { "first": "Hana", "middle": [], "last": "Ajakan", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Germain", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Laviolette", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Marchand", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Lempitsky", "suffix": "" } ], "year": 2016, "venue": "", "volume": "17", "issue": "", "pages": "2096--2030", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Fran\u00e7ois Lavi- olette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural net- works. 
The journal of machine learning research, 17(1):2096-2030.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Domain adaptation for large-scale sentiment classification: A deep learning approach", "authors": [ { "first": "Xavier", "middle": [], "last": "Glorot", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2011, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Connecting the dots with landmarks: Discriminatively learning domain-invariant features for unsupervised domain adaptation", "authors": [ { "first": "Boqing", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Kristen", "middle": [], "last": "Grauman", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Sha", "suffix": "" } ], "year": 2013, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "222--230", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boqing Gong, Kristen Grauman, and Fei Sha. 2013. Connecting the dots with landmarks: Discrimina- tively learning domain-invariant features for un- supervised domain adaptation. In International Conference on Machine Learning, pages 222-230. PMLR.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A kernel two-sample test", "authors": [ { "first": "Arthur", "middle": [], "last": "Gretton", "suffix": "" }, { "first": "M", "middle": [], "last": "Karsten", "suffix": "" }, { "first": "", "middle": [], "last": "Borgwardt", "suffix": "" }, { "first": "J", "middle": [], "last": "Malte", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Rasch", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Sch\u00f6lkopf", "suffix": "" }, { "first": "", "middle": [], "last": "Smola", "suffix": "" } ], "year": 2012, "venue": "The Journal of Machine Learning Research", "volume": "13", "issue": "1", "pages": "723--773", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Sch\u00f6lkopf, and Alexander Smola. 2012. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723-773.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Disclosure when the market is unsure of information endowment of managers", "authors": [ { "first": "Woon-Oh", "middle": [], "last": "Jung", "suffix": "" }, { "first": "Young", "middle": [ "K" ], "last": "Kwon", "suffix": "" } ], "year": 1988, "venue": "Journal of Accounting research", "volume": "", "issue": "", "pages": "146--153", "other_ids": {}, "num": null, "urls": [], "raw_text": "Woon-Oh Jung and Young K Kwon. 1988. Disclosure when the market is unsure of information endow- ment of managers. 
Journal of Accounting Research, pages 146-153.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Language independent semantic kernels for short-text classification", "authors": [ { "first": "Kwanho", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Beom-Suk", "middle": [], "last": "Chung", "suffix": "" }, { "first": "Yerim", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Seungjun", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Jae-Yoon", "middle": [], "last": "Jung", "suffix": "" }, { "first": "Jonghun", "middle": [], "last": "Park", "suffix": "" } ], "year": 2014, "venue": "Expert Systems with Applications", "volume": "41", "issue": "2", "pages": "735--743", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kwanho Kim, Beom-suk Chung, Yerim Choi, Seungjun Lee, Jae-Yoon Jung, and Jonghun Park. 2014. Language independent semantic kernels for short-text classification. Expert Systems with Applications, 41(2):735-743.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Associative partial domain adaptation", "authors": [ { "first": "Youngeun", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sungeun", "middle": [], "last": "Hong", "suffix": "" }, { "first": "Seunghan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Sungil", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Yunho", "middle": [], "last": "Jeon", "suffix": "" }, { "first": "Jiwon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2008.03111" ] }, "num": null, "urls": [], "raw_text": "Youngeun Kim, Sungeun Hong, Seunghan Yang, Sungil Kang, Yunho Jeon, and Jiwon Kim. 2020. Associative partial domain adaptation. arXiv preprint arXiv:2008.03111.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "What you saw is not what you get: Domain adaptation using asymmetric kernel transforms", "authors": [ { "first": "Brian", "middle": [], "last": "Kulis", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Saenko", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Darrell", "suffix": "" } ], "year": 2011, "venue": "CVPR 2011", "volume": "", "issue": "", "pages": "1785--1792", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian Kulis, Kate Saenko, and Trevor Darrell. 2011. What you saw is not what you get: Domain adaptation using asymmetric kernel transforms. In CVPR 2011, pages 1785-1792. IEEE.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Universal source-free domain adaptation", "authors": [ { "first": "Jogendra Nath", "middle": [], "last": "Kundu", "suffix": "" }, { "first": "Naveen", "middle": [], "last": "Venkat", "suffix": "" }, { "first": "R", "middle": [], "last": "Venkatesh Babu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "4544--4553", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jogendra Nath Kundu, Naveen Venkat, R Venkatesh Babu, et al. 2020. Universal source-free domain adaptation.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4544-4553.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "RACE: Large-scale reading comprehension dataset from examinations", "authors": [ { "first": "Guokun", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Qizhe", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Hanxiao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.04683" ] }, "num": null, "urls": [], "raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining", "authors": [ { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Wonjin", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Sungdong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Donghyeon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sunkyu", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Chan", "middle": [], "last": "Ho So", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2020, "venue": "Bioinformatics", "volume": "36", "issue": "4", "pages": "1234--1240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Annual report readability, current earnings, and earnings persistence", "authors": [ { "first": "Feng", "middle": [], "last": "Li", "suffix": "" } ], "year": 2008, "venue": "Journal of Accounting and Economics", "volume": "45", "issue": "2-3", "pages": "221--247", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feng Li. 2008. Annual report readability, current earnings, and earnings persistence. Journal of Accounting and Economics, 45(2-3):221-247.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Textual analysis of corporate disclosures: A survey of the literature", "authors": [ { "first": "Feng", "middle": [], "last": "Li", "suffix": "" } ], "year": 2010, "venue": "Journal of Accounting Literature", "volume": "29", "issue": "1", "pages": "143--165", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feng Li. 2010. Textual analysis of corporate disclosures: A survey of the literature.
Journal of Accounting Literature, 29(1):143-165.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Recurrent neural network for text classification with multi-task learning", "authors": [ { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16)", "volume": "", "issue": "", "pages": "2873--2879", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Recurrent neural network for text classification with multi-task learning. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), pages 2873-2879.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "When is a liability not a liability? Textual analysis, dictionaries, and 10-Ks", "authors": [ { "first": "Tim", "middle": [], "last": "Loughran", "suffix": "" }, { "first": "Bill", "middle": [], "last": "McDonald", "suffix": "" } ], "year": 2011, "venue": "The Journal of Finance", "volume": "66", "issue": "1", "pages": "35--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim Loughran and Bill McDonald. 2011. When is a liability not a liability? Textual analysis, dictionaries, and 10-Ks. The Journal of Finance, 66(1):35-65.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Good debt or bad debt: Detecting semantic orientations in economic texts", "authors": [ { "first": "Pekka", "middle": [], "last": "Malo", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Sinha", "suffix": "" }, { "first": "Pekka", "middle": [], "last": "Korhonen", "suffix": "" }, { "first": "Jyrki", "middle": [], "last": "Wallenius", "suffix": "" }, { "first": "Pyry", "middle": [], "last": "Takala", "suffix": "" } ], "year": 2014, "venue": "Journal of the Association for Information Science and Technology", "volume": "65", "issue": "4", "pages": "782--796", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pekka Malo, Ankur Sinha, Pekka Korhonen, Jyrki Wallenius, and Pyry Takala. 2014. Good debt or bad debt: Detecting semantic orientations in economic texts. Journal of the Association for Information Science and Technology, 65(4):782-796.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "MD&A disclosure and the firm's ability to continue as a going concern", "authors": [ { "first": "William", "middle": [ "J" ], "last": "Mayew", "suffix": "" }, { "first": "Mani", "middle": [], "last": "Sethuraman", "suffix": "" }, { "first": "Mohan", "middle": [], "last": "Venkatachalam", "suffix": "" } ], "year": 2015, "venue": "The Accounting Review", "volume": "90", "issue": "4", "pages": "1621--1651", "other_ids": {}, "num": null, "urls": [], "raw_text": "William J Mayew, Mani Sethuraman, and Mohan Venkatachalam. 2015. MD&A disclosure and the firm's ability to continue as a going concern.
The Accounting Review, 90(4):1621-1651.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A note on a general definition of the coefficient of determination", "authors": [ { "first": "Nico", "middle": [ "J", "D" ], "last": "Nagelkerke", "suffix": "" } ], "year": 1991, "venue": "Biometrika", "volume": "78", "issue": "3", "pages": "691--692", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nico JD Nagelkerke et al. 1991. A note on a general definition of the coefficient of determination. Biometrika, 78(3):691-692.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Domain adaptation via transfer component analysis", "authors": [ { "first": "Sinno Jialin", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Ivor", "middle": [ "W" ], "last": "Tsang", "suffix": "" }, { "first": "James", "middle": [ "T" ], "last": "Kwok", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2010, "venue": "IEEE Transactions on Neural Networks", "volume": "22", "issue": "2", "pages": "199--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sinno Jialin Pan, Ivor W Tsang, James T Kwok, and Qiang Yang. 2010. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199-210.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "DEA as a tool for predicting corporate failure and success: A case of bankruptcy assessment", "authors": [ { "first": "Inguruwatt", "middle": [ "M" ], "last": "Premachandra", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "John", "middle": [], "last": "Watson", "suffix": "" } ], "year": 2011, "venue": "Omega", "volume": "39", "issue": "6", "pages": "620--626", "other_ids": {}, "num": null, "urls": [], "raw_text": "Inguruwatt M Premachandra, Yao Chen, and John Watson. 2011. DEA as a tool for predicting corporate failure and success: A case of bankruptcy assessment. Omega, 39(6):620-626.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "A", "middle": [], "last": "Radford", "suffix": "" }, { "first": "J", "middle": [], "last": "Wu", "suffix": "" }, { "first": "R", "middle": [], "last": "Child", "suffix": "" }, { "first": "D", "middle": [], "last": "Luan", "suffix": "" }, { "first": "D", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A Radford, J Wu, R Child, D Luan, D Amodei, and I Sutskever. 2019.
Language models are unsupervised multitask learners.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.05250" ] }, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Adapting visual category models to new domains", "authors": [ { "first": "Kate", "middle": [], "last": "Saenko", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Kulis", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Fritz", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Darrell", "suffix": "" } ], "year": 2010, "venue": "European Conference on Computer Vision", "volume": "", "issue": "", "pages": "213--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. 2010. Adapting visual category models to new domains. In European Conference on Computer Vision, pages 213-226. Springer.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Maximum classifier discrepancy for unsupervised domain adaptation", "authors": [ { "first": "Kuniaki", "middle": [], "last": "Saito", "suffix": "" }, { "first": "Kohei", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Yoshitaka", "middle": [], "last": "Ushiku", "suffix": "" }, { "first": "Tatsuya", "middle": [], "last": "Harada", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "3723--3732", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. 2018. Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3723-3732.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Term-weighting approaches in automatic text retrieval", "authors": [ { "first": "Gerard", "middle": [], "last": "Salton", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Buckley", "suffix": "" } ], "year": 1988, "venue": "Information Processing & Management", "volume": "24", "issue": "5", "pages": "513--523", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerard Salton and Christopher Buckley. 1988. Term-weighting approaches in automatic text retrieval.
Information Processing & Management, 24(5):513-523.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "ESL: Entropy-guided self-supervised learning for domain adaptation in semantic segmentation", "authors": [ { "first": "Antoine", "middle": [], "last": "Saporta", "suffix": "" }, { "first": "Tuan-Hung", "middle": [], "last": "Vu", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Cord", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "P\u00e9rez", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.08658" ] }, "num": null, "urls": [], "raw_text": "Antoine Saporta, Tuan-Hung Vu, Matthieu Cord, and Patrick P\u00e9rez. 2020. ESL: Entropy-guided self-supervised learning for domain adaptation in semantic segmentation. arXiv preprint arXiv:2006.08658.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Improving predictive inference under covariate shift by weighting the log-likelihood function", "authors": [ { "first": "Hidetoshi", "middle": [], "last": "Shimodaira", "suffix": "" } ], "year": 2000, "venue": "Journal of Statistical Planning and Inference", "volume": "90", "issue": "2", "pages": "227--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hidetoshi Shimodaira. 2000. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227-244.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "An application of support vector machines in bankruptcy prediction model", "authors": [ { "first": "Kyung-Shik", "middle": [], "last": "Shin", "suffix": "" }, { "first": "Taik Soo", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Hyun-Jung", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2005, "venue": "Expert Systems with Applications", "volume": "28", "issue": "1", "pages": "127--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyung-Shik Shin, Taik Soo Lee, and Hyun-jung Kim. 2005. An application of support vector machines in bankruptcy prediction model. Expert Systems with Applications, 28(1):127-135.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Forecasting bankruptcy more accurately: A simple hazard model", "authors": [ { "first": "Tyler", "middle": [], "last": "Shumway", "suffix": "" } ], "year": 2001, "venue": "The Journal of Business", "volume": "74", "issue": "1", "pages": "101--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tyler Shumway. 2001. Forecasting bankruptcy more accurately: A simple hazard model. The Journal of Business, 74(1):101-124.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Return of frustratingly easy domain adaptation", "authors": [ { "first": "Baochen", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Jiashi", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Saenko", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baochen Sun, Jiashi Feng, and Kate Saenko. 2016. Return of frustratingly easy domain adaptation.
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Assessing the information content of narrative disclosures in explaining bankruptcy", "authors": [ { "first": "B Mack", "middle": [], "last": "Tennyson", "suffix": "" }, { "first": "Robert", "middle": [ "W" ], "last": "Ingram", "suffix": "" }, { "first": "Michael", "middle": [ "T" ], "last": "Dugan", "suffix": "" } ], "year": 1990, "venue": "Journal of Business Finance & Accounting", "volume": "17", "issue": "3", "pages": "391--410", "other_ids": {}, "num": null, "urls": [], "raw_text": "B Mack Tennyson, Robert W Ingram, and Michael T Dugan. 1990. Assessing the information content of narrative disclosures in explaining bankruptcy. Journal of Business Finance & Accounting, 17(3):391-410.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "More than words: Quantifying language to measure firms' fundamentals", "authors": [ { "first": "Paul", "middle": [ "C" ], "last": "Tetlock", "suffix": "" }, { "first": "Maytal", "middle": [], "last": "Saar-Tsechansky", "suffix": "" }, { "first": "Sofus", "middle": [], "last": "Macskassy", "suffix": "" } ], "year": 2008, "venue": "The Journal of Finance", "volume": "63", "issue": "3", "pages": "1437--1467", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul C Tetlock, Maytal Saar-Tsechansky, and Sofus Macskassy. 2008. More than words: Quantifying language to measure firms' fundamentals. The Journal of Finance, 63(3):1437-1467.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Variable selection and corporate bankruptcy forecasts", "authors": [ { "first": "Shaonan", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2015, "venue": "Journal of Banking & Finance", "volume": "52", "issue": "", "pages": "89--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shaonan Tian, Yan Yu, and Hui Guo. 2015. Variable selection and corporate bankruptcy forecasts. Journal of Banking & Finance, 52:89-100.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Discovering finance keywords via continuous-space language models", "authors": [ { "first": "Ming-Feng", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Chuan-Ju", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Po-Chuan", "middle": [], "last": "Chien", "suffix": "" } ], "year": 2016, "venue": "ACM Transactions on Management Information Systems (TMIS)", "volume": "7", "issue": "3", "pages": "1--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ming-Feng Tsai, Chuan-Ju Wang, and Po-Chuan Chien. 2016. Discovering finance keywords via continuous-space language models.
ACM Transactions on Management Information Systems (TMIS), 7(3):1-17.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.07461" ] }, "num": null, "urls": [], "raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Bankruptcy prediction using neural networks", "authors": [ { "first": "Rick", "middle": [ "L" ], "last": "Wilson", "suffix": "" }, { "first": "Ramesh", "middle": [], "last": "Sharda", "suffix": "" } ], "year": 1994, "venue": "Decision Support Systems", "volume": "11", "issue": "5", "pages": "545--557", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rick L Wilson and Ramesh Sharda. 1994. Bankruptcy prediction using neural networks. Decision Support Systems, 11(5):545-557.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "A real-valued genetic algorithm to optimize the parameters of support vector machine for predicting bankruptcy", "authors": [ { "first": "Chih-Hung", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Gwo-Hshiung", "middle": [], "last": "Tzeng", "suffix": "" }, { "first": "Yeong-Jia", "middle": [], "last": "Goo", "suffix": "" }, { "first": "Wen-Chang", "middle": [], "last": "Fang", "suffix": "" } ], "year": 2007, "venue": "Expert Systems with Applications", "volume": "32", "issue": "2", "pages": "397--408", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chih-Hung Wu, Gwo-Hshiung Tzeng, Yeong-Jia Goo, and Wen-Chang Fang. 2007. A real-valued genetic algorithm to optimize the parameters of support vector machine for predicting bankruptcy.
Expert Systems with Applications, 32(2):397-408.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Self-Adapter at SemEval-2021 Task 10: Entropy-based pseudo-labeler for source-free domain adaptation", "authors": [ { "first": "Sangwon", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Yanghoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Kyomin", "middle": [], "last": "Jung", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 15th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sangwon Yoon, Yanghoon Kim, and Kyomin Jung. 2021. Self-Adapter at SemEval-2021 Task 10: Entropy-based pseudo-labeler for source-free domain adaptation. Proceedings of the 15th International Workshop on Semantic Evaluation.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "A C-LSTM neural network for text classification", "authors": [ { "first": "Chunting", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Chonglin", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "C.M. Francis", "middle": [], "last": "Lau", "suffix": "" } ], "year": 2015, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chunting Zhou, Chonglin Sun, Zhiyuan Liu, and C.M. Francis Lau. 2015. A C-LSTM neural network for text classification. CoRR.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Long-term prediction model of rockburst in underground openings using heuristic algorithms and support vector machines", "authors": [ { "first": "Jian", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xibing", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiuzhi", "middle": [], "last": "Shi", "suffix": "" } ], "year": 2012, "venue": "Safety Science", "volume": "50", "issue": "4", "pages": "629--644", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Zhou, Xibing Li, and Xiuzhi Shi. 2012. Long-term prediction model of rockburst in underground openings using heuristic algorithms and support vector machines. Safety Science, 50(4):629-644.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Unsupervised domain adaptation for semantic segmentation via class-balanced self-training", "authors": [ { "first": "Yang", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Zhiding", "middle": [], "last": "Yu", "suffix": "" }, { "first": "BVK", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Jinsong", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the European Conference on Computer Vision (ECCV)", "volume": "", "issue": "", "pages": "289--305", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Zou, Zhiding Yu, BVK Kumar, and Jinsong Wang. 2018. Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In Proceedings of the European Conference on Computer Vision (ECCV), pages 289-305.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "This figure displays the relative prediction accuracy of the models. Refer to Section 3 for detailed definitions of FIN, DICT, W2V, BERT, and DAPT. A1 measures the accuracy of predicting non-bankrupt firm-years and A2 measures the accuracy of predicting bankrupt firm-years.
We display the prediction performance measured with the time-discrete logistic hazard model, kNN-5, and linear SVM.", "uris": null, "type_str": "figure", "num": null }, "TABREF1": { "type_str": "table", "text": "This table reports univariate analysis results.", "content": "", "html": null, "num": null }, "TABREF3": { "type_str": "table", "text": "This table reports the relative prediction accuracy of the models.", "content": "
", "html": null, "num": null } } } }