{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:45:53.859603Z" }, "title": "Research Replication Prediction Using Weakly Supervised Learning", "authors": [ { "first": "Tianyi", "middle": [], "last": "Luo", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Santa Cruz", "region": "CA" } }, "email": "" }, { "first": "Xingyu", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Santa Cruz", "region": "CA" } }, "email": "" }, { "first": "Hainan", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Santa Cruz", "region": "CA" } }, "email": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Santa Cruz", "region": "CA" } }, "email": "yangliu@ucsc.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Knowing whether a published research result can be replicated is important. Carrying out direct replication of published research incurs a high cost. There have been efforts to use machine learning aided methods to predict scientific claims' replicability. However, existing machine learning aided approaches use only hand-extracted statistical features such as p-value, sample size, etc., without utilizing research papers' text information, and train on only a very small amount of annotated data without making use of the large number of unlabeled articles. Therefore, it is desirable to develop effective machine learning aided automatic methods that can automatically extract text information as features so that we can benefit from Natural Language Processing techniques. 
Besides, we aim for an approach that benefits from both the labeled data and the large number of unlabeled data. In this paper, we propose two weakly supervised learning approaches that use automatically extracted text information of research papers to improve the prediction accuracy of research replication using both labeled and unlabeled datasets. Our experiments over real-world datasets show that our approaches obtain much better prediction performance than supervised models utilizing only statistical features and a small labeled dataset. Further, we are able to achieve an accuracy of 75.76% for predicting the replicability of research.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Knowing whether a published research result can be replicated is important. Carrying out direct replication of published research incurs a high cost. There have been efforts to use machine learning aided methods to predict scientific claims' replicability. However, existing machine learning aided approaches use only hand-extracted statistical features such as p-value, sample size, etc., without utilizing research papers' text information, and train on only a very small amount of annotated data without making use of the large number of unlabeled articles. Therefore, it is desirable to develop effective machine learning aided automatic methods that can automatically extract text information as features so that we can benefit from Natural Language Processing techniques. Besides, we aim for an approach that benefits from both the labeled data and the large number of unlabeled data. In this paper, we propose two weakly supervised learning approaches that use automatically extracted text information of research papers to improve the prediction accuracy of research replication using both labeled and unlabeled datasets. 
Our experiments over real-world datasets show that our approaches obtain much better prediction performance than supervised models utilizing only statistical features and a small labeled dataset. Further, we are able to achieve an accuracy of 75.76% for predicting the replicability of research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Non-reproducible scientific results mislead the progress of science and undermine the trustworthiness of the research community. In recent years, we have seen the emergence of systematic large-scale replication projects, motivated by concerns about research credibility in the social and behavioral sciences (Camerer et al., 2016, 2018; Ebersole et al., 2016; Klein et al., 2014b, 2018; Collaboration et al., 2015) . Researchers conducted preregistered replications of hundreds of classic and contemporary published findings in the social and behavioral sciences. Unfortunately, the reported replication rates only range from 39% to 62%. 
Therefore, it is important to develop a confidence scoring system for the following question:", "cite_spans": [ { "start": 313, "end": 334, "text": "(Camerer et al., 2016", "ref_id": "BIBREF4" }, { "start": 335, "end": 358, "text": "(Camerer et al., , 2018", "ref_id": "BIBREF5" }, { "start": 359, "end": 381, "text": "Ebersole et al., 2016;", "ref_id": "BIBREF12" }, { "start": 382, "end": 401, "text": "Klein et al., 2014b", "ref_id": null }, { "start": 402, "end": 422, "text": "Klein et al., , 2018", "ref_id": "BIBREF20" }, { "start": 423, "end": 450, "text": "Collaboration et al., 2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To what extent can a research result be reproduced?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The answer to the above question will help policymakers as well as the general public better understand and digest a published claim. As a response, for example, the Defense Advanced Research Projects Agency (DARPA) has announced a systematic confidence checking of published claims (Russell, 2019) . 
Alongside the above encouraging movement, the downside is that the average replication expense of each research project (which often consists of a number of research studies) can go up to $500,000 (Freedman et al., 2015) 1 , making it hardly affordable to replicate each research finding given the exponentially increasing number of publications.", "cite_spans": [ { "start": 297, "end": 312, "text": "(Russell, 2019)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, efforts have been noted to use machine learning as a much cheaper and more efficient alternative to provide an informative replication prediction (Yang, 2018; Altmejd et al., 2019) . It has been reported that with simple machine learning models, a prediction accuracy of 71% can be achieved. Although we should not trust or rely on a machine-made prediction entirely, such automatic predictions offer cheap, scalable, and useful information for performing targeted spot-checking and for raising a red flag towards a particular scientific claim. 1 \"Irreproducibility also has downstream impacts in the drug development pipeline. Academic research studies with potential clinical applications are typically replicated within the pharmaceutical industry before clinical studies are begun, with each study replication requiring between 3 and 24 months and between US$500,000 and US$2,000,000 investment\"", "cite_spans": [ { "start": 156, "end": 167, "text": "Yang, 2018;", "ref_id": "BIBREF37" }, { "start": 168, "end": 189, "text": "Altmejd et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Nonetheless, existing machine learning works on replication prediction face a couple of outstanding challenges:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Substantial human effort is required to extract features from the published articles, such as p-values of the claims, effect size, author information, etc., to train a supervised machine learning model;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 The small amount of expensive annotated training data will limit the use of more sophisticated but more accurate learning techniques (e.g., deep-neural-network-based natural language processing tools).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We aim for a method that is fully automatic in feature generation, and that can leverage the large corpus of unlabeled (unchecked) articles for boosting the performance in predicting replication.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To tackle the first challenge, we will resort to natural language processing (NLP) tools to process the research articles to obtain meaningful text features. 
The text information of research papers is an important and intuitive resource for training machine learning models. The rich amount of structured text information is promising for helping improve the predictive performance of replication. Further, a good understanding of the text information from different components of an article (e.g., abstract, introduction, methods, experimental results, etc.) will also be helpful for highlighting suspicious sections of the articles for a more targeted check.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, the training of state-of-the-art NLP models aligns with our second challenge in that it often relies on a massive volume of annotated training data. Due to the severely limited ground-truth annotation we have, we desire a method that leverages large amounts of unlabeled research articles. These unlabeled examples, although possibly noisy, can provide informative features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To make the most use of the unlabeled data, we explore the possibility of using a weakly supervised approach to perform replication prediction. The particular type of weakly supervised learning method that we will focus on utilizes techniques from the literature on learning from noisy labels (Liu et al., 2012; Natarajan et al., 2013; Scott, 2015; Van Rooyen et al., 2015; Liu and Guo, 2020) . Our high-level idea is to bootstrap the small set of labeled data to train a set of weak predictors which will help us generate \"artificial\" and noisy labels for the unlabeled articles. 
Then we will apply tools from learning with noisy labels to improve the training with these artificially supervised examples.", "cite_spans": [ { "start": 293, "end": 311, "text": "(Liu et al., 2012;", "ref_id": "BIBREF22" }, { "start": 312, "end": 335, "text": "Natarajan et al., 2013;", "ref_id": "BIBREF25" }, { "start": 336, "end": 348, "text": "Scott, 2015;", "ref_id": "BIBREF30" }, { "start": 349, "end": 373, "text": "Van Rooyen et al., 2015;", "ref_id": "BIBREF35" }, { "start": 374, "end": 392, "text": "Liu and Guo, 2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We focus on two approaches to address the above problem of learning with artificial labels. The first approach uses efficient variational inference methods (Liu et al., 2012) to estimate the error rates of the noisy labels. This knowledge of the error rates allows us to perform loss correction (Natarajan et al., 2013) to improve the performance with the help of an unlabeled dataset. The second approach is inspired by recent work (Liu and Guo, 2020) that proposed a family of peer loss functions which can perform learning with noisy labels without knowing the noise rates and without conducting an intermediate error-rate estimation step.", "cite_spans": [ { "start": 156, "end": 174, "text": "(Liu et al., 2012)", "ref_id": "BIBREF22" }, { "start": 296, "end": 320, "text": "(Natarajan et al., 2013)", "ref_id": "BIBREF25" }, { "start": 436, "end": 455, "text": "(Liu and Guo, 2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We utilized both labeled and unlabeled datasets to carry out the study of replication prediction. The labeled dataset, containing 399 research articles, is obtained by summarizing eight research replication projects (details will be given later). 
As for the unlabeled dataset, a Python crawler is implemented to obtain the pdf files of 2,170 research papers from the websites of the corresponding journals. We preprocess the files to extract text information. Then BERT (Devlin et al., 2018) is used for tokenization and for obtaining word embeddings to serve as the input features for training.", "cite_spans": [ { "start": 467, "end": 488, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The experimental results demonstrate that i) using text information as features yields better performance than utilizing only pre- and hand-extracted statistical features, and the combination of models trained separately on text features and statistical features obtains better performance than either model alone; and ii) our weakly supervised methods that take advantage of unlabeled data can significantly improve the prediction performance. The best of our proposed methods achieves a prediction accuracy of 75.76%, as well as a 72.50% precision, an 88.24% recall, and a 78.95% F1 score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We summarize our contributions as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) We propose two weakly supervised learning approaches based on the text information of research papers to improve the prediction accuracy of research replication using both labeled and unlabeled datasets. (2) We present experimental results to validate the usefulness of our proposed weakly supervised learning models. (3) We contribute to the community by publishing our code and data. 
Please refer to https://github.com/pkuluotianyi/PeerRRP for the most up-to-date code and datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The replication crisis has spurred systematic large-scale direct replication projects in the social and behavioral sciences (Camerer et al., 2016, 2018; Ebersole et al., 2016; Klein et al., 2014b, 2018; Collaboration et al., 2015) . Data is collected by individual volunteers, volunteer teams, or Amazon Mechanical Turk (AMT). However, direct research replication is expensive and time-consuming (Freedman et al., 2015) . Machine learning serves as a much more efficient method to conduct replication prediction. Altmejd et al. (2019) applied ML methods to the data from four large-scale replication projects in experimental psychology and economics and studied which variables drive predictable replication. However, they used only statistical features such as p-value, sample size, etc., and trained only on a small labeled dataset.", "cite_spans": [ { "start": 119, "end": 140, "text": "(Camerer et al., 2016", "ref_id": "BIBREF4" }, { "start": 141, "end": 164, "text": "(Camerer et al., , 2018", "ref_id": "BIBREF5" }, { "start": 165, "end": 187, "text": "Ebersole et al., 2016;", "ref_id": "BIBREF12" }, { "start": 188, "end": 207, "text": "Klein et al., 2014b", "ref_id": null }, { "start": 208, "end": 228, "text": "Klein et al., , 2018", "ref_id": "BIBREF20" }, { "start": 229, "end": 256, "text": "Collaboration et al., 2015)", "ref_id": null }, { "start": 422, "end": 445, "text": "(Freedman et al., 2015)", "ref_id": "BIBREF13" }, { "start": 539, "end": 560, "text": "Altmejd et al. (2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We hold the hypothesis that text features contain rich information that can potentially improve the performance of replication prediction. 
In NLP, many methods have been proposed for text processing to make use of text features (Jurafsky and Martin, 2014; Biemann and Mehler, 2014; Boro\u015f et al., 2018; Devlin et al., 2018) .", "cite_spans": [ { "start": 229, "end": 256, "text": "(Jurafsky and Martin, 2014;", "ref_id": "BIBREF17" }, { "start": 257, "end": 282, "text": "Biemann and Mehler, 2014;", "ref_id": "BIBREF1" }, { "start": 283, "end": 302, "text": "Boro\u015f et al., 2018;", "ref_id": "BIBREF2" }, { "start": 303, "end": 323, "text": "Devlin et al., 2018)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Weakly supervised learning approaches have been proposed to utilize both labeled and unlabeled data (Zhou, 2018; Oliver et al., 2018; Miyato et al., 2018) . Our weakly supervised learning approaches tie closely to learning with inaccurate supervision (Cesa-Bianchi et al., 2011; Bylander, 1994; Scott et al., 2013; Scott, 2015; Van Rooyen et al., 2015) . Particularly relevant to us, a surrogate loss function is proposed in (Natarajan et al., 2013) to achieve an unbiased estimation of the true training loss using only noisy labels. 
Liu and Guo (2020) introduced a new family of loss functions, peer loss functions, into empirical risk minimization (ERM) for a broad class of learning-with-noisy-labels problems, without requiring estimation of the error rates of the noisy labels.", "cite_spans": [ { "start": 100, "end": 112, "text": "(Zhou, 2018;", "ref_id": "BIBREF38" }, { "start": 113, "end": 133, "text": "Oliver et al., 2018;", "ref_id": "BIBREF26" }, { "start": 134, "end": 154, "text": "Miyato et al., 2018)", "ref_id": "BIBREF24" }, { "start": 253, "end": 280, "text": "(Cesa-Bianchi et al., 2011;", "ref_id": "BIBREF6" }, { "start": 281, "end": 296, "text": "Bylander, 1994;", "ref_id": "BIBREF3" }, { "start": 297, "end": 316, "text": "Scott et al., 2013;", "ref_id": "BIBREF31" }, { "start": 317, "end": 329, "text": "Scott, 2015;", "ref_id": "BIBREF30" }, { "start": 330, "end": 354, "text": "Van Rooyen et al., 2015)", "ref_id": "BIBREF35" }, { "start": 427, "end": 451, "text": "(Natarajan et al., 2013)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Annotated Data In our study, we obtained 399 annotated articles containing labels indicating whether the involved research claim can be reproduced or not. A claim that can be replicated is labeled '1'; otherwise, it is labeled '0'. There exist different definitions and criteria for a claim to be replicable. For the collected dataset, a claim extracted from an article is replicable if an independent effort can produce a statistically significant effect in the original direction as originally claimed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3" }, { "text": "The question of how we treat an article/claim as replicable is an active research question itself (Simonsohn, 2015) . 
To include as many annotated data points as possible, we adopt the most basic binary model that defines replication success as a \"statistically significant (p-value <= 0.05) effect in the same direction as in the original study\" (Altmejd et al., 2019). The annotated dataset comes from eight research replication projects, including the Registered Replication Report. Among the 399 annotated samples, 201 are labeled '1' (replicable) and the remaining 198 are labeled '0' (non-replicable). From the distribution of class labels, we observe that this annotated dataset is balanced.", "cite_spans": [ { "start": 98, "end": 115, "text": "(Simonsohn, 2015)", "ref_id": "BIBREF34" }, { "start": 348, "end": 370, "text": "(Altmejd et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3" }, { "text": "Unsupervised Data In addition, we deployed a crawler to obtain an unlabeled dataset to pair with the above annotated one. Because the published research papers in the labeled dataset are mainly from American Economic Review and Psychological Science, and all the other papers in the annotated dataset are economics- and psychology-related, we used the crawler to get all 2,170 published research papers from the websites of American Economic Review (Jan 2011 - Dec 2014) and Psychological Science (Jan 2006 - Dec 2012) to form our unlabeled dataset. The number of papers crawled from the American Economic Review website is 981, and there are 1,189 papers from the Psychological Science website. The distributions of the number of papers by year for American Economic Review and Psychological Science are shown in Table 1 and Table 2 , respectively (Table 2 : distribution of published psychology-related papers by year in the unlabeled dataset; 2006: 185, 2007: 200, 2008: 196, 2009: 238, 2010: 293, 2011: 243, 2012: 224; total: 1,189). Our setting is severely imbalanced: we have a very small amount of labeled data and a much larger amount of unlabeled data. We list the average length (number of words contained), minimum length, and maximum length of the different datasets in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 1018, "end": 1038, "text": "Table 1 and Table 2", "ref_id": "TABREF0" }, { "start": 1299, "end": 1306, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Datasets", "sec_num": "3" }, { "text": "We introduce the pipeline of our weakly supervised research replication prediction framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weakly Supervised Research Replication Prediction", "sec_num": "4" }, { "text": "Feature Extraction Our method relies on automatically extracted text features. Specifically, PDFMiner (Shinyama, 2014) is used to extract the text information in the raw pdf files of the articles. Tf-idf features are used in the bag-of-words models. BERT (Devlin et al., 2018) is used for tokenization and for obtaining word embeddings as the input features of the sequential models. 
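As a concrete illustration of the tf-idf weighting behind the bag-of-words features just mentioned, here is a minimal standard-library sketch (the function name and toy documents are ours, not from the released code; a library implementation such as scikit-learn's TfidfVectorizer would typically be used in practice):

```python
import math
from collections import Counter

def tfidf_features(docs):
    # docs: list of tokenized documents; returns one {term: tf-idf weight} dict per doc
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency of each term
    feats = []
    for doc in docs:
        tf = Counter(doc)
        feats.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return feats

# toy corpus: terms occurring in fewer documents receive higher weights
docs = [['replication', 'failed', 'study'],
        ['replication', 'succeeded'],
        ['effect', 'size', 'study']]
feats = tfidf_features(docs)
```

In each document, a term unique to that document (e.g., 'succeeded') outweighs a term shared with other documents (e.g., 'replication'), which is the property the bag-of-words models exploit.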
More specifically, we use the \"bert-base-uncased\" pretrained model from Transformers (Wolf et al., 2019) , which has 12 layers, 768 hidden units, 12 attention heads, and 110M parameters, and is trained on lower-cased English text.", "cite_spans": [ { "start": 102, "end": 118, "text": "(Shinyama, 2014)", "ref_id": null }, { "start": 251, "end": 272, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF10" }, { "start": 457, "end": 476, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Weakly Supervised Research Replication Prediction", "sec_num": "4" }, { "text": "Artificial and Noisy Label Generation Our problem is formulated as a binary classification problem: predicting whether a research paper can be replicated or not. We utilize five basic classifiers trained on the labeled dataset to obtain artificial labels for the unlabeled articles. They are five commonly used binary classification algorithms: Logistic Regression (LR) (Peng et al., 2002) , Random Forest (RF) (Ho, 1995), Support Vector Machine (SVM) (Chang and Lin, 2011) , Multilayer Perceptron (MLP) (Goodfellow et al., 2016) , and Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) .", "cite_spans": [ { "start": 370, "end": 389, "text": "(Peng et al., 2002)", "ref_id": "BIBREF28" }, { "start": 452, "end": 473, "text": "(Chang and Lin, 2011)", "ref_id": "BIBREF7" }, { "start": 504, "end": 529, "text": "(Goodfellow et al., 2016)", "ref_id": "BIBREF14" }, { "start": 566, "end": 600, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Weakly Supervised Research Replication Prediction", "sec_num": "4" }, { "text": "Suppose that we have an annotated training dataset L :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weakly Supervised Research Replication Prediction", "sec_num": "4" }, { "text": "= {(x_i, y_i)}_{i=1}^{L}, an unlabeled dataset U := {x_i}_{i=1}^{U}", "cite_spans": [], "ref_spans": [],
"eq_spans": [], "section": "Weakly Supervised Research Replication Prediction", "sec_num": "4" }, { "text": ", and a test dataset", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weakly Supervised Research Replication Prediction", "sec_num": "4" }, { "text": "T := {(x_i, y_i)}_{i=1}^{T}, where x_i \u2208 X \u2286 R^d is a d-dimensional vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weakly Supervised Research Replication Prediction", "sec_num": "4" }, { "text": "We have K baseline classifiers F := {f_1, f_2, ..., f_K : X \u2192 {0, 1}} that map each feature vector to a binary classification outcome. We let N = L + U, i.e., the total number of training samples is N.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weakly Supervised Research Replication Prediction", "sec_num": "4" }, { "text": "Given the whole training data D = L \u222a U and the multiple classifiers {f_j}_{j=1}^{K}, we first train the five basic classifiers and get their predictions on", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weakly Supervised Research Replication Prediction", "sec_num": "4" }, { "text": "D := {(x_i, \u0233_i^j)}_{i=1}^{N}, j = 1, ..., K.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weakly Supervised Research Replication Prediction", "sec_num": "4" }, { "text": "Then we can use aggregation rules, e.g., the majority voting rule, to obtain the noisy labels for the whole training data", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weakly Supervised Research Replication Prediction", "sec_num": "4" }, { "text": "Y^{noise} := {\u0233_i^{noise}}_{i=1}^{N}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weakly Supervised Research Replication Prediction", "sec_num": "4" }, { "text": "Training with Artificially Generated Noisy Labels We can then utilize two different ways to conduct learning with the noisy labels Y^{noise}. 
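The majority-vote aggregation described above can be sketched as follows (a minimal illustration; the function name and the tie-breaking rule are our own assumptions, not from the released code):

```python
def majority_vote_labels(predictions):
    # predictions: list of K per-classifier label lists, each of length N
    k = len(predictions)
    n = len(predictions[0])
    noisy = []
    for i in range(n):
        votes = sum(p[i] for p in predictions)    # number of classifiers voting '1'
        noisy.append(1 if 2 * votes >= k else 0)  # ties broken toward '1'
    return noisy

# K = 5 basic classifiers, N = 4 samples (labeled + unlabeled)
preds = [[1, 0, 1, 0],
         [1, 0, 0, 0],
         [1, 1, 1, 0],
         [0, 0, 1, 1],
         [1, 0, 1, 0]]
noisy_labels = majority_vote_labels(preds)  # -> [1, 0, 1, 0]
```

The resulting labels are treated as noisy supervision for all N = L + U samples in the two learning approaches that follow.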
Details will be given in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weakly Supervised Research Replication Prediction", "sec_num": "4" }, { "text": "In this section, we present two weakly supervised methods. The first approach is based on the error-correction proxy loss function (Natarajan et al., 2013) and variational inference approaches (mean field) (Liu et al., 2012) to estimate the error rates. The two techniques jointly provide us with a bias-corrected training process that improves the model's robustness against label noise. We name this solution Variational Inference aided Weakly Supervised Learning.", "cite_spans": [ { "start": 130, "end": 154, "text": "(Natarajan et al., 2013)", "ref_id": "BIBREF25" }, { "start": 209, "end": 227, "text": "(Liu et al., 2012)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "5" }, { "text": "The second approach is built on the peer loss approach (Liu and Guo, 2020) . This approach is particularly suitable for our application, where the label noise rates are unknown. In this paper, we apply the peer loss function in the weakly supervised learning scenario for the research replication prediction problem. 
We name this solution as Peer Loss aided Weakly Supervised Learning.", "cite_spans": [ { "start": 55, "end": 74, "text": "(Liu and Guo, 2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "5" }, { "text": "Supervised Learning ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variational Inference aided Weakly", "sec_num": "5.1" }, { "text": "D = {(x 1 , y 1 ), ..., (x N , y N )}: training data L = {(x 1 , y 1 ), ..., (x L , y L )}: labeled data U = {x 1 , ..., x U }: unlabeled data T = {(x 1 , y 1 ), ..., (x T , y T )}: test data F = {f 1 , .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variational Inference aided Weakly", "sec_num": "5.1" }, { "text": ".., f K }: classifiers Ensure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variational Inference aided Weakly", "sec_num": "5.1" }, { "text": "1: Train K classifiers (F) on the labeled training data L. 2: for j = 1 to K do 3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variational Inference aided Weakly", "sec_num": "5.1" }, { "text": "for i = 1 to N do 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variational Inference aided Weakly", "sec_num": "5.1" }, { "text": "Compute\u0233 j i using j-th basic classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variational Inference aided Weakly", "sec_num": "5.1" }, { "text": "end for 6: end for 7: Aggregate above labels into {\u0233 noise i } N i=1 and estimate the error rates according to mean field method described in (Liu et al., 2012) . 8: Train the LSTM model using the proxy loss function mentioned in Section 5.1 with the estimated error rates in line#7 as the inputs. 9: for t = 1 to T do 10:", "cite_spans": [ { "start": 142, "end": 160, "text": "(Liu et al., 2012)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "Output prediction. 
11: end for", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "We start by using the five basic classifiers (LR, RF, SVM, MLP, and LSTM) trained on the small annotated dataset to generate the noisy labels for the whole training data. These noisy labels will then be aggregated using a variational procedure (Liu et al., 2012 ), which we reproduce below:", "cite_spans": [ { "start": 267, "end": 284, "text": "(Liu et al., 2012", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "Denote by \u00b5_i the probability of the different class labels for the i-th training sample and by \u03c9_j the weight or ability of the j-th classifier; \u03b1 and \u03b2 are hyperparameters,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "\u03b4_{ij} = 1[\u0233_i^j = \u0233_i^{noise}]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": ", and g is a function to calculate the error rates using {\u0233_i^{em}}_{i=1}^{N} and \u03c9_j. \u00b5_i and \u03c9_j are first estimated using the Expectation-Maximization (EM) algorithm. We then obtain EM predictions \u0233_i^{em} based on the above estimated \u00b5_i and \u03c9_j; \u0233_i^{em} at the final step will serve as our noisy label \u0233_i^{noise}. The final step is to estimate the error rates by using \u0233_i^{em} as a proxy for the ground-truth label. The procedure is summarized in Algorithm 2. 
A more detailed explanation is given in (Liu et al., 2012) .", "cite_spans": [ { "start": 495, "end": 513, "text": "(Liu et al., 2012)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "Algorithm 2 Aggregation and Error Rates 1: Update \u00b5_i:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "\u00b5_i(z_i) = \u220f_{j \u2208 K} \u03c9_j^{\u03b4_{ij}} (1 \u2212 \u03c9_j)^{1 \u2212 \u03b4_{ij}} 2: Update \u03c9_j: \u03c9_j = (\u2211_{i \u2208 N} \u00b5_i(\u0233_i^j) + \u03b1) / (N + \u03b1 + \u03b2) 3: EM Predictions: \u0233_i^{em} = argmax_z \u00b5_i(z_i) 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "Error rates:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "\u03c3_0 = |{i : \u0233_i^{em} = 0, \u0233_i^{noise} = 1}| / |{i : \u0233_i^{em} = 0}|, \u03c3_1 = |{i : \u0233_i^{em} = 1, \u0233_i^{noise} = 0}| / |{i : \u0233_i^{em} = 1}|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "Finally, we use an LSTM neural network model with the proxy loss function of (Natarajan et al., 2013) to conduct the training. The definition of the proxy loss function is as follows:", "cite_spans": [ { "start": 82, "end": 106, "text": "(Natarajan et al., 2013)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "\u2211_{i=1}^{N} [(1 \u2212 \u03c3_{1 \u2212 y_i^p}) \u2113(y_i^p, \u0233_i^{noise}) \u2212 \u03c3_{y_i^p} \u2113(1 \u2212 y_i^p, \u0233_i^{noise})] / (1 \u2212 \u03c3_1 \u2212 \u03c3_0),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "where in the above \u2113(y_i^p, \u0233_i^{noise}) is a standard cross-entropy loss function, y_i^p is the i-th sample's real-valued prediction from the final LSTM model, and \u0233_i^{noise} is the corresponding noisy label. 
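A minimal sketch of this kind of loss correction for binary cross entropy (function names are ours; the indexing follows the standard formulation of Natarajan et al. (2013), with sigma0 and sigma1 the estimated 0-to-1 and 1-to-0 flip rates):

```python
import math

def ce(p, y):
    # binary cross entropy for predicted probability p of class '1' and label y
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def corrected_ce(p, y_noise, sigma0, sigma1):
    # loss-corrected cross entropy in the style of Natarajan et al. (2013);
    # sigma0, sigma1: estimated 0->1 and 1->0 label flip rates (sigma0 + sigma1 < 1)
    if y_noise == 1:
        num = (1 - sigma0) * ce(p, 1) - sigma1 * ce(p, 0)
    else:
        num = (1 - sigma1) * ce(p, 0) - sigma0 * ce(p, 1)
    return num / (1 - sigma0 - sigma1)
```

In expectation over the label noise, this surrogate equals the loss on clean labels, which is what makes training on the artificially labeled corpus sound; with sigma0 = sigma1 = 0 it reduces to plain cross entropy.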
The procedure is summarized in Algorithm 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "The variational inference (VI) aided weakly supervised learning method requires estimating the error rates. This additional estimation step may introduce estimation errors that affect the final model's performance. Liu and Guo (2020) provided an alternative, peer loss, for dealing with noisy labels that does not require an additional estimation step for the noise rates. We therefore propose a peer loss (PL) aided weakly supervised learning method. As in the VI approach, we first train the five basic classifiers on the small annotated dataset to provide the noisy supervision for the whole training data, Y^{noise} := {\u0233^{noise}_i}^N_{i=1}, as mentioned in Section 4, via a simple majority vote.", "cite_spans": [ { "start": 218, "end": 236, "text": "Liu and Guo (2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Peer Loss aided Weakly Supervised Learning", "sec_num": "5.2" }, { "text": "For each training sample (x_i, \u0233^{noise}_i), we randomly draw another two samples, the Peer Samples:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Peer Loss aided Weakly Supervised Learning", "sec_num": "5.2" }, { "text": "(x_{i_{p1}}, \u0233^{noise}_{i_{p1}}), (x_{i_{p2}}, \u0233^{noise}_{i_{p2}}) such that i_{p1} \u2260 i_{p2} and i_{p1}, i_{p2} \u2260 i. (x_{i_{p1}}, \u0233^{noise}_{i_{p1}}), (x_{i_{p2}}, \u0233^{noise}_{i_{p2}}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Peer Loss aided Weakly Supervised Learning", "sec_num": "5.2" }, { "text": ") are the i-th data point's peer samples. Then we calculate the peer loss function as shown in (Liu and Guo, 2020). 
The total peer loss L_{peer}(Y^p, Y^{noise}) is defined as follows:", "cite_spans": [ { "start": 85, "end": 104, "text": "(Liu and Guo, 2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Peer Loss aided Weakly Supervised Learning", "sec_num": "5.2" }, { "text": "\u2211^N_{i=1} [ \u2113(y^p_i, \u0233^{noise}_i) \u2212 \u03b1 \u00b7 \u2113(y^p_{i_{p1}}, \u0233^{noise}_{i_{p2}}) ],", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Peer Loss aided Weakly Supervised Learning", "sec_num": "5.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Peer Loss aided Weakly Supervised Learning", "sec_num": "5.2" }, { "text": "\u2113(y^p_i, \u0233^{noise}_i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Peer Loss aided Weakly Supervised Learning", "sec_num": "5.2" }, { "text": ") is a standard cross-entropy loss, y^p_i is the real-valued prediction of the final LSTM model for the i-th sample, and \u0233^{noise}_i is the corresponding noisy label. \u03b1 is a hyperparameter that we tune.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Peer Loss aided Weakly Supervised Learning", "sec_num": "5.2" }, { "text": "We then train an LSTM neural network model with the peer loss function defined above. 
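The peer loss computation above can be sketched as follows; peer_loss is our own name, and the peer-index sampling follows the Peer Samples construction in Section 5.2 (a sketch, not the training code used in our experiments):

```python
import numpy as np

def peer_loss(preds, noisy_labels, alpha=0.5, rng=None):
    """Total peer loss over a batch, following Liu and Guo (2020).

    preds: (N,) predicted probabilities of class 1; noisy_labels: (N,) 0/1.
    For each i we draw two distinct peer indices i_p1, i_p2 (both != i) and
    subtract alpha * CE(pred[i_p1], label[i_p2]) from the usual CE term.
    """
    rng = np.random.default_rng(rng)
    preds = np.clip(np.asarray(preds, float), 1e-7, 1 - 1e-7)
    y = np.asarray(noisy_labels)
    N = len(y)

    def ce(p, t):  # elementwise binary cross entropy
        return -(t * np.log(p) + (1 - t) * np.log(1 - p))

    total = 0.0
    for i in range(N):
        # peer samples: i_p1 != i_p2, both different from i
        i_p1, i_p2 = rng.choice([j for j in range(N) if j != i], size=2, replace=False)
        total += ce(preds[i], y[i]) - alpha * ce(preds[i_p1], y[i_p2])
    return total
```

With alpha = 0 the peer term vanishes and the loss reduces to the standard cross entropy; the subtracted peer term is what penalizes blindly fitting the noisy labels.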
The procedure is further illustrated in Algorithm 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Peer Loss aided Weakly Supervised Learning", "sec_num": "5.2" }, { "text": "Input:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Peer Loss aided Weakly Supervised Learning Require:", "sec_num": null }, { "text": "D = {(x 1 , y 1 ), ..., (x N , y N )}: training data L = {(x 1 , y 1 ), ..., (x L , y L )}: labeled data U = {x 1 , , ..., x U }: unlabeled data T = {(x 1 , y 1 ), ..., (x T , y T )}: test data F = {f 1 , .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Peer Loss aided Weakly Supervised Learning Require:", "sec_num": null }, { "text": ".., f K }: classifiers Ensure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Peer Loss aided Weakly Supervised Learning Require:", "sec_num": null }, { "text": "1: Train K classifiers (F) on the labeled training data L. 2: for j = 1 to K do 3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Peer Loss aided Weakly Supervised Learning Require:", "sec_num": null }, { "text": "for i = 1 to N do 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Peer Loss aided Weakly Supervised Learning Require:", "sec_num": null }, { "text": "Compute\u0233 j i using j-th basic classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Peer Loss aided Weakly Supervised Learning Require:", "sec_num": null }, { "text": "end for 6: end for 7: Compute {\u0233 noise i } N i=1 using majority rule. 
8:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "for i = 1 to N do 9: Construct {(x_i, \u0233^{noise}_i), (x_{i_{p1}}, \u0233^{noise}_{i_{p2}})}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "10: end for 11: Create noisy training dataset:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "D_{noise} = {(x_i, \u0233^{noise}_i), (x_{i_{p1}}, \u0233^{noise}_{i_{p2}})}^N_{i=1}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": ". 12: Train the LSTM model on D_{noise} using the peer loss function as shown in Section 5.2. 13: for t = 1 to T do 14:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "Output prediction. 15: end for", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5:", "sec_num": null }, { "text": "To complete our analysis, we also apply an off-the-shelf semi-supervised learning technique, DIVIDEMIX (Li et al., 2020) . Semi-supervised learning has a broad literature of proposed methods, and we chose this recent and robust approach. DIVIDEMIX is a semi-supervised method that trains two networks simultaneously; in each iteration, the training dataset is dynamically divided into a labeled dataset and an unlabeled dataset. We adapt the setting of DIVIDEMIX to ours to serve as a baseline comparison. 
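Returning to Algorithm 3, its majority-vote step (line 7) over the K basic classifiers' predictions can be sketched as below; breaking ties toward label 1 is our own assumption, and with K = 5 classifiers no tie can occur:

```python
import numpy as np

def majority_vote(votes):
    """Majority rule over K classifiers' binary votes.

    votes: (N, K) array of 0/1 predictions. Returns (N,) noisy labels;
    a sample gets label 1 when at least half of the classifiers vote 1.
    """
    votes = np.asarray(votes)
    return (votes.sum(axis=1) * 2 >= votes.shape[1]).astype(int)
```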
DIVIDEMIX benefits from the unlabeled data, but it does not use a bias-corrected loss function, which differentiates it from our methodology.", "cite_spans": [ { "start": 101, "end": 118, "text": "(Li et al., 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Other Methods", "sec_num": "5.3" }, { "text": "In this section, we present our experimental results and findings and discuss them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "We have 399 labeled and 2,170 unlabeled samples. A randomly selected set of 300 labeled samples (150 with label 1 and 150 with label 0) together with the 2,170 unlabeled samples is used as the training dataset. We test our proposed framework on the remaining 99 labeled replication projects (51 with label 1, 48 with label 0).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "6.1" }, { "text": "We consider both text and statistics features of research papers. The p-value, effect size, and sample size are used as statistics features. As for the text information, Tf-idf and word embeddings (obtained from BERT) are used as the input features of the bag-of-words and sequential models, respectively. Using BERT gives us context-aware word embedding features that improve the classification accuracy. A published BERT pretrained model ("bert-base-uncased" 2 ) is utilized as the embedding layer of the LSTM model. "Bert-base-uncased" is pretrained on English text with a masked language modeling objective, and its vocabulary size is 30,522. We set the maximum document length to 10,000 in the LSTM model because the average length of the documents in the labeled dataset is about 10,000.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "6.1" }, { "text": "Since the text features and statistics features are not directly compatible with each other, we train models on these two feature sets separately. 
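For the bag-of-words models, Tf-idf features like those described in Section 6.1 can be produced with scikit-learn; this is an illustrative sketch with toy documents, not the exact vectorizer configuration used in our experiments:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy documents standing in for full paper texts (illustrative only).
docs = ["the replication of the experiment succeeded",
        "the reported effect size and p-value were small"]

vectorizer = TfidfVectorizer()        # bag-of-words Tf-idf features for LR/RF/SVM/MLP
X = vectorizer.fit_transform(docs)    # sparse matrix of shape (n_docs, vocab_size)
```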
We also try combining the outputs of these two sets of models to further boost the prediction performance. 3 A summation of their prediction probabilities is used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "6.1" }, { "text": "The results of text only and text + statistics are reported in Table 4 . From this table, we first observe that the ensemble models (combining text and statistics) outperform the ones trained only on text features. This suggests that the statistics features are complementary to the text features. We note that LR, RF, and SVM models (non-deep learning) trained using only statistics features are only able to obtain 54.55%, 50.51%, and 56.57% test accuracy, respectively. Therefore, our experiments confirm that models trained on text features perform better.", "cite_spans": [], "ref_spans": [ { "start": 63, "end": 70, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "We compare eight methods: LR, RF, SVM, MLP, LSTM, DIVIDEMIX (Li et al., 2020), VI (our variational inference based method), and PL (our peer loss based method). The first five models are commonly used binary classification algorithms, and they are trained only on the 300 annotated data instances. VI and PL return the best performance, and the results show that our proposed methods consistently outperform the other models. Of our two proposed approaches, PL obtains the better performance, reaching 75.76% accuracy. 
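The probability-summation ensemble described in Section 6.1 can be sketched as follows (combine_predictions is our own name; in our experiments the statistics-only model is fixed to SVM):

```python
import numpy as np

def combine_predictions(text_probs, stat_probs):
    """Ensemble by summing the two models' class-probability vectors.

    text_probs, stat_probs: (N, 2) arrays of [P(class 0), P(class 1)] from
    the text-based and statistics-based models. Returns combined 0/1 labels.
    """
    return (np.asarray(text_probs) + np.asarray(stat_probs)).argmax(axis=1)
```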
This is evidence that the PL approach handles the noise better; additional errors were likely introduced into VI during the estimation of the error rates.", "cite_spans": [ { "start": 59, "end": 76, "text": "(Li et al., 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "We also trained the LSTM on both the labeled and unlabeled datasets but with artificially provided labels. We observe the same performance as training only on the labeled dataset. This shows that the prediction performance cannot be improved if we do not use a noise-resistant procedure to correct the biases in the artificially provided labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "The experimental results on Precision, Recall, and F1 score for the eight models are also reported in Table 5 . Our weakly supervised methods achieve the best performance consistently across the different measures.", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 105, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "We explore which features are more indicative of an article's reproducibility. We perform with/without experiments that compare performance under different settings to help us understand which features matter most in predicting replication. The papers in our dataset contain different sections including title, authors, abstract, introduction, method, experiment, discussion, conclusion, reference, and appendix. Donors tend to avoid charities that dedicate a high percentage of expenses to administrative and fundraising costs, limiting the ability of nonprofits to be effective. We propose a solution to this problem: Use donations from major philanthropists to cover overhead expenses and offer potential donors an overhead-free donation opportunity. A laboratory experiment testing this solution confirms that donations decrease when overhead increases, but only when donors pay for overhead themselves. In a field experiment with 40,000 potential donors, we compared the overhead-free solution with other common uses of initial donations. Consistent with prior research, informing donors that seed money has already been raised increases donations, as does a $1:$1 matching campaign. Our main result, however, clearly shows that informing potential donors that overhead costs are covered by an initial donation significantly increases the donation rate by 80% (or 94%) and total donations by 75% (or 89%) compared with the seed (or matching) approach. Table 7 : Red color highlights words having positive weights with absolute value larger than 0.1. Blue color highlights words having negative weights with absolute value larger than 0.1. The classification result of Logistic Regression for this paper is Non-replicable (Wrong)", "cite_spans": [], "ref_spans": [ { "start": 1461, "end": 1468, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Ablation Study on Feature Importance for Research Replication", "sec_num": "6.3" }, { "text": "Donors tend to avoid charities that dedicate a high percentage of expenses to administrative and fundraising costs, limiting the ability of nonprofits to be effective. We propose a solution to this problem: Use donations from major philanthropists to cover overhead expenses and offer potential donors an overhead-free donation opportunity. A laboratory experiment testing this solution confirms that donations decrease when overhead increases, but only when donors pay for overhead themselves. In a field experiment with 40,000 potential donors, we compared the overhead-free solution with other common uses of initial donations. Consistent with prior research, informing donors that seed money has already been raised increases donations, as does a $1:$1 matching campaign. Our main result, however, clearly shows that informing potential donors that overhead costs are covered by an initial donation significantly increases the donation rate by 80% (or 94%) and total donations by 75% (or 89%) compared with the seed (or matching) approach. We consider each section as a meta feature. The first set of features is title + authors + abstract + introduction, comprising the summary of the paper. The second set of features is methods + experiments, which describe the details of the methods used in the paper and their effectiveness. The third set of features is discussion + conclusion + reference + appendix, which consists of the general conclusion and supplementary materials of the paper. The experimental results are reported in Table 6 . We make several observations:", "cite_spans": [], "ref_spans": [ { "start": 1569, "end": 1576, "text": "Table 6", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Ablation Study on Feature Importance for Research Replication", "sec_num": "6.3" }, { "text": "\u2022 Training using the entire body of text returns the best performance. This implies that each component of an article is informative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study on Feature Importance for Research Replication", "sec_num": "6.3" }, { "text": "\u2022 Removing the abstract and introduction leads to decreased performance, but the reduction is not significant. Our conjecture is that the first set of features contains the summary of the whole paper but lacks the details of the methods and experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study on Feature Importance for Research Replication", "sec_num": "6.3" }, { "text": "\u2022 Cutting off the ending set of features (discussion+conclusion+reference+appendix) results in almost the same performance as the all-text setting. 
This is primarily because the information in the third set of features is either already covered by the first set of features or merely supplementary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study on Feature Importance for Research Replication", "sec_num": "6.3" }, { "text": "\u2022 Removing method+experiment leads to a significant reduction in testing accuracy. We conjecture this is because the second set of features contains the core details. In summary, we find that the methods and experiments sections are more important than the other sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study on Feature Importance for Research Replication", "sec_num": "6.3" }, { "text": "We show two samples that have the same text but receive different classification results from two different classifiers. The paragraph is selected from the research paper "Avoiding overhead aversion in charity" published in Behavioral Economics. This article has been verified to be replicable. The goal of this case study is to provide an intuitive view of how the classifiers work and of their ability to identify relevant contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study", "sec_num": "6.4" }, { "text": "The classification result of the LR classifier is non-replicable, which is wrong. Since our text features are Tf-idf, there is a weight coefficient for each word in the LR classifier. We highlight the words with larger weights in Table 7 . As for the PL classifier, its classification result is Replicable (Correct). We highlight the words with larger weights in Table 8 . Because PL uses a neural network to train the model, there is a corresponding node in the input layer for each word. Each node has multiple links to the hidden layer, and every link has a weight coefficient. For each node, we calculate the summation of all these weights. 
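The per-word scoring used for Table 8, i.e. summing for each input node the weights of its links into the hidden layer, can be sketched as below. This is a simplification (word_importance is our own name, and the actual model embeds words with BERT before the LSTM), meant only to illustrate the weight summation:

```python
import numpy as np

def word_importance(first_layer_weights, vocab):
    """Score each word by summing its input node's outgoing weights.

    first_layer_weights: (vocab_size, hidden_size) input-to-hidden weight
    matrix; vocab: one word per input node. Returns {word: summed weight}.
    """
    scores = np.asarray(first_layer_weights, dtype=float).sum(axis=1)
    return dict(zip(vocab, scores))
```

Words whose summed weight has a large positive value are highlighted as pushing toward the replicable class, and large negative values toward the non-replicable class.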
We do observe evidence that the PL classifier is able to capture more relevant keywords such as charity, donors, overhead, significantly, etc. This study demonstrates the possibility of using our works to identify the keywords or key paragraphs to spot-check an article.", "cite_spans": [], "ref_spans": [ { "start": 220, "end": 227, "text": "Table 7", "ref_id": null }, { "start": 354, "end": 362, "text": "Table 8", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Case Study", "sec_num": "6.4" }, { "text": "In this paper, we used two fields of corpus (\"economic review\" and \"psychological science\") to train our model together because both of them are social sciences that rely heavily on quantitative methodologies (e.g., survey, experiments) and draw conclusions based on statistics. Thus, they share the same definition of replicability such that whether the same statistical findings (e.g., effect size, p-value) can be reproduced in replications following the same methodological procedure with different samples. The same methodologies are also widely used in empirical sciences (e.g., lab experiments in Biology and Medicine) which demand replicability in the same sense and also follow the same format in reporting their procedures and findings. Thus, our proposed methods should also work in the contexts mentioned above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "The paper studies the possibilities of using weakly supervised learning methods based on text information of research papers to improve the prediction accuracy of research replication using a small amount of labeled data and a large amount of unlabeled data. Our experiments show that our ap-proaches successfully improved prediction performance compared to the supervised models utilizing only statistic features and a small size of labeled dataset. 
Our approach can also be generically extended to other weakly supervised NLP tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Our study has limitations. First of all, our sampling of the unlabeled articles is not ideal. As a next step, we will include a more diverse and larger pool of representative articles in our study. Our method relied on BERT for feature extraction, which remains largely a "black-box" processor. In the future, we plan to explore other advanced NLP techniques, such as Named Entity Recognition and Relation Extraction, to help us identify more explainable features. This information will help facilitate the human evaluation of a research claim's replicability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "https://huggingface.co/bert-base-uncased", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In the combination, the model using only statistics features is fixed to SVM since it has the best performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.replicationmarkets.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank the members of the ReplicationMarkets team 4 for their helpful comments and suggestions. The list of members includes but is not limited to M. Bishop, Y. Chen, M. Gordon, T. Pfeiffer, R. Raab, C. Twardy, and J. Wang. The authors also would like to thank Dr. Bingjie Liu for her valuable feedback on the manuscript. 
We thank anonymous reviewers for valuable suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Predicting the replicability of social science lab experiments", "authors": [ { "first": "Adam", "middle": [], "last": "Altmejd", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Dreber", "suffix": "" }, { "first": "Eskil", "middle": [], "last": "Forsell", "suffix": "" }, { "first": "Juergen", "middle": [], "last": "Huber", "suffix": "" }, { "first": "Taisuke", "middle": [], "last": "Imai", "suffix": "" }, { "first": "Magnus", "middle": [], "last": "Johannesson", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Kirchler", "suffix": "" } ], "year": 2019, "venue": "PloS one", "volume": "", "issue": "12", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Altmejd, Anna Dreber, Eskil Forsell, Juergen Huber, Taisuke Imai, Magnus Johannesson, Michael Kirchler, Gideon Nave, and Colin Camerer. 2019. Predicting the replicability of social science lab ex- periments. PloS one, 14(12).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Text mining: From ontology learning to automated text processing applications", "authors": [ { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Mehler", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Biemann and Alexander Mehler. 2014. Text min- ing: From ontology learning to automated text pro- cessing applications. 
Springer.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Nlp-cube: End-to-end raw text processing with neural networks", "authors": [ { "first": "Tiberiu", "middle": [], "last": "Boro\u015f", "suffix": "" }, { "first": "Stefan", "middle": [ "Daniel" ], "last": "Dumitrescu", "suffix": "" }, { "first": "Ruxandra", "middle": [], "last": "Burtica", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", "volume": "", "issue": "", "pages": "171--179", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tiberiu Boro\u015f, Stefan Daniel Dumitrescu, and Ruxan- dra Burtica. 2018. Nlp-cube: End-to-end raw text processing with neural networks. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Pars- ing from Raw Text to Universal Dependencies, pages 171-179.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning linear threshold functions in the presence of classification noise", "authors": [ { "first": "Tom", "middle": [], "last": "Bylander", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the seventh annual conference on Computational learning theory", "volume": "", "issue": "", "pages": "340--347", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Bylander. 1994. Learning linear threshold func- tions in the presence of classification noise. 
In Pro- ceedings of the seventh annual conference on Com- putational learning theory, pages 340-347.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Evaluating replicability of laboratory experiments in economics", "authors": [ { "first": "F", "middle": [], "last": "Colin", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Camerer", "suffix": "" }, { "first": "Eskil", "middle": [], "last": "Dreber", "suffix": "" }, { "first": "Teck-Hua", "middle": [], "last": "Forsell", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Ho", "suffix": "" }, { "first": "Magnus", "middle": [], "last": "Huber", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Johannesson", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Kirchler", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Almenberg", "suffix": "" }, { "first": "Taizan", "middle": [], "last": "Altmejd", "suffix": "" }, { "first": "", "middle": [], "last": "Chan", "suffix": "" } ], "year": 2016, "venue": "Science", "volume": "351", "issue": "6280", "pages": "1433--1436", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin F Camerer, Anna Dreber, Eskil Forsell, Teck- Hua Ho, J\u00fcrgen Huber, Magnus Johannesson, Michael Kirchler, Johan Almenberg, Adam Altmejd, Taizan Chan, et al. 2016. Evaluating replicability of laboratory experiments in economics. 
Science, 351(6280):1433-1436.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Evaluating the replicability of social science experiments in nature and science between", "authors": [ { "first": "F", "middle": [], "last": "Colin", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Camerer", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Dreber", "suffix": "" }, { "first": "Teck-Hua", "middle": [], "last": "Holzmeister", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Ho", "suffix": "" }, { "first": "Magnus", "middle": [], "last": "Huber", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Johannesson", "suffix": "" }, { "first": "Gideon", "middle": [], "last": "Kirchler", "suffix": "" }, { "first": "", "middle": [], "last": "Nave", "suffix": "" }, { "first": "A", "middle": [], "last": "Brian", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Nosek", "suffix": "" }, { "first": "", "middle": [], "last": "Pfeiffer", "suffix": "" } ], "year": 2010, "venue": "Nature Human Behaviour", "volume": "2", "issue": "9", "pages": "637--644", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin F Camerer, Anna Dreber, Felix Holzmeister, Teck-Hua Ho, J\u00fcrgen Huber, Magnus Johannesson, Michael Kirchler, Gideon Nave, Brian A Nosek, Thomas Pfeiffer, et al. 2018. Evaluating the repli- cability of social science experiments in nature and science between 2010 and 2015. 
Nature Human Be- haviour, 2(9):637-644.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Online learning of noisy data", "authors": [ { "first": "Nicolo", "middle": [], "last": "Cesa-Bianchi", "suffix": "" }, { "first": "Shai", "middle": [], "last": "Shalev-Shwartz", "suffix": "" }, { "first": "Ohad", "middle": [], "last": "Shamir", "suffix": "" } ], "year": 2011, "venue": "IEEE Transactions on Information Theory", "volume": "57", "issue": "12", "pages": "7907--7931", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicolo Cesa-Bianchi, Shai Shalev-Shwartz, and Ohad Shamir. 2011. Online learning of noisy data. IEEE Transactions on Information Theory, 57(12):7907- 7931.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Libsvm: A library for support vector machines", "authors": [ { "first": "Chih-Chung", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Chih-Jen", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2011, "venue": "ACM transactions on intelligent systems and technology (TIST)", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chih-Chung Chang and Chih-Jen Lin. 2011. Libsvm: A library for support vector machines. ACM trans- actions on intelligent systems and technology (TIST), 2(3):27.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "An open, largescale, collaborative effort to estimate the reproducibility of psychological science", "authors": [], "year": 2012, "venue": "Perspectives on Psychological Science", "volume": "7", "issue": "6", "pages": "657--660", "other_ids": {}, "num": null, "urls": [], "raw_text": "Open Science Collaboration. 2012. An open, large- scale, collaborative effort to estimate the repro- ducibility of psychological science. 
Perspectives on Psychological Science, 7(6):657-660.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Estimating the reproducibility of psychological science", "authors": [], "year": 2015, "venue": "Science", "volume": "349", "issue": "6251", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Open Science Collaboration et al. 2015. Estimating the reproducibility of psychological science. Science, 349(6251):aac4716.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. 
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Predicting replication outcomes in the many labs 2 study", "authors": [ { "first": "A", "middle": [], "last": "Dreber", "suffix": "" }, { "first": "E", "middle": [], "last": "Pfeiffer", "suffix": "" }, { "first": "", "middle": [], "last": "Forsell", "suffix": "" }, { "first": "M", "middle": [], "last": "Viganola", "suffix": "" }, { "first": "Y", "middle": [], "last": "Johannesson", "suffix": "" }, { "first": "", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "J", "middle": [], "last": "Ba Nosek", "suffix": "" }, { "first": "", "middle": [], "last": "Almenberg", "suffix": "" } ], "year": 2019, "venue": "Journal of Economic Psychology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A Dreber, T Pfeiffer, E Forsell, D Viganola, M Johan- nesson, Y Chen, B Wilson, BA Nosek, and J Almen- berg. 2019. Predicting replication outcomes in the many labs 2 study. 
Journal of Economic Psychol- ogy.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Many labs 3: Evaluating participant pool quality across the academic semester via replication", "authors": [ { "first": "Olivia", "middle": [ "E" ], "last": "Charles R Ebersole", "suffix": "" }, { "first": "Aimee", "middle": [ "L" ], "last": "Atherton", "suffix": "" }, { "first": "Hayley", "middle": [ "M" ], "last": "Belanger", "suffix": "" }, { "first": "Jill", "middle": [ "M" ], "last": "Skulborstad", "suffix": "" }, { "first": "Jonathan", "middle": [ "B" ], "last": "Allen", "suffix": "" }, { "first": "Erica", "middle": [], "last": "Banks", "suffix": "" }, { "first": "", "middle": [], "last": "Baranski", "suffix": "" }, { "first": "J", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Diane", "middle": [ "Bv" ], "last": "Bernstein", "suffix": "" }, { "first": "Leanne", "middle": [], "last": "Bonfiglio", "suffix": "" }, { "first": "", "middle": [], "last": "Boucher", "suffix": "" } ], "year": 2016, "venue": "Journal of Experimental Social Psychology", "volume": "67", "issue": "", "pages": "68--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles R Ebersole, Olivia E Atherton, Aimee L Belanger, Hayley M Skulborstad, Jill M Allen, Jonathan B Banks, Erica Baranski, Michael J Bern- stein, Diane BV Bonfiglio, Leanne Boucher, et al. 2016. Many labs 3: Evaluating participant pool quality across the academic semester via replication. 
Journal of Experimental Social Psychology, 67:68-82.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The economics of reproducibility in preclinical research", "authors": [ { "first": "Iain", "middle": [ "M" ], "last": "Leonard P Freedman", "suffix": "" }, { "first": "Timothy", "middle": [ "S" ], "last": "Cockburn", "suffix": "" }, { "first": "", "middle": [], "last": "Simcoe", "suffix": "" } ], "year": 2015, "venue": "PLoS Biol", "volume": "13", "issue": "6", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leonard P Freedman, Iain M Cockburn, and Timothy S Simcoe. 2015. The economics of reproducibility in preclinical research. PLoS Biol, 13(6):e1002165.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Deep learning", "authors": [ { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep learning. MIT press.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Random decision forests", "authors": [ { "first": "Kam", "middle": [], "last": "Tin", "suffix": "" }, { "first": "", "middle": [], "last": "Ho", "suffix": "" } ], "year": 1995, "venue": "Proceedings of 3rd international conference on document analysis and recognition", "volume": "1", "issue": "", "pages": "278--282", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tin Kam Ho. 1995. Random decision forests. In Proceedings of 3rd international conference on document analysis and recognition, volume 1, pages 278-282.
IEEE.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Speech and language processing", "authors": [ { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "H", "middle": [], "last": "James", "suffix": "" }, { "first": "", "middle": [], "last": "Martin", "suffix": "" } ], "year": 2014, "venue": "", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Jurafsky and James H Martin. 2014. Speech and language processing. vol. 3.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Claudia Chloe Brumbaugh, et al. 2014a. Investigating variation in replicability. 
Social psychology", "authors": [ { "first": "Kate", "middle": [ "A" ], "last": "Richard A Klein", "suffix": "" }, { "first": "Michelangelo", "middle": [], "last": "Ratliff", "suffix": "" }, { "first": "Reginald", "middle": [ "B" ], "last": "Vianello", "suffix": "" }, { "first": "\u0160t\u011bp\u00e1n", "middle": [], "last": "Adams", "suffix": "" }, { "first": "", "middle": [], "last": "Bahn\u00edk", "suffix": "" }, { "first": "J", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Konrad", "middle": [], "last": "Bernstein", "suffix": "" }, { "first": "", "middle": [], "last": "Bocian", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard A Klein, Kate A Ratliff, Michelangelo Vianello, Reginald B Adams Jr, \u0160t\u011bp\u00e1n Bahn\u00edk, Michael J Bernstein, Konrad Bocian, Mark J Brandt, Beach Brooks, Claudia Chloe Brumbaugh, et al. 2014a. Investigating variation in replicability. Social psychology.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Many labs 2: Investigating variation in replicability across samples and settings", "authors": [ { "first": "Michelangelo", "middle": [], "last": "Richard A Klein", "suffix": "" }, { "first": "Fred", "middle": [], "last": "Vianello", "suffix": "" }, { "first": "", "middle": [], "last": "Hasselman", "suffix": "" }, { "first": "G", "middle": [], "last": "Byron", "suffix": "" }, { "first": "Reginald", "middle": [ "B" ], "last": "Adams", "suffix": "" }, { "first": "Sinan", "middle": [], "last": "Adams", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Alper", "suffix": "" }, { "first": "", "middle": [], "last": "Aveyard", "suffix": "" }, { "first": "", "middle": [], "last": "Jordan R Axt", "suffix": "" }, { "first": "T", "middle": [], "last": "Mayowa", "suffix": "" }, { "first": "\u0160t\u011bp\u00e1n", "middle": [], "last": "Babalola", "suffix": "" }, { "first": "", "middle": [], "last":
"Bahn\u00edk", "suffix": "" } ], "year": 2018, "venue": "Advances in Methods and Practices in Psychological Science", "volume": "1", "issue": "4", "pages": "443--490", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard A Klein, Michelangelo Vianello, Fred Hasselman, Byron G Adams, Reginald B Adams Jr, Sinan Alper, Mark Aveyard, Jordan R Axt, Mayowa T Babalola, \u0160t\u011bp\u00e1n Bahn\u00edk, et al. 2018. Many labs 2: Investigating variation in replicability across samples and settings. Advances in Methods and Practices in Psychological Science, 1(4):443-490.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Dividemix: Learning with noisy labels as semi-supervised learning", "authors": [ { "first": "Junnan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "C", "middle": [ "H" ], "last": "Steven", "suffix": "" }, { "first": "", "middle": [], "last": "Hoi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.07394" ] }, "num": null, "urls": [], "raw_text": "Junnan Li, Richard Socher, and Steven CH Hoi. 2020. Dividemix: Learning with noisy labels as semi-supervised learning. arXiv preprint arXiv:2002.07394.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Variational inference for crowdsourcing", "authors": [ { "first": "Qiang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Alexander", "middle": [ "T" ], "last": "Ihler", "suffix": "" } ], "year": 2012, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "692--700", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qiang Liu, Jian Peng, and Alexander T Ihler. 2012. Variational inference for crowdsourcing.
In Advances in neural information processing systems, pages 692-700.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Peer loss functions: Learning from noisy labels without knowing noise rates. International Conference on Machine Learning", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hongyi", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Liu and Hongyi Guo. 2020. Peer loss functions: Learning from noisy labels without knowing noise rates. International Conference on Machine Learning.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Virtual adversarial training: a regularization method for supervised and semisupervised learning", "authors": [ { "first": "Takeru", "middle": [], "last": "Miyato", "suffix": "" }, { "first": "Masanori", "middle": [], "last": "Shin-Ichi Maeda", "suffix": "" }, { "first": "Shin", "middle": [], "last": "Koyama", "suffix": "" }, { "first": "", "middle": [], "last": "Ishii", "suffix": "" } ], "year": 2018, "venue": "IEEE transactions on pattern analysis and machine intelligence", "volume": "41", "issue": "", "pages": "1979--1993", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2018. Virtual adversarial training: a regularization method for supervised and semi-supervised learning.
IEEE transactions on pattern analysis and machine intelligence, 41(8):1979-1993.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Learning with noisy labels", "authors": [ { "first": "Nagarajan", "middle": [], "last": "Natarajan", "suffix": "" }, { "first": "S", "middle": [], "last": "Inderjit", "suffix": "" }, { "first": "", "middle": [], "last": "Dhillon", "suffix": "" }, { "first": "K", "middle": [], "last": "Pradeep", "suffix": "" }, { "first": "Ambuj", "middle": [], "last": "Ravikumar", "suffix": "" }, { "first": "", "middle": [], "last": "Tewari", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "1196--1204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nagarajan Natarajan, Inderjit S Dhillon, Pradeep K Ravikumar, and Ambuj Tewari. 2013. Learning with noisy labels. In Advances in neural information processing systems, pages 1196-1204.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Realistic evaluation of deep semi-supervised learning algorithms", "authors": [ { "first": "Avital", "middle": [], "last": "Oliver", "suffix": "" }, { "first": "Augustus", "middle": [], "last": "Odena", "suffix": "" }, { "first": "Colin", "middle": [ "A" ], "last": "Raffel", "suffix": "" }, { "first": "Ekin", "middle": [], "last": "Dogus Cubuk", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3235--3246", "other_ids": {}, "num": null, "urls": [], "raw_text": "Avital Oliver, Augustus Odena, Colin A Raffel, Ekin Dogus Cubuk, and Ian Goodfellow. 2018. Realistic evaluation of deep semi-supervised learning algorithms.
In Advances in Neural Information Processing Systems, pages 3235-3246.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Psychfiledrawer: archive of replication attempts in experimental psychology", "authors": [ { "first": "H", "middle": [], "last": "Pashler", "suffix": "" }, { "first": "S", "middle": [], "last": "Spellman", "suffix": "" }, { "first": "A", "middle": [], "last": "Kang", "suffix": "" }, { "first": "", "middle": [], "last": "Holcombe", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H Pashler, B Spellman, S Kang, and A Holcombe. 2019. Psychfiledrawer: archive of replication attempts in experimental psychology. Online: http://psychfiledrawer.org/view_article_list.php.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "An introduction to logistic regression analysis and reporting", "authors": [ { "first": "Chao-Ying Joanne", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Kuk", "middle": [ "Lida" ], "last": "Lee", "suffix": "" }, { "first": "Gary M", "middle": [], "last": "Ingersoll", "suffix": "" } ], "year": 2002, "venue": "The journal of educational research", "volume": "96", "issue": "1", "pages": "3--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chao-Ying Joanne Peng, Kuk Lida Lee, and Gary M Ingersoll. 2002. An introduction to logistic regression analysis and reporting. The journal of educational research, 96(1):3-14.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Systematizing confidence in open research and evidence (score)", "authors": [ { "first": "Adam", "middle": [], "last": "Russell", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Russell. 2019. Systematizing confidence in open research and evidence (score). Technical report, Tech.
Rep., Defense Advanced Research Projects Agency, Arlington, VA.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A rate of convergence for mixture proportion estimation, with application to learning from noisy labels", "authors": [ { "first": "Clayton", "middle": [], "last": "Scott", "suffix": "" } ], "year": 2015, "venue": "Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "838--846", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clayton Scott. 2015. A rate of convergence for mixture proportion estimation, with application to learning from noisy labels. In Artificial Intelligence and Statistics, pages 838-846.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Classification with asymmetric label noise: Consistency and maximal denoising", "authors": [ { "first": "Clayton", "middle": [], "last": "Scott", "suffix": "" }, { "first": "Gilles", "middle": [], "last": "Blanchard", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Handy", "suffix": "" } ], "year": 2013, "venue": "Conference On Learning Theory", "volume": "", "issue": "", "pages": "489--511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clayton Scott, Gilles Blanchard, and Gregory Handy. 2013. Classification with asymmetric label noise: Consistency and maximal denoising.
In Conference On Learning Theory, pages 489-511.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "An introduction to registered replication reports at perspectives on psychological science", "authors": [ { "first": "J", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Alex", "middle": [ "O" ], "last": "Simons", "suffix": "" }, { "first": "Barbara", "middle": [ "A" ], "last": "Holcombe", "suffix": "" }, { "first": "", "middle": [], "last": "Spellman", "suffix": "" } ], "year": 2014, "venue": "Perspectives on Psychological Science", "volume": "9", "issue": "5", "pages": "552--555", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel J Simons, Alex O Holcombe, and Barbara A Spellman. 2014. An introduction to registered replication reports at perspectives on psychological science. Perspectives on Psychological Science, 9(5):552-555.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Small telescopes: Detectability and the evaluation of replication results. Psychological science", "authors": [ { "first": "Uri", "middle": [], "last": "Simonsohn", "suffix": "" } ], "year": 2015, "venue": "", "volume": "26", "issue": "", "pages": "559--569", "other_ids": {}, "num": null, "urls": [], "raw_text": "Uri Simonsohn. 2015. Small telescopes: Detectability and the evaluation of replication results. Psychological science, 26(5):559-569.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Learning with symmetric label noise: The importance of being unhinged", "authors": [ { "first": "Brendan", "middle": [], "last": "Van Rooyen", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Menon", "suffix": "" }, { "first": "Robert C", "middle": [], "last": "Williamson", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "10--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brendan Van Rooyen, Aditya Menon, and Robert C Williamson. 2015.
Learning with symmetric label noise: The importance of being unhinged. In Advances in Neural Information Processing Systems, pages 10-18.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "", "middle": [], "last": "Debut", "suffix": "" }, { "first": "J", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "C", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "P", "middle": [], "last": "Moi", "suffix": "" }, { "first": "", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "", "middle": [], "last": "Rault", "suffix": "" }, { "first": "", "middle": [], "last": "Louf", "suffix": "" }, { "first": "", "middle": [], "last": "Funtowicz", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, T Rault, R Louf, M Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "The replicability of scientific findings using human and machine intelligence", "authors": [ { "first": "Yang", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Yang. 2018. The replicability of scientific findings using human and machine intelligence.
https://www.metascience2019.org/presentations/yang-yang/ Metascience 2019.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "A brief introduction to weakly supervised learning", "authors": [ { "first": "Zhi-Hua", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2018, "venue": "National Science Review", "volume": "5", "issue": "1", "pages": "44--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhi-Hua Zhou. 2018. A brief introduction to weakly supervised learning. National Science Review, 5(1):44-53.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "RRR) (Simons et al., 2014), Many Labs 1 (Klein et al., 2014a), Many Labs 2 (Klein et al., 2018), Many Labs 3 (Ebersole et al., 2016), Social Sciences Replication Project (SSRP) (Camerer et al., 2018), PsychFileDrawer (Pashler et al., 2019), Experimental Economics Replication Project (Camerer et al., 2016), and Reproducibility Project: Psychology (RPP) (Collaboration, 2012" }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "\u03c3_0 := P(\u0233_i^{noise} = 1 | y_i = 0) and \u03c3_1 := P(\u0233_i^{noise} = 0 | y_i = 1)" }, "TABREF0": { "content": "", "type_str": "table", "num": null, "html": null, "text": "" }, "TABREF2": { "content": "
", "type_str": "table", "num": null, "html": null, "text": "Number, average length, maximum length, and minimum length of documents in different datasets" }, "TABREF5": { "content": "
: Comparison on Train setting, Test Accuracy (Text), and Test Accuracy (Text + Statistics) between eight different trained models. VI is our variational inference based method, and PL is our peer loss based approach. 300 (L) means that 300 labelled examples are used for training. 300 (L) + 2,170 (U) means that 300 labelled and 2,170 unlabelled examples are used for training.
ModelPrecisionRecallF1
LR61.90%50.98% 55.91%
RF54.05%39.22% 45.45%
SVM63.04%56.86% 59.79%
MLP65.00%50.98% 57.14%
LSTM70.27%50.98% 59.09%
DIVIDEMIX 65.11%54.90% 59.57%
VI72.50% 56.86% 63.74%
PL71.43% 88.24% 78.95%
", "type_str": "table", "num": null, "html": null, "text": "" }, "TABREF6": { "content": "
: Comparison on Precision, Recall, and F1 between different approaches (Setting: Text + Statistics)
", "type_str": "table", "num": null, "html": null, "text": "" }, "TABREF8": { "content": "", "type_str": "table", "num": null, "html": null, "text": "Accuracy comparison between different features on the test dataset" }, "TABREF9": { "content": "
", "type_str": "table", "num": null, "html": null, "text": "Red color highlights words with positive weights whose absolute value is larger than 0.15. Blue color highlights words with negative weights whose absolute value is larger than 0.15. The Peer Loss classification result for this paper is Replicable (Correct)" } } } }