|
{ |
|
"paper_id": "N19-1003", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:57:16.750910Z" |
|
}, |
|
"title": "Neural Self-Training through Spaced Repetition", |
|
"authors": [ |
|
{ |
|
"first": "Hadi", |
|
"middle": [], |
|
"last": "Amiri", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "hadi@hms.harvard.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Self-training is a semi-supervised learning approach for utilizing unlabeled data to create better learners. The efficacy of self-training algorithms depends on their data sampling techniques. The majority of current sampling techniques are based on predetermined policies which may not effectively explore the data space or improve model generalizability. In this work, we tackle the above challenges by introducing a new data sampling technique based on spaced repetition that dynamically samples informative and diverse unlabeled instances with respect to individual learner and instance characteristics. The proposed model is specifically effective in the context of neural models which can suffer from overfitting and high-variance gradients when trained with small amount of labeled data. Our model outperforms current semi-supervised learning approaches developed for neural networks on publicly-available datasets.", |
|
"pdf_parse": { |
|
"paper_id": "N19-1003", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Self-training is a semi-supervised learning approach for utilizing unlabeled data to create better learners. The efficacy of self-training algorithms depends on their data sampling techniques. The majority of current sampling techniques are based on predetermined policies which may not effectively explore the data space or improve model generalizability. In this work, we tackle the above challenges by introducing a new data sampling technique based on spaced repetition that dynamically samples informative and diverse unlabeled instances with respect to individual learner and instance characteristics. The proposed model is specifically effective in the context of neural models which can suffer from overfitting and high-variance gradients when trained with small amount of labeled data. Our model outperforms current semi-supervised learning approaches developed for neural networks on publicly-available datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "It is often expensive or time-consuming to obtain labeled data for Natural Language Processing tasks. In addition, manually-labeled datasets may not contain enough samples for downstream data analysis or novelty detection (Wang and Hebert, 2016) . To tackle these issues, semi-supervised learning (Zhu, 2006; Chapelle et al., 2009) has become an important topic when one has access to small amount of labeled data and large amount of unlabeled data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 245, |
|
"text": "(Wang and Hebert, 2016)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 308, |
|
"text": "(Zhu, 2006;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 331, |
|
"text": "Chapelle et al., 2009)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Self-training is a type of semi-supervised learning in which a downstream learner (e.g. a classifier) is first trained with labeled data, then the trained model is applied to unlabeled data to generate more labeled instances. A select sample of these instances together with their pseudo (predicted) labels are added to the labeled data and the learner is re-trained using the new labeled dataset. This process repeats until there is no more unlabeled data left or no improvement is observed in model performance on validation data (Zhu, 2006; Zhu and Goldberg, 2009) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 532, |
|
"end": 543, |
|
"text": "(Zhu, 2006;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 544, |
|
"end": 567, |
|
"text": "Zhu and Goldberg, 2009)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Conventional self-training methods often rely on prediction confidence of their learners to sample unlabeled data. Typically the most confident unlabeled instances are selected (HEARST, 1991; Yarowsky, 1995; Riloff and Jones, 1999; Zhou et al., 2012) . This strategy often causes only those unlabeled instances that match well with the current model being selected during self-training, therefore, the model may fail to best generalize to complete sample space (Zhang and Rudnicky, 2006; Wu et al., 2018) . Ideally, a self-training algorithm should explore the space thoroughly for better generalization and higher performance. Recently Wu et al. (2018) developed an effective data sampling technique for \"co-training\" (Blum and Mitchell, 1998) methods which require two distinct views of data. Although effective, this model can't be readily applied to some text datasets due to the two distinct view requirement.", |
|
"cite_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 191, |
|
"text": "(HEARST, 1991;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 192, |
|
"end": 207, |
|
"text": "Yarowsky, 1995;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 231, |
|
"text": "Riloff and Jones, 1999;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 250, |
|
"text": "Zhou et al., 2012)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 461, |
|
"end": 487, |
|
"text": "(Zhang and Rudnicky, 2006;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 488, |
|
"end": 504, |
|
"text": "Wu et al., 2018)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 637, |
|
"end": 653, |
|
"text": "Wu et al. (2018)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 719, |
|
"end": 744, |
|
"text": "(Blum and Mitchell, 1998)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the context of neural networks, pretraining is an effective semi-supervised approach in which layers of a network are first pretrained by learning to reconstruct their inputs, and then network parameters are optimized by supervised fine-tuning on a target task (Hinton and Salakhutdinov, 2006; Bengio et al., 2007; Erhan et al., 2010) . While pretraining has been effective in neural language modeling and document classification (Dai and Le, 2015; Miyato et al., 2016) , it has an inherent limitation: the same neural model or parts thereof must be used in both pretraining and fine-tuning steps. This poses a major limitation on the design choices as some pretraining tasks may need to exploit several data types (e.g., speech and text), or might require deeper network architectures.", |
|
"cite_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 296, |
|
"text": "(Hinton and Salakhutdinov, 2006;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 317, |
|
"text": "Bengio et al., 2007;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 337, |
|
"text": "Erhan et al., 2010)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 433, |
|
"end": 451, |
|
"text": "(Dai and Le, 2015;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 452, |
|
"end": 472, |
|
"text": "Miyato et al., 2016)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The above challenges and intuitions inspire our work on developing a novel approach for neural self-training. The core part of our approach is a data sampling policy which is inspired by findings in cognitive psychology about spaced repetition (Dempster, 1989; Cepeda et al., 2006; Averell and Heathcote, 2011) ; the phenomenon in which a learner (often a human) can learn efficiently and effectively by accurately scheduling reviews of learning materials. In contrast to previous self-training approaches, our spaced repetitionbased data sampling policy is not predetermined, explores the entire data space, and dynamically selects unlabeled instances with respect to the \"strength\" of a downstream learner on a target task, and \"easiness\" of unlabeled instances. In addition, our model relaxes the \"same model\" constraint of pretraining-based approaches by naturally decoupling pretraining and fine-tuning models through spaced repetition.", |
|
"cite_spans": [ |
|
{ |
|
"start": 244, |
|
"end": 260, |
|
"text": "(Dempster, 1989;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 281, |
|
"text": "Cepeda et al., 2006;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 282, |
|
"end": 310, |
|
"text": "Averell and Heathcote, 2011)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The contributions of this paper are (a): we propose an effective formulation of spaced repetition for self-training methods; to the best of our knowledge, this is the first work that investigates spaced repetition for semi-supervised learning, (b): our approach dynamically samples data, is not limited to predetermined sampling strategies, and naturally decouples pretraining and fine-tuning models, and (c): it outperforms current state-of-the-art baselines on large-scale datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our best model outperforms standard and current state-of-the-art semi-supervised learning methods by 6.5 and 4.1 points improvement in macro-F1 on sentiment classification task, and 3.6 and 2.2 points on churn classification task. Further analyses show that the performance gain is due to our model's ability in sampling diverse and informative unlabeled instances (those that are different from training data and can improve model generalizability).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Conventional self-training methods employ the following steps to utilize unlabeled data for semisupervised learning: (1) train a learner, e.g. a classifier, using labeled data, (2) iteratively select unlabeled instances based on a data sampling technique, and add the sampled instances (together with their predicted pseudo labels) to the labeled data, and (3) iteratively update the learner using the new labeled dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "2" |
|
}, |
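To make the conventional procedure above concrete, here is a minimal sketch of confidence-thresholded self-training. The scikit-learn-style classifier interface (fit/predict_proba), the 0.9 threshold, the integer class labels, and the round limit are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def conventional_self_training(clf, X_lab, y_lab, X_unlab, threshold=0.9, max_rounds=20):
    """Steps (1)-(3): train, add confidently pseudo-labeled instances, retrain."""
    X_lab, y_lab, X_unlab = np.asarray(X_lab), np.asarray(y_lab), np.asarray(X_unlab)
    for _ in range(max_rounds):
        clf.fit(X_lab, y_lab)                       # (re)train the learner on current labeled data
        if len(X_unlab) == 0:
            break
        probs = clf.predict_proba(X_unlab)          # posterior predictions on unlabeled data
        conf, pseudo = probs.max(axis=1), probs.argmax(axis=1)
        picked = conf >= threshold                  # predetermined confidence-based sampling policy
        if not picked.any():
            break
        X_lab = np.concatenate([X_lab, X_unlab[picked]])
        y_lab = np.concatenate([y_lab, pseudo[picked]])   # pseudo labels treated as gold labels
        X_unlab = X_unlab[~picked]
    return clf
```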
|
|
{ |
|
"text": "Figure 1: Neural Self-training Framework: at every self-training episode, the network uses current labeled data to iteratively optimize its parameters against a target task, and dynamically explores unlabeled data space through spaced repetition (specifically Leitner queue) to inform a data sampler that selects unlabeled data for the next self-training episode. Dashed/Red and solid/green arrows in Leitner queue indicate instance movements among queues.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling Policy Learner", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The core difference between self-training algorithms is in the second step: data sampling policy. In this paper, we develop a new data sampling technique based on \"spaced repetition\" which dynamically explores the data space and takes into account instance and learner characteristics (such as easiness of instances or learner strength on target task) to sample unlabeled data for effective self-training. Figure 1 illustrates our proposed neural selftraining framework. We assume the downstream learner is a neural network that, at every selftraining episode, (a): takes current labeled and unlabeled data as input, (b): uses labeled data to iteratively optimize its parameters with respect to a target task, and (c): dynamically explores unlabeled data space through spaced repetition to inform a data sampler that selects unlabeled data for the next self-training episode.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 406, |
|
"end": 414, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sampling Policy Learner", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Spaced repetition (Dempster, 1989; Cepeda et al., 2006; Averell and Heathcote, 2011) was presented in psychology and forms the building block of many educational devices, including flashcards, in which small pieces of information are repeatedly presented to a learner on a schedule determined by a spaced repetition algorithm. Such algorithms show that humans and machines can better learn by scheduling reviews of materials so that more time is spent on difficult concepts and less time on easier ones (Dempster, 1989; Novikoff et al., 2012; Amiri et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 34, |
|
"text": "(Dempster, 1989;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 35, |
|
"end": 55, |
|
"text": "Cepeda et al., 2006;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 56, |
|
"end": 84, |
|
"text": "Averell and Heathcote, 2011)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 503, |
|
"end": 519, |
|
"text": "(Dempster, 1989;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 520, |
|
"end": 542, |
|
"text": "Novikoff et al., 2012;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 543, |
|
"end": 562, |
|
"text": "Amiri et al., 2017)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Spaced Repetition", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In this paper, we focus on a specific spaced repetition framework called Leitner system (Leitner, 1974 ). Suppose we have n queues {q 0 , . . . , q n\u22121 }. In general, Leitner system initially places all instances in the first queue, q 0 . During training, if an instance from q i is correctly classified by the learner, it will be \"promoted\" to q i+1 (solid/green arrows in Figure 1 ), otherwise it will be \"demoted\" to the previous queue, q i\u22121 (dashed/red arrows in Figure 1 ). Therefore, as the learner trains through time, higher queues will accumulate instances that are easier for the learner, while lower queues will accumulate harder instances.", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 102, |
|
"text": "(Leitner, 1974", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 374, |
|
"end": 382, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 468, |
|
"end": 476, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Spaced Repetition", |
|
"sec_num": "2.1" |
|
}, |
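As a concrete illustration of the movement rule just described (a sketch, not the authors' code), the following shows a single Leitner scheduling step; clamping at the first and last queues is an assumption.

```python
def leitner_step(queue_of, correct, n_queues):
    """Promote correctly classified instances one queue up, demote the rest one queue down.

    queue_of: dict mapping instance id -> current queue index in [0, n_queues - 1]
    correct:  dict mapping instance id -> True if the learner classified it correctly
    """
    for inst, q in queue_of.items():
        if correct[inst]:
            queue_of[inst] = min(q + 1, n_queues - 1)   # promotion (solid/green arrows in Figure 1)
        else:
            queue_of[inst] = max(q - 1, 0)              # demotion (dashed/red arrows in Figure 1)
    return queue_of
```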
|
{ |
|
"text": "To use Leitner system for neural self-training, we assume our learner is a neural network, place all unlabeled instances in the first queue of Leitner system (line 2 in Algorithm 1), and gradually populate them to other queues while training the network. Our Leitner system uses iteration-specific network predictions on unlabeled instances and current pseudo labels of these instances to move them between queues (see line 4-5 in Algorithm 1); pseudo labels can be obtained through posterior predictions generated by any trained downstream learner (see Section 2.2). Instances with similar class predictions and pseudo labels will be promoted to their next queues, and those with opposite predictions and labels will be demoted to lower queues. We note that, errors (e.g. inaccurate pseudo labels or network predictions) can inversely affect instance movements among queues. However, our sampling technique (see below) alleviates this issue because such misleading instances, if sampled, can't improve the generalizability of downstream learners. Details of our Leitner system is shown in Table 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Spaced Repetition", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We formulate the data sampling process as a decision-making problem where, at every selftraining episode, the decision is to select a subset of unlabeled instances for self-training using information from Leitner queues. A simple, yet effective, approach to utilize such information is a greedy one in which instances of the queue that most improves the performance of the current model on validation data will be selected. We refer to this queue as designated queue:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Self-Training with Leitner Queues", |
|
"sec_num": "2.2" |
|
}, |
|
{

"text": "Algorithm 1. Leitner system. Input: L, U, V: labeled, unlabeled, and validation data; y: pseudo labels for U; k: number of training epochs; n: number of queues. Output: Q: Leitner queues populated with U. 1: Q = [q_0, q_1, . . . , q_{n-1}]; 2: q_0 = [U], q_i = [] for i in [1, n-1]; 3: for epoch = 1 to k: 4: model = epoch_train(L, V); 5: promos, demos = eval(Q, y, model); 6: Q = schedule(Q, promos, demos); 7: end for; 8: return Q. Table 1: Leitner system for neural self-training. All unlabeled instances are initially placed in the first queue and then populated to other queues depending on their easiness and learner (network) performance. epoch_train(.) uses training data to train the network for a single epoch and returns a trained model; eval(.) applies the current model to unlabeled instances in all queues and, based on the given pseudo labels (treated as gold labels), returns lists of correctly and incorrectly classified instances, promos and demos respectively; and schedule(.) moves promos and demos instances to their next and previous queues respectively, and returns the updated queues.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Self-Training with Leitner Queues",

"sec_num": "2.2"

},

{

"text": "Algorithm 2 shows details of our self-training approach. At every episode, we use current labeled data to train a task-specific neural network (line 2). Here, we weight the loss function using class size to deal with imbalanced data, and weight pseudo-labeled instances (as a function of episodes) to alleviate the effect of potentially wrong pseudo labels while training the network. We then use the trained network to generate pseudo labels for current unlabeled instances (line 3). These instances are then populated in Leitner queues as described before (line 4). Given the populated Leitner queues, the sample for the current self-training episode is then created using instances of the designated queue, the queue that most improves the performance of the current network on validation data (lines 5-8). Instances of the designated queue will be removed from unlabeled data and added to labeled data with their pseudo labels treated as gold labels (lines 9-10).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Self-Training with Leitner Queues",

"sec_num": "2.2"

},
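A runnable Python reading of Algorithm 1 (a sketch under assumed interfaces): train_fn stands in for epoch_train and is assumed to advance the same underlying network by one epoch, predict_fn returns the model's class prediction for a single instance, and unlabeled instances are represented by hashable ids keyed into the pseudo-label dictionary.

```python
def leitner_system(train_fn, predict_fn, L, U, V, y_pseudo, k=32, n=5):
    """Populate n Leitner queues with the unlabeled instances in U (sketch of Algorithm 1)."""
    queues = [list(U)] + [[] for _ in range(n - 1)]          # lines 1-2: everything starts in q_0
    for _ in range(k):                                        # line 3: k training epochs
        model = train_fn(L, V)                                # line 4: one epoch of training (epoch_train)
        new_queues = [[] for _ in range(n)]
        for qi, queue in enumerate(queues):
            for x in queue:                                   # lines 5-6: eval + schedule
                agree = predict_fn(model, x) == y_pseudo[x]   # prediction vs. pseudo label
                target = min(qi + 1, n - 1) if agree else max(qi - 1, 0)
                new_queues[target].append(x)
        queues = new_queues
    return queues                                             # line 8
```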
|
{

"text": "Algorithm 2. Neural Self-training. Input: L, U, V: labeled, unlabeled, and validation data; K: number of self-training episodes. Output: M: classification model. 1: for episode = 1 to K: 2: M_L = train(L, V); 3: y = label(M_L, U); 4: Q = Leitner_system(L, U, V, y) (Alg. 1); 5: for q in Q: 6: M_q = train(L + q, y[q], V); 7: end for; 8: M, q_desig = get_best(M_Q, M_L); 9: L = L + (q_desig, y[q_desig]); 10: U = U - q_desig; 11: end for; 12: return M. Table 2: Proposed neural self-training framework. train(.) uses current labeled data to train the network and returns a trained model; label(.) generates pseudo labels for unlabeled instances using the trained model; Leitner_system(.) populates current unlabeled instances in Leitner queues (Algorithm 1); and get_best(.) compares the performance of the given models on validation data and returns the best model in conjunction with the queue that leads to the best performance, if any. Instances of the designated queue will be removed from unlabeled data and added to labeled data with their pseudo labels treated as gold labels.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Self-Training with Leitner Queues",

"sec_num": "2.2"

},

{

"text": "We note that finding designated queues (lines 5-8 in Algorithm 2) imposes computational complexity on our model. However, in practice, we observe that designated queues are almost always among the middle or higher queues in the Leitner system, i.e. q_i for i in [n/2, n-1], where n is the number of queues. This can help accelerate the search process. In addition, learning a data sampling policy from movement patterns of instances among queues may help alleviate or eliminate the need for such an iterative search; see Section 4.4.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Self-Training with Leitner Queues",

"sec_num": "2.2"

},
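A sketch of the outer loop in Algorithm 2, using the helpers from the box above as assumed interfaces (train_fn, label_fn, leitner_fn, and a validation macro-F1 scorer). The data representations are also assumptions: L is a list of (instance, label) pairs, U a list of hashable unlabeled instance ids, and y a dictionary of pseudo labels keyed by id.

```python
def neural_self_training(train_fn, label_fn, leitner_fn, score_fn, L, U, V, K=20):
    """Greedy designated-queue self-training (sketch of Algorithm 2)."""
    model = None
    for _ in range(K):                                        # line 1: K self-training episodes
        model = train_fn(L, V)                                # line 2: M_L on current labeled data
        if not U:
            break                                             # no unlabeled data left to sample
        y = label_fn(model, U)                                # line 3: pseudo labels for U
        queues = leitner_fn(L, U, V, y)                       # line 4: Algorithm 1
        best_score, designated = score_fn(model, V), None     # keep M_L if no queue helps
        for qi, queue in enumerate(queues):                   # lines 5-7: one candidate per queue
            if not queue:
                continue
            m_q = train_fn(L + [(x, y[x]) for x in queue], V)
            s_q = score_fn(m_q, V)
            if s_q > best_score:                              # line 8: get_best on validation data
                best_score, designated, model = s_q, qi, m_q
        if designated is not None:                            # lines 9-10: move designated queue to L
            chosen = set(queues[designated])
            L = L + [(x, y[x]) for x in queues[designated]]
            U = [x for x in U if x not in chosen]
    return model                                              # line 12
```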
|
{ |
|
"text": "Finally, at test time, we apply the resulting selftrained network to test data and use the result for model comparison.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Self-Training with Leitner Queues", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We compare different self-training approaches in two settings where learners (neural networks) have low or high performance on original labeled data. This consideration helps investigating sensitivity of different self-training algorithms to the initial performance of learners.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As datasets, we use movie reviews from IMDb and short microblog posts from Twitter. These datasets and their corresponding tasks are described below and their statistics are provided in Table 3 . In terms of preprocessing, we change all texts to lowercase, and remove stop words, user names, and URLs from texts in these datasets: IMDb: The IMDb dataset was developed by Maas et al. 20111 for sentiment classification where systems should classify the polarity of a given movie review as positive or negative. The dataset contains 50K labeled movie reviews. For the purpose of our experiments, we randomly sample 1K, 1K, and 48K instances from this data (with balanced distribution over classes) and treat them as labeled (training), validation, and test data respectively. We create five such datasets for robustness against different seeding or data partitions. This dataset also provides 50K unlabeled reviews.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 193, |
|
"text": "Table 3", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Datasets and Evaluation Metric", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Train Val. Test Unlabeled IMDb 5 \u00d7 1K 5 \u00d7 1K 5 \u00d7 48K 50K Churn 5 \u00d7 1K 5 \u00d7 1K 5 \u00d7 3K 100K", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and Evaluation Metric", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Churn: This dataset contains more than 5K tweets about three telecommunication brands and was developed by Amiri and Daum\u00e9 III (2015) 2 for the task of churn prediction 3 where systems should predict if a twitter post indicates user intention about leaving a brand -classifying tweets as churny or non-churny with respect to brands. We replace all target brand names with the keyword BRAND and other non-target brands with BRAND-OTHER for the purpose of our experiments. Similar to IMDb, we create five datasets for experiments. We also crawl an additional 100K tweets about the target brands and treat them as unlabeled data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and Evaluation Metric", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We evaluate models in terms of macro-F1 score, i.e. the mean of F1-scores across classes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and Evaluation Metric", |
|
"sec_num": "3.1" |
|
}, |
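For reference, the reported metric can be computed with scikit-learn; this is an illustration with toy labels, not the authors' evaluation script.

```python
from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(f1_score(y_true, y_pred, average="macro"))  # unweighted mean of the per-class F1 scores
```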
|
{ |
|
"text": "As downstream neural networks (referred to as base classifiers), we consider current state-of-theart deep averaging networks (DANs) (Shen et al., 2018; Iyyer et al., 2015; Joulin et al., 2017; Arora et al., 2017) for IMDb, and a basic CNN model for Churn dataset with parameters set from the work presented in (Gridach et al., 2017) except for pretrained embeddings. In terms of DANs, we use FastText (Joulin et al., 2017) for its high per-formance and simplicity. FastText is a feedforward neural network that consists of an embedding layer that maps vocabulary indices to embeddings, an averaging layer that averages word embeddings of inputs, and several hidden layers (we use two layers of size 256) followed by a prediction layer with sigmoid activation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 151, |
|
"text": "(Shen et al., 2018;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 152, |
|
"end": 171, |
|
"text": "Iyyer et al., 2015;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 172, |
|
"end": 192, |
|
"text": "Joulin et al., 2017;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 193, |
|
"end": 212, |
|
"text": "Arora et al., 2017)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 310, |
|
"end": 332, |
|
"text": "(Gridach et al., 2017)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 401, |
|
"end": 422, |
|
"text": "(Joulin et al., 2017)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Downstream Learner and Settings", |
|
"sec_num": "3.2" |
|
}, |
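A minimal PyTorch sketch of the deep averaging architecture described above: an embedding layer, averaging of word embeddings, two hidden layers of size 256, and a sigmoid prediction layer. The ReLU activations, padding handling, and vocabulary/output sizes are assumptions; the 300-dimensional embeddings match the word2vec setting mentioned in the next paragraph.

```python
import torch
import torch.nn as nn

class DeepAveragingNet(nn.Module):
    """FastText-style DAN: embed -> average -> two hidden layers (256) -> sigmoid output."""
    def __init__(self, vocab_size, emb_dim=300, hidden=256, n_outputs=1):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.ff = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_outputs),
        )

    def forward(self, token_ids):                    # token_ids: (batch, seq_len) vocabulary indices
        mask = (token_ids != 0).unsqueeze(-1).float()
        emb = self.emb(token_ids) * mask
        avg = emb.sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)   # average the word embeddings
        return torch.sigmoid(self.ff(avg))           # prediction layer with sigmoid activation

model = DeepAveragingNet(vocab_size=50000)
scores = model(torch.randint(1, 50000, (2, 40)))     # two example documents of 40 tokens
```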
|
{ |
|
"text": "We use 300-dimensional word embeddings provided by Google's word2vec toolkit (Mikolov et al., 2013) . In Algorithm 1, we set the number of training epochs to k = 32, and stop training when F1 performance on validation data stops improving with patience of three continuous iterations, i.e. after three continuous epochs with no improvement, training will be stopped. In addition, we set the number of training episodes to K = 20 and stop training when this number of episodes is reached or there is no unlabeled data left for sampling; the latter case is often the reason for stopping in our self-training method. In addition, we experiment with different number of Leitner queues chosen from n = {3, 5, 7, 9, 11}.", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 99, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Downstream Learner and Settings", |
|
"sec_num": "3.2" |
|
}, |
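A small sketch of the early-stopping rule stated here (stop after three consecutive epochs without validation-F1 improvement); run_epoch and validate are placeholder callables, not functions from the paper.

```python
def train_with_patience(run_epoch, validate, max_epochs=32, patience=3):
    """Stop when validation F1 has not improved for `patience` consecutive epochs."""
    best_f1, bad_epochs = -1.0, 0
    for _ in range(max_epochs):
        run_epoch()                      # one training epoch on the labeled data
        f1 = validate()                  # macro-F1 on validation data
        if f1 > best_f1:
            best_f1, bad_epochs = f1, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    return best_f1
```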
|
{ |
|
"text": "We consider the following baselines:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u2022 Standard self-training: This approach iteratively trains a network on current labeled data and applies it to current unlabeled data; it uses a prediction confidence threshold to sample unlabeled instances (Zhu, 2006) . We set the best confidence threshold from {.80,.85,.90,.95} using validation data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 218, |
|
"text": "(Zhu, 2006)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u2022 Autoencoder self-training (Dai and Le, 2015): This approach first pretrains a network using unlabeled data (through a layerwise training approach to optimally reconstruct the inputs), and then fine-tunes it using labeled data with respect to the target task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u2022 Adversarial self-training (Miyato et al., 2016) : This model utilizes pretraining as described above, but also applies adversarial perturbations to word embeddings for more effective learning (perturbation is applied to embeddings instead of word inputs because words or their one-hot vectors do not admit infinitesimal perturbation; the network is trained to be robust to the worst perturbation).", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 49, |
|
"text": "(Miyato et al., 2016)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u2022 Knowledge Transfer self-training (Noroozi et al., 2018) : This model uses a clustering approach (e.g. k-means) to create clusters of unlabeled instances that have similar representations, where representations are derived from standard pretraining as described above.", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 57, |
|
"text": "(Noroozi et al., 2018)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The model then pretrains a network by learning to classify unlabeled instances to their corresponding clusters. The resulting pretrained network is then fine-tuned with respect to the target task using labeled data (with slight modification at prediction layer which makes the network suitable for target task). We set the best number of clusters from {10, 20, . . . , 100} based on model performance on validation data. Table 4 reports Macro-F1 performance of different models; we report average performance across five random test sets for each task (see Section 3.1 and Table 3 ). The performance of base classifiers in supervised settings, where the networks are only trained on original labeled datasets, is reasonably high on IMDb (73.02) and low on Churn (65.77). Standard ST (SST) improves performance on IMDb but not on Churn dataset. SST achieves its best performance (on validation data) in the first few episodes when, on average, 1.4K and 0 instances are sampled for IMDb and Churn datasets respectively. Beyond that, the performance considerably decreases down to 66.94 (IMDb) and 57.04 (Churn) respectively. This is perhaps due to imbalanced class size in Churn dataset, failure of SST to explore the data space, or classification mistakes that reinforce each other. Several previous works also observed no improvement with SST (Gollapalli et al., 2013; Zhu and Goldberg, 2009; Zhang and Rudnicky, 2006) ; but some successful applications have been reported (Wu et al., 2018; Zhou et al., 2012; Riloff and Jones, 1999; Yarowsky, 1995; HEARST, 1991) . The result also show that pretraining and adversarial-based training, PST and AST in Table 4 respectively, improve the performance of base classifiers by 3.34 and 3.37 points in macro-F1 on IMDb, and by 1.5 and 1.93 points on Churn dataset. In addition, since PST and AST show comparable performance, we conjecture that when original labeled data has a small size, adversarialbased self-training do not considerably improve pretraining. But, considerable improvement can be achieved with larger amount of labeled data, see (Miyato et al., 2016) for detailed comparison on pretraining and adversarial-based training. The results also show that knowledge transfer (KST) outperforms PST and AST on IMDb -indicating that good initial labels derived through clustering information could help semi-supervised learning, even with small amount of seed labeled data. Table 4 also shows the result of our model, Leitner ST (LST). The best performance of LST is obtained using n = 5 and n = 7 queues for IMDb and Churn datasets respectively. Considering these queue lengths, our model outperforms base classifiers by 5.25 and 4.13 points in Macro-F1 on IMDb and Churn datasets respectively; similar to PST and AST, our model results in a greater gain when the learner has higher initial performance. It also improves the best selftraining baseline, KST for IMDb and AST for Churn, by 1.16 and 2.2 points in macro-F1 on IMDb and Churn datasets respectively where both differences are significant (average \u03c1-values based on t-test are .004 and .015 respectively).", |
|
"cite_spans": [ |
|
{ |
|
"start": 1339, |
|
"end": 1368, |
|
"text": "SST (Gollapalli et al., 2013;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1369, |
|
"end": 1392, |
|
"text": "Zhu and Goldberg, 2009;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 1393, |
|
"end": 1418, |
|
"text": "Zhang and Rudnicky, 2006)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 1473, |
|
"end": 1490, |
|
"text": "(Wu et al., 2018;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 1491, |
|
"end": 1509, |
|
"text": "Zhou et al., 2012;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 1510, |
|
"end": 1533, |
|
"text": "Riloff and Jones, 1999;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1534, |
|
"end": 1549, |
|
"text": "Yarowsky, 1995;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 1550, |
|
"end": 1563, |
|
"text": "HEARST, 1991)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 2089, |
|
"end": 2110, |
|
"text": "(Miyato et al., 2016)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 421, |
|
"end": 428, |
|
"text": "Table 4", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 573, |
|
"end": 580, |
|
"text": "Table 3", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 2424, |
|
"end": 2431, |
|
"text": "Table 4", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We investigate several questions about our model to shed light on its improved performance. One partial explanation is that by differentiating instances and augmenting the informative ones, we are creating a more powerful model that better explores the space of unlabeled data. In this section, we elaborate on the behavior of our model by conducting finer-grained analysis at queue-level and investigating the following questions in the context of challenges of semi-supervised learning. Due to space limit, we mainly report results on IMDb and discuss corresponding behaviors on Churn dataset in the text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Introspection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We analyze queue level performance to understand how instances of different queues contribute in creating better models during the self-training process. For this experiment, we train networks using our Leitner self-training framework as normal (where, at every iteration, only instances of the designated queue are added to training data), and report the average macro-F1 performance of the network-on validation data-if it is trained with instances of each queue. Concretely, we report average macro-F1 performance of models learned at line 6 of Algorithm 2 (see M q s in Table 2) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 574, |
|
"end": 582, |
|
"text": "Table 2)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Queue-level Performance", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Figures 2(a) and 2(b) show the results on IMDb and Churn datasets for n = 5 and n = 7 queues respectively. Note that the last queue for Churn dataset, q 6 , has never been reached by any instance. This is perhaps because of the difficulty of this task 4 and low initial performance of the network on Churn dataset. q 2 on IMDb and q 4 on Churn dataset result in the best average performance across training episodes, both queues are close to the middle. In addition, the result show that the highest queues (q 4 for IMDb and q 5 for Churn) are often not the best queues. This result can justify the lower performance of Standard ST (SST) as instances in these queues are the easiest (and perhaps most confident ones) for the network; we further analyze these queues in Section 4.2. 5", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Queue-level Performance", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "As we discussed before, instances in the highest queues, although easy to learn for the classifier, are not informative and do not contribute to training an improved model; therefore, highest queues are often not selected by our model. To understand the reason, we try to quantify how well instances of these queues match with training data. For this purpose, we compute cosine similarity between representations of training instances (see below) and those in the highest and designated queues during self-training as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What's the Issue with Highest Queues?", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "1 K K e=1 cosine(T e , Q e )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What's the Issue with Highest Queues?", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "where T e \u2208 R m e \u00d7d and Q e \u2208 R p e \u00d7d indicate representations of training instances and those of a given target queue respectively (where d indicates the dimension of representations, and m e and p e indicate number of instances in training data and target queue at episode e respectively), and cosine(.,.) computes L2-normalized dot product of its input matrices. To obtain the above representations for instances, we compute the output of the last hidden layer (the layer below prediction layer) of the trained network at each episode. These outputs can be considered as feature representations for inputs. For finer-grained comparison, we compute similarities with respect to positive and negative classes. As the results in Figure 2(c) show, instances in the highest queue match well with current training data (and hence the current model), and, therefore, are less informative. On the other hand, instances in the designated queues show considerably smaller similarity with training instances in both positive and negative classes, and, therefore, do not match well with training data. These instances are more informative, and help the network to better explore the space of unlabeled data and optimize for the target task.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 731, |
|
"end": 742, |
|
"text": "Figure 2(c)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "What's the Issue with Highest Queues?", |
|
"sec_num": "4.2" |
|
}, |
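A NumPy sketch of the similarity measure above: rows of T_e and Q_e are hidden-layer representations, cosine(.,.) is the L2-normalized dot product, and averaging the pairwise values into a single score per episode is an assumption where the text is terse.

```python
import numpy as np

def matrix_cosine(T, Q, eps=1e-12):
    """Mean pairwise cosine similarity between rows of T (m_e x d) and Q (p_e x d)."""
    T = T / (np.linalg.norm(T, axis=1, keepdims=True) + eps)   # L2-normalize the representations
    Q = Q / (np.linalg.norm(Q, axis=1, keepdims=True) + eps)
    return float((T @ Q.T).mean())

def episode_averaged_similarity(T_list, Q_list):
    """(1/K) * sum over episodes of cosine(T_e, Q_e)."""
    return sum(matrix_cosine(T, Q) for T, Q in zip(T_list, Q_list)) / len(T_list)
```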
|
{ |
|
"text": "We analyze different queues to measure the extent of diversity that each queue introduces to training data during our normal self-training process where, at every iteration, only instances of the designated queue are added to training data. Specifically, we compute the extent of diversity that each given queue introduces as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Does Diversity Matter?", |
|
"sec_num": "4.3" |
|
}, |
|
{

"text": "(1/K) sum_{e=1}^{K} (1 - cosine(T_e, concat(T_e, Q_e)))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Does Diversity Matter?",

"sec_num": "4.3"

},
|
{ |
|
"text": "where, as before, T e and Q e indicate the representations of training and queue instances at episode e respectively, and concat(.,.) is a function that creates a new dataset by vertically concatenating T e and Q e . Figure 3 shows the results. On IMDb, q 2 and designated queues show greater diversity to training data compared to other queues. We note that q 0 carries a greater diversity than q 3 and q 4 , but, as we observed in Figure 2 , instances of q 0 do not improve performance of the model, perhaps due to their difficulty or wrong pseudo labels. We observe similar behavior in case of Churn dataset where q 4 introduces the highest diversity. From this analysis, we conclude that Leitner selftraining enables sampling diverse sets of instances that contributes to training an improved model. Table 5 : Macro-F1 performance of diverse queues across datasets. Compare these results with those obtained by designated queues in Table 4 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 225, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 433, |
|
"end": 441, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 804, |
|
"end": 811, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 936, |
|
"end": 943, |
|
"text": "Table 4", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Does Diversity Matter?", |
|
"sec_num": "4.3" |
|
}, |
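The diversity measure can be sketched in the same way (again an illustration, with the per-pair averaging as an assumption): concatenate the queue representations to the training representations and measure how far the result moves away from the training set.

```python
import numpy as np

def episode_diversity(T, Q, eps=1e-12):
    """1 - cosine(T_e, concat(T_e, Q_e)) for one self-training episode."""
    C = np.vstack([T, Q])                                       # concat(T_e, Q_e)
    Tn = T / (np.linalg.norm(T, axis=1, keepdims=True) + eps)
    Cn = C / (np.linalg.norm(C, axis=1, keepdims=True) + eps)
    return 1.0 - float((Tn @ Cn.T).mean())                      # averaged over the K episodes in the paper
```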
|
{ |
|
"text": "Given the above results on diversity, we investigate whether greater diversity can further improve the performance of our model. For this analysis, we create a considerably more \"diverse\" queue at every self-training episode and treat it as the designated queue. We create the diverse queue by sampling instances with high prediction confidence from all queues. In particular, at every episode, we rank instances of each queue based on their prediction confidence and create a diverse queue by combining top r% instances of each queue, where r indicates the rate of adding new instances and set to r = 10%. We note that a smaller rate is better for adding instances because it allows the model to gradually consume unlabeled instances with high prediction confidence. Table 5 shows the effect of diverse queues on the performance of our model on both IMDb and Churn datasets. The results show that diverse queues improve the performance of our Leitner self-training model from 78.27 (reported in Table 4) to 80.71 on IMDb, i.e. 2.44 points improvement in macro-F1. However, the corresponding performance on Churn dataset decreases from 69.90 to 68.56, i.e. 1.34 points decrease in macro-F1. The inverse effect of diverse queues in case of Churn dataset is because diverse queues suffer from the issue of considerable class imbalance more than designated queues. This is because highly confident instances which accumulate in higher queues are often negative instances in case of Churn prediction. Although we tackle this issue by weighting the loss function during training, diverse positive instances which are different from their training counterparts are still needed for performance improvement.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 768, |
|
"end": 775, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Diverse Queue", |
|
"sec_num": "4.3.1" |
|
}, |
|
{

"text": "Figure 4: Deviation in instance movements for each queue (in terms of average standard deviation over all training episodes). At every episode, we keep track of instance movements among queues and measure movement variation among instances that ultimately home in on the same queue.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Do We Need Better Sampling Policies?",

"sec_num": "4.4"

},

{

"text": "We investigate the challenges associated with our data sampling policy by conducting finer-grained analysis on instance movement patterns among queues. To illustrate, assume that we have a Leitner queue of size n = 3 and the following movement patterns for four individual instances that ultimately home in on q_0 (recall that correct prediction promotes an instance to a higher queue, while wrong prediction demotes it to a lower queue):",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Do We Need Better Sampling Policies?",

"sec_num": "4.4"

},
|
{ |
|
"text": "q 0 \u2192 q 0 \u2192 q 0 \u2192 q 0 \u2192 q 0 : always in q 0 q 0 \u2192 q 1 \u2192 q 0 \u2192 q 0 \u2192 q 0 : mainly in q 0 q 0 \u2192 q 1 \u2192 q 0 \u2192 q 1 \u2192 q 0 : partially in q 0 q 0 \u2192 q 1 \u2192 q 2 \u2192 q 1 \u2192 q 0 : partially in q 0 & q 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Do We Need Better Sampling Policies?", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Although all these instances ultimately home in on the same queue, they may have different contributions to the training of a model because there is a considerable difference in the ability of the downstream network in learning their labels. Therefore, if there is a large deviation among movement patterns of instances of the same queue, better data sampling policies could be developed, perhaps through finer-grained queue-level sampling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Do We Need Better Sampling Policies?", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "For this analyses, we keep track of instance movements among queues and measure standard deviation among movement patterns of instances of the same queue at every self-training episode, and report the average of these deviations. Figure 4 shows the results. On both datasets, there is considerably greater deviation in movements for middle queues than lower/higher queues. This is meaningful because Leitner system (and other spaced repetition schedulers) are expected to keep easy and hard instances at higher and lower queues respectively. Since such instances mainly stay at lower or higher queues, we observe smaller deviation in their movements. On the other hand, the corresponding values for middle queues indicate that movements in these queues are spread out over a larger range of queues. From these results, we conjecture that a data sampling policy that conducts finer-grained analysis at queue-level (e.g. by taking into account queue movement patterns) could create better data samples. Verifying this hypothesis will be the subject for future work.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 230, |
|
"end": 238, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Do We Need Better Sampling Policies?", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Semi-supervised learning (Zhu, 2006; Chapelle et al., 2009 ) is a type of machine learning where one has access to a small amount of labeled data and a large amount of unlabeled data. Self-training is a type of semi-supervised learning to boost the performance of downstream learners (e.g. classifiers) through data sampling from unlabeled data. Most data sampling policies rely on prediction confidence of the downstream learner for sampling unlabeled data (Zhu and Goldberg, 2009) . Selftraining has been successfully applied to various tasks and domains including word sense disambiguation (HEARST, 1991; Yarowsky, 1995) , information extraction (Riloff and Jones, 1999) , and object recognition (Zhou et al., 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 36, |
|
"text": "(Zhu, 2006;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 37, |
|
"end": 58, |
|
"text": "Chapelle et al., 2009", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 482, |
|
"text": "(Zhu and Goldberg, 2009)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 593, |
|
"end": 607, |
|
"text": "(HEARST, 1991;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 608, |
|
"end": 623, |
|
"text": "Yarowsky, 1995)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 649, |
|
"end": 673, |
|
"text": "(Riloff and Jones, 1999)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 699, |
|
"end": 718, |
|
"text": "(Zhou et al., 2012)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In addition, co-training (Blum and Mitchell, 1998; Zhang and Rudnicky, 2006; Wu et al., 2018) is another type of semi-supervised learning. It assumes that each instance can be described using two distinct feature sets that provide different and complementary information about the instance. Ideally, the two views should be conditionally independent, i.e., the two feature sets of each instance are conditionally independent given the class, and each view should be sufficient, i.e., the class of an instance can be accurately predicted from each view alone. Co-training first learns separate downstream learners for each view using a small set of labeled data. The most confident predictions of each learner on the unlabeled data are then used to iteratively construct additional labeled training data. Recently Wu et al. (2018) developed an effective model based on reinforcement learning (specifically, a joint formulation of a Q-learning agent and two co-training classifiers) to learn data sampling policies and utilize unlabeled data space in the context of cotraining methods.", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 50, |
|
"text": "(Blum and Mitchell, 1998;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 51, |
|
"end": 76, |
|
"text": "Zhang and Rudnicky, 2006;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 77, |
|
"end": 93, |
|
"text": "Wu et al., 2018)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 813, |
|
"end": 829, |
|
"text": "Wu et al. (2018)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Effective semi-supervised learning algorithms based on pretraining techniques (Hinton and Salakhutdinov, 2006; Bengio et al., 2007; Erhan et al., 2010) have been developed for text classification, deep belief networks (Hinton and Salakhutdinov, 2006) , and stacked autoencoders Bengio et al., 2007) . In particular, Dai and Le (2015) developed an autoencoder for the later supervised learning process. Miyato et al. (2016) applied perturbations to word embeddings and used pretraining technique and adversarial training for effective semisupervised learning. These models although effective have not been well studied in the context of semi-supervised learning where models may have low initial performance or limited amount of labeled data. In addition, pretraining is limited by the same architecture requirement in both pretraining and fine-tuning steps.", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 110, |
|
"text": "(Hinton and Salakhutdinov, 2006;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 111, |
|
"end": 131, |
|
"text": "Bengio et al., 2007;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 132, |
|
"end": 151, |
|
"text": "Erhan et al., 2010)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 218, |
|
"end": 250, |
|
"text": "(Hinton and Salakhutdinov, 2006)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 298, |
|
"text": "Bengio et al., 2007)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 402, |
|
"end": 422, |
|
"text": "Miyato et al. (2016)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this work, we extend previous work in self-training by developing a new and effective data sampling policy based on spaced repetition (Dempster, 1989; Cepeda et al., 2006; Averell and Heathcote, 2011) which addresses some of the above challenges. In particular, our model's data sampling policy is not predetermined, it explores the entire data space and dynamically selects unlabeled instances with respect to the strength of a learner on a target task and easiness of unlabeled instances, and it relaxes the same model constraint of pretraining-based approaches by decoupling pretraining and fine-tuning steps.", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 153, |
|
"text": "(Dempster, 1989;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 154, |
|
"end": 174, |
|
"text": "Cepeda et al., 2006;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 203, |
|
"text": "Averell and Heathcote, 2011)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We propose a novel method based on spaced repetition to self-train neural networks using small amount of labeled and large amount of unlabeled data. Our model can select high-quality unlabeled data samples for self-training and outperforms current state-of-the-art semi-supervised baselines on two text classification problems. We analyze our model from various perspectives to explain its improvement gain with respect to challenges of semisupervised learning. There are several venues for future work including (a): finer-grained data sampling at queue level, (b): extending our model to other machine learning algorithms that employ iterative training, such as boosting approaches, and (c): applying this model to areas where neural networks have not been investigated, e.g. due to limited availability of labeled data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "http://ai.stanford.edu/\u02dcamaas/data/ sentiment/ 2 https://scholar.harvard.edu/hadi/ chData 3 Churn is a term relevant to customer retention in marketing discourse; examples of churny tweets are \"my days with BRAND are numbered,\" \"debating if I should stay with BRAND,\" and \"leaving BRAND in two days.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Churn prediction is a target-dependent task, largely affected by negation and function words, e.g. compare \"switching from\" and \"switching to,\" and language complexity, e.g. the tweets \"hate that I may end up leaving BRAND cause they have the best service\" is a positive yet churny tweet.5 Note that the performance on lower queues (e.g. q1 for IMDb and q0 for Churn) are higher than expected. This is because, at the end of each iteration, instances of designated (best-performing) queues-but not lower queues-are added to training data; instances of designated queues help creating better and more robust models which still perform well even if instances of lower queues are added.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "I sincerely thank Mitra Mohtarami and anonymous reviewers for their insightful comments and constructive feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Targetdependent churn classification in microblogs", |
|
"authors": [ |
|
{ |
|
"first": "Hadi", |
|
"middle": [], |
|
"last": "Amiri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iii", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2361--2367", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hadi Amiri and Hal Daum\u00e9 III. 2015. Target- dependent churn classification in microblogs. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pages 2361-2367. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Repeat before forgetting: Spaced repetition for efficient and effective training of neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Hadi Amiri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guergana", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Savova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2401--2410", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hadi Amiri, Timothy Miller, and Guergana Savova. 2017. Repeat before forgetting: Spaced repetition for efficient and effective training of neural net- works. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 2401-2410.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A simple but tough-to-beat baseline for sentence embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Sanjeev", |
|
"middle": [], |
|
"last": "Arora", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yingyu", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tengyu", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "International conference on learning representations (ICLR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings. In International conference on learning representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The form of the forgetting curve and the fate of memories", |
|
"authors": [ |
|
{ |
|
"first": "Lee", |
|
"middle": [], |
|
"last": "Averell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Heathcote", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Mathematical Psychology", |
|
"volume": "55", |
|
"issue": "1", |
|
"pages": "25--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lee Averell and Andrew Heathcote. 2011. The form of the forgetting curve and the fate of memories. Jour- nal of Mathematical Psychology, 55(1):25-35.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Greedy layer-wise training of deep networks", |
|
"authors": [ |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Lamblin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Popovici", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Larochelle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "153--160", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. 2007. Greedy layer-wise training of deep networks. In Advances in neural informa- tion processing systems, pages 153-160.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Combining labeled and unlabeled data with co-training", |
|
"authors": [ |
|
{ |
|
"first": "Avrim", |
|
"middle": [], |
|
"last": "Blum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the eleventh annual conference on Computational learning theory", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "92--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Avrim Blum and Tom Mitchell. 1998. Combining la- beled and unlabeled data with co-training. In Pro- ceedings of the eleventh annual conference on Com- putational learning theory, pages 92-100. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Distributed practice in verbal recall tasks: A review and quantitative synthesis", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Nicholas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harold", |
|
"middle": [], |
|
"last": "Cepeda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Pashler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Vul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "John", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Doug", |
|
"middle": [], |
|
"last": "Wixted", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Rohrer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Psychological bulletin", |
|
"volume": "132", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicholas J Cepeda, Harold Pashler, Edward Vul, John T Wixted, and Doug Rohrer. 2006. Distributed practice in verbal recall tasks: A review and quanti- tative synthesis. Psychological bulletin, 132(3):354.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Semi-supervised learning (chapelle, o", |
|
"authors": [ |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Chapelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernhard", |
|
"middle": [], |
|
"last": "Scholkopf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Zien", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Olivier Chapelle, Bernhard Scholkopf, and Alexander Zien. 2009. Semi-supervised learning (chapelle, o. et al., eds.; 2006)[book reviews].", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Semi-supervised sequence learning", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3079--3087", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in neural informa- tion processing systems, pages 3079-3087.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Spacing effects and their implications for theory and practice", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dempster", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Educational Psychology Review", |
|
"volume": "1", |
|
"issue": "4", |
|
"pages": "309--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frank N Dempster. 1989. Spacing effects and their im- plications for theory and practice. Educational Psy- chology Review, 1(4):309-330.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Why does unsupervised pre-training help deep learning", |
|
"authors": [ |
|
{ |
|
"first": "Dumitru", |
|
"middle": [], |
|
"last": "Erhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre-Antoine", |
|
"middle": [], |
|
"last": "Manzagol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Vincent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samy", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "625--660", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. 2010. Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research, 11(Feb):625-660.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Researcher homepage classification using unlabeled data", |
|
"authors": [ |
|
{ |
|
"first": "Cornelia", |
|
"middle": [], |
|
"last": "Sujatha Das Gollapalli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prasenjit", |
|
"middle": [], |
|
"last": "Caragea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C Lee", |
|
"middle": [], |
|
"last": "Mitra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Giles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 22nd international conference on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "471--482", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sujatha Das Gollapalli, Cornelia Caragea, Prasenjit Mitra, and C Lee Giles. 2013. Researcher homepage classification using unlabeled data. In Proceedings of the 22nd international conference on World Wide Web, pages 471-482. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Churn identification in microblogs using convolutional neural networks with structured logical knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Mourad", |
|
"middle": [], |
|
"last": "Gridach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hatem", |
|
"middle": [], |
|
"last": "Haddad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hala", |
|
"middle": [], |
|
"last": "Mulki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 3rd Workshop on Noisy User-generated Text", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "21--30", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mourad Gridach, Hatem Haddad, and Hala Mulki. 2017. Churn identification in microblogs using con- volutional neural networks with structured logical knowledge. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 21-30.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Noun homograph disambiguation using local context in large corpora", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hearst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proc. 7th", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M HEARST. 1991. Noun homograph disambiguation using local context in large corpora. In Proc. 7th", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Annual Conference of the Centre for the New OED and Text Research: Using Corpora", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--22", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Conference of the Centre for the New OED and Text Research: Using Corpora, pages 1-22.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Reducing the dimensionality of data with neural networks. science", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Geoffrey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan R", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "313", |
|
"issue": "", |
|
"pages": "504--507", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Geoffrey E Hinton and Ruslan R Salakhutdinov. 2006. Reducing the dimensionality of data with neural net- works. science, 313(5786):504-507.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Deep unordered composition rivals syntactic methods for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varun", |
|
"middle": [], |
|
"last": "Manjunatha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iii", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of ACL-IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2015. Deep unordered compo- sition rivals syntactic methods for text classification. In Proceedings of ACL-IJCNLP.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Bag of tricks for efficient text classification", |
|
"authors": [ |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "427--431", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Pa- pers, pages 427-431. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "So lernt man lernen", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Leitner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1974, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Leitner. 1974. So lernt man lernen. Herder.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Learning word vectors for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Maas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Daly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "142--150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the as- sociation for computational linguistics: Human lan- guage technologies-volume 1, pages 142-150. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1301.3781" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Adversarial training methods for semisupervised text classification", |
|
"authors": [ |
|
{ |
|
"first": "Takeru", |
|
"middle": [], |
|
"last": "Miyato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ian", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Goodfellow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Takeru Miyato, Andrew M Dai, and Ian Goodfel- low. 2016. Adversarial training methods for semi- supervised text classification. International confer- ence on learning representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Boosting self-supervised learning via knowledge transfer", |
|
"authors": [ |
|
{ |
|
"first": "Mehdi", |
|
"middle": [], |
|
"last": "Noroozi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ananth", |
|
"middle": [], |
|
"last": "Vinjimoor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Favaro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hamed", |
|
"middle": [], |
|
"last": "Pirsiavash", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9359--9367", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mehdi Noroozi, Ananth Vinjimoor, Paolo Favaro, and Hamed Pirsiavash. 2018. Boosting self-supervised learning via knowledge transfer. In 2018 IEEE Con- ference on Computer Vision and Pattern Recogni- tion, CVPR 2018, Salt Lake City, UT, USA, June 18- 22, 2018, pages 9359-9367.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Education of a model student", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Timothy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Novikoff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Kleinberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Strogatz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the National Academy of Sciences", |
|
"volume": "109", |
|
"issue": "6", |
|
"pages": "1868--1873", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy P Novikoff, Jon M Kleinberg, and Steven H Strogatz. 2012. Education of a model student. Proceedings of the National Academy of Sciences, 109(6):1868-1873.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Learning dictionaries for information extraction by multi-level bootstrapping", |
|
"authors": [ |
|
{ |
|
"first": "Ellen", |
|
"middle": [], |
|
"last": "Riloff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rosie", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the sixteenth national conference on Artificial intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "474--479", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellen Riloff and Rosie Jones. 1999. Learning dictionar- ies for information extraction by multi-level boot- strapping. In Proceedings of the sixteenth national conference on Artificial intelligence, pages 474-479.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Baseline needs more love: On simple wordembedding-based models and associated pooling mechanisms", |
|
"authors": [ |
|
{ |
|
"first": "Dinghan", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guoyin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenlin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Renqiang Min", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qinliang", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yizhe", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chunyuan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ricardo", |
|
"middle": [], |
|
"last": "Henao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Carin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dinghan Shen, Guoyin Wang, Wenlin Wang, Mar- tin Renqiang Min, Qinliang Su, Yizhe Zhang, Chun- yuan Li, Ricardo Henao, and Lawrence Carin. 2018. Baseline needs more love: On simple word- embedding-based models and associated pooling mechanisms. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics, volume 1.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", |
|
"authors": [ |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Vincent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Larochelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isabelle", |
|
"middle": [], |
|
"last": "Lajoie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre-Antoine", |
|
"middle": [], |
|
"last": "Manzagol", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of machine learning research", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "3371--3408", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a lo- cal denoising criterion. Journal of machine learning research, 11(Dec):3371-3408.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Learning from small sample sets by combining unsupervised meta-training with cnns", |
|
"authors": [ |
|
{ |
|
"first": "Yu-Xiong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martial", |
|
"middle": [], |
|
"last": "Hebert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "244--252", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu-Xiong Wang and Martial Hebert. 2016. Learning from small sample sets by combining unsupervised meta-training with cnns. In Advances in Neural In- formation Processing Systems, pages 244-252.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Reinforced co-training", |
|
"authors": [ |
|
{ |
|
"first": "Jiawei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"Yang" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1252--1262", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiawei Wu, Lei Li, and William Yang Wang. 2018. Re- inforced co-training. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 1252-1262.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Unsupervised word sense disambiguation rivaling supervised methods", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the 33rd annual meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "189--196", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Yarowsky. 1995. Unsupervised word sense dis- ambiguation rivaling supervised methods. In Pro- ceedings of the 33rd annual meeting on Association for Computational Linguistics, pages 189-196. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "A new data selection approach for semi-supervised acoustic modeling", |
|
"authors": [ |
|
{ |
|
"first": "Rong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Rudnicky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings. 2006 IEEE International Conference on", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rong Zhang and Alexander I Rudnicky. 2006. A new data selection approach for semi-supervised acoustic modeling. In Acoustics, Speech and Signal Process- ing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on, volume 1, pages I-I. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Self-training with selection-byrejection", |
|
"authors": [ |
|
{ |
|
"first": "Yan", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Murat", |
|
"middle": [], |
|
"last": "Kantarcioglu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bhavani", |
|
"middle": [], |
|
"last": "Thuraisingham", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "IEEE 12th International Conference on", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "795--803", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yan Zhou, Murat Kantarcioglu, and Bhavani Thu- raisingham. 2012. Self-training with selection-by- rejection. In Data Mining (ICDM), 2012 IEEE 12th International Conference on, pages 795-803. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Semi-supervised learning literature survey", |
|
"authors": [ |
|
{ |
|
"first": "Xiaojin", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaojin Zhu. 2006. Semi-supervised learning literature survey. Technical Report 1530, Computer Science, University of Wisconsin-Madison, 2(3):4.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Introduction to semi-supervised learning. Synthesis lectures on artificial intelligence and machine learning", |
|
"authors": [ |
|
{ |
|
"first": "Xiaojin", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "1--130", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaojin Zhu and Andrew B Goldberg. 2009. Intro- duction to semi-supervised learning. Synthesis lec- tures on artificial intelligence and machine learning, 3(1):1-130.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "(a) and (b): Average macro-F1 performance computed over individual queues using validation dataset across training episodes (average performance of M q s at line 6 of Algorithm 2). (a): Performance on IMDb with optimal queue length of n = 5, and (b): performance on Churn with optimal queue length of n = 7: note that none of unlabeled instances has made it to the last queue. (c): Comparison of highest and designated queues in terms of instance similarity to training data; high train indicates similarity between (representations of) instances in the highest queue and training instances, and desig train shows the corresponding values for instances in the designated queue. + and \u2212 signs indicate positive and negative pseudo/gold labels for unlabeled/training instances.", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "The amount of diversity that instances of each queue introduce if added to training data (on IMDb).", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"text": "Statistics of dataset used in experiments.", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"text": "Macro-F1 performance of models across datasets; Note that Standard ST (SST) samples only 1.4K and 0 instances from IMDb and Churn datasets respectively; sampling more data decreases SST's performance down to 66.94 and 57.04 perhaps due to ineffective exploring of data space. Our model achieves its best performance on IMDb and Churn datasets with n = 5 and n = 7 Leitner queues respectively.", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |