{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:36:05.068393Z" }, "title": "Multi-task Peer-Review Score Prediction", "authors": [ { "first": "Jiyi", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Yamanashi", "location": { "settlement": "Kofu", "country": "Japan" } }, "email": "" }, { "first": "Ayaka", "middle": [], "last": "Sato", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Yamanashi", "location": { "settlement": "Kofu", "country": "Japan" } }, "email": "" }, { "first": "Kazuya", "middle": [], "last": "Shimura", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Yamanashi", "location": { "settlement": "Kofu", "country": "Japan" } }, "email": "" }, { "first": "Fumiyo", "middle": [], "last": "Fukumoto", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Yamanashi", "location": { "settlement": "Kofu", "country": "Japan" } }, "email": "fukumoto4@yamanashi.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Automatic prediction of the peer-review aspect scores of academic papers can be a useful assistant tool for both reviewers and authors. To handle the small size of published datasets on the target aspect of scores, we propose a multi-task approach to leverage additional information from other aspects of scores for improving the performance of the target aspect. Because one of the problems of building multi-task models is how to select the proper resources of auxiliary tasks and how to select the proper shared structures, we thus propose a multi-task shared structure encoding approach that automatically selects good shared network structures as well as good auxiliary resources. 
The experiments based on peer-review datasets show that our approach is effective and has better performance on the target scores than the single-task method and na\u00efve multi-task methods.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Automatic prediction of the peer-review aspect scores of academic papers can be a useful assistant tool for both reviewers and authors. To handle the small size of published datasets on the target aspect of scores, we propose a multi-task approach to leverage additional information from other aspects of scores for improving the performance of the target aspect. Because one of the problems of building multi-task models is how to select the proper resources of auxiliary tasks and how to select the proper shared structures, we thus propose a multi-task shared structure encoding approach that automatically selects good shared network structures as well as good auxiliary resources. The experiments based on peer-review datasets show that our approach is effective and has better performance on the target scores than the single-task method and na\u00efve multi-task methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Automatic prediction of the peer-review aspect scores (e.g. \"clarity\" and \"originality\") of academic papers can be a useful assistant tool for both reviewers and authors. On the one hand, because the number of submissions to AI-related international conferences has significantly increased in recent years, it is challenging for the review process. Rejecting some papers with evidently low quality can reduce the workload. 
On the other hand, suggesting the weak aspects to the authors can also help them improve their papers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are several existing works related to paper review that concentrate on the quality of the review (De Silva and Vance, 2017; Langford and Guzdial, 2015) . Huang (2018) predicted the acceptance of a paper based only on its visual appearance (Huang, 2018) . Automatic essay scoring (Dong and Zhang, 2016; Dong et al., 2017; Amorim et al., 2018) can be regarded as a related sub-topic that mainly focuses on the grammatical and syntactic features of short essays. PeerRead is the first public dataset of scientific peer reviews for research purposes (Kang et al., 2018) . It provides detailed peer reviews including the final decisions, the aspect scores such as clarity and originality, and the review contents. It raises two NLP tasks, paper acceptance classification and review aspect score prediction; we focus on the latter in this paper. However, the dataset is relatively small, and the set of papers available for each review aspect can be different. To improve the performance of aspect score prediction, we propose a solution based on multi-task learning that leverages additional rich information from the resources obtained for other aspect scores. We treat the prediction of each aspect as a separate task. 
The multi-task model for each aspect score follows a main-auxiliary design.", "cite_spans": [ { "start": 135, "end": 162, "text": "Langford and Guzdial, 2015)", "ref_id": "BIBREF13" }, { "start": 165, "end": 177, "text": "Huang (2018)", "ref_id": null }, { "start": 263, "end": 276, "text": "(Huang, 2018)", "ref_id": null }, { "start": 303, "end": 325, "text": "(Dong and Zhang, 2016;", "ref_id": "BIBREF3" }, { "start": 326, "end": 344, "text": "Dong et al., 2017;", "ref_id": "BIBREF4" }, { "start": 345, "end": 365, "text": "Amorim et al., 2018)", "ref_id": "BIBREF0" }, { "start": 568, "end": 587, "text": "(Kang et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Multi-task methods have been widely utilized in many NLP tasks, such as summarization (Isonuma et al., 2017; Guo et al., 2018) , classification (Liu et al., 2017b; Shimura et al., 2019) , parsing (Hershcovich et al., 2018) , sequence labeling (Lin et al., 2018) , and entity and relation extraction (Luan et al., 2018) . When building a multi-task model, there are two critical issues, i.e., which auxiliary resources (tasks) can be used for sharing useful information and how to share the information among the tasks. 
In these previous studies, researchers typically select specific auxiliary resources and design a handcrafted shared structure in the model for a particular NLP topic.", "cite_spans": [ { "start": 86, "end": 108, "text": "(Isonuma et al., 2017;", "ref_id": "BIBREF8" }, { "start": 109, "end": 126, "text": "Guo et al., 2018)", "ref_id": "BIBREF5" }, { "start": 144, "end": 163, "text": "(Liu et al., 2017b;", "ref_id": "BIBREF17" }, { "start": 164, "end": 185, "text": "Shimura et al., 2019)", "ref_id": "BIBREF23" }, { "start": 196, "end": 222, "text": "(Hershcovich et al., 2018)", "ref_id": "BIBREF6" }, { "start": 243, "end": 261, "text": "(Lin et al., 2018)", "ref_id": "BIBREF14" }, { "start": 288, "end": 307, "text": "(Luan et al., 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, for different datasets and tasks, there may exist better auxiliary resources and shared structures. We thus propose an approach that automatically selects the shared structures as well as the auxiliary resources that are more beneficial for the main task. There are diverse parameter sharing manners in the multi-task methods for deep neural networks (Ruder, 2017) . How to define the exploration space for automatic selection is a problem. Our approach encodes the multi-task shared structures in the manner of hard parameter sharing and thereby defines the exploration space. We also propose a strategy to search for the optimal structures and auxiliaries among the candidate models. It is also flexible to add more auxiliary tasks. Our approach can be integrated with hyperparameter optimization methods (Snoek et al., 2012) or network architecture search methods (Zoph and Le, 2016) for searching. Furthermore, our method is capable of handling not only review score prediction but also other NLP tasks such as text classification. Our main contributions can be summarized as follows. (1). 
We address the application of predicting the peer-review aspect scores of papers, which can serve as a useful assistant tool for both reviewers and authors. (2). We propose a multi-task shared structure encoding method which automatically selects good shared network structures as well as good auxiliary resources. (3). The experiments based on real paper peer-review datasets show that our approach can build a multi-task model with effective structures and auxiliaries that has better performance than the single-task model and na\u00efve multi-task models.", "cite_spans": [ { "start": 363, "end": 376, "text": "(Ruder, 2017)", "ref_id": "BIBREF20" }, { "start": 806, "end": 826, "text": "(Snoek et al., 2012)", "ref_id": "BIBREF24" }, { "start": 866, "end": 885, "text": "(Zoph and Le, 2016)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Peer-review aspect score prediction is a regression problem over text data. We can utilize existing text classification methods (Kim, 2014; Liu et al., 2017a ) based on deep neural networks for this problem by changing the loss function from cross-entropy for classification to mean squared error for regression. Without loss of generality, we use the basic CNN-based text classification model (Kim, 2014) as the example to facilitate the description of our multi-task approach. Figure 1 shows the architecture of this model for predicting the aspect score. It includes the embedding layer, a convolutional and pooling layer, and fully connected layers. The multi-task approach we propose is not limited to this particular model. 
It can also be integrated with neural networks whose structures are similar to this example, e.g., XML-CNN (Liu et al., 2017a) and DPCNN (Johnson and Zhang, 2017) .", "cite_spans": [ { "start": 128, "end": 139, "text": "(Kim, 2014;", "ref_id": "BIBREF12" }, { "start": 140, "end": 157, "text": "Liu et al., 2017a", "ref_id": "BIBREF16" }, { "start": 393, "end": 404, "text": "(Kim, 2014)", "ref_id": "BIBREF12" }, { "start": 823, "end": 842, "text": "(Liu et al., 2017a)", "ref_id": "BIBREF16" }, { "start": 853, "end": 878, "text": "(Johnson and Zhang, 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 478, "end": 486, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Preliminary", "sec_num": "2.1" }, { "text": "We have n single tasks (i.e., aspect scores) and assume that they have the same network structures with k layers. For each task, we regard it as the main task and search for the proper shared structures and auxiliary tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminary", "sec_num": "2.1" }, { "text": "To automatically search for the proper shared structures and auxiliary tasks, we need to define the exploration space. Because it is difficult to mix the diverse parameter sharing manners proposed in various multi-task methods (Ruder, 2017) , we utilize the typical manner of hard parameter sharing as the starting point to implement our idea. Other manners of parameter sharing will be addressed in future work. Figure 2 shows an example of the shared structure encoding (SSE) that we propose with three tasks (one main task and two auxiliary tasks). Given a main task t_0, for each auxiliary task t_i, if the jth layer of t_i is shared with t_0, we encode this shared structure as l_ij = 1; if the jth layer is not shared, then l_ij = 0. We do not encode the shared structures among auxiliary tasks, to decrease the complexity of the model. It is flexible to add more auxiliary tasks to a model. There are two special cases of this SSE. 
One is l_ij = 1 for all auxiliary tasks; the corresponding model is equivalent to one single model for all tasks. Another is l_ij = 0 for all auxiliary tasks; it is equivalent to a single-task model for the main task. In other words, these models are also included in the search stage. Lu et al. (2017) adaptively generate the feature sharing structure by splitting the network into branches without merging. Its exploration space is a subset of our approach.", "cite_spans": [ { "start": 219, "end": 232, "text": "(Ruder, 2017)", "ref_id": "BIBREF20" }, { "start": 1225, "end": 1241, "text": "Lu et al. (2017)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 405, "end": 413, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Multi-task Shared Structures", "sec_num": "2.2" }, { "text": "Our multi-task approach utilizes a main-auxiliary manner, rather than a manner which treats all tasks equally. The latter manner sums the weighted losses of all tasks and requires a trade-off among the tasks (Sener and Koltun, 2018 ), which may not be able to reach optimal results for a specific task. In our approach, we thus use every single task as the main task respectively and the other tasks as the candidates for auxiliary tasks. This makes it flexible to define all candidate shared structures in the exploration space and to decrease the size of the exploration space.", "cite_spans": [ { "start": 216, "end": 239, "text": "(Sener and Koltun, 2018", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-task Shared Structures", "sec_num": "2.2" }, { "text": "In our search strategy, we denote the number of auxiliary tasks in a model as m, m \u2264 n \u2212 1. There are C(n\u22121, m) combinations of the auxiliary tasks. For each combination of auxiliary tasks, we search the shared structures and select the one with minimized loss. 
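As a minimal illustrative sketch (our own, not code from the paper; the function name sse_candidates is hypothetical), the SSE exploration space can be enumerated directly: each auxiliary task contributes k bits l_ij, so a candidate model is one k-bit sharing pattern per auxiliary task.

```python
from itertools import product

def sse_candidates(m, k):
    # One k-bit sharing pattern per auxiliary task: bit j is 1 if
    # layer j of that auxiliary task is shared with the main task.
    layer_bits = list(product((0, 1), repeat=k))   # 2**k patterns per task
    return list(product(layer_bits, repeat=m))     # 2**(k*m) candidates in total

cands = sse_candidates(m=2, k=3)
assert len(cands) == 2 ** (3 * 2)                  # 64 candidate structures
# Special cases: all-ones is one single model for all tasks;
# all-zeros is a single-task model for the main task.
assert ((1, 1, 1), (1, 1, 1)) in cands
assert ((0, 0, 0), (0, 0, 0)) in cands
```

Both special cases are ordinary points of the exploration space, which is why the search stage covers them automatically.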
For the selection criterion, because the dataset is small, we use the loss on both the training set and the validation set rather than the loss on the validation set alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shared Structure and Auxiliary Task Search", "sec_num": "2.2.1" }, { "text": "After selecting the shared structures for all combinations of the auxiliary tasks, we select the combination whose average loss over all candidate shared structures is minimum. For a main task, the number of candidate multi-task models is N_m = C(n\u22121, m) \u00d7 2^{km}. When m = n \u2212 1, i.e., using all other tasks as the auxiliary tasks, this number is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shared Structure and Auxiliary Task Search", "sec_num": "2.2.1" }, { "text": "N_{n\u22121} = 2^{k(n\u22121)}. If m \u226a n \u2212 1, then N_m \u226a N_{n\u22121}. If N_m", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shared Structure and Auxiliary Task Search", "sec_num": "2.2.1" }, { "text": "is small, we can explore all candidates. Otherwise, we need to resort to other methods to search the exploration space, for example, hyperparameter optimization methods based on Bayesian optimization (Snoek et al., 2012) ; or network architecture search (NAS) methods based on reinforcement learning (Zoph and Le, 2016; Zoph et al., 2018; Liu et al., 2018) . Random search can also be used. 
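The counting argument above can be checked with a short sketch (our own, not the authors' code; the helper name n_candidates is hypothetical, and k = 3 is an assumption consistent with the three-bit SSEs such as 000 and 111 reported later):

```python
from math import comb

def n_candidates(n, k, m):
    # Choose m auxiliary tasks from the other n-1 aspects, then pick a
    # k-bit sharing pattern for each chosen auxiliary: C(n-1, m) * 2**(k*m).
    return comb(n - 1, m) * 2 ** (k * m)

# With n = 6 aspects and (assumed) k = 3 layers:
assert n_candidates(6, 3, 1) == 5 * 8      # 40: small enough to explore fully
assert n_candidates(6, 3, 2) == 10 * 64    # 640
assert n_candidates(6, 3, 5) == 8 ** 5     # 32768: motivates random search / NAS
```

The exponential growth in m is what makes exhaustive exploration infeasible for large m.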
3 Experiments", "cite_spans": [ { "start": 208, "end": 228, "text": "(Snoek et al., 2012)", "ref_id": "BIBREF24" }, { "start": 309, "end": 328, "text": "(Zoph and Le, 2016;", "ref_id": "BIBREF25" }, { "start": 329, "end": 347, "text": "Zoph et al., 2018;", "ref_id": "BIBREF26" }, { "start": 348, "end": 365, "text": "Liu et al., 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Shared Structure and Auxiliary Task Search", "sec_num": "2.2.1" }, { "text": "We use the ICLR and ACL datasets in the PeerRead dataset (Kang et al., 2018) because they provide the scores of the peer-review aspects. Table 1 shows the statistics of these datasets. We utilize the papers which have scores in some of the six aspects (n = 6), i.e., Clarity (cla), Originality (ori), Correctness (cor), Comparison (com), Substance (sub) and Impact (imp). The scale of these scores is from 1 to 5. We utilize the dataset splitting provided by PeerRead. Because not all papers contain all six aspects in the ICLR dataset, the number of papers differs across aspects. For the ground truth, we use the mean score of multiple reviews, which is the standard method of aggregating multiple scores without considering review bias. Analyzing the review bias among different reviewers is beyond the scope of this paper. Note that although PeerRead contains both paper text and review text, we only used the paper text, because the purpose of this work is to predict the aspect scores before the review process. 
Moreover, in the PeerRead (Kang et al., 2018) article, the authors utilized only the first 1,000 tokens because the paper text is extremely long, whereas we used the full paper text with our own text pre-processing in the experiments; the results obtained in our experiments and those reported in PeerRead are thus not exactly comparable.", "cite_spans": [ { "start": 58, "end": 77, "text": "(Kang et al., 2018)", "ref_id": "BIBREF11" }, { "start": 1060, "end": 1079, "text": "(Kang et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 138, "end": 146, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental Settings", "sec_num": "3.1" }, { "text": "We remove the stop words and apply stemming to the words in the papers. The initial word embeddings in the models are pre-trained by fastText from each dataset. The hyperparameters of the CNN structures for the approaches follow the common ones used in existing work (Shimura et al., 2018) . Table 2 shows the parameter settings of CNN and XML-CNN, which are used as basic models of the proposed multi-task approach in the paper.", "cite_spans": [ { "start": 268, "end": 290, "text": "(Shimura et al., 2018)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 293, "end": 300, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experimental Settings", "sec_num": "3.1" }, { "text": "The baselines are as follows. Single task model: It is equivalent to the case that the SSEs of all auxiliary tasks are \"000\". 
It uses one network for one aspect score, like the models in (Dong and Zhang, 2016; Dong et al., 2017) .", "cite_spans": [ { "start": 182, "end": 204, "text": "(Dong and Zhang, 2016;", "ref_id": "BIBREF3" }, { "start": 205, "end": 223, "text": "Dong et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "3.1" }, { "text": "All-in-one (Ain1): It builds a single model in which the main task and the m auxiliary tasks use the same network, like the models in PeerRead (Kang et al., 2018) . It is equivalent to treating the prediction of all aspects as one task, or as a multi-task model in which the SSEs of all auxiliary tasks are \"111\".", "cite_spans": [ { "start": 134, "end": 153, "text": "(Kang et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "3.1" }, { "text": "Average performance of all explored Multi-Task models (AMT): It is equivalent to the expected performance when randomly selecting a multi-task model from all candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "3.1" }, { "text": "We select the aspect of Clarity, which has the most test data, as the main task for the evaluation in this paper. The evaluation metric is the Root Mean Square Error (RMSE). We first verify our approach by using CNN (Kim, 2014) as the basic model. We set m \u2208 [1, 2, n \u2212 1]. When m = n \u2212 1, N_{n\u22121} = 8^5 is very large, so we use random search, exploring 1,000 candidate models, and evaluate the mean performance over five runs.", "cite_spans": [ { "start": 211, "end": 222, "text": "(Kim, 2014)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "3.1" }, { "text": "We first verify whether our SSE method can select a good shared structure for a given combination of auxiliary tasks. Table 3 .(a) shows the results in the case of m = 1. 
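A small sketch of the evaluation pipeline (our own illustration, not the paper's code): the gold score of a paper is the mean over its multiple review scores, and RMSE is computed against the model predictions. The review values below are made up for illustration only.

```python
def rmse(preds, golds):
    # Root Mean Square Error between predicted and gold aspect scores.
    assert len(preds) == len(golds) and preds
    return (sum((p - g) ** 2 for p, g in zip(preds, golds)) / len(preds)) ** 0.5

# Gold score per paper: mean over its reviews (aspect scores on a 1-5 scale).
reviews = [[4, 5], [2, 3, 3]]                # hypothetical review scores
golds = [sum(r) / len(r) for r in reviews]   # [4.5, 2.666...]
preds = [4.0, 3.0]                           # hypothetical model outputs
assert round(rmse(preds, golds), 3) == 0.425
```

Lower RMSE is better; a perfect predictor would reach 0.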
It shows that our method successfully builds a better model than the single task model and the model in which all tasks completely share parameters with each other. The comparison result with AMT shows our method can select a better shared structure from all candidate structures. Table 3 : Results (Performance and SSEs) of shared structure selection for each combination of auxiliary tasks. Main task: \"Clarity\"; basic model: CNN; dataset: ICLR; metric: RMSE; performance of single task model: 0.849. Bold marks the best performance (including performance of the single task model). Italic marks the better one between \"Our\" and \"AMT\". Table 4 : Results of selecting both shared structures and auxiliary tasks. Main task: \"Clarity\"; basic model: CNN; dataset: ICLR; performance of single task model: 0.849. Bold marks the best performance. Italic marks the better one between \"Our\" and \"AMT\".", "cite_spans": [], "ref_spans": [ { "start": 118, "end": 125, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 441, "end": 448, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 798, "end": 805, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "3.2" }, { "text": "Our method can again select a better shared structure from all candidate structures, but it cannot always be better than the single task model this time. This is because the corresponding combinations of auxiliaries are not proper. After using our search strategy to select the combinations of auxiliaries, in the 2nd row of Table 4 , our method can select the auxiliaries and structures with better performance. In addition, in Table 4 , the performance for m = 2 is better than that for m = 1, which shows that increasing m can improve the performance. However, a large m results in a large N_m. 
In the case of m = 5, although it is possible to obtain a better model than for m = 1 or 2 by exploring all N_5 = 8^5 candidate models, exploring only a subset (1,000 models) does not reach better performance, even though this subset is larger than N_2. Without a better search method, using a small m (e.g., m = 2) rather than a large m (e.g., m = 5, i.e., all other aspects as auxiliaries) is recommended. Furthermore, we also change each of the following four settings while keeping the other settings unchanged, to verify our approach under different conditions: (1). basic model: one of the SOTA text classification methods, XML-CNN (Liu et al., 2017a) ; (2). main task: Originality, i.e., besides the clarity aspect, we also show the results when another aspect is the main task; (3). dataset: ACL; (4). embedding: the pre-trained fastText embeddings are initialized from embeddings trained on Wikipedia data. Table 5 shows that our approach robustly generates better results in different settings. Table 4 and 5 also show that the selected auxiliary tasks and shared structures are diverse across settings; it is thus better to select them automatically rather than decide them manually. As for the underlying characteristics of the review aspects in this dataset, there is no apparent observation that one aspect is exactly related to the main aspect and must be the auxiliary. 
Finally, from the results of \"originality\" aspect in Table 5 , it shows that \"substance\", \"comparison\" and \"impact\" support \"originality\", the selected aspects by SSEs is reasonable and fit human intuitions.", "cite_spans": [ { "start": 1171, "end": 1190, "text": "(Liu et al., 2017a)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 280, "end": 287, "text": "Table 4", "ref_id": null }, { "start": 384, "end": 391, "text": "Table 4", "ref_id": null }, { "start": 1450, "end": 1457, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 1542, "end": 1550, "text": "Table 4", "ref_id": null }, { "start": 1977, "end": 1984, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "3.2" }, { "text": "In this paper, we focus on the peer-review score prediction for papers. We propose a multi-task shared structure encoding approach which automatically selects good shared network structures as well as good auxiliary resources. There are some issues in the future work, e.g., trying search methods such as network architecture search and finding evidences of the score predictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" } ], "back_matter": [ { "text": "This work was partially supported by KDDI Foundation Research Grant Program.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Automated essay scoring in the presence of biased ratings", "authors": [ { "first": "Evelin", "middle": [], "last": "Amorim", "suffix": "" }, { "first": "Marcia", "middle": [], "last": "Can\u00e7ado", "suffix": "" }, { "first": "Adriano", "middle": [], "last": "Veloso", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", 
"pages": "229--237", "other_ids": { "DOI": [ "10.18653/v1/N18-1021" ] }, "num": null, "urls": [], "raw_text": "Evelin Amorim, Marcia Can\u00e7ado, and Adriano Veloso. 2018. Automated essay scoring in the presence of biased ratings. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 229-237, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.04606" ] }, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vec- tors with subword information. arXiv preprint arXiv:1607.04606.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Preserving the quality of scientific research: peer review of research articles", "authors": [ { "first": "De", "middle": [], "last": "Pali", "suffix": "" }, { "first": "Candace", "middle": [ "K" ], "last": "Silva", "suffix": "" }, { "first": "", "middle": [], "last": "Vance", "suffix": "" } ], "year": 2017, "venue": "Scientific Scholarly Communication", "volume": "", "issue": "", "pages": "73--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pali UK De Silva and Candace K Vance. 2017. Preserv- ing the quality of scientific research: peer review of research articles. In Scientific Scholarly Communi- cation, pages 73-99. 
Springer.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Automatic features for essay scoring -an empirical study", "authors": [ { "first": "Fei", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1072--1077", "other_ids": { "DOI": [ "10.18653/v1/D16-1115" ] }, "num": null, "urls": [], "raw_text": "Fei Dong and Yue Zhang. 2016. Automatic features for essay scoring -an empirical study. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1072-1077, Austin, Texas. Association for Computational Lin- guistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Attentionbased recurrent convolutional neural network for automatic essay scoring", "authors": [ { "first": "Fei", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "153--162", "other_ids": { "DOI": [ "10.18653/v1/K17-1017" ] }, "num": null, "urls": [], "raw_text": "Fei Dong, Yue Zhang, and Jie Yang. 2017. Attention- based recurrent convolutional neural network for au- tomatic essay scoring. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 153-162, Vancou- ver, Canada. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Soft layer-specific multi-task summarization with entailment and question generation", "authors": [ { "first": "Han", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Ramakanth", "middle": [], "last": "Pasunuru", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "687--697", "other_ids": { "DOI": [ "10.18653/v1/P18-1064" ] }, "num": null, "urls": [], "raw_text": "Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2018. Soft layer-specific multi-task summarization with entailment and question generation. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 687-697, Melbourne, Australia. As- sociation for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multitask parsing across semantic representations", "authors": [ { "first": "Daniel", "middle": [], "last": "Hershcovich", "suffix": "" }, { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "373--385", "other_ids": { "DOI": [ "10.18653/v1/P18-1035" ] }, "num": null, "urls": [], "raw_text": "Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2018. Multitask parsing across semantic representa- tions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 373-385, Melbourne, Australia. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Extractive summarization using multi-task learning with document classification", "authors": [ { "first": "Masaru", "middle": [], "last": "Isonuma", "suffix": "" }, { "first": "Toru", "middle": [], "last": "Fujino", "suffix": "" }, { "first": "Junichiro", "middle": [], "last": "Mori", "suffix": "" }, { "first": "Yutaka", "middle": [], "last": "Matsuo", "suffix": "" }, { "first": "Ichiro", "middle": [], "last": "Sakata", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2101--2110", "other_ids": { "DOI": [ "10.18653/v1/D17-1223" ] }, "num": null, "urls": [], "raw_text": "Masaru Isonuma, Toru Fujino, Junichiro Mori, Yutaka Matsuo, and Ichiro Sakata. 2017. Extractive sum- marization using multi-task learning with document classification. In Proceedings of the 2017 Confer- ence on Empirical Methods in Natural Language Processing, pages 2101-2110, Copenhagen, Den- mark. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Deep pyramid convolutional neural networks for text classification", "authors": [ { "first": "Rie", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "562--570", "other_ids": { "DOI": [ "10.18653/v1/P17-1052" ] }, "num": null, "urls": [], "raw_text": "Rie Johnson and Tong Zhang. 2017. Deep pyramid convolutional neural networks for text classification. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics, pages 562- 570, Vancouver, Canada. 
Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.01759" ] }, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A dataset of peer reviews (PeerRead): Collection, insights and NLP applications", "authors": [ { "first": "Dongyeop", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Waleed", "middle": [], "last": "Ammar", "suffix": "" }, { "first": "Bhavana", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Madeleine", "middle": [], "last": "Van Zuylen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Kohlmeier", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1647--1661", "other_ids": { "DOI": [ "10.18653/v1/N18-1149" ] }, "num": null, "urls": [], "raw_text": "Dongyeop Kang, Waleed Ammar, Bhavana Dalvi, Madeleine van Zuylen, Sebastian Kohlmeier, Eduard Hovy, and Roy Schwartz. 2018. A dataset of peer reviews (PeerRead): Collection, insights and NLP applications.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1647-1661, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": { "DOI": [ "10.3115/v1/D14-1181" ] }, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The arbitrariness of reviews, and advice for school administrators", "authors": [ { "first": "John", "middle": [], "last": "Langford", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Guzdial", "suffix": "" } ], "year": 2015, "venue": "Communications of the ACM", "volume": "58", "issue": "4", "pages": "12--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Langford and Mark Guzdial. 2015. The arbitrariness of reviews, and advice for school administrators.
Communications of the ACM, 58(4):12-13.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A multi-lingual multi-task architecture for low-resource sequence labeling", "authors": [ { "first": "Ying", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Shengqi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "799--809", "other_ids": { "DOI": [ "10.18653/v1/P18-1074" ] }, "num": null, "urls": [], "raw_text": "Ying Lin, Shengqi Yang, Veselin Stoyanov, and Heng Ji. 2018. A multi-lingual multi-task architecture for low-resource sequence labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 799-809, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Darts: Differentiable architecture search", "authors": [ { "first": "Hanxiao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Simonyan", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1806.09055" ] }, "num": null, "urls": [], "raw_text": "Hanxiao Liu, Karen Simonyan, and Yiming Yang. 2018. Darts: Differentiable architecture search. 
arXiv preprint arXiv:1806.09055.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Deep learning for extreme multilabel text classification", "authors": [ { "first": "Jingzhou", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wei-Cheng", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Yuexin", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "115--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingzhou Liu, Wei-Cheng Chang, Yuexin Wu, and Yiming Yang. 2017a. Deep learning for extreme multi-label text classification. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 115-124. ACM.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Adversarial multi-task learning for text classification", "authors": [ { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1--10", "other_ids": { "DOI": [ "10.18653/v1/P17-1001" ] }, "num": null, "urls": [], "raw_text": "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017b. Adversarial multi-task learning for text classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1-10, Vancouver, Canada.
Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Fully-adaptive feature sharing in multi-task networks with applications in person attribute classification", "authors": [ { "first": "Yongxi", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Abhishek", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Shuangfei", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Tara", "middle": [], "last": "Javidi", "suffix": "" }, { "first": "Rog\u00e9rio", "middle": [], "last": "Schmidt Feris", "suffix": "" } ], "year": 2017, "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "1131--1140", "other_ids": { "DOI": [ "10.1109/CVPR.2017.126" ] }, "num": null, "urls": [], "raw_text": "Yongxi Lu, Abhishek Kumar, Shuangfei Zhai, Yu Cheng, Tara Javidi, and Rog\u00e9rio Schmidt Feris. 2017. Fully-adaptive feature sharing in multi-task networks with applications in person attribute classification. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 1131-1140.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction", "authors": [ { "first": "Yi", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3219--3232", "other_ids": { "DOI": [ "10.18653/v1/D18-1360" ] }, "num": null, "urls": [], "raw_text": "Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 
2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3219-3232, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "An overview of multi-task learning in deep neural networks", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.05098" ] }, "num": null, "urls": [], "raw_text": "Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Multi-task learning as multi-objective optimization", "authors": [ { "first": "Ozan", "middle": [], "last": "Sener", "suffix": "" }, { "first": "Vladlen", "middle": [], "last": "Koltun", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "525--536", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ozan Sener and Vladlen Koltun. 2018. Multi-task learning as multi-objective optimization.
In Advances in Neural Information Processing Systems, pages 525-536.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "HFT-CNN: Learning hierarchical category structure for multi-label short text categorization", "authors": [ { "first": "Kazuya", "middle": [], "last": "Shimura", "suffix": "" }, { "first": "Jiyi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Fumiyo", "middle": [], "last": "Fukumoto", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "811--816", "other_ids": { "DOI": [ "10.18653/v1/D18-1093" ] }, "num": null, "urls": [], "raw_text": "Kazuya Shimura, Jiyi Li, and Fumiyo Fukumoto. 2018. HFT-CNN: Learning hierarchical category structure for multi-label short text categorization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 811-816, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Text categorization by learning predominant sense of words as auxiliary task", "authors": [ { "first": "Kazuya", "middle": [], "last": "Shimura", "suffix": "" }, { "first": "Jiyi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Fumiyo", "middle": [], "last": "Fukumoto", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1109--1119", "other_ids": { "DOI": [ "10.18653/v1/P19-1105" ] }, "num": null, "urls": [], "raw_text": "Kazuya Shimura, Jiyi Li, and Fumiyo Fukumoto. 2019. Text categorization by learning predominant sense of words as auxiliary task. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1109-1119, Florence, Italy.
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Practical bayesian optimization of machine learning algorithms", "authors": [ { "first": "Jasper", "middle": [], "last": "Snoek", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Ryan P", "middle": [], "last": "Adams", "suffix": "" } ], "year": 2012, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2951--2959", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jasper Snoek, Hugo Larochelle, and Ryan P Adams. 2012. Practical bayesian optimization of machine learning algorithms. In Advances in neural information processing systems, pages 2951-2959.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Neural architecture search with reinforcement learning", "authors": [ { "first": "Barret", "middle": [], "last": "Zoph", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.01578" ] }, "num": null, "urls": [], "raw_text": "Barret Zoph and Quoc V Le. 2016. Neural architecture search with reinforcement learning.
arXiv preprint arXiv:1611.01578.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Learning transferable architectures for scalable image recognition", "authors": [ { "first": "Barret", "middle": [], "last": "Zoph", "suffix": "" }, { "first": "Vijay", "middle": [], "last": "Vasudevan", "suffix": "" }, { "first": "Jonathon", "middle": [], "last": "Shlens", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "8697--8710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. 2018. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8697-8710.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Basic model CNN Figure 2: Example of Multi-task CNN with Shared Structure Encoding (SSE)", "num": null }, "TABREF1": { "content": "
Settings | CNN | XMN-CNN
Input word vectors | fastText | fastText
Embedding dimension | 200 | 200
Stride size | 1 | 2
Filter region size | 2 | 2
Feature maps (m) | 64 | 64
Pooling | max pooling | dynamic max pooling
Activation function | ReLU | ReLU
Hidden layers | 1024 | 512
Batch size | 8 | 8
Dropout rate 1 | 0.25 | 0.25
Dropout rate 2 | 0.5 | 0.5
Optimizer | Adam | Adam
Loss function | MSE | MSE
Epochs | 40 | 40
", "html": null, "type_str": "table", "text": "Statistics of Datasets", "num": null }, "TABREF2": { "content": "", "html": null, "type_str": "table", "text": "", "num": null }, "TABREF3": { "content": "
Auxiliary | Our (SSE) | AMT | Ain1
ori | 0.801 (001) | 0.931 | 1.027
cor | 0.839 (111) | 0.951 | 0.858
com | 0.792 (100) | 0.913 | 0.908
sub | 0.782 (100) | 0.916 | 0.981
imp | 0.831 (100) | 0.924 | 0.970
(a). m = 1
Auxiliaries | Our (SSEs) | AMT | Ain1
ori,cor | 0.881 (001,110) | 0.957 | 1.036
ori,com | 0.946 (111,101) | 0.976 | 1.136
ori,sub | 0.849 (001,101) | 0.971 | 1.211
ori,imp | 0.853 (001,100) | 0.977 | 1.046
cor,com | 0.996 (111,101) | 0.965 | 1.226
cor,sub | 0.761 (101,001) | 0.967 | 1.143
cor,imp | 0.799 (101,001) | 0.965 | 1.189
com,sub | 0.892 (001,001) | 0.979 | 1.243
com,imp | 0.732 (101,101) | 0.981 | 0.918
sub,imp | 0.932 (001,101) | 0.969 | 1.087
(b). m = 2
", "html": null, "type_str": "table", "text": "(b) shows the results in the case of m = 2. Our method can select a better shared", "num": null }, "TABREF6": { "content": "", "html": null, "type_str": "table", "text": "Results of selecting both shared structures and auxiliary tasks, by changing four settings respectively", "num": null } } } }