{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:34:26.729936Z" }, "title": "Offline Reinforcement Learning from Human Feedback in Real-World Sequence-to-Sequence Tasks", "authors": [ { "first": "Julia", "middle": [], "last": "Kreutzer", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Research", "location": { "settlement": "Montreal", "country": "Canada" } }, "email": "jkreutzer@google.com" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "", "affiliation": { "laboratory": "", "institution": "Heidelberg University", "location": { "country": "Germany" } }, "email": "riezler@cl.uni-heidelberg.de" }, { "first": "Carolin", "middle": [], "last": "Lawrence", "suffix": "", "affiliation": { "laboratory": "", "institution": "NEC Laboratories Europe", "location": { "settlement": "Heidelberg", "country": "Germany" } }, "email": "carolin.lawrence@neclab.eu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Large volumes of interaction logs can be collected from NLP systems that are deployed in the real world. How can this wealth of information be leveraged? Using such interaction logs in an offline reinforcement learning (RL) setting is a promising approach. However, due to the nature of NLP tasks and the constraints of production systems, a series of challenges arise. We present a concise overview of these challenges and discuss possible solutions. * All authors contributed equally, order has been randomized (see https://bit.ly/38PgRjm).", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Large volumes of interaction logs can be collected from NLP systems that are deployed in the real world. How can this wealth of information be leveraged? Using such interaction logs in an offline reinforcement learning (RL) setting is a promising approach. However, due to the nature of NLP tasks and the constraints of production systems, a series of challenges arise. We present a concise overview of these challenges and discuss possible solutions. * All authors contributed equally, order has been randomized (see https://bit.ly/38PgRjm).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "When Natural Language Processing (NLP) systems are deployed in production, and interact with users (\"the real world\"), there are many potential ways of collecting feedback data or rich interaction logs. For example, one can ask for explicit user ratings (Kreutzer et al., 2018a) , or collect user clicks (De Bona et al., 2010) , or elicit user revisions (Trivedi et al., 2019) to get an estimate of how well the deployed system is doing. However, such user interaction logs are primarily used for an one-off assessment of the system, e.g., for spotting critical errors, detecting domain shifts, or identifying the most successful use cases of the system in production. 
This assessment can then be used to support the decision of keeping or replacing this system in production.", "cite_spans": [ { "start": 254, "end": 278, "text": "(Kreutzer et al., 2018a)", "ref_id": "BIBREF21" }, { "start": 304, "end": 326, "text": "(De Bona et al., 2010)", "ref_id": "BIBREF10" }, { "start": 354, "end": 376, "text": "(Trivedi et al., 2019)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "From a machine learning perspective, using interaction logs only for evaluation purposes is a lost opportunity for offline reinforcement learning (RL). Logs of user interactions are gold mines for off-policy learning, and they should be put to use rather than being forgotten after a one-off evaluation. To move towards the goal of using user interaction logs for learning, we will discuss which challenges have hindered RL from being employed in real-world interaction with users of NLP systems so far.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Concretely, our focus is on sequence-to-sequence learning for NLP applications (see \u00a7 2 for an overview). For example, many machine translation services provide the option for users to give feedback on the quality of the translation, e.g., by collecting post-edits. Similarly, industrial chatbots can easily collect vast amounts of interaction logs, which can be utilized with offline RL methods (Kandasamy et al., 2017; Zhou et al., 2017; Hancock et al., 2019) . In the following, we will thus present challenges that are encountered in user-interactive RL for NLP systems. With this discussion, we aim to (1) encourage NLP practitioners to leverage their interaction logs through offline RL, and (2) inspire RL researchers to steel their algorithms for the challenging applications in NLP.", "cite_spans": [ { "start": 395, "end": 419, "text": "(Kandasamy et al., 2017;", "ref_id": "BIBREF19" }, { "start": 420, "end": 438, "text": "Zhou et al., 2017;", "ref_id": "BIBREF47" }, { "start": 439, "end": 460, "text": "Hancock et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In sequence-to-sequence (Seq2Seq) learning, the task is to map an input sequence $x = x_1, x_2, \\ldots, x_{|x|}$, $\\forall x_i \\in \\mathcal{X}$, to an output sequence $y = y_1, y_2, \\ldots, y_{|y|}$, $\\forall y_j \\in \\mathcal{Y}$, where $\\mathcal{X}, \\mathcal{Y}$ denote the sets of input and output vocabularies, respectively. The conditional distribution of the output sequence given the input can be modeled with a policy \u03c0 \u03b8 with learnable parameters \u03b8. Assuming a left-to-right generation order, the output sequence y is generated by conditioning on the previous output elements $y_{<j}$ (Lawrence et al., 2017a) . Concretely, this means that while the worst output sequences with \u03b4 t = 0 are simply ignored, all other sequences are encouraged, even if their reward is close to 0. 
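To make this degeneracy concrete, the following is a minimal sketch (not code from the paper) of the plain reward-weighted loss L = -(1/T) * sum_t delta_t * pi_theta(y_t | x_t) on deterministically logged data, i.e., the unnormalized counterpart of the OSL objective in Eq. (4) below; the use of PyTorch, the toy categorical policy, and all variable names are illustrative assumptions.

```python
# Toy categorical "policy" over three candidate outputs per input, trained on
# deterministically logged bandit data (x_t, y~_t, delta_t).  The loss is the
# plain reward-weighted objective  L = -1/T * sum_t delta_t * pi_theta(y~_t | x_t).
import torch

logits = torch.zeros(2, 3, requires_grad=True)    # 2 logged inputs, 3 candidate outputs each
logged_output = torch.tensor([0, 2])              # index of the logged output y~_t per input
logged_reward = torch.tensor([0.05, 0.0])         # bandit feedback delta_t in [0, 1]

probs = torch.softmax(logits, dim=-1)             # pi_theta(y | x) for every candidate
pi_logged = probs[torch.arange(2), logged_output] # pi_theta(y~_t | x_t)
loss = -(logged_reward * pi_logged).mean()        # deterministic-logging, reward-weighted loss
loss.backward()

# The delta_t = 0 example contributes nothing to the gradient, while the
# delta_t = 0.05 example is still pushed up -- exactly the degeneracy described above.
print(logits.grad)
```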
However, it is clearly undesirable to increase the probability of low reward examples (Swaminathan and Joachims, 2015; Lawrence et al., 2017b,a) .", "cite_spans": [ { "start": 202, "end": 226, "text": "(Lawrence et al., 2017a)", "ref_id": "BIBREF25" }, { "start": 477, "end": 509, "text": "(Swaminathan and Joachims, 2015;", "ref_id": "BIBREF39" }, { "start": 510, "end": 535, "text": "Lawrence et al., 2017b,a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Deterministic Logging and Off-line Learning", "sec_num": "3.1" }, { "text": "There are two possible solutions to this problem:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deterministic Logging and Off-line Learning", "sec_num": "3.1" }, { "text": "The first solution is to tune the learning rate and perform early stopping before the degenerate state can be reached. The second solution is to utilize a multiplicative control variate (Kong, 1992) for self-normalization (Swaminathan and Joachims, 2015) . For efficient gradient calculation, batches of size B can be reweighted one-step-late (OSL) (Lawrence and Riezler, 2018) using parameters \u03b8' from some previous iteration:", "cite_spans": [ { "start": 186, "end": 198, "text": "(Kong, 1992)", "ref_id": "BIBREF20" }, { "start": 221, "end": 253, "text": "(Swaminathan and Joachims, 2015)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Deterministic Logging and Off-line Learning", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_{\\text{OSL}} = -\\frac{1}{B} \\sum_{b=1}^{B} \\delta_b \\frac{\\pi_{\\theta}(\\tilde{y}_b \\mid x_b)}{\\frac{1}{T} \\sum_{t=1}^{T} \\pi_{\\theta'}(\\tilde{y}_t \\mid x_t)} .", "eq_num": "(4)" } ], "section": "Deterministic Logging and Off-line Learning", "sec_num": "3.1" }, { "text": "Self-normalization discourages increasing the probability of low reward data because this would take away probability mass from higher reward outputs. This introduces a bias into the estimator (which decreases as T increases); however, it makes learning under deterministic logging feasible, as has been shown for learning with real human feedback in a semantic parsing scenario (Lawrence and Riezler, 2018) . This gives the RL agent an edge in an environment where learning has been deemed impossible in the literature.", "cite_spans": [ { "start": 392, "end": 420, "text": "(Lawrence and Riezler, 2018)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Deterministic Logging and Off-line Learning", "sec_num": "3.1" }, { "text": "A second form of degenerate behavior occurs because the reward \u03b4 t of an output sequence is typically measured as a non-negative value, e.g., \u03b4 t \u2208 [0, 1]. For example, for machine translation, Kreutzer et al. (2018b) collect ratings for translations on a 5-point Likert scale and map the values linearly to [0, 1]. However, utilizing any of the above objectives means that bad output sequences with low rewards cannot actively be discouraged.", "cite_spans": [ { "start": 199, "end": 222, "text": "Kreutzer et al. (2018b)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Deterministic Logging and Off-line Learning", "sec_num": "3.1" }, { "text": "There are two possible solutions, both of which have been used as additive control variates to reduce variance in gradient estimators. 
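Before turning to these two remedies for non-negative rewards, here is a minimal sketch of the self-normalized, one-step-late (OSL) reweighting of Eq. (4) above, in the same illustrative PyTorch setup as before; it is not code from the paper, and for brevity the minibatch of size B doubles as the log of size T in the normalizer.

```python
# OSL-style self-normalization: reward-weighted probabilities under the current
# parameters theta are divided by an average probability computed with parameters
# theta' from a previous iteration.  Mass moved onto low-reward outputs shows up
# in the next iteration's normalizer and takes mass away from high-reward outputs.
import torch

def osl_loss(logits, prev_logits, logged_output, logged_reward):
    """L_OSL = -1/B * sum_b delta_b * pi_theta(y~_b | x_b) / (1/T * sum_t pi_theta'(y~_t | x_t))."""
    rows = torch.arange(logits.size(0))
    pi_current = torch.softmax(logits, dim=-1)[rows, logged_output]           # pi_theta
    with torch.no_grad():                                                     # theta' is held fixed
        pi_previous = torch.softmax(prev_logits, dim=-1)[rows, logged_output]
    normalizer = pi_previous.mean()                                           # (1/T) * sum_t pi_theta'(y~_t | x_t)
    return -(logged_reward * pi_current / normalizer).mean()

# Toy usage: 4 logged inputs, 3 candidate outputs each.
logits = torch.zeros(4, 3, requires_grad=True)
prev_logits = logits.detach().clone()                                         # stand-in for last iteration's theta'
loss = osl_loss(logits, prev_logits,
                logged_output=torch.tensor([0, 1, 2, 0]),
                logged_reward=torch.tensor([0.9, 0.1, 0.0, 0.5]))
loss.backward()
```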
First, low reward sequences can be discouraged by employing a reward baseline, where, for example, the running average reward $\\bar{\\Delta} = \\frac{1}{t} \\sum_{t'=1}^{t} \\delta_{t'}$ is subtracted from each \u03b4 t . This will cause output sequences worse than the running average to be discouraged rather than encouraged. The second option is to use the logged data D log to learn a reward estimator $\\hat{\\delta}$ that can return a reward estimate for any pair (x, y). This estimator, together with the IPS objective, leads to the Doubly Robust (DR) objective (Dudik et al., 2011) ,", "cite_spans": [ { "start": 629, "end": 649, "text": "(Dudik et al., 2011)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Deterministic Logging and Off-line Learning", "sec_num": "3.1" }, { "text": "L_{\\text{DR}} = -\\frac{1}{T} \\sum_{t=1}^{T} \\Big( \\big(\\delta_t - \\hat{\\delta}(x_t, \\tilde{y}_t)\\big) \\, \\pi_{\\theta}(\\tilde{y}_t \\mid x_t) + \\mathbb{E}_{\\tilde{y} \\sim \\pi_{\\theta}(\\tilde{y} \\mid x_t)} \\big[ \\hat{\\delta}(x_t, \\tilde{y}) \\, \\pi_{\\theta}(\\tilde{y} \\mid x_t) \\big] \\Big) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deterministic Logging and Off-line Learning", "sec_num": "3.1" }, { "text": "This objective enables the exploration of other outputs \u1ef9 that are not part of the original log and encourages them based on the reward value returned by the estimator. For the task of machine translation, Lawrence et al. (2017b) show this objective to be the most successful in their setup, and Kreutzer et al. (2018a) report simulation results showing that this objective can significantly reduce the gap between offline and online policy learning, even if the reward estimator is not perfect. Zhou et al. (2017) present an alternating approach to integrating a reward estimator for exploration, switching in phases between learning offline from logged rewards and exploring online with the help of a reward estimator.", "cite_spans": [ { "start": 205, "end": 228, "text": "Lawrence et al. (2017b)", "ref_id": "BIBREF27" }, { "start": 295, "end": 318, "text": "Kreutzer et al. (2018a)", "ref_id": "BIBREF21" }, { "start": 497, "end": 515, "text": "Zhou et al. (2017)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Deterministic Logging and Off-line Learning", "sec_num": "3.1" }, { "text": "In interactive NLP, it is unrealistic to expect anything other than bandit feedback from a human user interacting with a chatbot, automatic summarization tool, or commercial machine translation system. That is, users of such systems will only provide a reward signal for the one output that is presented to them, and cannot be expected to rate a multitude of outputs for the same input. As a result, the feedback is very sparse in relation to the size of the output space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability and Learnability of Feedback", "sec_num": "3.2" }, { "text": "Ideally, the user experience should not be disrupted by feedback collection. Non-intrusive interface options, for example, allow for corrections of the output (\"post-edits\" in the context of machine translation) as a negative signal, or for recording whether the output is copied and/or shared without changes, which may be interpreted as a positive signal. However, the signal might be noisy, since the notion of output quality for natural language generation tasks is not a well-defined function to start with: Each input might have many possible valid outputs, each of which humans may judge differently, depending on many contextual and personal factors. 
In machine translation evaluation, for instance, inter-rater agreement has traditionally been reported as low (Turian et al., 2003; Carl et al., 2011; Lommel et al., 2014) , especially when quality estimates are collected from non-professional raters (Callison-Burch, 2009) . Similar observations have been made for other text generation tasks (Godwin and Piwek, 2016; Verberne et al., 2018) . Nguyen et al. (2017) illustrated with simulations how badly machine translation systems handle human-level noise in direct feedback for online RL. The level of noise in real-world human feedback may be so high that it prevents learning completely, as for example experienced in e-commerce machine translation logs (Kreutzer et al., 2018a) . The issue is even more pronounced in dialogue generation, where there is a plenitude of acceptable responses (Pang et al., 2020) . To this end, inverse RL has been proposed to infer reward functions indirectly from responses (Takanobu et al., 2019) . Surprisingly, the question of how to best improve an RL agent in the scenario of learning from real-world human feedback has scarcely been researched. This might originate from the fact that many RL research environments come with fixed reward functions. In the real world, however, there is rarely a clearly defined single reward function that it would suffice to optimize. The suggestions in Dulac-Arnold et al. (2019) seem straightforward: warm-starting agents to decrease sample complexity or using inverse reinforcement learning to recover reward functions from demonstrations (Wang et al., 2020) -but they require additional supervision signals that RL was supposed to alleviate.", "cite_spans": [ { "start": 769, "end": 790, "text": "(Turian et al., 2003;", "ref_id": "BIBREF44" }, { "start": 791, "end": 809, "text": "Carl et al., 2011;", "ref_id": "BIBREF5" }, { "start": 810, "end": 830, "text": "Lommel et al., 2014)", "ref_id": "BIBREF29" }, { "start": 910, "end": 932, "text": "(Callison-Burch, 2009)", "ref_id": "BIBREF4" }, { "start": 1003, "end": 1027, "text": "(Godwin and Piwek, 2016;", "ref_id": "BIBREF13" }, { "start": 1028, "end": 1050, "text": "Verberne et al., 2018)", "ref_id": "BIBREF45" }, { "start": 1053, "end": 1073, "text": "Nguyen et al. (2017)", "ref_id": "BIBREF30" }, { "start": 1369, "end": 1393, "text": "(Kreutzer et al., 2018a)", "ref_id": "BIBREF21" }, { "start": 1496, "end": 1515, "text": "(Pang et al., 2020)", "ref_id": "BIBREF32" }, { "start": 1612, "end": 1635, "text": "(Takanobu et al., 2019)", "ref_id": "BIBREF40" }, { "start": 2216, "end": 2235, "text": "(Wang et al., 2020)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Reliability and Learnability of Feedback", "sec_num": "3.2" }, { "text": "When it comes to the question of which type of human feedback is most beneficial for training an RL agent, one finds a lot of blanket statements, e.g., referring to the advantages of pairwise comparisons (Thurstone, 1927) . For instance, learning from human pairwise preferences has been advertised for summarization (Christiano et al., 2017; Stiennon et al., 2020) and language modeling (Ziegler et al., 2019) , but the reliability of the signal has not been evaluated. An exception is the work of Kreutzer et al. (2018b) , which is the first to investigate two crucial questions. The first question addresses which type of human feedback -pairwise judgments or cardinal feedback on a 5-point scale -can be given most reliably by human teachers. 
The second question investigates which type of feedback allows learning reward estimators that best approximate human rewards and can best be integrated into an end-to-end RL-NLP task.", "cite_spans": [ { "start": 201, "end": 218, "text": "(Thurstone, 1927)", "ref_id": "BIBREF42" }, { "start": 326, "end": 351, "text": "(Christiano et al., 2017;", "ref_id": "BIBREF8" }, { "start": 352, "end": 374, "text": "Stiennon et al., 2020)", "ref_id": "BIBREF37" }, { "start": 397, "end": 419, "text": "(Ziegler et al., 2019)", "ref_id": "BIBREF48" }, { "start": 508, "end": 531, "text": "Kreutzer et al. (2018b)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Reliability and Learnability of Feedback", "sec_num": "3.2" }, { "text": "Regarding the first question, Kreutzer et al. (2018b) found the common assumption -that pairwise comparisons are easier to judge than a single output on a Likert scale (Thurstone, 1927) -to be false for the task of machine translation. Inter-rater reliability proved to be higher for 5-point ratings (Krippendorff's \u03b1 = 0.51) than for pairwise judgments (\u03b1 = 0.39). Kreutzer et al. (2018b) explain two advantages of the Likert scale setup: (1) it is possible to standardize cardinal judgments for each rater to remove individual biases, and (2) cardinal ratings offer an absolute anchoring for quality, while preference rankings leave the overall positioning of the pair of outputs on a quality scale open. For pairwise judgments it is difficult or even impossible to reliably choose between two outputs that are similarly good or bad, e.g., outputs differing by only a few words. Therefore, filtering out raters with low intra-rater reliability proved effective for absolute ratings, while filtering out outputs with a high variance in ratings was most effective for pairwise ratings, yielding the final inter-rater reliability given above. Discarding rated outputs, however, reduces the size of the log to learn from, which is undesirable in settings where rewards are scarce or costly.", "cite_spans": [ { "start": 30, "end": 53, "text": "Kreutzer et al. (2018b)", "ref_id": "BIBREF22" }, { "start": 380, "end": 404, "text": "Kreutzer et al. (2018b)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Reliability and Learnability of Feedback", "sec_num": "3.2" }, { "text": "To answer the second question, Kreutzer et al. (2018b) found that a neural machine translation system can be significantly improved using a reward estimator trained on only a few hundred cardinal user judgments. This work highlights that future research in real-world RL might have to involve studies in user interfaces or user experience, since the interfaces for feedback collection influence the reward function that RL agents learn from -and thereby the downstream task success. Collecting implicit feedback (Kreutzer et al., 2018a; Jaques et al., 2020) might offer a better user experience.", "cite_spans": [ { "start": 31, "end": 54, "text": "Kreutzer et al. 
(2018b)", "ref_id": "BIBREF22" }, { "start": 507, "end": 531, "text": "(Kreutzer et al., 2018a;", "ref_id": "BIBREF21" }, { "start": 532, "end": 552, "text": "Jaques et al., 2020)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Reliability and Learnability of Feedback", "sec_num": "3.2" }, { "text": "For the challenges discussed in Sections 3.1 and 3.2, a promising approach is to tackle the arguably simpler problem of learning a reward estimator from human feedback first, then provide unlimited learned feedback to generalize to unseen outputs in off-policy RL. However, risks of bias introduction and potential benefits for noise reduction through replacing user feedback by reward estimators are yet to be quantified.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability and Learnability of Feedback", "sec_num": "3.2" }, { "text": "There is large potential in NLP to leverage user interaction logs for system improvement. We discussed how algorithms for offline RL can offer promising solutions for this learning problem. However, specific challenges in offline RL arise due to the particular nature of NLP systems that collect human feedback in real-world applications. We presented cases where such challenges have been found and offered solutions that have helped. So far, the solutions have mainly been explored in the context of machine translation and semantic parsing. In the future, it will be interesting to explore further tasks and additional real-world use cases to find out how to best learn from human feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "The majority of RL research in NLP has focused on learning from online feedback(Sokolov et al., 2016;He et al., 2016;Bahdanau et al., 2017;Nguyen et al., 2017;Nogueira and Cho, 2017;Lam et al., 2018).2 The chatbot Tay might be one of the most illustrative examples for what can go wrong(Davis, 2016).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "An actor-critic algorithm for sequence prediction", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Philemon", "middle": [], "last": "Brakel", "suffix": "" }, { "first": "Kelvin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Anirudh", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Lowe", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations, ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. 
In 5th Inter- national Conference on Learning Representations, ICLR, Toulon, France.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Exploiting the natural exploration in contextual bandits", "authors": [ { "first": "H", "middle": [], "last": "Bastani", "suffix": "" }, { "first": "M", "middle": [], "last": "Bayati", "suffix": "" }, { "first": "K", "middle": [], "last": "Khosravi", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Bastani, M. Bayati, and K. Khosravi. 2017. Ex- ploiting the natural exploration in contextual bandits. ArXiv e-prints, 1704.09011.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Safe model-based reinforcement learning with stability guarantees", "authors": [ { "first": "Felix", "middle": [], "last": "Berkenkamp", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Turchetta", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Schoellig", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Krause", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems (NeurIPS)", "volume": "", "issue": "", "pages": "908--918", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Berkenkamp, Matteo Turchetta, Angela Schoel- lig, and Andreas Krause. 2017. Safe model-based reinforcement learning with stability guarantees. In Advances in Neural Information Processing Systems (NeurIPS), pages 908-918, Long Beach, California.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Counterfactual Reasoning and Learning Systems: The Example of Computational Advertising", "authors": [ { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Joaquin", "middle": [], "last": "Qui\u00f1onero-Candela", "suffix": "" }, { "first": "Denis", "middle": [ "X" ], "last": "Charles", "suffix": "" }, { "first": "D", "middle": [ "Max" ], "last": "Chickering", "suffix": "" }, { "first": "Elon", "middle": [], "last": "Portugaly", "suffix": "" }, { "first": "Dipanakar", "middle": [], "last": "Ray", "suffix": "" } ], "year": null, "venue": "Journal of Machine Learning Research", "volume": "14", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L\u00e9on Bottou, Jonas Peters, Joaquin Qui\u00f1onero- Candela, Denis X. Charles, D. Max Chickering, Elon Portugaly, Dipanakar Ray, Patrice Simard, and Ed Snelson. 2013. Counterfactual Reasoning and Learning Systems: The Example of Computational Advertising. Journal of Machine Learning Re- search, 14.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Fast, cheap, and creative: Evaluating translation quality using Amazon's Mechanical Turk", "authors": [ { "first": "Chris", "middle": [], "last": "Callison", "suffix": "" }, { "first": "-", "middle": [], "last": "Burch", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch. 2009. Fast, cheap, and creative: Evaluating translation quality using Amazon's Me- chanical Turk. 
In Proceedings of the 2009 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), Singapore.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The process of post-editing: a pilot study", "authors": [ { "first": "Michael", "middle": [], "last": "Carl", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Dragsted", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Elming", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Hardt", "suffix": "" }, { "first": "Arnt Lykke", "middle": [], "last": "Jakobsen", "suffix": "" } ], "year": 2011, "venue": "Copenhagen Studies in Language", "volume": "41", "issue": "", "pages": "131--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Carl, Barbara Dragsted, Jakob Elming, Daniel Hardt, and Arnt Lykke Jakobsen. 2011. The process of post-editing: a pilot study. Copenhagen Studies in Language, 41:131-142.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "An empirical evaluation of Thompson sampling", "authors": [ { "first": "Olivier", "middle": [], "last": "Chapelle", "suffix": "" }, { "first": "Lihong", "middle": [], "last": "Li", "suffix": "" } ], "year": 2011, "venue": "Advances in Neural Information Processing Systems (NeurIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olivier Chapelle and Lihong Li. 2011. An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems (NeurIPS), Granada, Spain.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "On the weaknesses of reinforcement learning for neural machine translation", "authors": [ { "first": "Leshem", "middle": [], "last": "Choshen", "suffix": "" }, { "first": "Lior", "middle": [], "last": "Fox", "suffix": "" }, { "first": "Zohar", "middle": [], "last": "Aizenbud", "suffix": "" }, { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leshem Choshen, Lior Fox, Zohar Aizenbud, and Omri Abend. 2020. On the weaknesses of reinforcement learning for neural machine translation. In Inter- national Conference on Learning Representations (ICLR), Virtual.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Deep Reinforcement Learning from Human Preferences", "authors": [ { "first": "Paul", "middle": [ "F" ], "last": "Christiano", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Leike", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Miljan", "middle": [], "last": "Martic", "suffix": "" }, { "first": "Shane", "middle": [], "last": "Legg", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems (NeurIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul F. Christiano, Jan Leike, Tom Brown, Miljan Mar- tic, Shane Legg, and Dario Amodei. 2017. Deep Re- inforcement Learning from Human Preferences. 
In Advances in Neural Information Processing Systems (NeurIPS), Long Beach, CA, USA.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "AI amusements: the tragic tale of Tay the chatbot", "authors": [ { "first": "Ernest", "middle": [], "last": "Davis", "suffix": "" } ], "year": 2016, "venue": "AI Matters", "volume": "2", "issue": "4", "pages": "20--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ernest Davis. 2016. AI amusements: the tragic tale of Tay the chatbot. AI Matters, 2(4):20-24.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning dense models of query similarity from user click logs", "authors": [ { "first": "Fabio", "middle": [], "last": "De Bona", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Massimiliano", "middle": [], "last": "Ciaramita", "suffix": "" }, { "first": "Ama\u00e7", "middle": [], "last": "Herda\u01e7delen", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Holmqvist", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (HLT-ACL), Los Angeles", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabio De Bona, Stefan Riezler, Keith Hall, Massi- miliano Ciaramita, Ama\u00e7 Herda\u01e7delen, and Maria Holmqvist. 2010. Learning dense models of query similarity from user click logs. In Human Lan- guage Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (HLT-ACL), Los An- geles, California.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Doubly robust policy evaluation and learning", "authors": [ { "first": "Miroslav", "middle": [], "last": "Dudik", "suffix": "" }, { "first": "John", "middle": [], "last": "Langford", "suffix": "" }, { "first": "Lihong", "middle": [], "last": "Li", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 28th International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miroslav Dudik, John Langford, and Lihong Li. 2011. Doubly robust policy evaluation and learning. In Proceedings of the 28th International Conference on Machine Learning (ICML), Bellevue, WA.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Challenges of real-world reinforcement learning", "authors": [ { "first": "Gabriel", "middle": [], "last": "Dulac-Arnold", "suffix": "" }, { "first": "Daniel", "middle": [ "J" ], "last": "Mankowitz", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Hester", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gabriel Dulac-Arnold, Daniel J. Mankowitz, and Todd Hester. 2019. Challenges of real-world reinforce- ment learning. 
CoRR, abs/1904.12901.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Collecting reliable human judgements on machine-generated language: The case of the QG-STEC data", "authors": [ { "first": "Keith", "middle": [], "last": "Godwin", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Piwek", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 9th International Natural Language Generation conference (INLG)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keith Godwin and Paul Piwek. 2016. Collecting reli- able human judgements on machine-generated lan- guage: The case of the QG-STEC data. In Proceed- ings of the 9th International Natural Language Gen- eration conference (INLG), Edinburgh, UK.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Learning from dialogue after deployment: Feed yourself", "authors": [ { "first": "Braden", "middle": [], "last": "Hancock", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Pierre-Emmanuel", "middle": [], "last": "Mazar\u00e9", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazar\u00e9, and Jason Weston. 2019. Learning from dialogue after deployment: Feed yourself, chatbot!", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Safe exploration for reinforcement learning", "authors": [ { "first": "Alexander", "middle": [], "last": "Hans", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Schneega\u00df", "suffix": "" }, { "first": "Anton", "middle": [ "Maximilian" ], "last": "Sch\u00e4fer", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Udluft", "suffix": "" } ], "year": 2008, "venue": "ESANN", "volume": "", "issue": "", "pages": "143--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Hans, Daniel Schneega\u00df, Anton Maximilian Sch\u00e4fer, and Steffen Udluft. 2008. Safe exploration for reinforcement learning. In ESANN, pages 143- 148.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Deep reinforcement learning with a natural language action space", "authors": [ { "first": "Ji", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianshu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Lihong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P16-1153" ] }, "num": null, "urls": [], "raw_text": "Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Li- hong Li, Li Deng, and Mari Ostendorf. 2016. Deep reinforcement learning with a natural language ac- tion space. 
In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (ACL), Berlin, Germany.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Way offpolicy batch deep reinforcement learning of human preferences in dialog", "authors": [ { "first": "Natasha", "middle": [], "last": "Jaques", "suffix": "" }, { "first": "Asma", "middle": [], "last": "Ghandeharioun", "suffix": "" }, { "first": "Judy", "middle": [ "Hanwen" ], "last": "Shen", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Ferguson", "suffix": "" }, { "first": "Agata", "middle": [], "last": "Lapedriza", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Shixiang", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Rosalind", "middle": [], "last": "Picard", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. 2020. Way off- policy batch deep reinforcement learning of human preferences in dialog.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Doubly robust offpolicy value evaluation for reinforcement learning", "authors": [ { "first": "Nan", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Lihong", "middle": [], "last": "Li", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 33rd International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nan Jiang and Lihong Li. 2016. Doubly robust off- policy value evaluation for reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), New York, NY.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Batch policy gradient methods for improving neural conversation models", "authors": [ { "first": "Kirthevasan", "middle": [], "last": "Kandasamy", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Bachrach", "suffix": "" }, { "first": "Ryota", "middle": [], "last": "Tomioka", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Tarlow", "suffix": "" }, { "first": "David", "middle": [], "last": "Carter", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kirthevasan Kandasamy, Yoram Bachrach, Ryota Tomioka, Daniel Tarlow, and David Carter. 2017. Batch policy gradient methods for improving neural conversation models. In 5th International Confer- ence on Learning Representations (ICLR).", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A note on importance sampling using standardized weights", "authors": [ { "first": "Augustine", "middle": [], "last": "Kong", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Augustine Kong. 1992. A note on importance sam- pling using standardized weights. 
Technical Report 348, Department of Statistics, University of Chicago, Illinois.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Can neural machine translation be improved with user feedback?", "authors": [ { "first": "Julia", "middle": [], "last": "Kreutzer", "suffix": "" }, { "first": "Shahram", "middle": [], "last": "Khadivi", "suffix": "" }, { "first": "Evgeny", "middle": [], "last": "Matusov", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "3", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/N18-3012" ] }, "num": null, "urls": [], "raw_text": "Julia Kreutzer, Shahram Khadivi, Evgeny Matusov, and Stefan Riezler. 2018a. Can neural machine translation be improved with user feedback? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 3 (ACL).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Reliability and learnability of human bandit feedback for sequence-to-sequence reinforcement learning", "authors": [ { "first": "Julia", "middle": [], "last": "Kreutzer", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Uyheng", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P18-1165" ] }, "num": null, "urls": [], "raw_text": "Julia Kreutzer, Joshua Uyheng, and Stefan Riezler. 2018b. Reliability and learnability of human bandit feedback for sequence-to-sequence reinforcement learning. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A reinforcement learning approach to interactivepredictive neural machine translation", "authors": [ { "first": "Julia", "middle": [], "last": "Tsz Kin Lam", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Kreutzer", "suffix": "" }, { "first": "", "middle": [], "last": "Riezler", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 21st Annual Conference of the European Association for Machine Translation (EAMT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsz Kin Lam, Julia Kreutzer, and Stefan Riezler. 2018. A reinforcement learning approach to interactive- predictive neural machine translation. In Proceed- ings of the 21st Annual Conference of the European Association for Machine Translation (EAMT), Ali- cante, Spain.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Exploration scavenging", "authors": [ { "first": "John", "middle": [], "last": "Langford", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Strehl", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Wortman", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "https://dl.acm.org/doi/pdf/10.1145/1390156.1390223" ] }, "num": null, "urls": [], "raw_text": "John Langford, Alexander Strehl, and Jennifer Wort- man. 2008. Exploration scavenging. 
In Proceed- ings of the 25th International Conference on Ma- chine Learning (ICML), Helsinki, Finland.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Counterfactual Learning for Machine Translation: Degeneracies and Solutions", "authors": [ { "first": "Carolin", "middle": [], "last": "Lawrence", "suffix": "" }, { "first": "Pratik", "middle": [], "last": "Gajane", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the NIPS WhatIf Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carolin Lawrence, Pratik Gajane, and Stefan Riezler. 2017a. Counterfactual Learning for Machine Trans- lation: Degeneracies and Solutions. In Proceedings of the NIPS WhatIf Workshop, Long Beach, Califor- nia, USA.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback", "authors": [ { "first": "Carolin", "middle": [], "last": "Lawrence", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carolin Lawrence and Stefan Riezler. 2018. Improving a Neural Semantic Parser by Counterfactual Learn- ing from Human Bandit Feedback. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), Melbourne, Aus- tralia.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Counterfactual Learning from Bandit Feedback under Deterministic Logging : A Case Study in Statistical Machine Translation", "authors": [ { "first": "Carolin", "middle": [], "last": "Lawrence", "suffix": "" }, { "first": "Artem", "middle": [], "last": "Sokolov", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carolin Lawrence, Artem Sokolov, and Stefan Riezler. 2017b. Counterfactual Learning from Bandit Feed- back under Deterministic Logging : A Case Study in Statistical Machine Translation. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), Copenhagen, Denmark.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Deep reinforcement learning for dialogue generation", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Will", "middle": [], "last": "Monroe", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016. Deep re- inforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), Austin, Texas. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Assessing inter-annotator agreement for translation error annotation", "authors": [ { "first": "Arle", "middle": [], "last": "Lommel", "suffix": "" }, { "first": "Maja", "middle": [], "last": "Popovic", "suffix": "" }, { "first": "Aljoscha", "middle": [], "last": "Burchardt", "suffix": "" } ], "year": 2014, "venue": "MTE: Workshop on Automatic and Manual Metrics for Operational Translation Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arle Lommel, Maja Popovic, and Aljoscha Burchardt. 2014. Assessing inter-annotator agreement for trans- lation error annotation. In MTE: Workshop on Auto- matic and Manual Metrics for Operational Transla- tion Evaluation.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Reinforcement learning for bandit neural machine translation with simulated human feedback", "authors": [ { "first": "Khanh", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Khanh Nguyen, Hal Daum\u00e9 III, and Jordan Boyd- Graber. 2017. Reinforcement learning for bandit neural machine translation with simulated human feedback. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Taskoriented query reformulation with reinforcement learning", "authors": [ { "first": "Rodrigo", "middle": [], "last": "Nogueira", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/D17-1061" ] }, "num": null, "urls": [], "raw_text": "Rodrigo Nogueira and Kyunghyun Cho. 2017. Task- oriented query reformulation with reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), Copenhagen, Denmark.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Towards holistic and automatic evaluation of open-domain dialogue generation", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Nijkamp", "suffix": "" }, { "first": "Wenjuan", "middle": [], "last": "Han", "suffix": "" }, { "first": "Linqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yixian", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Kewei", "middle": [], "last": "Tu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.333" ] }, "num": null, "urls": [], "raw_text": "Bo Pang, Erik Nijkamp, Wenjuan Han, Linqi Zhou, Yixian Liu, and Kewei Tu. 2020. Towards holistic and automatic evaluation of open-domain dialogue generation. 
In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics (ACL), Online.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Eligibility traces for off-policy policy evaluation", "authors": [ { "first": "Doina", "middle": [], "last": "Precup", "suffix": "" }, { "first": "Richard", "middle": [ "S" ], "last": "Sutton", "suffix": "" }, { "first": "Satinder", "middle": [ "P" ], "last": "Singh", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Seventeenth International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Doina Precup, Richard S. Sutton, and Satinder P. Singh. 2000. Eligibility traces for off-policy policy eval- uation. In Proceedings of the Seventeenth Inter- national Conference on Machine Learning (ICML), San Francisco, CA.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Excitement and concerns about machine learning-based chatbots and talkbots: A survey", "authors": [ { "first": "Pablo", "middle": [], "last": "Rivas", "suffix": "" }, { "first": "Kerstin", "middle": [], "last": "Holzmayer", "suffix": "" }, { "first": "Cristian", "middle": [], "last": "Hernandez", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Grippaldi", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE International Symposium on Technology and Society (ISTAS)", "volume": "", "issue": "", "pages": "156--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pablo Rivas, Kerstin Holzmayer, Cristian Hernandez, and Charles Grippaldi. 2018. Excitement and con- cerns about machine learning-based chatbots and talkbots: A survey. In 2018 IEEE International Sym- posium on Technology and Society (ISTAS), pages 156-162. IEEE.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "The central role of the propensity score in observational studies for causal effects", "authors": [ { "first": "R", "middle": [], "last": "Paul", "suffix": "" }, { "first": "Donald", "middle": [ "B" ], "last": "Rosenbaum", "suffix": "" }, { "first": "", "middle": [], "last": "Rubin", "suffix": "" } ], "year": 1983, "venue": "Biometrika", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul R. Rosenbaum and Donald B. Rubin. 1983. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1).", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Stochastic structured prediction under bandit feedback", "authors": [ { "first": "Artem", "middle": [], "last": "Sokolov", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Kreutzer", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Lo", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems (NeurIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Artem Sokolov, Julia Kreutzer, Stefan Riezler, and Christopher Lo. 2016. Stochastic structured pre- diction under bandit feedback. 
In Advances in Neural Information Processing Systems (NeurIPS), Barcelona, Spain.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Learning to summarize from human feedback", "authors": [ { "first": "Nisan", "middle": [], "last": "Stiennon", "suffix": "" }, { "first": "Long", "middle": [], "last": "Ouyang", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Daniel", "middle": [ "M" ], "last": "Ziegler", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Lowe", "suffix": "" }, { "first": "Chelsea", "middle": [], "last": "Voss", "suffix": "" }, { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Christiano", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul Christiano. 2020. Learning to summarize from human feedback.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Learning from logged implicit exploration data", "authors": [ { "first": "Alexander", "middle": [ "L" ], "last": "Strehl", "suffix": "" }, { "first": "John", "middle": [], "last": "Langford", "suffix": "" }, { "first": "Lihong", "middle": [], "last": "Li", "suffix": "" }, { "first": "M", "middle": [], "last": "Sham", "suffix": "" }, { "first": "", "middle": [], "last": "Kakade", "suffix": "" } ], "year": 2010, "venue": "Advances in Neural Information Processing Sytems (NIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander L. Strehl, John Langford, Lihong Li, and Sham M. Kakade. 2010. Learning from logged implicit exploration data. In Advances in Neural Information Processing Sytems (NIPS), Vancouver, Canada.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "The self-normalized estimator for counterfactual learning", "authors": [ { "first": "Adith", "middle": [], "last": "Swaminathan", "suffix": "" }, { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems (NIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adith Swaminathan and Thorsten Joachims. 2015. The self-normalized estimator for counterfactual learn- ing. In Advances in Neural Information Processing Systems (NIPS), Montreal, Canada.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Guided dialog policy learning: Reward estimation for multi-domain task-oriented dialog", "authors": [ { "first": "Ryuichi", "middle": [], "last": "Takanobu", "suffix": "" }, { "first": "Hanlin", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/D19-1010" ] }, "num": null, "urls": [], "raw_text": "Ryuichi Takanobu, Hanlin Zhu, and Minlie Huang. 2019. Guided dialog policy learning: Reward es- timation for multi-domain task-oriented dialog. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), Hong Kong, China.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Dataefficient off-policy policy evaluation for reinforcement learning", "authors": [ { "first": "Philip", "middle": [ "S" ], "last": "Thomas", "suffix": "" }, { "first": "Emma", "middle": [], "last": "Brunskill", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 33nd International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip S. Thomas and Emma Brunskill. 2016. Data- efficient off-policy policy evaluation for reinforce- ment learning. In Proceedings of the 33nd Inter- national Conference on Machine Learning (ICML), New York, NY.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "A law of comparative judgement", "authors": [ { "first": "", "middle": [], "last": "Louis Leon Thurstone", "suffix": "" } ], "year": 1927, "venue": "Psychological Review", "volume": "34", "issue": "", "pages": "278--286", "other_ids": {}, "num": null, "urls": [], "raw_text": "Louis Leon Thurstone. 1927. A law of comparative judgement. Psychological Review, 34:278-286.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Interactive NLP in clinical care: Identifying incidental findings in radiology reports", "authors": [ { "first": "Gaurav", "middle": [], "last": "Trivedi", "suffix": "" }, { "first": "R", "middle": [], "last": "Esmaeel", "suffix": "" }, { "first": "", "middle": [], "last": "Dadashzadeh", "suffix": "" }, { "first": "M", "middle": [], "last": "Robert", "suffix": "" }, { "first": "Wendy", "middle": [ "W" ], "last": "Handzel", "suffix": "" }, { "first": "Shyam", "middle": [], "last": "Chapman", "suffix": "" }, { "first": "Harry", "middle": [], "last": "Visweswaran", "suffix": "" }, { "first": "", "middle": [], "last": "Hochheiser", "suffix": "" } ], "year": 2019, "venue": "Applied clinical informatics", "volume": "10", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gaurav Trivedi, Esmaeel R Dadashzadeh, Robert M Handzel, Wendy W Chapman, Shyam Visweswaran, and Harry Hochheiser. 2019. Interactive NLP in clinical care: Identifying incidental findings in radiology reports. Applied clinical informatics, 10(4):655.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Evaluation of machine translation and its evaluation", "authors": [ { "first": "Luke", "middle": [], "last": "Joseph P Turian", "suffix": "" }, { "first": "", "middle": [], "last": "Shea", "suffix": "" }, { "first": "", "middle": [], "last": "Melamed", "suffix": "" } ], "year": 2003, "venue": "Proceedings of MT Summit", "volume": "", "issue": "", "pages": "386--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph P Turian, Luke Shea, and I Dan Melamed. 2003. Evaluation of machine translation and its evaluation. 
Proceedings of MT Summit, pages 386-393.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Creating a reference data set for the summarization of discussion forum threads", "authors": [ { "first": "Suzan", "middle": [], "last": "Verberne", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Krahmer", "suffix": "" }, { "first": "Iris", "middle": [], "last": "Hendrickx", "suffix": "" }, { "first": "Sander", "middle": [], "last": "Wubben", "suffix": "" }, { "first": "Antal", "middle": [], "last": "Van Den", "suffix": "" }, { "first": "", "middle": [], "last": "Bosch", "suffix": "" } ], "year": 2018, "venue": "Language Resources and Evaluation", "volume": "52", "issue": "2", "pages": "461--483", "other_ids": { "DOI": [ "https://link.springer.com/content/pdf/10.1007/s10579-017-9389-4.pdf" ] }, "num": null, "urls": [], "raw_text": "Suzan Verberne, Emiel Krahmer, Iris Hendrickx, Sander Wubben, and Antal van den Bosch. 2018. Creating a reference data set for the summarization of discussion forum threads. Language Resources and Evaluation, 52(2):461-483.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Reinforcement learning with perturbed rewards", "authors": [ { "first": "Jingkang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingkang Wang, Yang Liu, and Bo Li. 2020. Rein- forcement learning with perturbed rewards. In AAAI, New York, New York.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "End-to-end offline goal-oriented dialog policy learning via policy gradient", "authors": [ { "first": "Li", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Small", "suffix": "" }, { "first": "O", "middle": [], "last": "Rokhlenko", "suffix": "" }, { "first": "C", "middle": [], "last": "Elkan", "suffix": "" } ], "year": 2017, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Zhou, Kevin Small, O. Rokhlenko, and C. Elkan. 2017. End-to-end offline goal-oriented dialog policy learning via policy gradient. ArXiv, abs/1712.02838.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Fine-tuning language models from human preferences", "authors": [ { "first": "M", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Nisan", "middle": [], "last": "Ziegler", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Stiennon", "suffix": "" }, { "first": "Tom", "middle": [ "B" ], "last": "Wu", "suffix": "" }, { "first": "Alec", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Christiano", "suffix": "" }, { "first": "", "middle": [], "last": "Irving", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.08593" ] }, "num": null, "urls": [], "raw_text": "Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Chris- tiano, and Geoffrey Irving. 2019. Fine-tuning lan- guage models from human preferences. arXiv preprint arXiv:1909.08593.", "links": null } }, "ref_entries": {} } }