diff --git "a/4dE0T4oBgHgl3EQfeQDL/content/tmp_files/load_file.txt" "b/4dE0T4oBgHgl3EQfeQDL/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/4dE0T4oBgHgl3EQfeQDL/content/tmp_files/load_file.txt" @@ -0,0 +1,824 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf,len=823 +page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content='02389v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content='LG] 6 Jan 2023 Provable Reset-free Reinforcement Learning by No-Regret Reduction Hoai-An Nguyen Ching-An Cheng Rutgers University Microsoft Research Abstract Real-world reinforcement learning (RL) is of- ten severely limited since typical RL algorithms heavily rely on the reset mechanism to sample proper initial states.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' In practice, the reset mech- anism is expensive to implement due to the need for human intervention or heavily engineered en- vironments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' To make learning more practical, we propose a generic no-regret reduction to sys- tematically design reset-free RL algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Our reduction turns reset-free RL into a two-player game.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' We show that achieving sublinear regret in this two player game would imply learning a policy that has both sublinear performance regret and sublinear total number of resets in the origi- nal RL problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' This means that the agent even- tually learns to perform optimally and avoid re- sets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' By this reduction, we design an instantia- tion for linear Markov decision processes, which is the first provably correct reset-free RL algo- rithm to our knowledge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' 1 INTRODUCTION Reinforcement learning (RL) enables an artificial agent to learn problem-solving skills directly through interactions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' However, RL is notorious for its sample inefficiency, and successful stories of RL so far are mostly limited to appli- cations where an accurate simulator of the world is avail- able (like in games).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Real-world RL, such as robot learn- ing, remains a challenging open question.' 
One key obstacle preventing the collection of a large number of samples in real-world RL is the need for resetting the agent. The ability to reset the agent to proper initial states plays an important role in typical RL algorithms, as it affects which region the agent can explore and whether the agent can recover from its past mistakes (Kakade and Langford, 2002). In the absence of a reset mechanism, agents may get stuck in absorbing states, such as those where the agent has damaged itself or irreparably altered the learning environment. Therefore, in most settings, completely avoiding resets without prior knowledge of the reset states or environment is infeasible. For instance, a robot learning to walk would inevitably fall before perfecting the skill, and timely intervention is needed to prevent damaging the hardware and to return the robot to a walkable configuration. Another example we can consider is a robot manipulator learning to stack three blocks on top of each other. Unrecoverable states that would require intervention include the robot knocking a block off the table, or the robot smashing itself forcefully into the table. A reset would then reconfigure the scene to a meaningful initial state that is good for the robot to learn from. Resetting is a necessary part of the real-world learning process if we want an agent to be able to adapt to any environment, but it is non-trivial. Unlike in simulation, we cannot just set a real-world agent (e.g., a robot) to an arbitrary initial state with a click of a button. Resetting in the real world is usually quite expensive and requires constant human monitoring and intervention.
Consider again the example of a robot learning to stack blocks. Normally, a person would oversee the entire learning process. During the process, they would manually reset the robot to a meaningful starting state before it enters an unrecoverable state where the problem can no longer be solved. Sometimes automatic resetting can be implemented by cleverly engineering the physical learning environment (Gupta et al., 2021), but it is not always feasible. An approach we can take to make real-world RL more cost-efficient is through reset-free RL. The goal of reset-free RL is to have an agent learn how to perform well while minimizing the number of external resets required. Some examples of problems that have been approached in reset-free RL include agents learning dexterity skills, such as picking up an item or inserting a pipe, and learning how to walk (Gupta et al., 2021; Ha et al., 2020). While there have been numerous works proposing reset-free RL algorithms using approaches such as multi-task learning (Gupta et al., 2021; Ha et al., 2020), learning a reset policy (Eysenbach et al., 2018; Sharma et al., 2022), and skill-space planning (Lu et al., 2020), to our knowledge, there has not been any work with provable guarantees.
In this work, we take the first step to provide a provably correct framework to design reset-free RL algorithms. Our framework is based on the idea of a no-regret reduction. First, we reduce the reset-free RL problem to a sequence of safe RL problems with an adaptive initial state sequence, where each safe RL problem is modeled as a constrained Markov decision process (CMDP) with the states requiring resets marked as unsafe. Then we derive our main no-regret reduction, which further turns this sequence into a two-player game between a primal player (updating the Markovian policy) and a dual player (updating the Lagrange multiplier function of the constrained MDPs). Interestingly, we show that such a reduction can be constructed without using the typical Slater's condition for strong duality and despite the fact that CMDPs with different initial states in general do not share a common Markovian optimal policy. We show that if no regret is achieved in this game, then the regret of the original RL problem and the total number of required resets are both provably sublinear. This means that the agent eventually learns to perform optimally and avoids resets. Using this reduction, we design a reset-free RL algorithm instantiation under the linear MDP assumption, using Ghosh et al. (2022) as the baseline algorithm for the primal player and projected gradient descent for the dual player. We prove that our algorithm achieves $\tilde{O}(\sqrt{d^3H^4K})$ regret and $\tilde{O}(\sqrt{d^3H^4K})$ resets with high probability, where $d$ is the feature dimension, $H$ is the length of an episode, and $K$ is the total number of episodes.
2 RELATED WORK

Reset-free RL is a relatively new concept in the literature, and the work thus far, to our knowledge, has been limited to non-provable approaches with empirical verification. One such approach is learning a reset policy in addition to the main policy (Eysenbach et al., 2018; Sharma et al., 2022). The idea is to learn a policy that will bring the agent back to a safe initial state if it encounters a reset state, concurrently with a policy that maximizes reward. A reset state is a state in which human intervention normally would have been required. This approach prevents the need for manual resets; however, there are usually required assumptions on knowledge of the reset policy's reward function and therefore knowledge of the reset states (Eysenbach et al., 2018). Sharma et al. (2022) avoid this assumption but assume access to demonstrations of how to accomplish the goal and a fixed initial state distribution. Another popular approach is using multi-task learning. This is similar to learning a reset policy, but can be thought of as a way to increase the number of possible actions an agent can take to perform a reset. The objective is to learn a number of tasks so that a combination of them can achieve the main goal, and in addition, some tasks can perform natural resets for other tasks.
One problem tackled by Gupta et al. (2021) was that of inserting a light bulb into a lamp. The tasks their agent learns are recentering, inserting, lifting, and flipping the bulb. Here, if the bulb starts on the ground, the agent can recenter the bulb, lift it, flip it over (if needed), and finally insert it. In addition, many of the tasks perform resets for the others. For example, if the agent drops the bulb while lifting it, it can recenter the bulb and then try lifting it again. This approach breaks down the reset process and (possibly) makes it easier to learn. However, this approach often requires the order in which tasks should be learned to be provided manually (Gupta et al., 2021; Ha et al., 2020). A related problem is infinite-horizon non-episodic RL with provable guarantees (see Wei et al. (2020, 2019); Dong et al. (2019) and the references within), as this problem is also motivated by not using resets. In this setting, there is only one episode that goes on indefinitely. The objective is to maximize the cumulative reward, and progress is usually measured in terms of regret with the comparator being an optimal policy.
However, compared with the reset-free RL setting we study here, extra assumptions, such as the absence or knowledge of absorbing states, are usually required to achieve sublinear regret. In addition, the objective does not necessarily lead to a minimization of resets, as the agent can leverage reset transitions to maximize rewards. Learning in infinite-horizon CMDPs has been studied (Zheng and Ratliff, 2020; Jain et al., 2022), but to our knowledge, all such works make strong assumptions such as a fixed initial state distribution or known dynamics. In this paper, we focus on an episodic setting of reset-free RL (see Section 3); a non-episodic formulation of reset-free RL could be an interesting one for further research. To our knowledge, we propose the first provable reset-free RL technique in the literature. By borrowing ideas from the literature on the much more extensively studied area of safe RL, we propose to associate states requiring resets with the concept of unsafe states in safe RL. Safe reinforcement learning involves solving the standard RL problem while adhering to some safety constraints. There has been a lot of work in safe RL, with approaches such as utilizing a baseline safe (but not optimal) policy (Huang et al., 2022; Garcia Polo and Fernandez Rebollo, 2011), pessimism (Amani and Yang, 2022), and shielding (Alshiekh et al., 2018; Wagener et al., 2021).
These works have had promising empirical results but usually require extra assumptions such as a given baseline policy or knowledge of unsafe states. There are also provable safe RL algorithms. To our knowledge, all involve framing safe RL as a CMDP. Here, the safety constraints are modeled as a cost, and the overall goal is to maximize performance while keeping the cost below a threshold. The provable guarantees are commonly either sublinear regret and constraint violations, or sublinear regret with zero constraint violation (Wei et al., 2021; HasanzadeZonuzy et al., 2021; Qiu et al., 2020; Wachi and Sui, 2020; Efroni et al., 2020; Ghosh et al., 2022; Ding et al., 2021). However, most works (including all the aforementioned ones) consider the episodic case where the initial state distribution of each episode is fixed.
This prevents a very natural extension to reset-free learning, as human intervention would be required to reset the environment at the end of each episode. Works that allow for arbitrary initial states require fairly strong assumptions, such as knowledge (and the existence) of safe actions from each state (Amani et al., 2021). In our work, we utilize techniques from provable safe RL for reset-free RL, but weaken the typical assumptions to allow for arbitrary initial states. This relaxation is necessary for the reset-free RL problem and also allows for easier extensions to both lifelong and multi-task learning. We achieve this relaxation with a key observation that identifies a shared Markovian-policy saddle-point across CMDPs of perfectly safe RL with different initial states (that is, the constraint in the CMDP imposes perfect safety). This observation is new to our knowledge, and it is derived from the particular structure of perfectly safe RL, which is a subproblem used in our reset-free RL reduction. We note that general CMDPs with different initial states do not generally admit shared Markovian-policy saddle-points. Therefore, on the technical side, our algorithm can also be viewed as the first safe RL algorithm that allows for arbitrary initial state sequences without strong assumptions. While we propose a generic reduction technique to design reset-free RL algorithms, our regret and constraint violation bounds are still comparable to the above works when specialized to their setting. Under the linear MDP assumption, our algorithm achieves $\tilde{O}(\sqrt{d^3H^4K})$ regret and violation (equivalently, the number of resets in reset-free RL), which is asymptotically equivalent to Ghosh et al. (2022) and comparable to the bound of $\tilde{O}(\sqrt{d^2H^6K})$ from Ding et al. (2021) for a fixed initial state.
3 PRELIMINARY

We consider episodic reset-free RL: in each episode, the agent aims to optimize for a fixed-horizon return starting from the last state of the previous episode, or from some state that the agent was reset to in the previous episode if a reset occurred (e.g., due to the robot falling over).

Problem Setup and Notation. Formally, we can define episodic reset-free RL as a Markov decision process (MDP), $(S, A, P, r, H)$, where $S$ is the state space, $A$ is the action space, $P = \{P_h\}_{h=1}^H$ is the transition dynamics, $r = \{r_h\}_{h=1}^H$ is the reward function, and $H$ is the task horizon. We assume $P$ and $r$ are unknown. We allow $S$ to be large or continuous but assume $A$ is relatively small so that $\max_{a \in A}$ can be performed. We designate the set of reset states as $S_{\text{reset}} \subseteq S$; we do not assume that the agent has knowledge of $S_{\text{reset}}$. We also do not assume that there is a reset-free action for each state, as opposed to (Amani et al., 2021). Therefore, the agent needs to plan for the long term to avoid resets. We assume $r_h : S \times A \to [0, 1]$, and for simplicity, we assume $r_h$ is deterministic. However, we note that it would be easy to extend this to the setting where rewards are stochastic. The agent interacts with the environment for $K$ total episodes.
Following the convention of episodic problems, we suppose the state space $S$ is layered, and a state $s_\tau \in S$ at time $\tau$ is factored into two components $s_\tau = (\bar{s}, \tau)$, where $\bar{s}$ denotes the time-invariant part. A reset happens at time $\tau$ if $\bar{s} \in S_{\text{reset}}$ (which we also write as $s_\tau \in S_{\text{reset}}$), and the initial state of the next episode will be $s_1 = (\bar{s}', 1)$, where $\bar{s}'$ is sampled from a fixed but unknown state distribution. Otherwise, the initial state of the next episode is the last state of the current episode, i.e., for episode $k+1$, $s_1^{k+1} = (\bar{s}, 1)$ if $s_H^k = (\bar{s}, H)$ in episode $k$ [1].

[1] We can extend this setup to reset-free multi-task or lifelong RL problems that are modeled as contextual MDPs, since our algorithm can work with any initial state sequence. In this case, we can treat each state here as $s_\tau = (\bar{s}, c, \tau)$, where $c$ denotes the context that stays constant within an episode. If no reset happens, the initial state of episode $k+1$ can be written as $s_1^{k+1} = (\bar{s}, c^{k+1}, 1)$ if $s_H^k = (\bar{s}, c^k, H)$ in episode $k$, where the new context $c^{k+1}$ can follow any distribution and may depend on the current context $c^k$.

We denote the set of Markovian policies as $\Delta$, and a policy $\pi \in \Delta$ as $\pi = \{\pi_h(a_h|s_h)\}_{h=1}^H$. We define the state value function and the state-action value function under $\pi$ as [2]

$V^\pi_{r,h}(s) := \mathbb{E}_\pi\Big[\sum_{t=h}^{\min(H,\tau)} r_t(s_t, a_t) \,\Big|\, s_h = s\Big]$    (1)

$Q^\pi_{r,h}(s, a) := r_h(s, a) + \mathbb{E}\big[V^\pi_{r,h+1}(s_{h+1}) \,\big|\, s_h = s, a_h = a\big]$,

where $h \le \tau$, and we recall $\tau$ is the time step at which the agent enters $S_{\text{reset}}$ (if at all).

[2] This value function definition is the same as the $H$-step cumulative reward in an MDP formulation where we place the agent into a fictitious zero-reward absorbing state (i.e., a mega-state abstracting $S_{\text{reset}}$) after the agent enters $S_{\text{reset}}$. We choose the current formulation to make the definition of resets more transparent.

Objective. The overall goal is for the agent to learn a Markovian policy to maximize its cumulative reward while avoiding resets. Therefore, our performance measures are as follows (we seek to minimize both quantities):

$\mathrm{Regret}(K) = \max_{\pi \in \Delta_0(K)} \sum_{k=1}^K V^\pi_{r,1}(s_1^k) - V^{\pi^k}_{r,1}(s_1^k)$    (2)

$\mathrm{Resets}(K) = \sum_{k=1}^K \mathbb{E}_{\pi^k}\Big[\sum_{h=1}^{\min(H,\tau)} \mathbb{1}[s_h \in S_{\text{reset}}] \,\Big|\, s_1 = s_1^k\Big]$    (3)

where $\Delta_0(K) \subseteq \Delta$ is the set of Markovian policies that avoid resets for all episodes, and $\pi^k$ is the policy used by the agent in episode $k$. Note that by the reset mechanism, $\sum_{h=1}^{\min(H,\tau)} \mathbb{1}[s_h \in S_{\text{reset}}] \in \{0, 1\}$. Notice that the initial states in our regret and reset measures are determined by the learner, not the optimal policy as in some classical definitions of regret. Given the motivation behind reset-free RL (see Section 1), we can expect that the initial states here are by construction meaningful for performance comparison; otherwise, a reset would have occurred to set the learner to a meaningful state. A further implication is that all bad absorbing states are in $S_{\text{reset}}$; hence, the agent cannot use the trivial solution of hiding in a bad absorbing state to achieve small regret.
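For concreteness, the sketch below simulates the interaction protocol above and tallies the quantity in (3) empirically. It is only an illustration: the environment interface (`sample_initial_state`, `step`, `is_reset_state`) and the policy objects are hypothetical placeholders, not part of the paper, and the reward bookkeeping at the reset step is simplified.

```python
def run_reset_free_protocol(env, policies, H, K):
    """Simulate K episodes of the reset-free protocol and count resets.

    `env` is a hypothetical interface:
      - sample_initial_state(): draws an initial state from the unknown reset distribution
      - step(s, a): returns (next_state, reward)
      - is_reset_state(s): indicator of s in S_reset (unknown to the learner; the
        simulator uses it only to trigger resets)
    `policies[k]` maps (state, h) -> action, i.e., the Markovian policy pi^k.
    """
    total_reward, total_resets = 0.0, 0
    s = env.sample_initial_state()              # initial state of episode 1
    for k in range(K):
        pi, reset_triggered = policies[k], False
        for h in range(1, H + 1):
            if env.is_reset_state(s):           # s_h in S_reset: the episode is cut short
                total_resets += 1               # each episode contributes at most one reset
                reset_triggered = True
                break
            a = pi(s, h)
            s, r = env.step(s, a)
            total_reward += r
        if reset_triggered:
            s = env.sample_initial_state()      # next episode restarts from the reset distribution
        # otherwise the next episode starts from the last state of this episode
    return total_reward, total_resets
```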
To make the problem feasible, we assume that achieving no resets is possible. We state this formally in the assumption below.

Assumption 1. For any sequence $\{s_1^k\}_{k=1}^K$, the set $\Delta_0(K)$ is not empty. That is, there is a Markovian policy $\pi \in \Delta$ such that $\mathbb{E}_\pi[\sum_{h=1}^H \mathbb{1}[s_h \in S_{\text{reset}}] \,|\, s_1 = s_1^k] = 0$.

This is a reasonable assumption in practice. If a reset happens, the agent should be set to a state in which it can continue to operate without resets; if the agent is at a state where no such reset-free policy exists, a reset should happen. This assumption is similar to the assumption on the existence of a perfectly safe policy in the safe RL literature, which is a common and relatively weak assumption (Ghosh et al., 2022; Ding et al., 2021). If there were initial states that inevitably lead to a reset, the problem would be infeasible.

4 A NO-REGRET REDUCTION FOR RESET-FREE RL

We present our main reduction of reset-free RL to regret minimization in a two-player game. In the following, we first show that reset-free RL can be framed as a sequence of CMDPs of safe RL problems with an adaptive initial state sequence. Then we design a two-player game based on a primal-dual analysis of this sequence of CMDPs. Finally, we show that achieving sublinear regret in this two-player game implies sublinear regret and resets in the original reset-free RL problem in (2). The complete proofs for this section can be found in Appendix A.1.

4.1 Reset-free RL as a Sequence of CMDPs

The first step of our reduction is to cast the reset-free RL problem in Section 3 as a sequence of CMDP problems which share the same rewards, constraints, and dynamics, but have different initial states.
Each problem instance in this sequence corresponds to an episode of the reset-free RL problem, and its constraint describes the probability of the agent entering a state that requires a reset. Specifically, we denote these constrained MDPs [3] as $\{(S, A, P, r, H, c, s_1^k)\}_{k=1}^K$: in episode $k$, the CMDP problem is defined as

$\max_{\pi \in \Delta} V^\pi_{r,1}(s_1^k), \quad \text{s.t.} \quad V^\pi_{c,1}(s_1^k) \le 0$    (4)

where we define the cost as $c_h(s, a) := \mathbb{1}[s \in S_{\text{reset}}]$, and $V^\pi_{c,1}$, defined similarly to (1), is the state value function with respect to the cost $c$. We note that the initial state $s_1^k$ depends on the past behaviors of the agent, and Assumption 1 ensures each CMDP in (4) is a feasible problem (i.e., there is a Markovian policy satisfying the constraint). We can interpret each CMDP in (4) as a safe RL problem by treating $S_{\text{reset}}$ as the unsafe states that a safe RL agent should avoid. From this perspective, the constraint in (4) can be viewed as the probability of a trajectory entering an unsafe state.

[3] In general, the solution (i.e., optimal policy) to a CMDP depends on its initial state, unlike in MDPs (see Remark 2.2 in Altman (1999)).

Since CMDPs are typically defined without early episode termination, unlike (1), with abuse of notation we extend the definitions of $P, S, r, c$ as follows so that the CMDP definition above is consistent with the common literature. We introduce a fictitious absorbing state denoted as $s^\dagger$ in $S$, where $r_h(s^\dagger, a) = 0$ and $c_h(s^\dagger, a) = 0$; once the agent enters $s^\dagger$, it stays there until the end of the episode. We extend the definition of $P$ such that, after the agent is in a state $s \in S_{\text{reset}}$, any action it takes brings it to $s^\dagger$ in the next time step. In this way, we can write the value function, e.g., for reward, as $V^\pi_{r,h}(s) = \mathbb{E}_\pi\big[\sum_{t=h}^H r_t(s_t, a_t) \,|\, s_h = s\big]$ in terms of this extended dynamics. We note that these two formulations are mathematically the same for the purpose of learning; when the agent enters $s^\dagger$, it means that the agent has been reset in the episode. By the construction above, we can write

$\mathrm{Resets}(K) = \sum_{k=1}^K V^{\pi^k}_{c,1}(s_1^k)$,

which is the same as the total number of constraint violations across the CMDPs. Because we do not make any assumptions about the agent's knowledge of the constraint function (e.g., the agent does not know which states are in $S_{\text{reset}}$), we allow the agent to reset during learning while minimizing the total number of resets over all $K$ episodes.
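The extended dynamics above can be mimicked by a thin wrapper around the original environment. The sketch below only illustrates the construction in this subsection: the `base_env` interface and the `ABSORBING` sentinel are assumptions of the sketch, not objects from the paper, and the learner would observe only the realized cost signal, never $S_{\text{reset}}$ itself.

```python
ABSORBING = "s_dagger"   # fictitious absorbing state s^dagger (zero reward, zero cost)

class CMDPWrapper:
    """Emit the cost c_h(s, a) = 1[s in S_reset] and route reset states to s^dagger.

    `base_env` is a hypothetical interface with step(s, a) -> (next_state, reward)
    and is_reset_state(s) -> bool.
    """

    def __init__(self, base_env):
        self.base_env = base_env

    def step(self, s, a):
        """Return (next_state, reward, cost) under the extended dynamics."""
        if s == ABSORBING:
            return ABSORBING, 0.0, 0.0          # stay in s^dagger until the episode ends
        if self.base_env.is_reset_state(s):
            _, r = self.base_env.step(s, a)     # the reward at the reset step is still collected
            return ABSORBING, r, 1.0            # cost 1 is incurred at most once per episode
        s_next, r = self.base_env.step(s, a)
        return s_next, r, 0.0                   # ordinary transition: cost 0
```

An episode rolled out through such a wrapper accumulates total cost equal to the indicator in (3), so its expected per-episode cost matches $V^\pi_{c,1}(s_1^k)$.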
4.2 Reduction to Two-Player Game

From the problem formulation above, we see that the major difficulty of reset-free RL is the coupling between an adaptive initial state sequence and the constraint on the reset probability. If we were to remove either of them, we could use standard algorithms, since the problem would become a single CMDP problem (Altman, 1999) or an episodic RL problem with varying initial states (Jin et al., 2019).
We propose a reduction to systematically design algorithms for this sequence of CMDPs and therefore for reset-free RL. The main idea is to approximately solve the saddle-point problem of each CMDP in (4), i.e.,

$\max_{\pi \in \Delta} \min_{\lambda \ge 0} V^\pi_{r,1}(s_1^k) - \lambda V^\pi_{c,1}(s_1^k)$    (5)

where $\lambda$ denotes the dual variable (i.e., the Lagrange multiplier). Each CMDP can be framed as a linear program (Hernández-Lerma and Lasserre, 2002) whose primal and dual optimal values match (see Section 8.1 in Hazan et al. (2016)). Therefore, for each CMDP, $\max_{\pi \in \Delta} \min_{\lambda \ge 0} V^\pi_{r,1}(s_1^k) - \lambda V^\pi_{c,1}(s_1^k) = \min_{\lambda \ge 0} \max_{\pi \in \Delta} V^\pi_{r,1}(s_1^k) - \lambda V^\pi_{c,1}(s_1^k)$. While using a primal-dual algorithm to solve for the saddle point of a single CMDP is straightforward and known, using this approach for a sequence of CMDPs is not obvious. Each CMDP's optimal policy and Lagrange multiplier can be a function of the initial state (Altman, 1999), and in general, a common saddle point of Markovian policies and Lagrange multipliers does not necessarily exist for a sequence of CMDPs with varying initial states [4]. As a result, it is unclear if there exists a primal-dual algorithm to solve this sequence, especially given that the initial states here are adaptively chosen.

[4] A shared saddle-point with a non-Markovian policy always exists, on the other hand.

Existence of a Shared Saddle-Point. Fortunately, there is a shared saddle-point with a Markovian policy across all the CMDPs considered here, due to the special structure of reset-free RL.
Unlike similar results in the literature, the proof does not use Slater's condition for strong duality but still attains the desired property; instead, we use Assumption 1 and the fact that the cost $c$ is non-negative. We formalize this below.

Theorem 1. There exists a function $\hat{\lambda}(\cdot)$, where for each $s$,

$\hat{\lambda}(s) \in \arg\min_{y \ge 0} \Big( \max_{\pi \in \Delta} V^\pi_{r,1}(s) - y V^\pi_{c,1}(s) \Big)$,

and a Markovian policy $\pi^* \in \Delta$, such that $(\pi^*, \hat{\lambda})$ is a saddle-point to the CMDPs

$\max_{\pi \in \Delta} V^\pi_{r,1}(s_1), \quad \text{s.t.} \quad V^\pi_{c,1}(s_1) \le 0$

for all initial states $s_1 \in S$ such that the CMDP is feasible. That is, for all $\pi \in \Delta$, $\lambda : S \to \mathbb{R}$, and $s_1 \in S$,

$V^{\pi^*}_{r,1}(s_1) - \lambda(s_1) V^{\pi^*}_{c,1}(s_1) \ge V^{\pi^*}_{r,1}(s_1) - \hat{\lambda}(s_1) V^{\pi^*}_{c,1}(s_1) \ge V^\pi_{r,1}(s_1) - \hat{\lambda}(s_1) V^\pi_{c,1}(s_1)$.    (6)

Corollary 1. For $\pi^*$ in Theorem 1, it holds that $\mathrm{Regret}(K) = \sum_{k=1}^K V^{\pi^*}_{r,1}(s_1^k) - V^{\pi^k}_{r,1}(s_1^k)$.

For ease of construction, we also prove that the pair $(\pi^*, \lambda^*)$, where $\lambda^*(\cdot) = \hat{\lambda}(\cdot) + 1$, is a saddle-point.

Corollary 2. Let $(\pi^*, \hat{\lambda})$ be the saddle-point from Theorem 1 of the CMDPs $\max_{\pi \in \Delta} V^\pi_{r,1}(s_1)$, s.t. $V^\pi_{c,1}(s_1) \le 0$. Then $(\pi^*, \hat{\lambda} + 1) =: (\pi^*, \lambda^*)$ is also a saddle-point in the sense of (6).

Therefore, the pair $(\pi^*, \lambda^*)$ in Corollary 2 is a saddle-point to all the CMDPs the agent faces.
This makes it possible to design a two-player game reduction. In the next section, we give the details of our construction.

Two-Player Game. Our two-player game proceeds iteratively as follows: in episode $k$, a dual player determines a state value function $\lambda_k : S \to \mathbb{R}$, and a primal player determines a policy $\pi_k$, which can depend on $\lambda_k$. The primal and dual players then receive losses $L_k(\pi_k, \lambda)$ and $-L_k(\pi, \lambda_k)$, respectively, where $L_k(\pi, \lambda)$ is the Lagrangian function

$L_k(\pi, \lambda) := V^{\pi}_{r,1}(s^k_1) - \lambda(s^k_1) V^{\pi}_{c,1}(s^k_1)$.   (7)

The regrets of these two players are defined as follows.

Definition 1. Let $\pi_c$ and $\lambda_c$ be comparators. The regrets of the primal and the dual players are

$R_p(\{\pi_k\}_{k=1}^K, \pi_c) := \sum_{k=1}^{K} L_k(\pi_c, \lambda_k) - L_k(\pi_k, \lambda_k)$,   (8)

$R_d(\{\lambda_k\}_{k=1}^K, \lambda_c) := \sum_{k=1}^{K} L_k(\pi_k, \lambda_k) - L_k(\pi_k, \lambda_c)$.   (9)

We present our main reduction theorem for reset-free RL below. By Theorem 2, if both players achieve sublinear regret in the two-player game, then the resulting policy sequence has sublinear performance regret and a sublinear number of resets in the original RL problem. Since many standard techniques from online learning (Hazan et al., 2016) can solve such a two-player game, we can leverage them to systematically design reset-free RL algorithms. In the next section, we give an example algorithm of this reduction for linear MDPs.
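As a concrete picture of this protocol (not an implementation of any specific algorithm in this paper), the following sketch plays one round of the game per episode with two assumed no-regret subroutines; `dual_player`, `primal_player`, `evaluate`, and the `env` interface are hypothetical placeholders.

```python
def run_two_player_game(env, dual_player, primal_player, evaluate, num_episodes):
    """Illustrative reduction loop: one episode is one round of the two-player game.

    Assumed (hypothetical) interfaces:
      dual_player.propose(s1)        -> lambda_k, a state value function S -> R
      primal_player.respond(s1, lam) -> pi_k, a policy that may depend on lambda_k
      evaluate(pi, s1)               -> (estimates of) V_r^pi(s1) and V_c^pi(s1)
    """
    for k in range(num_episodes):
        s1 = env.current_initial_state()          # adaptive initial state s_1^k
        lam_k = dual_player.propose(s1)           # dual move
        pi_k = primal_player.respond(s1, lam_k)   # primal move, sees lambda_k

        v_r, v_c = evaluate(pi_k, s1)
        L_k = v_r - lam_k(s1) * v_c               # Lagrangian L_k(pi_k, lambda_k) from (7)

        primal_player.update(loss=L_k)            # primal player suffers L_k(pi_k, lambda_k)
        dual_player.update(loss=-L_k)             # dual player suffers -L_k(pi_k, lambda_k)

        env.rollout(pi_k)                         # execute pi_k for H steps; resets are counted
```

If both players' update rules are no-regret with respect to (8) and (9), Theorem 2 below converts their game regret into bounds on $\mathrm{Regret}(K)$ and $\mathrm{Resets}(K)$.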
Theorem 2. Under Assumption 1, for any sequences $\{\pi_k\}_{k=1}^K$ and $\{\lambda_k\}_{k=1}^K$, it holds that

$\mathrm{Regret}(K) \leq R_p(\{\pi_k\}_{k=1}^K, \pi^*) + R_d(\{\lambda_k\}_{k=1}^K, 0)$,

$\mathrm{Resets}(K) \leq R_p(\{\pi_k\}_{k=1}^K, \pi^*) + R_d(\{\lambda_k\}_{k=1}^K, \lambda^*)$,

where $(\pi^*, \lambda^*)$ is the saddle-point defined in Corollary 2.

Proof Sketch of Theorem 1. Let $Q^*_c(s, a) = \min_{\pi \in \Delta} Q^{\pi}_c(s, a)$ and $V^*_c(s) = \min_{\pi \in \Delta} V^{\pi}_c(s)$. We define $\pi^*$ in Theorem 1 as the optimal policy of the MDP $(S, \mathcal{A}, P, r, H)$, where we define a state-dependent action space $\mathcal{A}$ by $\mathcal{A}_s = \{a \in A : Q^*_c(s, a) \leq V^*_c(s)\}$. By definition, $\mathcal{A}_s$ is non-empty for all $s$. We also define a shorthand notation: we write $\pi \in \mathcal{A}(s)$ if $\mathbb{E}_{\pi}[\sum_{t=1}^{H} \mathbf{1}\{a_t \notin \mathcal{A}_{s_t}\} \mid s_1 = s] = 0$. We have the following lemma, which is an application of the performance difference lemma (see Lemma 6.1 in Kakade and Langford (2002) and Lemma A.1 in Cheng et al. (2021)).

Lemma 1. For any $s_1 \in S$ such that $V^*_c(s_1) = 0$ and any $\pi \in \Delta$, it is true that $\pi \in \mathcal{A}(s_1)$ if and only if $V^{\pi}_c(s_1) = 0$.

We prove our main claim, (6), below. Because $V^{\pi^*}_{c,1}(s_1) = 0$, the first inequality is trivial:

$V^{\pi^*}_{r,1}(s_1) - \lambda(s_1) V^{\pi^*}_{c,1}(s_1) = V^{\pi^*}_{r,1}(s_1) = V^{\pi^*}_{r,1}(s_1) - \hat{\lambda}(s_1) V^{\pi^*}_{c,1}(s_1)$.

To prove the second inequality, we use Lemma 1:

$V^{\pi}_{r,1}(s_1) - \hat{\lambda}(s_1) V^{\pi}_{c,1}(s_1) \leq \max_{\pi \in \Delta} V^{\pi}_{r,1}(s_1) - \hat{\lambda}(s_1) V^{\pi}_{c,1}(s_1) = \min_{y \geq 0} \max_{\pi \in \Delta} V^{\pi}_{r,1}(s_1) - y V^{\pi}_{c,1}(s_1) = \max_{\pi \in \mathcal{A}(s_1)} V^{\pi}_{r,1}(s_1)$ (by Lemma 1) $= V^{\pi^*}_{r,1}(s_1) = V^{\pi^*}_{r,1}(s_1) - \hat{\lambda}(s_1) V^{\pi^*}_{c,1}(s_1)$.

Proof Sketch of Theorem 2. We first establish the following intermediate result that will help us with our decomposition.
Lemma 2. For any primal-dual sequence $\{\pi_k, \lambda_k\}_{k=1}^K$, $\sum_{k=1}^{K} (L_k(\pi^*, \lambda') - L_k(\pi_k, \lambda_k)) \leq R_p(\{\pi_k\}_{k=1}^K, \pi^*)$, where $(\pi^*, \lambda')$ is the saddle-point defined in either Theorem 1 or Corollary 2.

Then we upper bound $\mathrm{Regret}(K)$ and $\mathrm{Resets}(K)$ by $R_p(\{\pi_k\}_{k=1}^K, \pi_c)$ and $R_d(\{\lambda_k\}_{k=1}^K, \lambda_c)$ for suitable comparators. This decomposition is inspired by the techniques used in Ho-Nguyen and Kılınç-Karzan (2018). We first bound $\mathrm{Resets}(K)$.

Lemma 3. For any primal-dual sequence $\{\pi_k, \lambda_k\}_{k=1}^K$, $\sum_{k=1}^{K} V^{\pi_k}_{c,1}(s^k_1) \leq R_p(\{\pi_k\}_{k=1}^K, \pi^*) + R_d(\{\lambda_k\}_{k=1}^K, \lambda^*)$, where $(\pi^*, \lambda^*)$ is the saddle-point defined in Corollary 2.

Proof. Notice that $\sum_{k=1}^{K} V^{\pi_k}_{c,1}(s^k_1) = \sum_{k=1}^{K} L_k(\pi_k, \hat{\lambda}) - L_k(\pi_k, \lambda^*)$, where $(\pi^*, \hat{\lambda})$ is the saddle-point defined in Theorem 1. By (6), and adding and subtracting $\sum_{k=1}^{K} L_k(\pi_k, \lambda_k)$, we can bound this difference by

$\sum_{k=1}^{K} L_k(\pi^*, \hat{\lambda}) - L_k(\pi_k, \lambda_k) + L_k(\pi_k, \lambda_k) - L_k(\pi_k, \lambda^*)$.

Using Lemma 2 and Definition 1 to upper bound the above, we get the desired result.

Lastly, we bound $\mathrm{Regret}(K)$ with the lemma below and Corollary 1.

Lemma 4. For any primal-dual sequence $\{\pi_k, \lambda_k\}_{k=1}^K$, $\sum_{k=1}^{K} (V^{\pi^*}_{r,1}(s^k_1) - V^{\pi_k}_{r,1}(s^k_1)) \leq R_p(\{\pi_k\}_{k=1}^K, \pi^*) + R_d(\{\lambda_k\}_{k=1}^K, 0)$, where $(\pi^*, \lambda^*)$ is the saddle-point defined in Corollary 2.

Proof. Note that $L_k(\pi^*, \lambda^*) = L_k(\pi^*, 0)$ since $V^{\pi^*}_{c,1}(s^k_1) = 0$ for all $k \in [K] = \{1, \ldots, K\}$.
Since by definition, for any $\pi$, $L_k(\pi, 0) = V^{\pi}_{r,1}(s^k_1)$, we have the following:

$\sum_{k=1}^{K} V^{\pi^*}_{r,1}(s^k_1) - V^{\pi_k}_{r,1}(s^k_1) = \sum_{k=1}^{K} L_k(\pi^*, \lambda^*) - L_k(\pi_k, 0) = \sum_{k=1}^{K} L_k(\pi^*, \lambda^*) - L_k(\pi_k, \lambda_k) + L_k(\pi_k, \lambda_k) - L_k(\pi_k, 0) \leq R_p(\{\pi_k\}_{k=1}^K, \pi^*) + R_d(\{\lambda_k\}_{k=1}^K, 0)$,

where the last inequality follows from Lemma 2 and Definition 1.

5 RESET-FREE LEARNING FOR LINEAR MDP

To demonstrate the utility of our reduction, we design a provably correct algorithm instantiation for reset-free RL. We consider a linear MDP setting, which is common in the RL theory literature (Jin et al., 2019).

Assumption 2. We assume $(S, A, P, r, c, H)$ is linear with a known feature map $\phi : S \times A \to \mathbb{R}^d$: for any $h \in [H]$, there exist $d$ unknown signed measures $\mu_h = \{\mu^1_h, \ldots, \mu^d_h\}$ over $S$ such that for any $(s, a, s') \in S \times A \times S$, we have

$P_h(s' \mid s, a) = \langle \phi(s, a), \mu_h(s') \rangle$,

and there exist unknown vectors $\omega_{r,h}, \omega_{c,h} \in \mathbb{R}^d$ such that for any $(s, a) \in S \times A$,

$r_h(s, a) = \langle \phi(s, a), \omega_{r,h} \rangle$,  $c_h(s, a) = \langle \phi(s, a), \omega_{c,h} \rangle$.

We assume, for all $(s, a, h) \in S \times A \times [H]$, $\|\phi(s, a)\|_2 \leq 1$ and $\max\{\|\mu_h(s)\|_2, \|\omega_{r,h}\|_2, \|\omega_{c,h}\|_2\} \leq \sqrt{d}$.

In addition, we make a linearity assumption on the function $\lambda^*$ defined in Theorem 1.

Assumption 3. We assume knowledge of a feature map $\xi : S \to \mathbb{R}^d$ such that $\forall s \in S$, $\|\xi(s)\|_2 \leq 1$ and $\lambda^*(s) = \langle \xi(s), \theta^* \rangle$ for some unknown vector $\theta^* \in \mathbb{R}^d$. In addition, we assume knowledge of a convex set $U \subseteq \mathbb{R}^d$ (such a set can be constructed by upper bounding the values via scaling and ensuring non-negativity via a sum-of-squares approach) such that $\theta^*, 0 \in U$ and, $\forall \theta \in U$, $\|\theta\|_2 \leq B$ and $\langle \xi(s), \theta \rangle \geq 0$. Since, as the previous section shows, the optimal function for the dual player is not necessarily unique, we assume these bounds hold for at least one optimal function, which we designate as $\lambda^*(s)$.
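For intuition, here is a small synthetic instance of the feature structure in Assumptions 2 and 3. The dimensions, distributions, and normalizations are arbitrary illustrative choices of ours, not values from the paper.

```python
import numpy as np

# Toy linear-MDP ingredients (Assumptions 2 and 3), purely illustrative.
d, num_states, num_actions, H = 4, 5, 3, 10
rng = np.random.default_rng(0)

# Known feature maps: ||phi(s,a)||_2 <= 1 and ||xi(s)||_2 <= 1.
phi = rng.normal(size=(num_states, num_actions, d))
phi /= np.maximum(1.0, np.linalg.norm(phi, axis=-1, keepdims=True))
xi = rng.uniform(size=(num_states, d))
xi /= np.maximum(1.0, np.linalg.norm(xi, axis=-1, keepdims=True))

# Unknown parameters (the learner never observes these directly).
omega_r = rng.uniform(size=(H, d)) / np.sqrt(d)   # reward weights, norm <= sqrt(d)
omega_c = rng.uniform(size=(H, d)) / np.sqrt(d)   # cost weights, norm <= sqrt(d)
theta_star = rng.uniform(size=d)                  # nonnegative, so <xi(s), theta*> >= 0

def reward(h, s, a):
    return float(phi[s, a] @ omega_r[h])          # r_h(s,a) = <phi(s,a), omega_{r,h}>

def cost(h, s, a):
    return float(phi[s, a] @ omega_c[h])          # c_h(s,a) = <phi(s,a), omega_{c,h}>

def lambda_star(s):
    return float(xi[s] @ theta_star)              # lambda*(s) = <xi(s), theta*>
# (The transition measures mu_h are omitted here for brevity.)
```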
5.1 Algorithm

The basis of our algorithm lies in the interaction between the primal and dual players. We let the dual player perform projected gradient descent and the primal player update its policy by upper confidence bounds, with knowledge of the dual player's decision. This sequential strategy resembles optimistic-style updates in online learning (Mertikopoulos et al., 2018).

Specifically, in each episode, upon receiving the initial state, we execute actions according to a softmax policy (lines 5-8 of Algorithm 1). Then, we perform the dual update through projected gradient descent: the dual player plays for the next round, $k+1$, after observing its loss once the primal player has played the current round, $k$; the projection is onto an $\ell_2$ ball containing $\lambda^*(\cdot)$ (lines 9-11). Finally, we perform the primal player's update by computing the Q-functions for both the reward and the cost, with a bonus to encourage exploration (lines 12-20). This algorithm builds upon Ghosh et al. (2022); notably, however, we extend it to handle the adaptive initial state sequence seen in reset-free RL by Theorems 1 and 2.

Algorithm 1: Primal-Dual Reset-Free RL Algorithm for Linear MDP with Adaptive Initial States
1: Input: feature maps $\phi$ and $\xi$, failure probability $p$, a universal constant $c$.
2: Initialization: $\theta_1 = 0$, $w_{r,h} = 0$, $w_{c,h} = 0$, $\alpha = \frac{\log(|A|) K}{2(1 + B + H)}$, $\beta = c d H \sqrt{\log(4 \log(|A|) d K H / p)}$.
3: for episode $k = 1, \ldots, K$ do
4:   Observe the initial state $s^k_1$.
5:   for step $h = 1, \ldots, H$ do
6:     Compute $\pi_{h,k}(a \mid \cdot) \leftarrow \frac{\exp(\alpha (Q^k_{r,h}(\cdot, a) - \lambda_k(s^k_1) Q^k_{c,h}(\cdot, a)))}{\sum_{a'} \exp(\alpha (Q^k_{r,h}(\cdot, a') - \lambda_k(s^k_1) Q^k_{c,h}(\cdot, a')))}$.
7:     Take action $a^k_h \sim \pi_{h,k}(\cdot \mid s^k_h)$ and observe $s^k_{h+1}$.
8:   end for
9:   $\eta_k \leftarrow B / \sqrt{k}$
10:  Update $\theta_{k+1} \leftarrow \mathrm{Proj}_U(\theta_k + \eta_k \cdot \xi(s^k_1) V^k_{c,1}(s^k_1))$
11:  $\lambda_{k+1}(\cdot) \leftarrow \langle \theta_{k+1}, \xi(\cdot) \rangle$
12:  for step $h = H, \ldots, 1$ do
13:    $\Lambda^{k+1}_h \leftarrow \sum_{i=1}^{k} \phi(s^i_h, a^i_h) \phi(s^i_h, a^i_h)^{\top} + \lambda I$
14:    $w^{k+1}_{r,h} \leftarrow (\Lambda^{k+1}_h)^{-1} \left[ \sum_{i=1}^{k} \phi(s^i_h, a^i_h) \left[ r_h(s^i_h, a^i_h) + V^{k+1}_{r,h+1}(s^i_{h+1}) \right] \right]$
15:    $w^{k+1}_{c,h} \leftarrow (\Lambda^{k+1}_h)^{-1} \left[ \sum_{i=1}^{k} \phi(s^i_h, a^i_h) \left[ c_h(s^i_h, a^i_h) + V^{k+1}_{c,h+1}(s^i_{h+1}) \right] \right]$
16:    $Q^{k+1}_{r,h}(\cdot, \cdot) \leftarrow \max\{\min\{\langle w^{k+1}_{r,h}, \phi(\cdot, \cdot) \rangle + \beta (\phi(\cdot, \cdot)^{\top} (\Lambda^{k+1}_h)^{-1} \phi(\cdot, \cdot))^{1/2}, H - h + 1\}, 0\}$
17:    $Q^{k+1}_{c,h}(\cdot, \cdot) \leftarrow \max\{\min\{\langle w^{k+1}_{c,h}, \phi(\cdot, \cdot) \rangle - \beta (\phi(\cdot, \cdot)^{\top} (\Lambda^{k+1}_h)^{-1} \phi(\cdot, \cdot))^{1/2}, 1\}, 0\}$
18:    $V^{k+1}_{r,h}(\cdot) = \sum_{a} \pi_{h,k}(a \mid \cdot) Q^{k+1}_{r,h}(\cdot, a)$
19:    $V^{k+1}_{c,h}(\cdot) = \sum_{a} \pi_{h,k}(a \mid \cdot) Q^{k+1}_{c,h}(\cdot, a)$
20:  end for
21: end for
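As a companion to the listing above, the following sketch implements the main computations of Algorithm 1 for a finite state-action space with tabulated features. It is a simplified rendering under our own assumptions, not the exact algorithm: the projection onto $U$ is replaced by a plain $\ell_2$-ball projection of radius $B$, the ridge parameter and bonus scale are left as inputs, and all function and variable names are ours.

```python
import numpy as np

def softmax_policy(Q_r_h, Q_c_h, lam_s1, alpha):
    """Line 6: pi_{h,k}(a|s) proportional to exp(alpha*(Q_r - lambda_k(s_1^k)*Q_c)).
    Q_r_h and Q_c_h have shape (S, A)."""
    logits = alpha * (Q_r_h - lam_s1 * Q_c_h)
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def dual_step(theta, xi_s1, v_c_hat, eta, B):
    """Lines 9-11: projected gradient step for the dual parameter theta.
    The paper projects onto a convex set U; an l2 ball of radius B is a stand-in here."""
    theta = theta + eta * v_c_hat * xi_s1
    norm = np.linalg.norm(theta)
    return theta if norm <= B else theta * (B / norm)

def lsvi_backup(trajs, phi, pi, H, beta, ridge=1.0):
    """Lines 12-20: backward ridge-regression updates with an exploration bonus.
    trajs: past episodes, each a length-H list of (s, a, r, c, s_next) tuples.
    phi:   feature tensor of shape (S, A, d); pi: policy of shape (H, S, A).
    Returns Q_r (bonus added, clipped to [0, H-h]) and Q_c (bonus subtracted, clipped to [0, 1])."""
    S, A, d = phi.shape
    flat = phi.reshape(S * A, d)
    V_r_next = np.zeros(S)                            # V_{r,H+1} = 0
    V_c_next = np.zeros(S)                            # V_{c,H+1} = 0
    Q_r = np.zeros((H, S, A))
    Q_c = np.zeros((H, S, A))
    for h in reversed(range(H)):                      # h = H-1, ..., 0 (0-indexed)
        Lam = ridge * np.eye(d)                       # line 13
        b_r = np.zeros(d)
        b_c = np.zeros(d)
        for ep in trajs:
            s, a, r, c, s_next = ep[h]
            f = phi[s, a]
            Lam += np.outer(f, f)
            b_r += f * (r + V_r_next[s_next])         # regression target, line 14
            b_c += f * (c + V_c_next[s_next])         # regression target, line 15
        Lam_inv = np.linalg.inv(Lam)
        w_r, w_c = Lam_inv @ b_r, Lam_inv @ b_c
        bonus = beta * np.sqrt(np.einsum("nd,de,ne->n", flat, Lam_inv, flat)).reshape(S, A)
        Q_r[h] = np.clip((flat @ w_r).reshape(S, A) + bonus, 0.0, H - h)   # line 16
        Q_c[h] = np.clip((flat @ w_c).reshape(S, A) - bonus, 0.0, 1.0)     # line 17
        V_r_next = (pi[h] * Q_r[h]).sum(axis=1)       # line 18
        V_c_next = (pi[h] * Q_c[h]).sum(axis=1)       # line 19
    return Q_r, Q_c
```

A full implementation would wrap these pieces in the episode loop of Algorithm 1: recompute the softmax policy from the current Q estimates during episode $k$, take the dual step using the optimistic estimate $V^k_{c,1}(s^k_1)$, and then rerun the backup with the newly collected trajectory.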
5.2 Analysis

We show below that our algorithm achieves regret and a number of resets that are sublinear in the total number of time steps, $KH$, using Theorem 2. This result is asymptotically equivalent to Ghosh et al. (2022) and comparable to the $\tilde{O}(\sqrt{d^2 H^6 K})$ bounds of Ding et al. (2021).

Theorem 3. Under Assumptions 1, 2, and 3, with high probability, $\mathrm{Regret}(K) \leq \tilde{O}((B + 1)\sqrt{d^3 H^4 K})$ and $\mathrm{Resets}(K) \leq \tilde{O}((B + 1)\sqrt{d^3 H^4 K})$.

Proof Sketch of Theorem 3. We provide a proof sketch here and defer the complete proof to Appendix A.2. We first bound the regret of $\{\pi_k\}_{k=1}^K$ and $\{\lambda_k\}_{k=1}^K$, and then use these bounds to prove the bounds on our algorithm's regret and number of resets via Theorem 2. We first bound the regret of $\{\lambda_k\}_{k=1}^K$.

Lemma 5. Consider $\lambda_c(s) = \langle \xi(s), \theta_c \rangle$ for some $\theta_c \in U$. Then it holds that $R_d(\{\lambda_k\}_{k=1}^K, \lambda_c) \leq 1.5 B \sqrt{K} + \sum_{k=1}^{K} (\lambda_k(s^k_1) - \lambda_c(s^k_1))(V^k_{c,1}(s^k_1) - V^{\pi_k}_{c,1}(s^k_1))$.

Proof. We first note an equality:
$R_d(\{\lambda_k\}_{k=1}^K, \lambda_c) = \sum_{k=1}^{K} L_k(\pi_k, \lambda_k) - L_k(\pi_k, \lambda_c) = \sum_{k=1}^{K} \lambda_c(s^k_1) V^{\pi_k}_{c,1}(s^k_1) - \lambda_k(s^k_1) V^{\pi_k}_{c,1}(s^k_1) = \sum_{k=1}^{K} (\lambda_k(s^k_1) - \lambda_c(s^k_1))(-V^k_{c,1}(s^k_1)) + \sum_{k=1}^{K} (\lambda_k(s^k_1) - \lambda_c(s^k_1))(V^k_{c,1}(s^k_1) - V^{\pi_k}_{c,1}(s^k_1))$.

We observe that the first term is an online linear problem for $\theta_k$ (the parameter of $\lambda_k(\cdot)$): in episode $k \in [K]$, $\lambda_k$ is played, and then the loss is revealed. Since the space of $\theta_k$ is convex, we use standard results (Lemma 3.1 in Hazan et al. (2016)) to show that updating $\theta_k$ through projected gradient descent yields an upper bound for $\sum_{k=1}^{K} (\lambda_k(s^k_1) - \lambda_c(s^k_1))(-V^k_{c,1}(s^k_1))$.

We now bound the regret of $\{\pi_k\}_{k=1}^K$.

Lemma 6. Consider any $\pi_c$. With high probability, $R_p(\{\pi_k\}_{k=1}^K, \pi_c) \leq 2H(1 + B + H) + \sum_{k=1}^{K} V^k_{r,1}(s^k_1) - V^{\pi_k}_{r,1}(s^k_1) + \lambda_k(s^k_1)(V^{\pi_k}_{c,1}(s^k_1) - V^k_{c,1}(s^k_1))$.

Proof. First we expand the regret into two terms:

$R_p(\{\pi_k\}_{k=1}^K, \pi_c) = \sum_{k=1}^{K} L_k(\pi_c, \lambda_k) - L_k(\pi_k, \lambda_k) = \sum_{k=1}^{K} V^{\pi_c}_{r,1}(s^k_1) - \lambda_k(s^k_1) V^{\pi_c}_{c,1}(s^k_1) - [V^{\pi_k}_{r,1}(s^k_1) - \lambda_k(s^k_1) V^{\pi_k}_{c,1}(s^k_1)] = \sum_{k=1}^{K} V^{\pi_c}_{r,1}(s^k_1) - \lambda_k(s^k_1) V^{\pi_c}_{c,1}(s^k_1) - [V^k_{r,1}(s^k_1) - \lambda_k(s^k_1) V^k_{c,1}(s^k_1)] + \sum_{k=1}^{K} V^k_{r,1}(s^k_1) - V^{\pi_k}_{r,1}(s^k_1) + \lambda_k(s^k_1)(V^{\pi_k}_{c,1}(s^k_1) - V^k_{c,1}(s^k_1))$.

To bound the first term, we use Lemma 3 from Ghosh et al. (2022), which characterizes the property of the upper confidence bound.

Lastly, we derive a bound on $R_d(\{\lambda_k\}_{k=1}^K, \lambda_c) + R_p(\{\pi_k\}_{k=1}^K, \pi_c)$, which directly implies our final upper bound on $\mathrm{Regret}(K)$ and $\mathrm{Resets}(K)$ in Theorem 3 by Theorem 2. Combining the upper bounds in Lemma 5 and Lemma 6, we have the high-probability upper bound

$R_d(\{\lambda_k\}_{k=1}^K, \lambda_c) + R_p(\{\pi_k\}_{k=1}^K, \pi_c) \leq 1.5 B \sqrt{K} + 2H(1 + B + H) + \sum_{k=1}^{K} V^k_{r,1}(s^k_1) - V^{\pi_k}_{r,1}(s^k_1) + \lambda_c(s^k_1)(V^{\pi_k}_{c,1}(s^k_1) - V^k_{c,1}(s^k_1))$,

where the last term is the overestimation error due to optimism. Note that for all $k \in [K]$, $V^k_{r,1}(s^k_1)$ and $V^k_{c,1}(s^k_1)$ are as defined in Algorithm 1 and are optimistic estimates of $V^{\pi^*}_{r,1}(s^k_1)$ and $V^{\pi^*}_{c,1}(s^k_1)$. To bound this term, we use Lemma 4 from Ghosh et al. (2022).
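To spell out the combination of Lemmas 5 and 6 above: for each $k$, the $\lambda_k$ terms from the two lemmas cancel, since (a one-line check, our own elaboration)

$(\lambda_k(s^k_1) - \lambda_c(s^k_1))(V^k_{c,1}(s^k_1) - V^{\pi_k}_{c,1}(s^k_1)) + \lambda_k(s^k_1)(V^{\pi_k}_{c,1}(s^k_1) - V^k_{c,1}(s^k_1)) = \lambda_c(s^k_1)(V^{\pi_k}_{c,1}(s^k_1) - V^k_{c,1}(s^k_1))$,

which is exactly the $\lambda_c$ term appearing in the combined bound.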
6 CONCLUSION

We propose a generic no-regret reduction for designing provable reset-free RL algorithms. Our reduction casts reset-free RL into the regret minimization problem of a two-player game, for which many existing no-regret algorithms are available. As a result, we can reuse these techniques to systematically build new reset-free RL algorithms. In particular, we design a reset-free RL algorithm for linear MDPs using our new reduction techniques, taking the first step towards designing provable reset-free RL algorithms. Extending these techniques to nonlinear function approximators and verifying their effectiveness empirically are important future research directions.

Acknowledgements

Part of this work was done during Hoai-An Nguyen's internship at Microsoft Research.

References

Alshiekh, M., Bloem, R., Ehlers, R., Könighofer, B., Niekum, S., and Topcu, U. (2018). Safe reinforcement learning via shielding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.

Altman, E. (1999). Constrained Markov Decision Processes: Stochastic Modeling. Routledge.
Amani, S., Thrampoulidis, C., and Yang, L. (2021). Safe reinforcement learning with linear function approximation. In International Conference on Machine Learning, pages 243-253. PMLR.

Amani, S. and Yang, L. F. (2022). Doubly pessimistic algorithms for strictly safe off-policy optimization. In Annual Conference on Information Sciences and Systems, pages 113-118.

Cheng, C.-A., Kolobov, A., and Swaminathan, A. (2021). Heuristic-guided reinforcement learning. Advances in Neural Information Processing Systems, 34:13550-13563.
Ding, D., Wei, X., Yang, Z., Wang, Z., and Jovanovic, M. (2021). Provably efficient safe exploration via primal-dual policy optimization. In International Conference on Artificial Intelligence and Statistics, pages 3304-3312. PMLR.

Dong, K., Wang, Y., Chen, X., and Wang, L. (2019). Q-learning with UCB exploration is sample efficient for infinite-horizon MDP. CoRR, abs/1901.09311.

Efroni, Y., Mannor, S., and Pirotta, M. (2020). Exploration-exploitation in constrained MDPs. arXiv preprint arXiv:2003.02189.
Eysenbach, B., Gu, S., Ibarz, J., and Levine, S. (2018). Leave no trace: Learning to reset for safe and autonomous reinforcement learning. In International Conference on Learning Representations.

Garcia Polo, F. J. and Fernandez Rebollo, F. (2011). Safe reinforcement learning in high-risk tasks through policy improvement. In IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, pages 76-83.

Ghosh, A., Zhou, X., and Shroff, N. (2022). Provably efficient model-free constrained RL with linear function approximation. arXiv preprint arXiv:2206.11889.

Gupta, A., Yu, J., Zhao, T. Z., Kumar, V., Rovinsky, A., Xu, K., Devlin, T., and Levine, S. (2021). Reset-free reinforcement learning via multi-task learning: Learning dexterous manipulation behaviors without human intervention. In IEEE International Conference on Robotics and Automation, pages 6664-6671. IEEE.
Ha, S., Xu, P., Tan, Z., Levine, S., and Tan, J. (2020). Learning to walk in the real world with minimal human effort. In Conference on Robot Learning.
HasanzadeZonuzy, A., Bura, A., Kalathil, D., and Shakkottai, S. (2021). Learning with safety constraints: Sample complexity of reinforcement learning for constrained MDPs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 7667-7674.

Hazan, E. et al. (2016). Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3-4):157-325.

Hernández-Lerma, O. and Lasserre, J. B. (2002). The Linear Programming Approach, pages 377-407. Springer US, Boston, MA.

Ho-Nguyen, N. and Kılınç-Karzan, F. (2018). Primal-dual algorithms for convex optimization via regret minimization. IEEE Control Systems Letters, 2(2):284-289.
Huang, R., Yang, J., and Liang, Y. (2022). Safe exploration incurs nearly no additional sample complexity for reward-free RL.

Jain, A., Vaswani, S., Babanezhad, R., Szepesvári, C., and Precup, D. (2022). Towards painless policy optimization for constrained MDPs. In Cussens, J. and Zhang, K., editors, Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, volume 180 of Proceedings of Machine Learning Research, pages 895-905. PMLR.

Jin, C., Yang, Z., Wang, Z., and Jordan, M. I. (2019). Provably efficient reinforcement learning with linear function approximation. CoRR, abs/1907.05388.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Prov- ably efficient reinforcement learning with linear function approximation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' CoRR, abs/1907.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content='05388.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Kakade, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' and Langford, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' (2002).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Approximately op- timal approximate reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' In Interna- tional Conference on Machine Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Citeseer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Lu, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Grover, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Abbeel, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', and Mordatch, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Reset-free lifelong learning with skill-space planning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' arXiv preprint arXiv:2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content='03548.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Mertikopoulos, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Lecouat, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Zenati, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Foo, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content='-S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Chandrasekhar, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', and Piliouras, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' arXiv preprint arXiv:1807.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content='02629.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Qiu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Wei, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Yang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Ye, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', and Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Upper confidence primal-dual optimization: Stochasti- cally constrained markov decision processes with adver- sarial losses and unknown transitions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' arXiv preprint arXiv:2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content='00660.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Sharma, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Ahmad, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', and Finn, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' A state- distribution matching approach to non-episodic rein- forcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' arXiv preprint arXiv:2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content='05212.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Wachi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' and Sui, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Safe reinforcement learning in constrained markov decision processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' In Interna- tional Conference on Machine Learning, pages 9797– 9806.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Wagener, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Boots, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', and Cheng, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content='-A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Safe reinforcement learning using advantage-based interven- tion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' In International Conference on Machine Learning, pages 10630–10640.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Provable Reset-free Reinforcement Learning by No-Regret Reduction Wei, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Jafarnia-Jahromi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Luo, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', and Jain, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Learning infinite-horizon average-reward mdps with lin- ear function approximation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' CoRR, abs/2007.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content='11849.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Wei, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Jafarnia-Jahromi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Luo, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Sharma, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', and Jain, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Model-free reinforcement learning in infinite-horizon average-reward markov decision pro- cesses.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' CoRR, abs/1910.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content='07072.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Wei, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', Liu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=', and Ying, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' A provably-efficient model-free algorithm for constrained markov decision processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' arXiv preprint arXiv:2106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content='01577.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Zheng, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' and Ratliff, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' Constrained upper con- fidence reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' In Bayen, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4dE0T4oBgHgl3EQfeQDL/content/2301.02389v1.pdf'} +page_content=' M.' 
A Appendix

A.1 Missing Proofs for Section 4

A.1.1 Proof of Theorem 1

Theorem 1. There exist a function $\hat\lambda(\cdot)$, where for each $s$,
$$\hat\lambda(s) \in \arg\min_{y \ge 0} \Big( \max_{\pi \in \Delta} V^\pi_{r,1}(s) - y V^\pi_{c,1}(s) \Big),$$
and a Markovian policy $\pi^* \in \Delta$, such that $(\pi^*, \hat\lambda)$ is a saddle-point to the CMDPs
$$\max_{\pi \in \Delta} V^\pi_{r,1}(s_1), \quad \text{s.t. } V^\pi_{c,1}(s_1) \le 0,$$
for all initial states $s_1 \in S$ such that the CMDP is feasible. That is, for all $\pi \in \Delta$, $\lambda : S \to \mathbb{R}$, and $s_1 \in S$,
$$V^{\pi^*}_{r,1}(s_1) - \lambda(s_1) V^{\pi^*}_{c,1}(s_1) \;\ge\; V^{\pi^*}_{r,1}(s_1) - \hat\lambda(s_1) V^{\pi^*}_{c,1}(s_1) \;\ge\; V^{\pi}_{r,1}(s_1) - \hat\lambda(s_1) V^{\pi}_{c,1}(s_1). \tag{6}$$

We define the policy $\pi^*$ by the following construction (we omit the time dependency for simplicity): first, we define a cost-based MDP $M_c = (S, A, P, c, H)$.
Let $Q^*_c(s,a) = \min_{\pi \in \Delta} Q^\pi_c(s,a)$ and $V^*_c(s) = \min_{\pi \in \Delta} V^\pi_c(s)$ be the optimal values, where we recall that $V^\pi_c$ and $Q^\pi_c$ are the state and state-action values of policy $\pi$ with respect to the cost. Now we construct another reward-based MDP $M = (S, A, P, r, H)$, where we define the state-dependent action space $\mathcal{A}$ by $\mathcal{A}_s = \{a \in A : Q^*_c(s,a) \le V^*_c(s)\}$. By definition, $\mathcal{A}_s$ is non-empty for all $s$. We also define a shorthand: we write $\pi \in \mathcal{A}(s)$ if $\mathbb{E}_\pi[\sum_{t=1}^H \mathbf{1}\{a_t \notin \mathcal{A}_{s_t}\} \mid s_1 = s] = 0$. Then we have the following lemma, which is a straightforward application of the performance difference lemma.

Lemma 1. For any $s_1 \in S$ such that $V^*_c(s_1) = 0$ and any $\pi \in \Delta$, it is true that $\pi \in \mathcal{A}(s_1)$ if and only if $V^\pi_c(s_1) = 0$.

Proof. By the performance difference lemma (Kakade and Langford, 2002), we can write
$$V^\pi_c(s_1) - V^*_c(s_1) = \mathbb{E}_\pi\Big[ \sum_{t=1}^H Q^*_c(s_t, a_t) - V^*_c(s_t) \,\Big|\, s_1 \Big].$$
If $\pi \in \mathcal{A}(s_1)$ for some $s_1 \in S$, then $\mathbb{E}_\pi\big[\sum_{t=1}^H Q^*_c(s_t, a_t) - V^*_c(s_t)\big] \le 0$, which implies $V^\pi_c(s_1) \le V^*_c(s_1)$. But since $V^*_c$ is optimal, $V^\pi_c(s_1) = V^*_c(s_1)$. On the other hand, suppose $V^\pi_c(s_1) = 0$. This implies $\mathbb{E}_\pi\big[\sum_{t=1}^H Q^*_c(s_t, a_t) - V^*_c(s_t)\big] = 0$ since $V^*_c(s_1) = 0$. Because $Q^*_c(s_t, a_t) - V^*_c(s_t) \ge 0$ by the definition of optimality, this implies $\pi \in \mathcal{A}(s_1)$.

We set our candidate policy $\pi^*$ to be the optimal policy of this $M$. By Lemma 1, we have $V^{\pi^*}_c(s) = V^*_c(s)$, so $\pi^*$ is also an optimal policy of $M_c$.
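To make the construction of the restricted action sets $\mathcal{A}_s$ concrete, below is a minimal tabular sketch that computes $Q^*_c$, $V^*_c$, and $\mathcal{A}_s$ by backward induction on the cost MDP. It assumes the transitions and costs are known and stored as arrays, which is an illustrative simplification (the paper works with linear function approximation and unknown dynamics); all names and array shapes are assumptions of this sketch.

import numpy as np

def restricted_action_sets(P, c, H):
    """Tabular sketch of A_s = {a : Q*_c(s, a) <= V*_c(s)}.

    P: transition probabilities with shape (H, S, A, S).
    c: costs with shape (H, S, A), assumed to lie in [0, 1].
    Returns a boolean mask allowed[h, s, a] together with Q*_c and V*_c.
    """
    Hh, S, A, _ = P.shape
    assert Hh == H
    Q = np.zeros((H, S, A))
    V = np.zeros((H + 1, S))                  # V[H] = 0: no cost after the horizon
    for h in range(H - 1, -1, -1):            # backward induction, minimizing cost
        Q[h] = c[h] + P[h] @ V[h + 1]         # Q*_c at step h
        V[h] = Q[h].min(axis=1)               # V*_c at step h
    allowed = Q <= V[:H][:, :, None] + 1e-12  # A_s (with a small numerical tolerance)
    return allowed, Q, V

For every step $h$ and state $s$, allowed[h, s] contains at least a cost-greedy action, which mirrors the observation above that $\mathcal{A}_s$ is non-empty.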
We now prove the main claim of Theorem 1:
$$V^{\pi^*}_{r,1}(s_1) - \lambda(s_1) V^{\pi^*}_{c,1}(s_1) \;\ge\; V^{\pi^*}_{r,1}(s_1) - \hat\lambda(s_1) V^{\pi^*}_{c,1}(s_1) \;\ge\; V^{\pi}_{r,1}(s_1) - \hat\lambda(s_1) V^{\pi}_{c,1}(s_1).$$

Proof. Because $V^{\pi^*}_{c,1}(s_1) = 0$ (for an initial state $s_1$ such that the CMDP is feasible), the first inequality is trivial:
$$V^{\pi^*}_{r,1}(s_1) - \lambda(s_1) V^{\pi^*}_{c,1}(s_1) = V^{\pi^*}_{r,1}(s_1) = V^{\pi^*}_{r,1}(s_1) - \hat\lambda(s_1) V^{\pi^*}_{c,1}(s_1).$$
For the second inequality, we use Lemma 1:
\begin{align*}
V^{\pi}_{r,1}(s_1) - \hat\lambda(s_1) V^{\pi}_{c,1}(s_1)
&\le \max_{\pi \in \Delta} V^{\pi}_{r,1}(s_1) - \hat\lambda(s_1) V^{\pi}_{c,1}(s_1) \\
&= \min_{y \ge 0} \max_{\pi \in \Delta} V^{\pi}_{r,1}(s_1) - y V^{\pi}_{c,1}(s_1) \\
&= \max_{\pi \in \mathcal{A}(s_1)} V^{\pi}_{r,1}(s_1) && \text{(by Lemma 1)} \\
&= V^{\pi^*}_{r,1}(s_1) \\
&= V^{\pi^*}_{r,1}(s_1) - \hat\lambda(s_1) V^{\pi^*}_{c,1}(s_1).
\end{align*}

A.1.2 Proof of Corollary 1

Corollary 1. For $\pi^*$ in Theorem 1, it holds that $\mathrm{Regret}(K) = \sum_{k=1}^K V^{\pi^*}_{r,1}(s^k_1) - V^{\pi_k}_{r,1}(s^k_1)$.

Proof. It suffices to prove that $\sum_{k=1}^K V^{\pi^*}_{r,1}(s^k_1) = \max_{\pi \in \Delta_0(K)} \sum_{k=1}^K V^{\pi}_{r,1}(s^k_1)$. By Lemma 1 and under Assumption 1, we notice that $\max_{\pi \in \Delta_0(K)} \sum_{k=1}^K V^{\pi}_{r,1}(s^k_1) = \max_{\pi \in \mathcal{A}(s^k_1), \forall k \in [K]} \sum_{k=1}^K V^{\pi}_{r,1}(s^k_1)$. This is equal to $\sum_{k=1}^K V^{\pi^*}_{r,1}(s^k_1)$ by the definition of $\pi^*$ in the proof of Theorem 1.

A.1.3 Proof of Corollary 2

Corollary 2. For any saddle-point $(\pi^*, \hat\lambda)$ from Theorem 1 to the CMDPs $\max_{\pi \in \Delta} V^\pi_{r,1}(s_1)$, s.t. $V^\pi_{c,1}(s_1) \le 0$, the pair $(\pi^*, \hat\lambda + 1) =: (\pi^*, \lambda^*)$ is also a saddle-point as defined in eq. (6).

Proof. We prove that eq. (6) holds for $(\pi^*, \lambda^*)$, that is,
$$V^{\pi^*}_{r,1}(s_1) - \lambda(s_1) V^{\pi^*}_{c,1}(s_1) \;\ge\; V^{\pi^*}_{r,1}(s_1) - \lambda^*(s_1) V^{\pi^*}_{c,1}(s_1) \;\ge\; V^{\pi}_{r,1}(s_1) - \lambda^*(s_1) V^{\pi}_{c,1}(s_1).$$
Because $V^{\pi^*}_{c,1}(s_1) = 0$ (for an initial state $s_1$ such that the CMDP is feasible), the first inequality is trivial:
$$V^{\pi^*}_{r,1}(s_1) - \lambda(s_1) V^{\pi^*}_{c,1}(s_1) = V^{\pi^*}_{r,1}(s_1) = V^{\pi^*}_{r,1}(s_1) - \lambda^*(s_1) V^{\pi^*}_{c,1}(s_1).$$
For the second inequality, we use Theorem 1:
\begin{align*}
V^{\pi}_{r,1}(s_1) - \lambda^*(s_1) V^{\pi}_{c,1}(s_1)
&\le V^{\pi}_{r,1}(s_1) - \hat\lambda(s_1) V^{\pi}_{c,1}(s_1) \\
&\le V^{\pi^*}_{r,1}(s_1) - \hat\lambda(s_1) V^{\pi^*}_{c,1}(s_1) \\
&= V^{\pi^*}_{r,1}(s_1) - \lambda^*(s_1) V^{\pi^*}_{c,1}(s_1),
\end{align*}
where the first step holds because $V^{\pi}_{c,1}(s_1)$ is by definition in $[0,1]$ and $\lambda^* = \hat\lambda + 1$, and the second step is by Theorem 1.

A.1.4 Proof of Theorem 2

Theorem 2. Under Assumption 1, for any sequences $\{\pi_k\}_{k=1}^K$ and $\{\lambda_k\}_{k=1}^K$, it holds that
\begin{align*}
\mathrm{Regret}(K) &\le R_p(\{\pi_k\}_{k=1}^K, \pi^*) + R_d(\{\lambda_k\}_{k=1}^K, 0), \\
\mathrm{Resets}(K) &\le R_p(\{\pi_k\}_{k=1}^K, \pi^*) + R_d(\{\lambda_k\}_{k=1}^K, \lambda^*),
\end{align*}
where $(\pi^*, \lambda^*)$ is the saddle-point defined in Corollary 2.

We first establish the following intermediate result that will help us with our decomposition.

Lemma 2. For any primal-dual sequence $\{\pi_k, \lambda_k\}_{k=1}^K$, $\sum_{k=1}^K \big( L_k(\pi^*, \lambda') - L_k(\pi_k, \lambda_k) \big) \le R_p(\{\pi_k\}_{k=1}^K, \pi^*)$, where $(\pi^*, \lambda')$ is the saddle-point defined in either Theorem 1 or Corollary 2.

Proof. We derive this lemma from Theorem 1 and Corollary 2. First notice by Theorem 1 and Corollary 2 that, for $\lambda' \in \{\lambda^*, \hat\lambda\}$,
$$\sum_{k=1}^K L_k(\pi^*, \lambda') = \sum_{k=1}^K V^{\pi^*}_{r,1}(s^k_1) - \lambda'(s^k_1) V^{\pi^*}_{c,1}(s^k_1) \le \sum_{k=1}^K V^{\pi^*}_{r,1}(s^k_1) - \lambda_k(s^k_1) V^{\pi^*}_{c,1}(s^k_1) = \sum_{k=1}^K L_k(\pi^*, \lambda_k).$$
Then we can derive
\begin{align*}
\sum_{k=1}^K \big( L_k(\pi^*, \lambda') - L_k(\pi_k, \lambda_k) \big)
&= \sum_{k=1}^K L_k(\pi^*, \lambda') - L_k(\pi^*, \lambda_k) + L_k(\pi^*, \lambda_k) - L_k(\pi_k, \lambda_k) \\
&\le \sum_{k=1}^K L_k(\pi^*, \lambda_k) - L_k(\pi_k, \lambda_k) = R_p(\{\pi_k\}_{k=1}^K, \pi^*),
\end{align*}
which finishes the proof.

We then upper bound $\mathrm{Regret}(K)$ and $\mathrm{Resets}(K)$ by $R_p(\{\pi_k\}_{k=1}^K, \pi_c)$ and $R_d(\{\lambda_k\}_{k=1}^K, \lambda_c)$ for suitable comparators. This decomposition is inspired by the techniques used in Ho-Nguyen and Kılınç-Karzan (2018). We first bound $\mathrm{Resets}(K)$.

Lemma 3. For any primal-dual sequence $\{\pi_k, \lambda_k\}_{k=1}^K$, $\sum_{k=1}^K V^{\pi_k}_{c,1}(s^k_1) \le R_p(\{\pi_k\}_{k=1}^K, \pi^*) + R_d(\{\lambda_k\}_{k=1}^K, \lambda^*)$, where $(\pi^*, \lambda^*)$ is the saddle-point defined in Corollary 2.

Proof. Notice that $\sum_{k=1}^K V^{\pi_k}_{c,1}(s^k_1) = \sum_{k=1}^K L_k(\pi_k, \hat\lambda) - L_k(\pi_k, \lambda^*)$, where $(\pi^*, \hat\lambda)$ is the saddle-point defined in Theorem 1. This is because, as defined, $\lambda^* = \hat\lambda + 1$. Therefore, we bound the right-hand side. We have
\begin{align*}
\sum_{k=1}^K L_k(\pi_k, \hat\lambda) - L_k(\pi_k, \lambda^*)
&= \sum_{k=1}^K L_k(\pi_k, \hat\lambda) - L_k(\pi_k, \lambda_k) + L_k(\pi_k, \lambda_k) - L_k(\pi_k, \lambda^*) \\
&\le \sum_{k=1}^K L_k(\pi^*, \hat\lambda) - L_k(\pi_k, \lambda_k) + L_k(\pi_k, \lambda_k) - L_k(\pi_k, \lambda^*) \\
&\le R_p(\{\pi_k\}_{k=1}^K, \pi^*) + R_d(\{\lambda_k\}_{k=1}^K, \lambda^*),
\end{align*}
where the first inequality holds because $\sum_{k=1}^K L_k(\pi^*, \hat\lambda) \ge \sum_{k=1}^K L_k(\pi_k, \hat\lambda)$ by Theorem 1, and the second inequality follows from Lemma 2 and Definition 1.

Lastly, we bound $\mathrm{Regret}(K)$ with the lemma below and Corollary 1.

Lemma 4. For any primal-dual sequence $\{\pi_k, \lambda_k\}_{k=1}^K$, $\sum_{k=1}^K \big( V^{\pi^*}_{r,1}(s^k_1) - V^{\pi_k}_{r,1}(s^k_1) \big) \le R_p(\{\pi_k\}_{k=1}^K, \pi^*) + R_d(\{\lambda_k\}_{k=1}^K, 0)$, where $(\pi^*, \lambda^*)$ is the saddle-point defined in Corollary 2.

Proof. Note that $L_k(\pi^*, \lambda^*) = L_k(\pi^*, 0)$ since $V^{\pi^*}_{c,1}(s^k_1) = 0$ for all $k \in [K]$. Since, by definition, $L_k(\pi, 0) = V^{\pi}_{r,1}(s^k_1)$ for any $\pi$, we have
\begin{align*}
\sum_{k=1}^K V^{\pi^*}_{r,1}(s^k_1) - V^{\pi_k}_{r,1}(s^k_1)
&= \sum_{k=1}^K L_k(\pi^*, \lambda^*) - L_k(\pi_k, 0) \\
&= \sum_{k=1}^K L_k(\pi^*, \lambda^*) - L_k(\pi_k, \lambda_k) + L_k(\pi_k, \lambda_k) - L_k(\pi_k, 0) \\
&\le R_p(\{\pi_k\}_{k=1}^K, \pi^*) + R_d(\{\lambda_k\}_{k=1}^K, 0),
\end{align*}
where the last inequality follows from Lemma 2 and Definition 1.

A.2 Missing Proofs for Section 5

A.2.1 Proof of Theorem 3

Theorem 3. Under Assumptions 1, 2, and 3, with high probability,
$$\mathrm{Regret}(K) \le \tilde{O}\big( (B+1) \sqrt{d^3 H^4 K} \big) \quad \text{and} \quad \mathrm{Resets}(K) \le \tilde{O}\big( (B+1) \sqrt{d^3 H^4 K} \big).$$

We first bound the regret of $\{\pi_k\}_{k=1}^K$ and of $\{\lambda_k\}_{k=1}^K$, and then use these bounds together with Theorem 2 to prove the bounds on our algorithm's regret and number of resets. We start with the regret of $\{\lambda_k\}_{k=1}^K$.

Lemma 5. Consider $\lambda_c(s) = \langle \xi(s), \theta_c \rangle$ for some $\theta_c \in \mathcal{U}$. Then it holds that
$$R_d(\{\lambda_k\}_{k=1}^K, \lambda_c) \le 1.5 B \sqrt{K} + \sum_{k=1}^K \big( \lambda_k(s^k_1) - \lambda_c(s^k_1) \big) \big( V^k_{c,1}(s^k_1) - V^{\pi_k}_{c,1}(s^k_1) \big).$$

Proof. We notice first an equality:
\begin{align*}
R_d(\{\lambda_k\}_{k=1}^K, \lambda_c)
&= \sum_{k=1}^K L_k(\pi_k, \lambda_k) - L_k(\pi_k, \lambda_c) \\
&= \sum_{k=1}^K \lambda_c(s^k_1) V^{\pi_k}_{c,1}(s^k_1) - \lambda_k(s^k_1) V^{\pi_k}_{c,1}(s^k_1) \\
&= \sum_{k=1}^K \lambda_c(s^k_1) V^{\pi_k}_{c,1}(s^k_1) - \lambda_k(s^k_1) V^{\pi_k}_{c,1}(s^k_1) + \sum_{k=1}^K \lambda_c(s^k_1) V^k_{c,1}(s^k_1) - \lambda_c(s^k_1) V^k_{c,1}(s^k_1) + \lambda_k(s^k_1) V^k_{c,1}(s^k_1) - \lambda_k(s^k_1) V^k_{c,1}(s^k_1) \\
&= \sum_{k=1}^K \big( \lambda_k(s^k_1) - \lambda_c(s^k_1) \big) \big( -V^k_{c,1}(s^k_1) \big) + \sum_{k=1}^K \big( \lambda_k(s^k_1) - \lambda_c(s^k_1) \big) \big( V^k_{c,1}(s^k_1) - V^{\pi_k}_{c,1}(s^k_1) \big).
\end{align*}
We observe that the first term is an online linear problem for $\theta_k$ (the parameter of $\lambda_k(\cdot)$): in episode $k \in [K]$, $\lambda_k$ is played, and then the loss is revealed. Since the space of $\theta_k$ is convex, we can use a standard result (Lemma 3.1 of Hazan et al. (2016)) to show that updating $\theta_k$ through projected gradient descent yields an upper bound on $\sum_{k=1}^K (\lambda_k(s^k_1) - \lambda_c(s^k_1))(-V^k_{c,1}(s^k_1))$. We restate the lemma here.

Lemma 7 (Lemma 3.1 of Hazan et al. (2016)). Let $\mathcal{S} \subseteq \mathbb{R}^d$ be a bounded, closed, and convex set in Euclidean space. Denote by $D$ an upper bound on the diameter of $\mathcal{S}$, and by $G$ an upper bound on the norm of the subgradients of the convex cost functions $f_k$ over $\mathcal{S}$. Using online projected gradient descent to generate the sequence $\{x_k\}_{k=1}^K$ with step sizes $\{\eta_k = \frac{D}{G\sqrt{k}}, \, k \in [K]\}$ guarantees, for all $K \ge 1$,
$$\mathrm{Regret}_K = \max_{x^* \in \mathcal{S}} \sum_{k=1}^K f_k(x_k) - f_k(x^*) \le 1.5\, G D \sqrt{K}.$$

Let us bound $D$. By Assumption 3, $\lambda^*(s) = \langle \xi(s), \theta^* \rangle$ and $\|\theta^*\|_2 \le B$.
Since the comparator we use is $\lambda^*$, we can set $D$ to be $B$. To bound $G$, we observe that the subgradient of our loss function is $\xi(s^k_1) V^k_{c,1}(s^k_1)$ for each $k \in [K]$. Therefore, since $V^k_{c,1}(s^k_1) \in [0,1]$ and $\|\xi(s)\|_2 \le 1$ by Assumption 3, we can set $G$ to be $1$. Applying Lemma 7 with $D = B$ and $G = 1$ bounds the first term by $1.5 B \sqrt{K}$, which completes the proof.
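For intuition, below is a minimal sketch of the projected-gradient dual update that Lemma 7 is applied to, using the per-episode linear loss from the first term in the proof of Lemma 5. The estimated values Vc_hat, the helper names, and the choice of the Euclidean ball of radius $B$ as the parameter set are assumptions of this illustration, not the paper's exact algorithm.

import numpy as np

def project_l2_ball(theta, radius):
    """Project theta onto the Euclidean ball {x : ||x||_2 <= radius}."""
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

def dual_pgd(xi_s1, Vc_hat, B, G=1.0):
    """Online projected gradient descent on the dual parameter theta.

    xi_s1[k]  : feature vector xi(s_1^k) of the k-th initial state.
    Vc_hat[k] : estimated cost value V^k_{c,1}(s_1^k), assumed in [0, 1].
    Per-episode loss: f_k(theta) = -<xi(s_1^k), theta> * Vc_hat[k].
    Returns the parameters theta_k used in each episode.
    """
    K, d = xi_s1.shape
    D = B                                     # diameter bound used in the text
    theta = np.zeros(d)
    thetas = []
    for k in range(1, K + 1):
        thetas.append(theta.copy())           # theta_k is played, then the loss is revealed
        grad = -xi_s1[k - 1] * Vc_hat[k - 1]  # subgradient of f_k at theta_k
        eta = D / (G * np.sqrt(k))            # step size from Lemma 7
        theta = project_l2_ball(theta - eta * grad, B)
    return np.array(thetas)

Lemma 7 then guarantees that the cumulative loss of such an update is within $1.5\,GD\sqrt{K}$ of the best fixed parameter in the ball.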
We now bound the regret of $\{\pi_k\}_{k=1}^K$.

Lemma 6. Consider any $\pi_c$. With high probability,
$$R_p(\{\pi_k\}_{k=1}^K, \pi_c) \le 2H(1 + B + H) + \sum_{k=1}^K V^k_{r,1}(s^k_1) - V^{\pi_k}_{r,1}(s^k_1) + \lambda_k(s^k_1) \big( V^{\pi_k}_{c,1}(s^k_1) - V^k_{c,1}(s^k_1) \big).$$

Proof. First we expand the regret into two terms:
\begin{align*}
R_p(\{\pi_k\}_{k=1}^K, \pi_c)
&= \sum_{k=1}^K L_k(\pi_c, \lambda_k) - L_k(\pi_k, \lambda_k) \\
&= \sum_{k=1}^K V^{\pi_c}_{r,1}(s^k_1) - \lambda_k(s^k_1) V^{\pi_c}_{c,1}(s^k_1) - \big[ V^{\pi_k}_{r,1}(s^k_1) - \lambda_k(s^k_1) V^{\pi_k}_{c,1}(s^k_1) \big] \\
&= \sum_{k=1}^K V^{\pi_c}_{r,1}(s^k_1) - \lambda_k(s^k_1) V^{\pi_c}_{c,1}(s^k_1) - \big[ V^{\pi_k}_{r,1}(s^k_1) - \lambda_k(s^k_1) V^{\pi_k}_{c,1}(s^k_1) \big] \\
&\qquad + \sum_{k=1}^K \big[ V^k_{r,1}(s^k_1) - \lambda_k(s^k_1) V^k_{c,1}(s^k_1) \big] - \big[ V^k_{r,1}(s^k_1) - \lambda_k(s^k_1) V^k_{c,1}(s^k_1) \big] \\
&= \sum_{k=1}^K V^{\pi_c}_{r,1}(s^k_1) - \lambda_k(s^k_1) V^{\pi_c}_{c,1}(s^k_1) - \big[ V^k_{r,1}(s^k_1) - \lambda_k(s^k_1) V^k_{c,1}(s^k_1) \big] \\
&\qquad + \sum_{k=1}^K V^k_{r,1}(s^k_1) - V^{\pi_k}_{r,1}(s^k_1) + \lambda_k(s^k_1) \big( V^{\pi_k}_{c,1}(s^k_1) - V^k_{c,1}(s^k_1) \big).
\end{align*}
To bound the first term, we use Lemma 3 from Ghosh et al. (2022), which characterizes the property of the upper confidence bound. For completeness, we restate the lemma here. (Note that Ghosh et al. (2022) use a utility function rather than a cost function to express the constraint on the MDP; the cost is simply $-1$ times the utility. Also note that their Lemma 3 is proved for an arbitrary initial state sequence and for any comparator, which includes $\pi^*$.)

Lemma 8 (Lemma 3 of Ghosh et al. (2022)). With probability $1 - p/2$, it holds that
$$T_1 = \sum_{k=1}^K \big( V^{\pi_c}_{r,1}(s^k_1) - \lambda_k V^{\pi_c}_{c,1}(s^k_1) \big) - \big( V^k_{r,1}(s^k_1) - \lambda_k V^k_{c,1}(s^k_1) \big) \le K H \log(|A|) / \alpha.$$
Hence, for $\alpha = \frac{K \log(|A|)}{2(1 + C + H)}$, we have $T_1 \le 2H(1 + C + H)$, where $C$ is a constant such that $\lambda_k \le C$.
In our problem setting, we can set $C = B$ in the lemma above. Therefore, the first term is bounded by $2H(1 + B + H)$.

Lastly, we derive a bound on $R_d(\{\lambda_k\}_{k=1}^K, \lambda_c) + R_p(\{\pi_k\}_{k=1}^K, \pi_c)$, which directly implies our final upper bound on $\mathrm{Regret}(K)$ and $\mathrm{Resets}(K)$ in Theorem 3 by Theorem 2.

Lemma 9. For any $\pi_c$ and $\lambda_c(s) = \langle \xi(s), \theta_c \rangle$ such that $\|\theta_c\| \le B$, we have, with probability $1 - p$,
$$R_d(\{\lambda_k\}_{k=1}^K, \lambda_c) + R_p(\{\pi_k\}_{k=1}^K, \pi_c) \le 1.5 B \sqrt{K} + 2H(1 + B + H) + O\big( (B+1) \sqrt{d^3 H^4 K \iota^2} \big),$$
where $\iota = \log[\log(|A|) 4 d K H / p]$.

Proof. Combining the upper bounds in Lemma 5 and Lemma 6, we have
\begin{align*}
R_d(\{\lambda_k\}_{k=1}^K, \lambda_c) + R_p(\{\pi_k\}_{k=1}^K, \pi_c)
&\le 1.5 B \sqrt{K} + \sum_{k=1}^K \big( \lambda_k(s^k_1) - \lambda_c(s^k_1) \big) \big( V^k_{c,1}(s^k_1) - V^{\pi_k}_{c,1}(s^k_1) \big) \\
&\qquad + 2H(1 + B + H) + \sum_{k=1}^K V^k_{r,1}(s^k_1) - V^{\pi_k}_{r,1}(s^k_1) + \lambda_k(s^k_1) \big( V^{\pi_k}_{c,1}(s^k_1) - V^k_{c,1}(s^k_1) \big) \\
&= 1.5 B \sqrt{K} + 2H(1 + B + H) + \sum_{k=1}^K V^k_{r,1}(s^k_1) - V^{\pi_k}_{r,1}(s^k_1) + \lambda_c(s^k_1) \big( V^{\pi_k}_{c,1}(s^k_1) - V^k_{c,1}(s^k_1) \big),
\end{align*}
where the last term is the overestimation error due to optimism. To bound this term, we use Lemma 4 from Ghosh et al. (2022), restated here.

Lemma 10 (Lemma 4 of Ghosh et al. (2022)). With probability at least $1 - p/2$, for any $\lambda \in [0, C]$,
$$\sum_{k=1}^K \big( V^k_{r,1}(s^k_1) - V^{\pi_k}_{r,1}(s^k_1) \big) + \lambda \sum_{k=1}^K \big( V^{\pi_k}_{c,1}(s^k_1) - V^k_{c,1}(s^k_1) \big) \le O\big( (\lambda + 1) \sqrt{d^3 H^4 K \iota^2} \big),$$
where $\iota = \log[\log(|A|) 4 d K H / p]$.
Since $\lambda_c(s^k_1)$ is bounded by $B$ for all $k \in [K]$, this yields a bound of $O\big( (B+1) \sqrt{d^3 H^4 K \iota^2} \big)$ on the last term, which completes the proof of Lemma 9. Finally, applying Lemma 9 with the comparators $(\pi^*, 0)$ and $(\pi^*, \lambda^*)$ (both of which satisfy $\|\theta_c\| \le B$) and invoking Theorem 2 gives the bounds on $\mathrm{Regret}(K)$ and $\mathrm{Resets}(K)$ stated in Theorem 3.
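To get a rough sense of the scaling, the following purely illustrative computation evaluates the explicit terms of the bound in Lemma 9, treating the constant hidden in the $O(\cdot)$ as $1$ (an assumption made only for this illustration) and printing the per-episode value for increasing $K$; it says nothing about the actual constants in the analysis.

import numpy as np

def lemma9_bound(K, B=1.0, d=10, H=20, num_actions=4, p=0.05, c0=1.0):
    """Evaluate 1.5*B*sqrt(K) + 2*H*(1+B+H) + c0*(B+1)*sqrt(d^3 * H^4 * K * iota^2),
    with c0 standing in for the unspecified constant hidden in the O(.) term."""
    iota = np.log(np.log(num_actions) * 4 * d * K * H / p)
    return (1.5 * B * np.sqrt(K)
            + 2 * H * (1 + B + H)
            + c0 * (B + 1) * np.sqrt(d**3 * H**4 * K * iota**2))

for K in [10**3, 10**4, 10**5, 10**6]:
    print(K, lemma9_bound(K) / K)   # the per-episode bound shrinks as K grows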