diff --git "a/8NE4T4oBgHgl3EQf2w06/content/tmp_files/load_file.txt" "b/8NE4T4oBgHgl3EQf2w06/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/8NE4T4oBgHgl3EQf2w06/content/tmp_files/load_file.txt" @@ -0,0 +1,584 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE4T4oBgHgl3EQf2w06/content/2301.05300v1.pdf,len=583 +page_content='DEEP REINFORCEMENT LEARNING FOR ASSET ALLOCATION: REWARD CLIPPING Jiwon Kim SK Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE4T4oBgHgl3EQf2w06/content/2301.05300v1.pdf'} +page_content='(SK C&C) kjiwon831@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE4T4oBgHgl3EQf2w06/content/2301.05300v1.pdf'} +page_content='com MOON-JU KANG sktel1020@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE4T4oBgHgl3EQf2w06/content/2301.05300v1.pdf'} +page_content='com KangHun Lee SK Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE4T4oBgHgl3EQf2w06/content/2301.05300v1.pdf'} +page_content='(SK C&C) potter0923@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE4T4oBgHgl3EQf2w06/content/2301.05300v1.pdf'} +page_content='com HyungJun Moon SK Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE4T4oBgHgl3EQf2w06/content/2301.05300v1.pdf'} +page_content='(SK C&C) alladdgg@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE4T4oBgHgl3EQf2w06/content/2301.05300v1.pdf'} +page_content='com BO-KWAN JEON SK Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE4T4oBgHgl3EQf2w06/content/2301.05300v1.pdf'} +page_content='(SK C&C) bk_jeon@sk.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE4T4oBgHgl3EQf2w06/content/2301.05300v1.pdf'} +page_content='com ABSTRACT Recently, there are many trials to apply reinforcement learning in asset allocation for earning more stable profits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE4T4oBgHgl3EQf2w06/content/2301.05300v1.pdf'} +page_content=' In this paper, we compare performance between several reinforcement learning algorithms - actor-only, actor-critic and PPO models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE4T4oBgHgl3EQf2w06/content/2301.05300v1.pdf'} +page_content=' Furthermore, we analyze each models’ character and then introduce the advanced algorithm, so called Reward clipping model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE4T4oBgHgl3EQf2w06/content/2301.05300v1.pdf'} +page_content=' It seems that the Reward Clipping model is better than other existing models in finance domain, especially portfolio optimization - it has strength both in bull and bear markets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE4T4oBgHgl3EQf2w06/content/2301.05300v1.pdf'} +page_content=' Finally, we compare the performance for these models with traditional investment strategies during decreasing and increasing markets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE4T4oBgHgl3EQf2w06/content/2301.05300v1.pdf'} +page_content=' Keywords DEEP REINFORCEMENT LEARNING · PORTFOLIO MANAGEMENT · POLICY GRADIENT · PROXIMAL POLICY OPTIMIZATION · REWARD CLIPPING 1 Introduction In recent years, AI algorithms-deep or machine learnings are used in financial market for various fields like stock prediction, auto trading, deep hedging, [1].' 
At the same time, after passing through the bull market that followed COVID-19, we are currently experiencing difficulties in dealing with a market shaped by inflation and rising interest rates. In this situation, asset allocation (portfolio optimization) by robo-advisors using reinforcement learning is in the spotlight as a way to obtain more return with less risk. Zhipeng Liang et al. implemented three reinforcement learning algorithms (DDPG, PPO, and Adversarial PG) for portfolio management in [2]; they showed that the PG algorithm outperforms UCRP in the Chinese stock market. Also, in [3], Farzan Soleymani and Eric Paquet presented a deep reinforcement learning approach for portfolio management that combines a restricted stacked autoencoder with a convolutional neural network; there, a SARSA algorithm reinforced with a CNN is applied. Jung Hoon Kim proposed reinforcement learning that takes short positions, especially in downward trends of stock markets ([4]).

This paper is composed of three parts. First, we run three existing reinforcement learning algorithms: actor-only, actor-critic, and PPO. Several studies have shown that PPO has potential in portfolio management [5], [6]; in particular, Amine Mohamed Aboussalah et al. analyzed the stability of several RL models, including PPO, with a cross-sectional analysis ([7]). All three of our models are based on the policy gradient method.
Note that, following [8, Chapter 13], in a policy gradient method the objective function is defined by

    J(\theta) = \sum_{s \in S} \mu_\pi(s) \sum_{a \in A} \pi_\theta(a \mid s) \, q_\pi(s, a)    (1)

where \mu_\pi(s) = \lim_{t \to \infty} P(s_t = s \mid s_0, \pi_\theta) is the stationary distribution of states under the policy \pi_\theta. Furthermore (see [8]),

    \nabla_\theta J(\theta) \propto \sum_{s \in S} \mu_\pi(s) \sum_{a \in A} q_\pi(s, a) \, \nabla_\theta \pi_\theta(a \mid s).    (2)

Our actor-critic model follows this policy gradient method, and the actor-only model consists of only the actor part of the actor-critic model. Also, following [9], for the PPO algorithm we use

    L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\big[\min\big(r_t(\theta)\hat{A}_t,\ \mathrm{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon)\hat{A}_t\big)\big]    (3)

where \hat{\mathbb{E}}_t denotes the empirical average over a finite batch of samples in an algorithm that alternates between sampling and optimization, and \hat{A}_t is an estimator of the advantage function at timestep t. In addition,

    L^{CLIP'}(\theta) = \hat{\mathbb{E}}_t\big[L^{CLIP}(\theta) - c_1 (V_\theta(s) - V_{\mathrm{target}})^2 + c_2 H(s, \pi_\theta)\big]    (4)

where c_1 and c_2 are hyperparameter constants. Equation (4) means that when PPO is applied to the policy (actor) as well as the value (critic) function, besides the clipped objective, the loss is augmented with an error term on the value estimate and an entropy term that incentivizes sufficient exploration [6].
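To make the clipped objective of equations (3)-(4) concrete, the following is a minimal PyTorch-style sketch; the function name, tensor arguments, and coefficient defaults are illustrative assumptions, not the exact implementation used in the experiments.

import torch

def ppo_loss(log_probs_new, log_probs_old, advantages, values, value_targets,
             entropy, clip_eps=0.2, c1=0.5, c2=0.01):
    """Minimal sketch of the PPO clipped surrogate loss of Eqs. (3)-(4).

    All arguments are 1-D tensors over a batch of timesteps; clip_eps, c1, c2
    are illustrative hyperparameter values (assumptions, not from the paper).
    """
    # Probability ratio r_t(theta) = pi_theta(a|s) / pi_theta_old(a|s)
    ratio = torch.exp(log_probs_new - log_probs_old)
    # Clipped surrogate objective, Eq. (3)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    policy_obj = torch.min(unclipped, clipped).mean()
    # Value error and entropy bonus, Eq. (4)
    value_err = ((values - value_targets) ** 2).mean()
    total_obj = policy_obj - c1 * value_err + c2 * entropy.mean()
    # Return a loss to minimize (negative of the objective to maximize)
    return -total_obj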
Second, after checking the performance of the three existing RL models and analyzing their characteristics, we introduce a modified model, which we call the Reward Clipping model. When we test the three RL models, the actor-only and actor-critic models show high risk and high return: they achieve high profitability in a bull market but also suffer large losses in a bear market. The PPO model moves the opposite way: it defends well when the stock market is falling, but it cannot earn enough return when the market is rising. We therefore combine these models to keep only their advantages: the resulting model earns a high return during a bull market while also defending well in a bear market. For this, we use a modified PPO objective:

    L^{CLIP}_{\mathrm{NEW}}(\theta) = \hat{\mathbb{E}}_t\big[\min\big(\hat{A}_t,\ \mathrm{clip}(\hat{A}_t, 1-\epsilon_1, 1+\epsilon_2)\big)\big]    (5)

In the original PPO algorithm (equation (3)), clipping is applied to the probability ratio r_t(\theta) = \pi_\theta(a_t \mid s_t) / \pi_{\theta_{old}}(a_t \mid s_t), i.e., in our case to the proportion of each asset (product). But in a financial market, stability is needed for the advantages (return, MDD, and so on), not for the portfolio weights. Furthermore, since a larger return and Sharpe ratio are better, we set different values \epsilon_1 and \epsilon_2 as lower and upper bounds. We therefore modify the clipping logic of PPO into equation (5). This is more intuitive: by controlling the advantage function directly, we obtain an immediate effect on our rewards, and it appears to be a better fit for the finance domain.
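As a rough illustration of equation (5), the sketch below applies the clip to the advantage estimate itself rather than to the probability ratio; the function and the default bound values are assumptions for illustration, not the authors' code.

import torch

def reward_clip_objective_eq5(advantages, eps1=0.4, eps2=0.4):
    """Direct transcription of Eq. (5): the clip is applied to the advantage
    estimate itself instead of to the PPO probability ratio.

    eps1 (lower) and eps2 (upper) are separate bounds; the 0.4 defaults mirror
    the RC_-0.4_0.4 setting discussed later and are assumptions, not tuned values.
    """
    clipped = torch.clamp(advantages, 1.0 - eps1, 1.0 + eps2)
    # Elementwise minimum of raw and clipped advantages, then batch average
    return torch.min(advantages, clipped).mean()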
Finally, we compare the performance of the RL models with traditional quant investment strategies: the All Weather portfolio, a 6:4 (equity:bond) split, and equal weights. These results suggest a direction for our RL models and support the use of AI models in financial portfolio optimization.

In the following experiments, we use two sets of products. The first set is composed of 68 products: 22 in European, Korean, and US bonds, 44 in US, European, Korean, and Japanese equities, and 2 in gold. From this set we can see that the RL models provide not only an optimal asset allocation but also a product selection. The second set consists of 25 products: 16 in US and Korean stocks, 4 in intermediate-term treasuries, 2 in long-term treasuries, 2 in commodities including REITs, and 1 in gold. With the second product set, we compare the performance of the RL models to the All Weather portfolio strategy.

2 Existing Models

In this section, we implement three existing methodologies, actor-only, actor-critic, and PPO, for asset optimization. We show how each model behaves, especially in terms of return, Sharpe ratio, standard deviation, and MDD.

2.1 Construction and Experiments

In our experiments, a state consists of previous closing prices, volumes, and other financial indicators over a fixed window, and an action is the desired allocation weights. The actor-only model is the actor part of the actor-critic model, and the PPO model is obtained from the actor-critic model by replacing the actor part with the PPO algorithm. Hence all three models share the same architecture for the actor part. Figure 1 shows the common architecture of the three models; the output after the softmax is the proportion allocated to each asset.

Figure 1: Architecture (per-asset CNN blocks with Conv1, Conv2, batch normalization, max pooling, and dropout, followed by fully connected layers and a softmax over assets).
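A minimal PyTorch sketch of an actor network of this shape (per-asset convolutional feature extraction, fully connected layers, and a softmax over assets) is given below; the layer sizes, channel counts, and the 30-day window are illustrative assumptions, since the exact hyperparameters are not reported here.

import torch
import torch.nn as nn

class ActorNetwork(nn.Module):
    """Sketch of the shared actor: CNN feature extractor per asset,
    fully connected head, softmax output = portfolio weights.
    Layer sizes and the window length are assumptions for illustration."""

    def __init__(self, n_assets: int, n_features: int = 4, window: int = 30):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1),  # Conv1
            nn.Conv1d(16, 32, kernel_size=3, padding=1),          # Conv2
            nn.BatchNorm1d(32),
            nn.MaxPool1d(2),
            nn.Dropout(0.2),
            nn.Flatten(),
        )
        self.fc = nn.Sequential(
            nn.Linear(32 * (window // 2), 64),  # Fc1
            nn.ReLU(),
            nn.Linear(64, 1),                   # per-asset score
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_assets, n_features, window)
        b, n, f, w = x.shape
        scores = self.fc(self.conv(x.reshape(b * n, f, w))).view(b, n)
        return torch.softmax(scores, dim=-1)  # long-only weights summing to 1

Each asset is scored independently and the softmax turns the scores into the allocation weights.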
Note that the Q-value function is estimated for action values using a function approximator with weight vector \theta, Q(s, a; \theta). DQN iteratively improves an estimate of Q^* by minimizing the sequence of loss functions

    L_i(\theta_i) = \mathbb{E}_{s,a,r,s'}\big[(y_i^{DQN} - Q(s, a; \theta_i))^2\big],    (6)

with

    y_i^{DQN} = r + \gamma \max_{a'} Q(s', a'; \theta_{i-1}).    (7)

Harm van Seijen et al. proposed in [10] to decompose the reward function R_{\mathrm{env}} into n reward functions (see Figure 1 in [10]),

    R_{\mathrm{env}}(s, a, s') = \sum_{k=1}^{n} R_k(s, a, s'),    (8)

for all s, a, s', and to train a separate reinforcement learning agent on each of these reward functions. The associated sequence of loss functions is

    L_i(\theta_i)' = \mathbb{E}_{s,a,r,s'}\Big[\sum_{k=1}^{n} (y_{k,i} - Q_k(s, a; \theta_i))^2\Big],    (9)

with

    y_{k,i} = R_k(s, a, s') + \gamma \sum_{a' \in A} \frac{1}{|A|} Q_k(s', a'; \theta_{i-1}).    (10)

(See [10].) Here, we use return, Sharpe ratio, and an anti-bias term as our rewards.
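The exact combination of the three reward terms is not spelled out above, so the sketch below only illustrates the decomposition idea of equation (8) with plausible stand-in definitions; the return and Sharpe terms are standard, while the anti-bias term and the coefficients are guesses labeled as such.

import numpy as np

def decomposed_reward(weights: np.ndarray, daily_returns: np.ndarray,
                      coeffs=(1.0, 1.0, 1.0)) -> float:
    """Illustration of Eq. (8): the environment reward as a sum of components.

    weights:       portfolio weights for the rebalancing period, shape (n_assets,)
    daily_returns: asset returns over the period, shape (n_days, n_assets)
    The return and Sharpe terms are standard definitions; the anti-bias term
    (penalizing concentration in a single asset) is only a guess, since the
    paper names it without defining it. Coefficients are assumptions as well.
    """
    portfolio_daily = daily_returns @ weights          # daily portfolio returns
    r_return = portfolio_daily.mean()                  # average daily return
    r_sharpe = r_return / (portfolio_daily.std() + 1e-8)
    r_antibias = -np.square(weights).sum()             # guessed concentration penalty
    c1, c2, c3 = coeffs
    return c1 * r_return + c2 * r_sharpe + c3 * r_antibias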
2.2 Experimental Results

The following are the results for the three RL models: actor-only, actor-critic (AC), and PPO. We train the models on data from 2010-01-01 to 2019-06-10 and test them from 2019-07-18 to 2021-06-16. We want to observe how the models behave during the sharp drawdown and subsequent rise of the stock market around COVID-19. For this experiment the first data set is used (a product selection out of the 68 products is also reflected). From Table 1 and Figure 2 we can see that the actor-only and AC models have a large drawdown (MDD), but AC achieves a good return. The PPO model, on the other hand, has a smaller MDD than the other two models, but a smaller return as well.

Table 1: Performance of actor-only vs. AC vs. PPO

Model           Annual Return (%)   Sharpe Ratio   Standard Deviation   MDD (%)   Sortino
Actor-only      13.61               0.8068         0.1670               24.65     1.1432
Actor-critic    18.64               1.0635         0.1616               27.12     1.6766
PPO             10.25               1.0160         0.0966               18.36     1.4575
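For reference, a compact sketch of how such evaluation metrics are commonly computed from a daily portfolio-value series is shown below; the exact formulas (risk-free rate, annualization factor) are not given in the paper, so these standard definitions are assumptions.

import numpy as np

def performance_metrics(values: np.ndarray, periods_per_year: int = 252) -> dict:
    """Standard metric definitions (assumed, not necessarily the paper's exact ones).

    values: daily portfolio values, shape (n_days,).
    """
    rets = values[1:] / values[:-1] - 1.0                     # daily returns
    ann_return = (values[-1] / values[0]) ** (periods_per_year / len(rets)) - 1.0
    ann_std = rets.std() * np.sqrt(periods_per_year)
    sharpe = rets.mean() / (rets.std() + 1e-12) * np.sqrt(periods_per_year)
    downside = rets[rets < 0].std() * np.sqrt(periods_per_year)
    sortino = rets.mean() * periods_per_year / (downside + 1e-12)
    running_max = np.maximum.accumulate(values)
    mdd = ((running_max - values) / running_max).max()        # maximum drawdown
    return {"annual_return": ann_return, "sharpe": sharpe,
            "std": ann_std, "mdd": mdd, "sortino": sortino}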
Figure 2: Actor-only vs. AC vs. PPO (cumulative portfolio value over the test period 2019-07-18 to 2021-06-16).

Figure 2 also reveals some patterns for these three models. As we can see in the graph, the plain policy gradient actor (used in the AC and actor-only models) produces a large drop. Comparing AC with actor-only, the critic part helps the model earn more return, although it cannot defend against the crash. The PPO model, on the other hand, shows a smooth movement in its returns. From this we infer that the AC model suits a bull market while the PPO model is good for a bear market.

3 Reward Clipping Model

As seen in the previous section, the actor-critic and PPO algorithms each have their own characteristics. The former is strong in a rising market but fails to defend in a depressed stock market, while the PPO algorithm behaves the other way around. Here we introduce a new algorithm, the Reward Clipping model, which is strong in both rising and falling stock markets.

3.1 Idea

Note that the PPO clipping ([9]) ensures that the update is not too large, i.e., that the old policy does not differ too much from the new policy. We believe this is what makes PPO move smoothly: the clipping is applied to the main object, in our case the proportion of each asset in the portfolio. But in a financial market, and especially in asset allocation, a large change in the asset proportions between the old and new portfolios is not a problem as long as the benefit is large enough that the turnover can be ignored. Since our main objective is the return or the Sharpe ratio, even though our output is the portfolio, we apply the clipping logic to our rewards rather than to the portfolio itself.
In a simple experiment (not with the full product set), we can see the effect of the upper and lower bounds in reward clipping (see Figure 3). Here, RC_-0.4_0.4 denotes the reward clipping model with both a lower and an upper bound, at -0.4 and 0.4 respectively. RC_0.4 denotes the reward clipping model with only an upper bound of 0.4, meaning its rewards are unrestricted on the downside. Similarly, RC_-0.4 has only a lower bound of -0.4 (no upper bound, so the reward can move upward freely). As the results in Figure 3 show, models with an upper clipping bound on their rewards seem to hit a ceiling and cannot earn enough return. On the other hand, the model with no upper clipping bound (lower bound only, RC_-0.4) rises more (earns more profit) than the other models while falling less.

Figure 3: Reward clipping upper and lower bound effects (cumulative portfolio value of RC_-0.4_0.4, RC_-0.4, and RC_0.4 over 2019-06-10 to 2021-07-30).
Hence we conclude that if we do not set an upper bound on the reward clipping, we can obtain more reward, and we can prevent reward losses by setting a lower bound. In particular, we find that if the model has an upper bound on its reward clipping, it does not converge. Since the model is constructed to pursue more reward (higher is better), imposing an upper bound appears to conflict with the model's objective. Figure 4 shows the convergence curves for the three models: the leftmost is RC_-0.4_0.4, the middle is RC_-0.4, and the rightmost is RC_0.4.

Figure 4: Convergence for RC_-0.4_0.4, RC_-0.4, and RC_0.4 (reward versus training step).

3.2 Construction

The basic construction of the Reward Clipping model is the same as in Figure 1. The only difference is that we apply the clipping logic of PPO to the reward part of the actor-only model. For example, for the return reward, our formula is \max(\mathrm{avg}(\sum_{i=1}^{n} W_i \cdot \mathrm{daily\_return}_i)), where W_i is the weight of asset i and \mathrm{daily\_return}_i = (A_{i,t}, ..., A_{i,t+T}) with A_{i,t} the daily return of asset i at time t.
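A small sketch of this return reward is given below; mapping max(avg(...)) onto code requires interpretation, so the averaging over the holding window shown here is an assumption about the formula.

import numpy as np

def return_reward(weights: np.ndarray, daily_returns: np.ndarray) -> float:
    """Sketch of the return reward: weighted daily asset returns, averaged
    over the holding window. The outer max() in the paper's formula is the
    quantity being maximized by training, so it is not part of this function.

    weights:       W_i, shape (n_assets,), summing to 1
    daily_returns: A_{i,t..t+T}, shape (T, n_assets)
    """
    weighted = daily_returns @ weights   # sum_i W_i * A_{i,t} for each day
    return float(weighted.mean())        # avg over the window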
The following pseudo-code shows which parts are modified in going from the original PPO algorithm to the Reward Clipping algorithm.
Algorithm 1 PPO-Clip
1: for iteration = 1, 2, ... do
2:     for actor = 1, 2, ..., N do
3:         Run policy \pi_{\theta_{old}} in the environment for T time steps
4:         Compute advantage estimates \hat{A}_1, ..., \hat{A}_T, where \hat{A}_t = W_i \cdot A_{i,t}
5:     end for
6:     Update the policy by maximizing the PPO-Clip objective:
           \theta_{k+1} = \arg\max_\theta \frac{1}{T} \sum_{t=0}^{T} \min\Big(\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_k}(a_t \mid s_t)} A^{\pi_{\theta_k}}(s_t, a_t),\ g(\epsilon, A^{\pi_{\theta_k}}(s_t, a_t))\Big)
7:     Optimize the surrogate L w.r.t. \theta, with K epochs and minibatch size M ≤ NT
8:     \theta_{old} ← \theta
9: end for

Algorithm 2 Reward-Clip
1: for iteration = 1, 2, ... do
2:     for actor = 1, 2, ..., N do
3:         Run policy \pi_{\theta_{old}} in the environment for T time steps
4:         Compute advantage estimates \hat{A}_1, ..., \hat{A}_T, where \hat{A}_t = W_i \cdot A_{i,t}
5:     end for
6:     Update the policy by maximizing the Reward-Clip objective:
           \theta_{k+1} = \arg\max_\theta \frac{1}{T} \sum_{t=0}^{T} \min\Big(\frac{A_t}{A_{t-1}}, \epsilon_1, \epsilon_2\Big)    (11)
       where \epsilon_1 and \epsilon_2 are the lower and upper bounds.
7:     Optimize the surrogate L w.r.t. \theta, with K epochs and minibatch size M ≤ NT
8:     \theta_{old} ← \theta
9: end for
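As a rough sketch of the Reward-Clip objective in equation (11), the snippet below clips the ratio of consecutive rewards between the lower and upper bounds before averaging; reading min(A_t/A_{t-1}, epsilon_1, epsilon_2) as a two-sided clip is an interpretation of the pseudo-code, and the bound values are assumptions.

import numpy as np

def reward_clip_objective(rewards: np.ndarray, eps1: float = 0.6,
                          eps2: float = 1.4) -> float:
    """Sketch of the Reward-Clip objective of Eq. (11) / Algorithm 2.

    rewards: A_t for t = 0..T, shape (T+1,). The consecutive ratio A_t/A_{t-1}
    is clipped between eps1 (lower) and eps2 (upper) and averaged; both the
    clip interpretation and the default bounds are assumptions for illustration.
    """
    ratios = rewards[1:] / (rewards[:-1] + 1e-12)
    return float(np.clip(ratios, eps1, eps2).mean())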
Equation (11) in Algorithm 2 is the biggest change in our new model. Note that in the PPO algorithm the clipped object is the result of the action (the portfolio), whereas the Reward-Clip object is the reward. Since we apply reward clipping to the actor-only model, the critic part is excluded from the pseudo-code above. Based on the simple experiment introduced in the previous section, we only consider the model with a lower clipping bound on its rewards.

3.3 Experimental Results and Model Comparisons

In the next two subsections we present two experimental results and comparisons. To show that the Reward Clipping model is strong in a bear market while still earning enough profit in a bull market, we run two experiments over two periods, a falling market and a rising market, and compare its performance with the other models.

3.3.1 Reward Clipping in a falling market

Here we check the effect of the Reward Clipping model in a falling market. We train the model on data from 2010-01-01 to 2021-06-10 and test it from 2021-07-26 to 2022-07-22; this test period is chosen to see how the reward clipping model with a lower bound behaves in the current market situation. Since we apply the reward clipping logic to the actor-only model, we compare against the actor-only model to isolate the effect of lower-bounded reward clipping. As Figure 5 shows, reward clipping with a lower bound (and no upper bound) is effective in a falling market while achieving the same return as the actor-only model when the market is rising.
The details are given in Table 2. To show the market trend (the degree of decline), we also include the KOSPI and S&P500 indices.

Table 2: Reward Clipping effect in a falling market

Model             Annual Return (%)   Sharpe Ratio   Standard Deviation   MDD (%)   Sortino
Actor-only        6.95                0.3616         0.1633               20.86     0.5544
Reward Clipping   4.21                0.2809         0.1256               14.54     0.4422
KOSPI             24.80               1.6585         0.1653               30.13     2.5470
S&P500            10.02               0.4394         0.1972               23.39     0.6689
Figure 5: Reward Clipping effect in a falling market (cumulative portfolio value of Actor-only, Reward Clipping, KOSPI, and S&P500 over 2021-07-26 to 2022-07-22).

In this falling-market test, in order to compare the results with the All Weather portfolio (in the next section), we use the second set of products (16 products in equity, 6 in bonds, 2 in commodities, and 1 in gold). From the MDD and Sortino (and annual return) in Table 2, we can see that the reward clipping model with a lower bound defends well in a falling market. Figure 6 shows the proportion of asset classes for the actor-only and RC models.

Figure 6: Proportion of asset classes in the bear market (monthly weights per asset class: COMMODITIES_MT, COMMODITIES_REITS, EQUITY-KR, EQUITY-US, GOLD, ITBOND, LTBOND).

Here we can see that the RC model defends against the bear market better than the actor-only model (especially after April 2022) by increasing the portion of intermediate-term bonds (ITBOND).

3.3.2 Model Comparison for four models

Table 3 and Figure 7 compare four models: actor-only, AC, PPO, and Reward Clipping. Since Section 2.2 already reported the results for the three existing models, we only add the performance of the Reward Clipping model.
Table 3: Comparison of the four models

Model             Annual Return (%)   Sharpe Ratio   Standard Deviation   MDD (%)   Sortino
Actor-only        13.61               0.8068         0.1670               24.65     1.1432
Actor-critic      18.64               1.0635         0.1616               27.12     1.6766
PPO               10.25               1.0160         0.0966               18.36     1.4575
Reward Clipping   18.45               1.2746         0.1301               21.45     2.0391

Figure 7: Comparison of the four models (cumulative portfolio value over the test period 2019-07-18 to 2021-06-16).

Comparing the actor-only and Reward Clipping models in Figure 7, we can see that Reward Clipping has a smaller drawdown but gains more when the market rises. This can also be checked in Table 3 by comparing MDD, Sortino, and annual return: the Reward Clipping model has a smaller MDD but a larger annual return and Sortino than the actor-only model. Its lowest point is almost the same as PPO's, while its final peak matches AC's.
In other words, it has the same upside strength as actor-critic and the same defensive power as the PPO algorithm. This means that by clipping the reward in the actor-only model, we obtain the advantages of both the actor-critic and PPO algorithms: strength in both rising and falling stock markets. Furthermore, as Figure 4 shows, the reward clipping model with a lower bound does not require many resources (it actually turns out to require fewer resources than the PPO model), so it also saves computation and time.

Figure 8 shows the change in the proportions of the asset classes; note that we rebalance regularly every month. As we can see in Figure 8, the three existing models (actor-only, actor-critic, and PPO) move in a stable way; in particular, PPO shows an almost constant allocation, close to equal weighting. The Reward Clipping model, by contrast, moves actively, which we suppose is the basis of its good performance in both bull and bear markets. Moreover, since the PPO model needs more resources (for example, time to converge), the results above show not only the good performance but also the resource effectiveness (Figure 4) of the reward clipping model.

Figure 8: Proportion of asset classes in the bull market.

4 Further work

There is still much interesting further work on deep reinforcement learning for asset allocation. First, we deal only with ETFs (Exchange Traded Funds), since each of them tracks a representative index that the AI models can be trained on; this also lets us include recently launched ETFs even though they do not have enough history to train a model on directly.
However, many financial corporations and customers want the product universe extended to other financial products such as individual stocks and funds. Secondly, in this paper we applied the reward clipping algorithm to the actor-only model, so the natural next step is to apply it to the actor-critic model. Since the actor-critic model earns a higher return than the actor-only model with a similar drawdown, we expect that an actor-critic model with lower-bounded reward clipping, which defends against its falls, will perform even better (a minimal sketch of such a lower bound is given at the end of this section). Thirdly, in our tests we rebalance regularly every month, but in a real deployment a risk management system is also a necessary requirement; there have already been several attempts to apply AI to detect and react to risk. In [11], Yang-Yu Liu et al. describe the phenomenon called "loss aversion": people are much more sensitive to losses than to gains of the same magnitude, and this affects individual decision-making and portfolio asset prices in financial markets ([12], [13]). Building on these prior results, Qing Yang Eddy Lim et al. provide an alternative view on maximising portfolio returns with RL by considering dynamic risks appropriate to market conditions through dynamic portfolio rebalancing ([14]). Finally, we can still try other RL algorithms. Although we restricted ourselves to the Actor-critic and PPO models because of resource limitations, there are many other attempts to apply RL algorithms to asset allocation ([15]); in [16], Ricard Durall evaluates nine different algorithms, including A2C, PPO, DDPG, SAC and TD3.
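As a pointer for the second item above, the sketch below shows one way a lower-bounded reward clip can be applied to the per-step reward of a policy-gradient (actor-only or actor-critic) agent. The reward definition (the portfolio log-return over a rebalancing step) and the bound value are illustrative assumptions, not the exact settings used in our experiments.

    import numpy as np

    def clip_reward(raw_reward, lower_bound=-0.02, upper_bound=None):
        """Lower-bounded reward clipping (bound values are illustrative only).

        Rewards below lower_bound are truncated, limiting how strongly a single
        large loss can pull on the policy gradient; an optional upper bound
        would clip gains as well.
        """
        r = max(raw_reward, lower_bound)
        if upper_bound is not None:
            r = min(r, upper_bound)
        return r

    def step_reward(weights, price_relatives, lower_bound=-0.02):
        """Clipped log-return of the portfolio over one rebalancing step.

        price_relatives[i] = p_t[i] / p_{t-1}[i] for each asset i.
        """
        portfolio_growth = float(np.dot(weights, price_relatives))
        return clip_reward(np.log(portfolio_growth), lower_bound=lower_bound)

Because only the reward signal changes, the same clip can be dropped into either the actor-only or the actor-critic training loop without touching the network architecture.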
5 Conclusion

In this paper we apply deep reinforcement learning algorithms to portfolio optimization. First, we compare the performance of the existing models (Actor-only, Actor-critic and PPO) and analyze the characteristics of the three models. We then introduce a new model that keeps only the strengths of each of them: the Reward Clipping model, which achieves an outstanding return in a bull market while also defending well in a bear market. To assess its performance, we compare the models with traditional approaches: Equal Weight, 6:4 (equity:bond) and the All-Weather portfolio ([17]). We apply the All-Weather portfolio only to the second product set (the bear market), because only that set consists of asset classes suitable for the All-Weather method. In Table 4, Figure 9 and Table 5 we can see that Equal Weight has a smaller MDD than the other models but only a small return in a bull market.
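For reference, the traditional baselines above (Equal Weight and 6:4) are fixed-weight portfolios rebalanced once a month. The sketch below is one way to evaluate such a baseline from daily prices; the pandas layout, the zero-transaction-cost assumption and the example weights are ours and do not describe a particular data set.

    import pandas as pd

    def fixed_weight_backtest(prices: pd.DataFrame, weights: dict,
                              initial_value: float = 100_000.0) -> pd.Series:
        """Constant-weight portfolio rebalanced on the last trading day of each month.

        prices: daily prices indexed by date, one column per asset.
        weights: target weight per column, summing to one.  Transaction costs
        are ignored in this sketch.
        """
        w = pd.Series(weights).reindex(prices.columns).fillna(0.0)
        # last available trading day in every calendar month
        rebalance_dates = set(
            prices.index.to_series().groupby(prices.index.to_period("M")).max()
        )

        holdings = initial_value * w / prices.iloc[0]   # units held per asset
        history = []
        for date, row in prices.iterrows():
            value = float((holdings * row).sum())
            if date in rebalance_dates:                 # reset to target weights
                holdings = value * w / row
            history.append(value)
        return pd.Series(history, index=prices.index)

    # Equal Weight: {c: 1 / prices.shape[1] for c in prices.columns}
    # 6:4: put 0.6 on the equity column(s) and 0.4 on the bond column(s).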
When we consider return, MDD and the Sortino ratio, the Reward Clipping model works best across both the bull and the bear market.

Model             Annual Return (%)   Sharpe Ratio   Standard Deviation   MDD (%)   Sortino
Actor-only        13.61               0.8068         0.1670               24.65     1.1432
Actor-critic      18.64               1.0635         0.1616               27.12     1.6766
PPO               10.25               1.0160         0.0966               18.36     1.4575
Reward Clipping   18.45               1.2746         0.1301               21.45     2.0391
Equal weight      10.10               1.0012         0.0968               18.44     1.4386
6:4               10.70               0.9588         0.1074               20.36     1.3811

Table 4: Models vs traditional approaches during COVID-19 (bull market)

Model             Annual Return (%)   Sharpe Ratio   Standard Deviation   MDD (%)   Sortino
Actor-only         6.95               0.3616         0.1633               20.86     0.5544
Reward Clipping    4.21               0.2809         0.1256               14.54     0.4422
Equal weight      10.23               1.0956         0.0950               15.27     1.5984
6:4               13.53               1.5420         0.0921               17.43     2.2477
All-Weather       14.73               1.7794         0.0880               19.39     2.4975
KOSPI             24.80               1.6585         0.1653               30.13     2.5470
S&P500            10.02               0.4394         0.1972               23.39     0.6689

Table 5: Models vs traditional approaches in a bear market

Figure 9: Models vs traditional approaches during COVID-19 (bull market, 2019-07-18 – 2021-06-16) and in a bear market (2021-07-26 – 2022-07-22)

In our experiments, each of the existing models has its own character: some have an advantage in defending against drawdowns, others have strength in generating profits, but none has both. Moreover, depending on its direction, each model appears to select two or three main products/asset classes to achieve its purpose. The Reward Clipping model, which combines the advantages of the existing models, is strong in both of these opposite market situations (Table 4, Table 5 and Figure 9), and it turns out to select products/asset classes dynamically, pursuing higher profits while managing drawdown.
6 Acknowledgment

We are grateful to Seongjae Huh for his advice on traditional investment strategies. We would also like to thank Yong Qu Lee, team leader at SK C&C, for his support.

References

[1] Thomas G. Fischer. Reinforcement learning in financial markets - a survey. FAU Discussion Papers in Economics, (12/2018), October 2018.
[2] Zhipeng Liang, Hao Chen, Junhao Zhu, Kangkang Jiang, and Yanran Li. Adversarial deep reinforcement learning in portfolio management. arXiv:1808.09940v3 [q-fin.PM], November 2018.
[3] Farzan Soleymani and Eric Paquet. Financial portfolio optimization with online deep reinforcement learning and restricted stacked autoencoder - DeepBreath. Expert Systems with Applications, 156(113456), October 2020.
[4] Jung hoon Kim. Efficient portfolio management using deep reinforcement learning. Seoul National University, December 2020.
[5] Andres Heurtas. A reinforcement learning application for portfolio optimization in the stock market. University of Helsinki, June 2020.
[6] Amine Mohamed Aboussalah, Ziyun Xu, and Chi-Guhn Lee. What is the value of the cross-sectional approach to deep reinforcement learning? Quantitative Finance, 22(6):1091–1111, 2022.
[7] Jeffrey M. Wooldridge. Part 1: Regression analysis with cross sectional data. Introductory Econometrics: A Modern Approach, 4th edition, 2009.
[8] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, 2nd edition, 2018.
[9] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv:1707.06347v2 [cs.LG], August 2017.
[10] Harm van Seijen, Mehdi Fatemi, Joshua Romoff, Romain Laroche, Tavian Barnes, and Jeffrey Tsang. Hybrid reward architecture for reinforcement learning. arXiv:1706.04208v2 [cs.LG], November 2017.
[11] Yang-Yu Liu, Jose C. Nacher, Tomoshiro Ochiai, Mauro Martino, and Yaniv Altshuler. Prospect theory for online financial trading. PLOS ONE, 9(10):e109458, 2014.
[12] Donghyun Cheong, Young Min Kim, Hyun Woo Byun, Kyong Joo Oh, and Tae Yoon Kim. Using genetic algorithm to support clustering-based portfolio optimization by investor information. Applied Soft Computing, 61:593–602, December 2017.
[13] Liyan Yang. Loss aversion in financial markets. Journal of Mechanism and Institution Design, 4(1):119–137, 2019.
[14] Qing Yang Eddy Lim, Qi Cao, and Chai Quek. Dynamic portfolio rebalancing through reinforcement learning. Neural Computing and Applications, 34:7125–7139, 2022.
[15] Miquel Noguer i Alonso and Sonam Srivastava. Deep reinforcement learning for asset allocation in US equities. arXiv:2010.04404v1 [q-fin.PM], October 2020.
[16] Ricard Durall. Asset allocation: From Markowitz to deep reinforcement learning. arXiv:2208.07158v1 [q-fin.PM], July 2022.
[17] Youssef Louraoui. The all-weather portfolio approach: The holy grail of portfolio management. SSRN, (4021133), 2022.