A Memory Efficient Deep Reinforcement Learning
Approach For Snake Game Autonomous Agents

Md. Rafat Rahman Tushar1
Department of Electrical and Computer Engineering
North South University
Dhaka, Bangladesh
rafat.tushar@northsouth.edu

Shahnewaz Siddique2
Department of Electrical and Computer Engineering
North South University
Dhaka, Bangladesh
shahnewaz.siddique@northsouth.edu

1 Research Assistant.
2 Assistant Professor, IEEE Member.
* GitHub implementation: https://github.com/rafattushar/rl-snake

Abstract—To perform well, Deep Reinforcement Learning (DRL) methods require significant memory resources and computational time. Sometimes these systems also need additional environment information to achieve a good reward. However, for many applications and devices, reducing memory usage and computational time is more important than achieving the maximum reward. This paper presents a modified DRL method that performs reasonably well with compressed image data, without requiring additional environment information, while using less memory and time. We have designed a lightweight Convolutional Neural Network (CNN) with a variant of the Q-network that efficiently takes preprocessed image data as input and uses less memory. Furthermore, we use a simple reward mechanism and a small experience replay memory so as to provide only the minimum necessary information. Our modified DRL method enables our autonomous agent to play Snake, a classical control game. The results show that our model can achieve performance similar to other DRL methods.

Index Terms—Deep Reinforcement Learning, Convolutional Neural Network, Deep Q Learning, Hyperparameter Tuning, Replay Size, Image Preprocessing

I. INTRODUCTION

Complex problems can be solved in real-world applications by carefully designing Deep Reinforcement Learning (DRL) models that take high dimensional input data and produce discrete or continuous outputs. It is challenging to build an agent from sensory data that is capable of controlling and acting in an environment. The environment is also complex and largely unknown to the acting agent. The agent needs to learn the underlying distribution of the state and action spaces, and the distribution changes as the agent encounters new data from the environment. Earlier reinforcement learning algorithms [1]–[5] were demonstrated on low-dimensional, tightly constrained problems to show the algorithms' effectiveness. However, these systems did not generalize well to high dimensional inputs; thus, they could not meet the requirements of practical applications.

Recently, DRL has had success in CNN-based vision problems [6]–[8]. These works successfully implemented DRL methods that learn control policies directly from image pixels. Although the image-based DRL methods have enjoyed considerable success, they are memory intensive during training as well as deployment. Since they require a massive amount of memory, they are not suitable for training and deployment on mobile devices or mid-range autonomous robots.
All modern reinforcement learning algorithms use a replay buffer to sample uncorrelated data for online training, mainly in off-policy algorithms. The experience replay buffer also improves data efficiency [9] during sampling. Since the use of neural networks in various DRL algorithms is increasing, it is necessary to stabilize the neural network with uncorrelated data. That is why the experience replay buffer is a desirable property of various reinforcement learning algorithms. The first successful implementation of DRL in high dimensional observation spaces, Deep Q-learning [6], used a replay buffer of size 10^6. Since then, [8], [10]–[12], to name a few, have solved complex high dimensional problems but still use a replay buffer of the same size.

The experience replay buffer raises two issues: how to choose the size of the replay buffer, and how to sample data from the buffer. [13]–[15] consider the latter problem, i.e., how to best sample from the replay buffer, but a favorable size for the replay buffer remains unknown. Although [15] points out that the learning algorithm is sensitive to the size of the replay buffer, it does not reach a definitive conclusion about the buffer size.

In this paper, we tackle the memory usage of DRL algorithms by implementing a modified approach to image preprocessing and replay buffer sizing. Although we want the agent to obtain a decent score, we are more concerned about memory usage. We choose a Deep Q-Network (DQN) [6] for our algorithm, with some variations. Our objective is to design a DRL model that can be trained and deployed on mobile devices. To be deployed on mobile devices, memory consumption must be minimized, as traditional DRL models with visual inputs sometimes need half a terabyte of memory. We achieve low memory consumption by preprocessing the visual image data and tuning the replay buffer size along with other hyperparameters. Then, we evaluate our model in our simulation environment using the classical control game named Snake.* The results show that our model can achieve performance similar to other DRL methods.

II. RELATED WORK

The core idea of reinforcement learning is a sequential decision making process in which an agent learns from experience and acts in an uncertain environment. After the development of a formal framework for reinforcement learning, many algorithms have been introduced, such as [1]–[5].

Q-learning [1] is a model-free asynchronous dynamic programming algorithm for reinforcement learning. Q-learning proposes that by sampling all the actions in all states and iterating the action-value function repeatedly, convergence can be achieved. Q-learning works well on limited state and action spaces but collapses on high dimensional, effectively infinite state spaces. To address this, [6] proposes the Deep Q-network algorithm, which demonstrates significant results with image data. Among other variations, it uses a convolutional neural network and a replay buffer. Double Q-learning [16] is applied to DQN to overcome the overestimation of the action-value function; the combination is named Deep Reinforcement Learning with Double Q-Learning (DDQN) [8]. DDQN introduces another neural network with the same structure as the DQN but updated less frequently.
Refined DQN [17] proposes another DRL method that involves a carefully designed reward mechanism and a dual experience replay structure. Refined DQN evaluates this approach by enabling its agent to play the snake game.

The experience replay buffer is a desirable property of modern DRL algorithms. It provides powerful, model-free, off-policy DRL algorithms with uncorrelated data and improves data efficiency [9] during sampling. DQN [6] shows the power of the replay buffer in sampling data and uses a buffer of size 10^6. Since then, [8], [10]–[12], [17], among others, have presented their work with a replay buffer of the same size and structure. Schaul et al. propose an efficient sampling strategy in their prioritized experience replay (PER) [13]. PER shows that instead of sampling data uniformly at random, transitions can be prioritized, with newly added data receiving the highest priority, so that the most informative experiences are more likely to be selected; this selection method improves results. [15] shows that a large experience replay buffer can hurt performance. It also proposes that, when sampling data to train DRL algorithms, the most recent data should be appended to the batch.

III. METHOD

Our objective is to reduce memory usage during training while achieving the best performance possible. The replay memory takes up a considerable amount of memory, as described later. We try to achieve memory efficiency by reducing the massive replay buffer requirement through image preprocessing and a smaller buffer size. The buffer size is chosen carefully so that the agent still has the information necessary to train well and achieve a moderate score. We use a slight variation of the deep Q-learning algorithm for this purpose.

TABLE I
REWARD MECHANISM FOR SNAKE GAME

  Moves                                  Reward   Result
  Eats an apple                          +1       Score increases
  Hits the wall or itself                -1       End of episode
  Neither eats nor hits wall or itself   -0.1     Game continues

TABLE II
MEMORY REQUIREMENT FOR DIFFERENT PIXEL DATA

                                     RGB       Grayscale   Binary
  Data type                          float     float       int
  Size (kB)                          165.375   55.125      6.890
  Memory saving w.r.t. RGB           0%        67%         96%
  Memory saving w.r.t. grayscale     -         0%          87.5%

A. Image Preprocessing

The agent receives the RGB values in a 3-D array format from the game environment. We convert the RGB array into grayscale because doing so does not affect performance [18] and it reduces the memory requirement by a factor of three. We resize the grayscale data to 84 x 84 pixels. Finally, for further memory reduction, we convert this resized grayscale data into binary data (values of only 0 and 1). The memory requirement for storing the various image formats (scaled between 0 and 1) is given in Table II. Table II shows that converting RGB into grayscale saves around 67% and converting RGB into binary saves around 96%, while converting grayscale into binary reduces memory by around 87.5%. The visual pixel data transformation performed by preprocessing is shown in Fig. 1, and the preprocessing method is presented as a flowchart in Fig. 2.

Fig. 1. Visual image data (a) before and (b) after preprocessing.
Fig. 2. Diagram of image preprocessing: game environment frame -> grayscale -> resize to 84x84 -> pixel values 0 or 1.
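To make the pipeline in Fig. 2 concrete, the following minimal sketch shows one way the preprocessing step could be written. It assumes NumPy and OpenCV are available; the function name `preprocess` and the simple non-zero threshold used for binarization are our illustrative choices and are not specified in the text above.

```python
import numpy as np
import cv2  # assumed dependency for color conversion and resizing


def preprocess(rgb_frame: np.ndarray) -> np.ndarray:
    """Sketch of Fig. 2: RGB frame -> grayscale -> 84x84 -> binary (0/1)."""
    gray = cv2.cvtColor(rgb_frame, cv2.COLOR_RGB2GRAY)                # drop the three color channels
    small = cv2.resize(gray, (84, 84), interpolation=cv2.INTER_AREA)  # downscale to 84x84
    binary = (small > 0).astype(np.uint8)                             # illustrative threshold: any non-black pixel becomes 1
    return binary                                                     # stored as small integers instead of floats
```

Storing the 84 x 84 frames as small binary integers rather than floats is what produces the per-frame savings reported in Table II.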
The game screen is divided into a +12 × 12 grid. The resolution for the game is set to 252 × 252. +The initial snake size is 3. The controller has four inputs to +navigate. Table I shows the valid actions and respective reward +for the snake game environment. +C. Reinforcement Learning Preliminary +Any reinforcement learning or sequential decision-making +problem can be formulated with Markov Decision Processes +(MDPs). An MDP is a triplet M = (X, A, P0), where X +is a set of valid states, A is a set of valid actions, and P0 +is transition probability kernel that maps X × A into next +state transition probability. For a deterministic system, the state +transition is defined as, +st+1 = f(st, at) +(1) +The reward is defined as, +rt = R(st, at) +(2) +The cumulative reward over a trajectory or episode is called +the return, R(τ). The equation for discounted return is given +below, +R(τ) = +∞ +� +t=0 +γtrt +(3) +D. Deep Q-Learning +The goal of the RL agent is to maximize the expected return. +Following a policy π, the expected return, J(π), is defined as, +J(π) = E +τ∼π[R(τ)] +(4) +The optimal action-value or q function Q∗(s, a) maximizes +the expected return by taking any action at state s and acting +optimally in the following states. +Q∗(s, a) = max +π +E +τ∼π[R(τ)|s0 = s, a0 = a] +(5) +For finding out the optimal actions based on an optimal action- +value function at time t, the Q∗ must satisfy the Bellman +Equation, which is, +Q∗(s, a) = E +s′∼ρ +� +r(s, a) + γ max +a′ Q∗(s′, a′) +� +(6) +The optimal action-value function gives rise to optimal action +a∗(s). The a∗(s) can be described as, +a∗(s) = arg max +a +Q∗(s, a) +(7) +For training an optimal action-value function, sometimes a +non-linear function approximator like neural network [6] is +used. We used a convolutional neural network. +TABLE III +THE ARCHITECTURE OF NEURAL NETWORK +Layer +Filter +Stride +Layer +Acti- +Zero +Output +Name +vation +Padd +Input +84*84*4 +Conv1 +8*8 +4 +32 +ReLU +Yes +21*21*32 +M. Pool +2*2 +2 +Yes +11*11*32 +Conv2 +4*4 +2 +64 +ReLU +Yes +6*6*64 +M. Pool +2*2 +2 +Yes +3*3*64 +B. Norm +3*3*64 +Conv3 +3*3 +2 +128 +ReLU +Yes +2*2*128 +M. Pool +2*2 +2 +Yes +1*1*128 +B. Norm +1*1*128 +Flatten +128 +FC +512 +ReLU +512 +FC +512 +ReLU +512 +Output +No. of +Linear +No. of +actions +actions +M. Pool = Max Pooling, B. Norm = Batch Normalization, FC = Fully Connected +TABLE IV +MEMORY REQUIREMENT EXPERIENCE REPLAY +RGB +Grayscale +Binary +Memory Usage (GB) +1261.71 +420.57 +2.628 +Memory Save % w.r.t. RGB +0% +67% +99.7% +Memory Save % w.r.t. Grayscale +- +0% +99.4% +E. Neural Network +The action-value function is iteratively updated to achieve +the optimal action-value function. The neural network used +to approximate the action-value function and update at each +iteration is called Q-network. We train the Q-network, param- +eterized by θ, by minimizing a loss function Li(θi) at ith +iteration. +Li(θi) = +E +s,a∼ρ +� +(yi − Q(s, a; θi))2� +(8) +where yi = +E +s′∼ρ +� +r(s, a) + γmax +a′ Q′(s′, a′; θ′ +k) +� +is the target +for that update. Here Q′ is another Q-network with the +same shape as Q-network but with a frozen parameter called +target Q-network for training stability parameterized by θ′ +k. +We train the Q-network by minimizing this loss function (8) +w.r.t. the parameter θi. 
We use the Adam [20] optimizer for fast convergence. Our convolutional neural network structure is shown in Table III.

F. Experience Replay Buffer

Since our focus is to keep the memory requirement as low as possible during training, choosing the size of the replay buffer is one of the critical design decisions. The size of the replay buffer directly determines how much memory is needed. We use a replay buffer of size 50,000, which requires only 5% of the memory used by [6], [8], [17], which use a replay buffer of size 1,000,000. [6], [8], [17] store grayscale data in the replay buffer; Table IV shows that we use 99.4% less memory than these works. The replay buffer stores data in FIFO (first in, first out) order so that the buffer always contains the latest data. We present the complete cycle of the experience replay buffer in Fig. 3, and Fig. 4 illustrates our complete design.

Fig. 3. Structure of the experience replay memory and flowchart: the environment produces screen data and rewards from random actions or actions taken by the agent, and experience tuples E_t = (s_t, a_t, r_{t+1}, s_{t+1}) are stored in the replay memory.
Fig. 4. The deep reinforcement learning design structure of our model: preprocessed states are fed to the online DQN, experiences are stored in the replay memory, random mini-batches are drawn to compute the loss [y_t - Q(a_t)]^2 with y_t = R_{t+1} + \gamma \max_a Q'(a) from the target DQN, and the target network weights are synced every p steps.

IV. EXPERIMENTS

A. Training

To train our model, we take a random batch of 32 experiences from the replay buffer at each iteration. Our model has two convolutional neural networks (the online DQN and the target DQN) that share the same structure but are not synced automatically. The weights of the target network are frozen so that it cannot be trained directly. The state history from the mini-batch is fed into the online DQN, which outputs the Q-values Q(s_t, a_t).

    Loss = [y_t - Q(s_t, a_t)]^2                                            (9)

The target y_t is calculated from the target Q-network. We pass the next state to the target Q-network and, for each next state in the batch, take the maximum of its Q-values; this is the \max_{a'} Q(s', a') term in the equation below,

    y_t = R_{t+1} + \gamma \max_{a'} Q(s', a')                              (10)

Here \gamma is the discount factor, one of the hyperparameters of our model; we set \gamma to 0.99. R_{t+1} is the reward in each experience tuple. With these values we obtain y_t, and the loss is formed by substituting them into (9). We then use this loss to backpropagate through the online DQN with the Adam optimizer, which is used instead of classical stochastic gradient descent for faster convergence. The target DQN is synced with the online DQN every 10,000 steps. The values of the hyperparameters we use are listed in Table VI.
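As a summary of the replay memory of Section III-F and the sampling used in the training step above, the following minimal sketch shows one convenient way to obtain the described behaviour. The class and method names are ours, and using a bounded deque for FIFO eviction is an implementation choice, not something mandated by the text.

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-capacity FIFO experience replay (Sec. III-F)."""

    def __init__(self, capacity: int = 50_000):
        # a deque with maxlen silently drops the oldest experience when full,
        # which gives the FIFO behaviour described in Sec. III-F
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size: int = 32):
        # uniform random mini-batch, as used in the training step above
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

Because each stored state is a stack of small binary 84 x 84 frames rather than floating-point RGB images, a 50,000-entry buffer of this kind stays far smaller than the grayscale buffers compared in Table IV.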
B. Results and Comparisons

We allow the DRL agents to play 140,000 episodes to match the training results presented in [17]. We train one agent with our method and another with the DQN method presented in [6]; we refer to [6] as the baseline DQN model. We then compare our model with the baseline DQN model [6] and the refined DQN model [17]. The results of training the snake game with our model are shown in Fig. 5. Fig. 5(a) shows the game score obtained with our model during training. Fig. 5(b) shows that even though our reward mechanism is simpler than that of the refined DQN model, the agent maximizes the cumulative reward.

Fig. 5. Results of our agent playing the Snake game during training: (a) score vs. episode; (b) reward vs. episode.
Fig. 6. Results of the baseline DQN model playing the Snake game during training: (a) score vs. episode; (b) reward vs. episode.

In Section III-F we showed that our model is more memory efficient than the baseline DQN model and the refined DQN model during training. In this section we show that, despite its low memory usage, our model achieves similar if not better results than the baseline and refined DQN models. Fig. 6 displays the baseline DQN results during training on the snake game. In Fig. 7 we present the score and reward comparison between our model and the baseline DQN model. The blue line in Fig. 7(a) represents our model's score, and the purple line represents the score of the baseline DQN model. Over the 140,000 training episodes, our model maintains a better episode score even though it requires fewer resources. Fig. 7(b) demonstrates that our model is capable of achieving higher cumulative rewards than the baseline DQN model.

Fig. 7. Comparison between our model and the baseline DQN model: (a) score comparison; (b) reward comparison.

We also compare the results of our model and the refined DQN model [17]. Refined DQN follows a dual experience replay memory architecture and a complex reward mechanism; however, our model surpasses its score. Since their game is similar to ours, we compare our results with the results provided in their paper. Fig. 8(a) shows the results presented in [17], and Fig. 8(b) shows our model's results during training.

Fig. 8. Comparison between the refined DQN model and our model: (a) score graph of refined DQN (graph taken from [17]); (b) score graph of our model.
Fig. 9. Testing evaluation over 50 random episodes: (a) refined DQN score (taken from [17]); (b) our model's score.

TABLE V
PERFORMANCE COMPARISON OF DIFFERENT AGENTS

  Performance            Score
  Human average          1.98 *
  Baseline average       0.26 *
  Refined DQN average    9.04 *
  Our average            9.53
  Human best             15 *
  Baseline best          2 *
  Refined DQN best       17 *
  Our best               20
  * Data taken from [17]
By comparing Fig. 8(a) and Fig. 8(b), we can safely say that our model achieves better scores despite having a simpler replay buffer, a simpler reward mechanism, and lower memory consumption.

Fig. 9(a) and Fig. 9(b) show the scores of 50 random episodes during testing of the refined DQN model and our model, respectively. Table V summarizes the scores reported for the refined DQN model and obtained by our model. We can see from Table V that the refined DQN average is 9.04 while ours is 9.53, and the refined DQN best score is 17 while ours is 20. Thus, our model also performs better in both the training and testing phases.

TABLE VI
LIST OF HYPERPARAMETERS

  Hyperparameter              Value     Description
  Discount factor             0.99      Gamma value in the max Q-function
  Initial epsilon             1.0       Initial exploration epsilon value
  Final epsilon               0.01      Final exploration epsilon value
  Batch size                  32        Mini-batch size from replay memory
  Max steps                   10,000    Maximum number of steps allowed per episode
  Learning rate               0.0025    Learning rate for the Adam optimizer
  Clip-norm                   1.0       Gradient clipping value for the Adam optimizer
  Random frames               50,000    Number of initial random steps
  Epsilon greedy frames       500,000   Number of frames over which epsilon decays from its initial to its final value
  Experience replay memory    50,000    Capacity of the experience replay memory
  Update of DQN               4         Number of steps between updates of the DQN
  Update target DQN           10,000    Number of steps between syncs of the target and online DQN

V. CONCLUSION

In this paper, we have shown that better image preprocessing and a better replay buffer design can reduce the memory consumption of DRL algorithms during training. We have also demonstrated that, using our method, the performance of a DRL agent on a low-constraint application is similar, if not better. We combined our method with a slightly modified DQN algorithm to observe the method's effectiveness. Our design requires less memory and a simple CNN. We established that our method's results are as good as those of other DRL approaches for the snake game autonomous agent.

ACKNOWLEDGMENT

This work was supported by North South University research grant CTRG-21-SEPS-18. The authors gratefully acknowledge that the computing resources used in this work were housed at the National University of Sciences and Technology (NUST), Pakistan. The cooperation was pursued under the South Asia Regional Development Center (RDC) framework of the Belt & Road Aerospace Innovation Alliance (BRAIA).

REFERENCES

[1] C. J. C. H. Watkins and P. Dayan, "Q-learning," in Machine Learning, 1992, pp. 279–292.
[2] G. Tesauro, "Temporal difference learning and TD-Gammon," Commun. ACM, vol. 38, no. 3, pp. 58–68, Mar. 1995.
[3] R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour, "Policy gradient methods for reinforcement learning with function approximation," in Advances in Neural Information Processing Systems, S. Solla, T. Leen, and K. Müller, Eds., vol. 12. MIT Press, 1999.
[4] J. Peters, S. Vijayakumar, and S. Schaal, "Natural actor-critic," in Machine Learning: ECML 2005. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005, pp. 280–291.
[5] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller, "Deterministic policy gradient algorithms," in Proceedings of the 31st International Conference on Machine Learning, ser. ICML'14. JMLR.org, 2014, pp. I-387–I-395.
[6] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. A. Riedmiller, "Playing Atari with deep reinforcement learning," Computing Research Repository, vol. abs/1312.5602, 2013.
[7] V. Mnih, K. Kavukcuoglu, D. Silver, A. Rusu, J. Veness, M. Bellemare, A. Graves, M. Riedmiller, A. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, "Human-level control through deep reinforcement learning," Nature, vol. 518, pp. 529–533, Feb. 2015.
[8] H. van Hasselt, A. Guez, and D. Silver, "Deep reinforcement learning with double Q-learning," in Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, ser. AAAI'16. AAAI Press, 2016, pp. 2094–2100.
[9] L.-J. Lin, "Self-improving reactive agents based on reinforcement learning, planning and teaching," Mach. Learn., vol. 8, no. 3–4, pp. 293–321, May 1992.
[10] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, "Continuous control with deep reinforcement learning," Computing Research Repository, 2019.
[11] S. Li, Y. Wu, X. Cui, H. Dong, F. Fang, and S. Russell, "Robust multi-agent reinforcement learning via minimax deep deterministic policy gradient," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, pp. 4213–4220, Jul. 2019.
[12] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor," in ICML, ser. Proceedings of Machine Learning Research, vol. 80. PMLR, 2018, pp. 1856–1865.
[13] T. Schaul, J. Quan, I. Antonoglou, and D. Silver, "Prioritized experience replay," 2015. [Online]. Available: https://arxiv.org/abs/1511.05952
[14] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, O. Pieter Abbeel, and W. Zaremba, "Hindsight experience replay," in Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds., vol. 30. Curran Associates, Inc., 2017.
[15] S. Zhang and R. S. Sutton, "A deeper look at experience replay," Computing Research Repository, vol. abs/1712.01275, 2017.
[16] H. Hasselt, "Double Q-learning," in Advances in Neural Information Processing Systems, J. Lafferty, C. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, Eds., vol. 23. Curran Associates, Inc., 2010.
[17] Z. Wei, D. Wang, M. Zhang, A.-H. Tan, C. Miao, and Y. Zhou, "Autonomous agents in snake game via deep reinforcement learning," in 2018 IEEE International Conference on Agents (ICA), 2018, pp. 20–25.
[18] T. D. Nguyen, K. Mori, and R. Thawonmas, "Image colorization using a deep convolutional neural network," Computing Research Repository, vol. abs/1604.07904, 2016.
[19] A. Punyawee, C. Panumate, and H. Iida, "Finding comfortable settings of snake game using game refinement measurement," in Advances in Computer Science and Ubiquitous Computing. Singapore: Springer Singapore, 2017, pp. 66–73.
[20] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7–9, 2015, Conference Track Proceedings, Y. Bengio and Y. LeCun, Eds., 2015.
+ diff --git a/-dFLT4oBgHgl3EQfCy73/content/tmp_files/load_file.txt b/-dFLT4oBgHgl3EQfCy73/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..3bd24bc364ec179765a74683aebb0fcf11f3a0af --- /dev/null +++ b/-dFLT4oBgHgl3EQfCy73/content/tmp_files/load_file.txt @@ -0,0 +1,657 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf,len=656 +page_content='A Memory Efficient Deep Reinforcement Learning Approach For Snake Game Autonomous Agents Md.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Rafat Rahman Tushar1 Department of Electrical and Computer Engineering North South University Dhaka, Bangladesh rafat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content='tushar@northsouth.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content='edu Shahnewaz Siddique2 Department of Electrical and Computer Engineering North South University Dhaka, Bangladesh shahnewaz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content='siddique@northsouth.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content='edu Abstract—To perform well, Deep Reinforcement Learning (DRL) methods require significant memory resources and computational time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Also, sometimes these systems need additional environment information to achieve a good reward.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' However, it is more important for many applications and devices to reduce memory usage and computational times than to achieve the maximum reward.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' This paper presents a modified DRL method that performs reasonably well with compressed imagery data without requiring additional environment information and also uses less memory and time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' We have designed a lightweight Convolutional Neural Network (CNN) with a variant of the Q-network that efficiently takes preprocessed image data as input and uses less memory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Furthermore, we use a simple reward mechanism and small experience replay memory so as to provide only the minimum necessary information.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Our modified DRL method enables our autonomous agent to play Snake, a classical control game.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' The results show our model can achieve similar performance as other DRL methods.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Index Terms—Deep Reinforcement Learning, Convolutional Neural Network, Deep Q Learning, Hyperparameter Tuning, Replay Size, Image Preprocessing I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' INTRODUCTION Complex problems can be solved in real-world applications by carefully designing Deep Reinforcement Learning (DRL) models by taking high dimensional input data and producing discrete or continuous outputs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' It is challenging to build a agent using sensory data capable of controlling and acting in an environment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' The environment is also complex and primarily unknown to the acting agent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' The agent needs to learn the underlying distribution of the state and action spaces, and the distribution changes as the agent encounters new data from an environment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Previously reinforcement learning algorithms [1]–[5] were presented with lower constraint prob- lems to demonstrate the algorithms effectiveness.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' However, these systems were not well generalized for high dimensional inputs;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' thus, they could not meet the requirements of practical applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Recently, DRL has had success in CNN based vision-based problems [6]–[8].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' They have successfully implemented DRL methods that learn to control based on image pixel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Although 1Research Assistant.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' 2Assistant Professor, IEEE Member.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' GitHub implementation: https://github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content='com/rafattushar/rl-snake the image-based DRL methods have enjoyed considerable success, they are memory intensive during training as well as deployment.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Since they require a massive amount of memory, they are not suitable for implementation in mobile devices or mid-range autonomous robots for training and deployment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' All modern reinforcement learning algorithms use replay buffer for sampling uncorrelated data for online training in mainly off-policy algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Experience replay buffer also improves the data efficiency [9] during data sampling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Since the use of neural networks in various DRL algorithms is increasing, it is necessary to stabilize the neural network with uncorrelated data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' That is why the experience replay buffer is a desirable property of various reinforcement learning algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' The first successful implementation of DRL in high dimensional observation space, the Deep Q-learning [6], used a replay buffer of 106 size.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' After that, [8], [10]–[12], to name a few, have solved complex high dimensional problems but still use a replay buffer of the same size.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Experience replay buffer suffers from two types of issues.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' One is to choose the size of the replay buffer, and the second is the method of sampling data from the buffer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' [13]–[15] consider the latter problem to best sample from the replay buffer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' But the favorable size for the replay buffer remains unknown.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Although [15] points out that the learning algorithm is sensitive to the size of the replay buffer, they have not come up with a better conclusion on the size of the buffer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' In this paper, we tackle the memory usage of DRL al- gorithms by implementing a modified approach for image preprocessing and replay buffer size.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Although we want the agent to obtain a decent score, we are more concerned about memory usage.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' We choose a Deep Q-Network (DQN) [6] for our algorithm with some variations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Our objective is to design a DRL model that can be implemented on mobile devices during training and deployment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' To be deployed on mobile devices, memory consumption must be minimized as traditional DRL model with visual inputs sometimes need half a terabyte of memory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' We achieve low memory consumption by preprocessing the visual image data and tuning the replay buffer size with other hyperparameters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Then, we evaluate our model in our simulation environment using the classical control game named Snake.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' * The results show that our model can achieve similar performance as other DRL methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content='11977v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content='AI] 27 Jan 2023 II.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' RELATED WORK The core idea of reinforcement learning is the sequential decision making process involving some agency that learns from the experience and acts on uncertain environments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' After the development of a formal framework of reinforcement learning, many algorithms have been introduced such as, [1]– [5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Q-learning [1] is a model-free asynchronous dynamic pro- gramming algorithm of reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Q-learning proposes that by sampling all the actions in states and iterating the action-value functions repeatedly, convergence can be achieved.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' The Q-learning works perfectly on limited state and action space while collapsing with high dimensional infinite state space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Then, [6] proposes their Deep Q-network algorithm that demonstrates significant results with image data.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Among other variations, they use a convolutional neural network and replay buffer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Double Q-learning [16] is applied with DQN to overcome the overestimation of the action-value function and is named Deep Reinforcement Learning with Double Q-Learning (DDQN) [8].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' DDQN proposes another neural network with the same structure as DQN but gets updated less frequently.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Refined DQN [17] proposes another DRL method that involves a carefully designed reward mech- anism and a dual experience replay structure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Refined DQN evaluate their work by enabling their agent to play the snake game.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' The experience replay buffer is a desirable property of modern DRL algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' It provides powerful, model-free, off- policy DRL algorithms with correlated data and improves data efficiency [9] during data sampling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' DQN [6] shows the power of replay buffer in sampling data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' DQN uses the size 106 for replay buffer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' After that, [8], [10]–[12], [17], among others, have shown their work with the same size and structure as the replay buffer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Schaul et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' propose an efficient sampling strategy in their prioritized experience replay (PER) [13].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' PER shows that instead of sampling data uniform-randomly, the latest data gets the most priority;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' hence the latest data have more probability of being selected, and this selection method seems to improve results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' [15] shows that a large experience replay buffer can hurt the performance.' 
They also propose that when sampling data to train DRL algorithms, the most recent data should be appended to the batch.

III. METHOD
Our objective is to reduce memory usage during training time while achieving the best performance possible. The replay memory takes a considerable amount of memory, as described later. We try to achieve memory efficiency by reducing the massive replay buffer requirement through image preprocessing and a smaller buffer size. The buffer size is carefully chosen so that the agent has the necessary information to train well and achieves a moderate score. We use a slight variation of the deep Q-learning algorithm for this purpose.

TABLE I
REWARD MECHANISM FOR SNAKE GAME
Moves                                   Rewards   Results
Eats an apple                           +1        Score increase
Hits the wall or itself                 -1        End of episode
Neither eats nor hits wall or itself    0.1       Continue playing game

TABLE II
MEMORY REQUIREMENT FOR DIFFERENT PIXEL DATA
                                 RGB       Grayscale   Binary
Data Type                        float     float       int
Size (kB)                        165.375   55.125      6.890
Memory Save % w.r.t. RGB         0%        67%         96%
Memory Save % w.r.t. Grayscale   -         0%          87.5%

Fig. 1. Visual image data before and after preprocessing ((a) before preprocessing, (b) after preprocessing).

A. Image Preprocessing
The agent gets the RGB values in a 3-D array format from the game's environment. We convert the RGB array into grayscale because it does not affect the performance [18] and it uses one-third of the memory. We resize the grayscale data into 84 × 84 pixels. Finally, for more memory reduction, we convert this resized grayscale data into binary data (values of only 0 and 1). The memory requirement for storing the various image data (scaled down between 0 and 1) is given in Table II. Table II shows that converting RGB into grayscale saves around 67% of memory and converting RGB into binary saves around 96%. Also, the memory requirement reduces by around 87.5% when converting from grayscale into binary. The visual pixel data transformation with preprocessing is given in Fig. 1. The preprocessing method is presented using a flowchart in Fig. 2.
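As a concrete illustration, the three preprocessing steps above can be sketched as follows. This is a minimal sketch assuming OpenCV and NumPy; the function name and the binarization threshold are illustrative and not taken from the paper.

import cv2
import numpy as np

def preprocess_frame(rgb_frame: np.ndarray) -> np.ndarray:
    """Turn an RGB game frame into an 84x84 binary image (values 0 or 1)."""
    # RGB -> grayscale: one channel instead of three (~67% memory saving).
    gray = cv2.cvtColor(rgb_frame, cv2.COLOR_RGB2GRAY)
    # Downscale to the 84x84 input resolution used by the Q-network.
    gray = cv2.resize(gray, (84, 84), interpolation=cv2.INTER_AREA)
    # Grayscale -> binary: keep only 0s and 1s (threshold chosen for illustration).
    return (gray > 127).astype(np.uint8)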
B. Game Selection and Their Environments
Our target applications involve less complex tasks. For this reason, we implemented the classical Snake game [19] in the 'pygame' module. The game screen is divided into a 12 × 12 grid. The resolution of the game is set to 252 × 252. The initial snake size is 3. The controller has four inputs to navigate. Table I shows the valid actions and the respective rewards for the snake game environment.

Fig. 2. Diagram of image preprocessing (Game Env → Grayscale → Resize 84×84 → Pixel value 0 or 1).

C. Reinforcement Learning Preliminary
Any reinforcement learning or sequential decision-making problem can be formulated with Markov Decision Processes (MDPs). An MDP is a triplet M = (X, A, P0), where X is a set of valid states, A is a set of valid actions, and P0 is the transition probability kernel that maps X × A into the next-state transition probability. For a deterministic system, the state transition is defined as

s_{t+1} = f(s_t, a_t)    (1)

The reward is defined as

r_t = R(s_t, a_t)    (2)

The cumulative reward over a trajectory or episode is called the return, R(τ). The equation for the discounted return is given below:

R(τ) = \sum_{t=0}^{∞} γ^t r_t    (3)
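As a worked example of (3), the discounted return of a finished episode can be computed directly from the stored rewards. The snippet below is illustrative only; it uses the reward values of Table I and γ = 0.99.

def discounted_return(rewards, gamma=0.99):
    """R(tau) = sum over t of gamma^t * r_t for one finished episode."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Episode where the snake survives two steps, eats an apple, then dies.
print(discounted_return([0.1, 0.1, 1.0, -1.0]))  # approximately 0.209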
D. Deep Q-Learning
The goal of the RL agent is to maximize the expected return. Following a policy π, the expected return, J(π), is defined as

J(π) = E_{τ∼π}[R(τ)]    (4)

The optimal action-value or q function Q*(s, a) maximizes the expected return by taking any action at state s and acting optimally in the following states:

Q*(s, a) = max_π E_{τ∼π}[R(τ) | s_0 = s, a_0 = a]    (5)

To find the optimal actions based on an optimal action-value function at time t, Q* must satisfy the Bellman equation:

Q*(s, a) = E_{s′∼ρ}[ r(s, a) + γ max_{a′} Q*(s′, a′) ]    (6)

The optimal action-value function gives rise to the optimal action a*(s), which can be described as

a*(s) = arg max_a Q*(s, a)    (7)

For training an optimal action-value function, sometimes a non-linear function approximator such as a neural network [6] is used. We use a convolutional neural network.

TABLE III
THE ARCHITECTURE OF THE NEURAL NETWORK
Layer Name   Filter   Stride   Units            Activation   Zero Padd.   Output
Input                                                                     84*84*4
Conv1        8*8      4        32               ReLU         Yes          21*21*32
M. Pool      2*2      2                                      Yes          11*11*32
Conv2        4*4      2        64               ReLU         Yes          6*6*64
M. Pool      2*2      2                                      Yes          3*3*64
B. Norm                                                                   3*3*64
Conv3        3*3      2        128              ReLU         Yes          2*2*128
M. Pool      2*2      2                                      Yes          1*1*128
B. Norm                                                                   1*1*128
Flatten                                                                   128
FC                             512              ReLU                      512
FC                             512              ReLU                      512
Output                         No. of actions   Linear                    No. of actions
M. Pool = Max Pooling, B. Norm = Batch Normalization, FC = Fully Connected
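For concreteness, the layer stack of Table III can be written out directly. The following is a minimal sketch assuming TensorFlow/Keras; the function name and the num_actions parameter are illustrative, and the paper's own implementation may differ.

import tensorflow as tf
from tensorflow.keras import layers

def build_q_network(num_actions: int) -> tf.keras.Model:
    """Q-network following Table III; input is a stack of four 84x84 binary frames."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(84, 84, 4)),
        layers.Conv2D(32, (8, 8), strides=4, padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2), strides=2, padding="same"),
        layers.Conv2D(64, (4, 4), strides=2, padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2), strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.Conv2D(128, (3, 3), strides=2, padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2), strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dense(512, activation="relu"),
        layers.Dense(num_actions, activation="linear"),  # one Q-value per action
    ])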
TABLE IV
MEMORY REQUIREMENT FOR EXPERIENCE REPLAY
                                 RGB       Grayscale   Binary
Memory Usage (GB)                1261.71   420.57      2.628
Memory Save % w.r.t. RGB         0%        67%         99.7%
Memory Save % w.r.t. Grayscale   -         0%          99.4%

E. Neural Network
The action-value function is iteratively updated to achieve the optimal action-value function. The neural network used to approximate the action-value function and update it at each iteration is called the Q-network. We train the Q-network, parameterized by θ, by minimizing a loss function L_i(θ_i) at the i-th iteration:

L_i(θ_i) = E_{s,a∼ρ}[ (y_i − Q(s, a; θ_i))^2 ]    (8)

where y_i = E_{s′∼ρ}[ r(s, a) + γ max_{a′} Q′(s′, a′; θ′_k) ] is the target for that update. Here Q′ is another Q-network with the same shape as the Q-network but with frozen parameters θ′_k; it is called the target Q-network and is used for training stability. We train the Q-network by minimizing this loss function (8) w.r.t. the parameter θ_i. We use the Adam [20] optimizer for fast convergence. Our convolutional neural network structure is shown in Table III.
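To make the role of the frozen target network Q′ concrete, it can be created from the online network and periodically synchronized as in the following minimal sketch (assuming TensorFlow/Keras; the function names are illustrative):

import tensorflow as tf

def make_target_network(online_dqn: tf.keras.Model) -> tf.keras.Model:
    """Create a frozen copy of the online Q-network to serve as Q'."""
    target_dqn = tf.keras.models.clone_model(online_dqn)
    target_dqn.set_weights(online_dqn.get_weights())
    target_dqn.trainable = False  # theta'_k is never updated by backpropagation
    return target_dqn

def sync_target(online_dqn: tf.keras.Model, target_dqn: tf.keras.Model) -> None:
    """Copy theta_i into theta'_k; called every p training steps."""
    target_dqn.set_weights(online_dqn.get_weights())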
Fig. 3. Structure of the experience replay memory: each experience E_t = (s_t, a_t, r_{t+1}, s_{t+1}) produced by the environment (from random actions or actions taken by the agent) is stored in the replay memory.

Fig. 4. The deep reinforcement learning design structure of our model: preprocessed screen data is stored in the experience replay memory, random mini-batches are fed to the online deep Q-network, the target deep Q-network provides y_t = R_{t+1} + γ max_a Q′(a) for the loss [y_t − Q(A_t)]^2, and the weights are synced every p steps.

F. Experience Replay Buffer
As our focus is to keep memory requirements as low as possible during training, choosing the size of the replay buffer is one of the critical design decisions. The size of the replay buffer directly determines the memory requirement. We use a replay buffer of size 50,000, requiring only 5% of the memory used by [6], [8], [17], which use a replay buffer of size 1,000,000.
The works [6], [8], [17] store grayscale data in their replay buffers. Table IV shows that we use 99.4% less memory compared to these works. The replay buffer stores data in FIFO (first in, first out) order so that the buffer contains only the latest data. We present the complete cycle of the experience replay buffer in Fig. 3. Fig. 4 illustrates our complete design diagram.
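A fixed-size FIFO buffer of the kind described above can be sketched in a few lines of Python; the class and method names are illustrative, and only the standard library is assumed.

import random
from collections import deque

class ReplayBuffer:
    """FIFO experience replay memory holding at most `capacity` transitions."""

    def __init__(self, capacity=50_000):
        # A deque with maxlen drops the oldest experience automatically (FIFO).
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state):
        # Each experience follows Fig. 3: E_t = (s_t, a_t, r_{t+1}, s_{t+1}).
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size=32):
        # Uniform random mini-batch, as used during training.
        return random.sample(self.buffer, batch_size)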
IV. EXPERIMENTS
A. Training
For training our model, we take a random batch of 32 experiences from the replay buffer at each iteration. Our model has two convolutional neural networks (an online DQN and a target DQN) that share the same structure but are not synced automatically. The weights of the target network are frozen so that it cannot be trained. The state history from the mini-batch is fed into the online DQN. The DQN outputs the Q-values, Q(s_t, a_t).

Loss = [y_t − Q(s_t, a_t)]^2    (9)

The y_t is calculated from the target Q-network. We pass the next-state value to the target Q-network and, for each next state in the batch, we get the corresponding Q-value. That is the max_{a′} Q(s′, a′) term in the equation below:

y_t = R_{t+1} + γ max_{a′} Q(s′, a′)    (10)

The γ is the discount factor, which is one of the many hyperparameters we use in our model. Initially, we set the γ value to 0.99. The R_{t+1} is the reward in each experience tuple. From these values we obtain y_t. The loss function is formed by putting these values into (9). Then, we use this loss to backpropagate through our online DQN with the Adam optimizer. The Adam optimizer is used instead of classical stochastic gradient descent for more speed. The target DQN is synced with the online DQN every 10,000 steps. The values of the hyperparameters we choose are listed in Table VI.
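Putting (9) and (10) together, one training iteration can be sketched as follows. This assumes the Keras networks and the ReplayBuffer sketched earlier, and that the online network has been compiled with the Adam optimizer and a mean-squared-error loss; the per-action target assignment and variable names are illustrative implementation details, not taken from the paper.

import numpy as np

GAMMA = 0.99
BATCH_SIZE = 32
SYNC_EVERY = 10_000  # sync target DQN with online DQN every 10,000 steps

def train_step(online_dqn, target_dqn, buffer):
    batch = buffer.sample(BATCH_SIZE)
    states = np.array([b[0] for b in batch], dtype=np.float32)
    actions = np.array([b[1] for b in batch], dtype=np.int64)
    rewards = np.array([b[2] for b in batch], dtype=np.float32)
    next_states = np.array([b[3] for b in batch], dtype=np.float32)

    # y_t = R_{t+1} + gamma * max_a' Q'(s', a')   -- equation (10)
    next_q = target_dqn.predict(next_states, verbose=0)
    targets = online_dqn.predict(states, verbose=0)
    targets[np.arange(BATCH_SIZE), actions] = rewards + GAMMA * next_q.max(axis=1)

    # Minimize [y_t - Q(s_t, a_t)]^2 (equation (9)) with the Adam optimizer.
    online_dqn.fit(states, targets, batch_size=BATCH_SIZE, verbose=0)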
Fig. 5. Results of our agent playing the Snake game during training ((a) score vs. episode, (b) reward vs. episode).

Fig. 6. Results of the baseline DQN model playing the Snake game during training ((a) score vs. episode, (b) reward vs. episode).

B. Results and Comparisons
We allow the DRL agents to play 140,000 episodes of games to match the training results presented in [17]. We train one agent with our method and another with the DQN method presented in [6]; we refer to [6] as the baseline DQN model. Next, we compare our model with the baseline DQN model [6] and the refined DQN model [17]. The results of training the snake game with our model are shown in Fig. 5. Fig. 5(a) shows the game's score with our model during training. Fig. 5(b) shows that even though our reward mechanism is simpler than that of the refined DQN model, the agent maximizes the cumulative reward optimally. In Section III-F we showed that our model is more memory efficient than the baseline DQN model and the refined DQN model during training. In this section we show that despite low memory usage, our model can achieve similar if not better results than the baseline and refined DQN models.

Fig. 7. Comparison between our model and the baseline DQN model ((a) score comparison, (b) reward comparison).
Fig. 9. Testing evaluation by playing 50 random episodes ((a) Refined DQN score, taken from [17], (b) our model's score).

Fig. 6 displays the baseline DQN results during training on the snake game. In Fig. 7 we present the score and reward comparison between our model and the baseline DQN model. The blue line in Fig. 7(a) represents our model's score, and the purple line represents the score of the baseline DQN model. Over the 140,000 training episodes, our model remains better in terms of episode score even though it requires fewer resources. Fig. 7(b) demonstrates that our model is capable of achieving higher cumulative rewards than the baseline DQN model. We also compare the results between our model and the refined DQN model [17].
Refined DQN follows a dual experience replay memory architecture and a complex reward mechanism. However, our model surpasses their score. Since their game is similar to ours, we compare our results with the results provided in their paper. Fig. 8(a) shows the results presented in [17], and Fig. 8(b) shows our model's results during training.

TABLE V
PERFORMANCE COMPARISON OF DIFFERENT AGENTS

Performance            Score
Human Average          1.98 *
Baseline Average       0.26 *
Refined DQN Average    9.04 *
Our Average            9.53
Human Best             15 *
Baseline Best          2 *
Refined DQN Best       17 *
Our Best               20

* Data taken from [17].

[Training curves (Score and Reward vs. Episode, x1e5) for our model and the baseline DQN; see Figs. 6, 7, and 8(b).]

By comparing Fig. 8(a) and Fig. 8(b), we can safely say that our model achieves better scores despite having a simpler replay buffer, a simpler reward mechanism, and lower memory consumption. Fig. 9(a) and Fig. 9(b) show the scores of 50 random episodes during testing of the refined DQN and our model, respectively. Table V summarizes the scores of the refined DQN model and of our model: their refined DQN average is 9.04, while ours is 9.53, and their refined DQN best score is 17, while ours is 20. So our model performs better in both the training and testing phases.

TABLE VI
LIST OF HYPERPARAMETERS

Hyperparameter             Value      Description
Discount Factor            0.99       γ-value in the max Q-function
Initial Epsilon            1.0        Initial value of the exploration epsilon
Final Epsilon              0.01       Final value of the exploration epsilon
Batch Size                 32         Mini-batch size sampled from replay memory
Max Step                   10,000     Maximum number of steps allowed per episode
Learning Rate              0.0025     Learning rate for the Adam optimizer
Clip-Norm                  1.0        Clipping value for the Adam optimizer
Random Frames              50,000     Number of random initial steps
Epsilon Greedy Frames      500,000    Number of frames over which the initial epsilon decays to the final epsilon
Experience Replay Memory   50,000     Capacity of the experience replay memory
Update of DQN              4          Number of steps after which each update of the DQN takes place
Update Target DQN          10,000     Number of steps after which the target DQN and online DQN sync
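To make the settings in Table VI concrete, the snippet below sketches one way the exploration schedule and training cadence implied by these hyperparameters could be expressed in code. It is a minimal illustration in plain Python: the variable names, the dictionary layout, and the linear decay schedule are our own assumptions and are not taken from the authors' released implementation.

# Illustrative only: mirrors the values in Table VI, not the authors' code.
HPARAMS = {
    "discount_factor": 0.99,           # gamma used in the max Q-function target
    "epsilon_initial": 1.0,            # starting exploration rate
    "epsilon_final": 0.01,             # final exploration rate
    "batch_size": 32,                  # mini-batch drawn from replay memory
    "max_steps_per_episode": 10_000,
    "learning_rate": 0.0025,           # Adam optimizer step size
    "clip_norm": 1.0,                  # gradient clipping for Adam
    "random_frames": 50_000,           # purely random warm-up steps
    "epsilon_greedy_frames": 500_000,  # frames over which epsilon decays
    "replay_capacity": 50_000,         # small experience replay memory
    "update_every": 4,                 # steps between online-network updates
    "target_sync_every": 10_000,       # steps between target-network syncs
}

def epsilon_at(frame: int, hp: dict = HPARAMS) -> float:
    """Linearly anneal epsilon from its initial to its final value."""
    if frame < hp["random_frames"]:
        return hp["epsilon_initial"]   # act fully at random during warm-up
    progress = (frame - hp["random_frames"]) / hp["epsilon_greedy_frames"]
    decayed = hp["epsilon_initial"] - progress * (
        hp["epsilon_initial"] - hp["epsilon_final"])
    return max(hp["epsilon_final"], decayed)

if __name__ == "__main__":
    for frame in (0, 100_000, 600_000):
        print(frame, round(epsilon_at(frame), 3))

With these values, epsilon stays at 1.0 for the first 50,000 frames and, assuming the decay starts only after the random warm-up, reaches its final value of 0.01 roughly 550,000 frames into training.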
V. CONCLUSION
In this paper, we have shown that better image preprocessing and a better replay buffer mechanism can reduce the memory consumption of DRL algorithms during training. We have also demonstrated that, using our method, the performance of the DRL agent on a lower-constraint application is comparable, if not better. We combined our method with the DQN algorithm (with some modifications) to observe the method's effectiveness. Our presented design requires less memory and a simple CNN. We established that our method's results are as good as those of other DRL approaches for the snake game autonomous agent.

ACKNOWLEDGMENT
This work was supported by North South University research grant CTRG-21-SEPS-18. The authors gratefully acknowledge that the computing resources used in this work were housed at the National University of Sciences and Technology (NUST), Pakistan. The cooperation was pursued under the South Asia Regional Development Center (RDC) framework of the Belt & Road Aerospace Innovation Alliance (BRAIA).

REFERENCES
[1] C. J. C. H. Watkins and P. Dayan, "Q-learning," in Machine Learning, 1992, pp. 279-292.
[2] G. Tesauro, "Temporal difference learning and TD-Gammon," Commun. ACM, vol. 38, no. 3, pp. 58-68, Mar. 1995.
[3] R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour, "Policy gradient methods for reinforcement learning with function approximation," in Advances in Neural Information Processing Systems, S. Solla, T. Leen, and K. Müller, Eds., vol. 12. MIT Press, 1999.
[4] J. Peters, S. Vijayakumar, and S. Schaal, "Natural actor-critic," in Machine Learning: ECML 2005. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005, pp. 280-291.
[5] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller, "Deterministic policy gradient algorithms," in Proceedings of the 31st International Conference on Machine Learning - Volume 32, ser. ICML'14. JMLR.org, 2014, pp. I-387-I-395.
[6] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. A. Riedmiller, "Playing Atari with deep reinforcement learning," Computing Research Repository, vol. abs/1312.5602, 2013.
[7] V. Mnih, K. Kavukcuoglu, D. Silver, A. Rusu, J. Veness, M. Bellemare, A. Graves, M. Riedmiller, A. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, "Human-level control through deep reinforcement learning," Nature, vol. 518, pp. 529-533, Feb. 2015.
[8] H. v. Hasselt, A. Guez, and D. Silver, "Deep reinforcement learning with double Q-learning," in Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, ser. AAAI'16. AAAI Press, 2016, pp. 2094-2100.
[9] L.-J. Lin, "Self-improving reactive agents based on reinforcement learning, planning and teaching," Mach. Learn., vol. 8, no. 3-4, pp. 293-321, May 1992.
[10] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, "Continuous control with deep reinforcement learning," Computing Research Repository, 2019.
[11] S. Li, Y. Wu, X. Cui, H. Dong, F. Fang, and S. Russell, "Robust multi-agent reinforcement learning via minimax deep deterministic policy gradient," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, pp. 4213-4220, Jul. 2019.
[12] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor," in ICML, ser. Proceedings of Machine Learning Research, vol. 80. PMLR, 2018, pp. 1856-1865.
[13] T. Schaul, J. Quan, I. Antonoglou, and D. Silver, "Prioritized experience replay," 2015. [Online]. Available: https://arxiv.org/abs/1511.05952
[14] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, O. Pieter Abbeel, and W. Zaremba, "Hindsight experience replay," in Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds., vol. 30. Curran Associates, Inc., 2017.
[15] S. Zhang and R. S. Sutton, "A deeper look at experience replay," Computing Research Repository, vol. abs/1712.01275, 2017.
[16] H. Hasselt, "Double Q-learning," in Advances in Neural Information Processing Systems, J. Lafferty, C. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, Eds., vol. 23. Curran Associates, Inc., 2010.
[17] Z. Wei, D. Wang, M. Zhang, A.-H. Tan, C. Miao, and Y. Zhou, "Autonomous agents in snake game via deep reinforcement learning," in 2018 IEEE International Conference on Agents (ICA), 2018, pp. 20-25.
[18] T. D. Nguyen, K. Mori, and R. Thawonmas, "Image colorization using a deep convolutional neural network," Computing Research Repository, vol. abs/1604.07904, 2016.
[19] A. Punyawee, C. Panumate, and H. Iida, "Finding comfortable settings of snake game using game refinement measurement," in Advances in Computer Science and Ubiquitous Computing. Singapore: Springer Singapore, 2017, pp. 66-73.
[20] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, Y.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' Bengio and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=' LeCun, Eds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} +page_content=', 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dFLT4oBgHgl3EQfCy73/content/2301.11977v1.pdf'} diff --git a/-tAzT4oBgHgl3EQfvf0n/vector_store/index.faiss b/-tAzT4oBgHgl3EQfvf0n/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..8e5f4aa641de0d40f2d1db0db549329866073798 --- /dev/null +++ b/-tAzT4oBgHgl3EQfvf0n/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2e052a07b2202f6af43331e81e092230447f957a77064d6fa520cf89fb631dc +size 6160429 diff --git a/.gitattributes b/.gitattributes index f1a1067acf51ce3f615094c798582eb132928c19..8fa725f4cd78013133c2c970a6a21601c8278105 100644 --- a/.gitattributes +++ b/.gitattributes @@ -4181,3 +4181,64 @@ DNE2T4oBgHgl3EQfoQhP/content/2301.04016v1.pdf filter=lfs diff=lfs merge=lfs -tex btAzT4oBgHgl3EQfLfs5/content/2301.01114v1.pdf filter=lfs diff=lfs merge=lfs -text gNE0T4oBgHgl3EQfpAFc/content/2301.02533v1.pdf filter=lfs diff=lfs merge=lfs -text odE1T4oBgHgl3EQf1wVk/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +5tE5T4oBgHgl3EQfPQ4_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +iNE2T4oBgHgl3EQfHwaf/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +TtAzT4oBgHgl3EQfXvxz/content/2301.01323v1.pdf filter=lfs diff=lfs merge=lfs -text +7NE4T4oBgHgl3EQfcgzB/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +EtE1T4oBgHgl3EQfEgOL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +ndE2T4oBgHgl3EQfewdN/content/2301.03919v1.pdf filter=lfs diff=lfs merge=lfs -text +yNFST4oBgHgl3EQfTDh5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +oNE1T4oBgHgl3EQfOwM_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +-tAzT4oBgHgl3EQfvf0n/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +gtE1T4oBgHgl3EQfzAWY/content/2301.03440v1.pdf filter=lfs diff=lfs merge=lfs -text +SNFJT4oBgHgl3EQfLSwz/content/2301.11468v1.pdf filter=lfs diff=lfs merge=lfs -text +mtE1T4oBgHgl3EQf1AXN/content/2301.03464v1.pdf filter=lfs diff=lfs merge=lfs -text +htE0T4oBgHgl3EQfXwCw/content/2301.02298v1.pdf filter=lfs diff=lfs merge=lfs -text +TtAzT4oBgHgl3EQfXvxz/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +otAyT4oBgHgl3EQfzPl9/content/2301.00698v1.pdf filter=lfs diff=lfs merge=lfs -text +gNE0T4oBgHgl3EQfpAFc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +ctE3T4oBgHgl3EQfeAqc/content/2301.04540v1.pdf filter=lfs diff=lfs merge=lfs -text +9tAzT4oBgHgl3EQf-_7r/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +gtE1T4oBgHgl3EQfzAWY/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +QtE1T4oBgHgl3EQfHQOA/content/2301.02924v1.pdf filter=lfs diff=lfs merge=lfs -text +P9E5T4oBgHgl3EQfZA8C/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +lNE1T4oBgHgl3EQfNwNu/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +ttE_T4oBgHgl3EQf9hxa/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +k9AzT4oBgHgl3EQfNfvd/content/2301.01151v1.pdf filter=lfs diff=lfs merge=lfs -text 
+QtE1T4oBgHgl3EQfHQOA/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +P9E5T4oBgHgl3EQfZA8C/content/2301.05577v1.pdf filter=lfs diff=lfs merge=lfs -text +8dE4T4oBgHgl3EQfdQww/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +i9E0T4oBgHgl3EQfYQAP/content/2301.02303v1.pdf filter=lfs diff=lfs merge=lfs -text +eNE0T4oBgHgl3EQfWwDh/content/2301.02284v1.pdf filter=lfs diff=lfs merge=lfs -text +9NE0T4oBgHgl3EQffwBQ/content/2301.02408v1.pdf filter=lfs diff=lfs merge=lfs -text +9NE0T4oBgHgl3EQffwBQ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +JNE2T4oBgHgl3EQf_wld/content/2301.04251v1.pdf filter=lfs diff=lfs merge=lfs -text +htE0T4oBgHgl3EQfXwCw/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +i9AzT4oBgHgl3EQfM_vp/content/2301.01143v1.pdf filter=lfs diff=lfs merge=lfs -text +UtFJT4oBgHgl3EQfNixZ/content/2301.11478v1.pdf filter=lfs diff=lfs merge=lfs -text +39E3T4oBgHgl3EQfQAlh/content/2301.04408v1.pdf filter=lfs diff=lfs merge=lfs -text +mtE1T4oBgHgl3EQf1AXN/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +ItFIT4oBgHgl3EQfYivh/content/2301.11249v1.pdf filter=lfs diff=lfs merge=lfs -text +otAyT4oBgHgl3EQfzPl9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +SNFJT4oBgHgl3EQfLSwz/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +3NAzT4oBgHgl3EQf9P6r/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +H9E3T4oBgHgl3EQfuQuc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +4tE2T4oBgHgl3EQfOQbV/content/2301.03747v1.pdf filter=lfs diff=lfs merge=lfs -text +H9E3T4oBgHgl3EQfuQuc/content/2301.04683v1.pdf filter=lfs diff=lfs merge=lfs -text +TdE3T4oBgHgl3EQfzguF/content/2301.04729v1.pdf filter=lfs diff=lfs merge=lfs -text +UtFJT4oBgHgl3EQfNixZ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +RdFQT4oBgHgl3EQfajZo/content/2301.13320v1.pdf filter=lfs diff=lfs merge=lfs -text +yNE3T4oBgHgl3EQf_wth/content/2301.04837v1.pdf filter=lfs diff=lfs merge=lfs -text +YdFPT4oBgHgl3EQftTXv/content/2301.13152v1.pdf filter=lfs diff=lfs merge=lfs -text +ltFLT4oBgHgl3EQfeC95/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +i9E0T4oBgHgl3EQfYQAP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +CtAzT4oBgHgl3EQfwf4q/content/2301.01722v1.pdf filter=lfs diff=lfs merge=lfs -text +W9E2T4oBgHgl3EQfYQe0/content/2301.03853v1.pdf filter=lfs diff=lfs merge=lfs -text +2NE1T4oBgHgl3EQfAAJE/content/2301.02833v1.pdf filter=lfs diff=lfs merge=lfs -text +39E3T4oBgHgl3EQfQAlh/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +_NAyT4oBgHgl3EQfRfYM/content/2301.00065v1.pdf filter=lfs diff=lfs merge=lfs -text +yNE3T4oBgHgl3EQf_wth/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +ANFAT4oBgHgl3EQfrR6g/content/2301.08652v1.pdf filter=lfs diff=lfs merge=lfs -text +eNE0T4oBgHgl3EQfWwDh/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +stE5T4oBgHgl3EQfmQ9N/content/2301.05677v1.pdf filter=lfs diff=lfs merge=lfs -text +TdE3T4oBgHgl3EQfzguF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text diff --git a/19AzT4oBgHgl3EQf8_5O/content/tmp_files/2301.01912v1.pdf.txt b/19AzT4oBgHgl3EQf8_5O/content/tmp_files/2301.01912v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..e87a92eff7bfb6ddb25d5c2f32774e9941b802c2 --- /dev/null +++ b/19AzT4oBgHgl3EQf8_5O/content/tmp_files/2301.01912v1.pdf.txt @@ -0,0 +1,1268 @@ +Observation of room temperature anomalous +Hall effect in graphene-WSe2 heterostructures +Priya Tiwari1†, Divya Sahani1†, Atasi 
Chakraborty2, Kamal Das2, Kenji +Watanabe3, Takashi Taniguchi4, Amit Agarwal2∗, and Aveek Bid1∗ +1Department of Physics, Indian Institute of Science, Bangalore 560012, India +2 Department of Physics, Indian Institute of Technology Kanpur, Kanpur-208016, India +3 Research Center for Functional Materials, National Institute for Materials Science, 1-1 Namiki, +Tsukuba 305-0044, Japan +4 International Center for Materials Nanoarchitectonics, National Institute for Materials Science, +1-1 Namiki, Tsukuba 305-0044, Japan +† These authors contributed equally. +E-mail: amitag@iitk.ac.in,aveek@iisc.ac.in +Abstract +Proximity-induced spin–orbit coupling in graphene offers an exciting platform to probe +spin-based effects in chiral Dirac fermionic systems. These systems are believed to be intrinsically +time-reversal symmetric, which should ensure that the charge Hall response vanishes without +a magnetic field. In contrast to this expectation, we report the first observation of anomalous +Hall effect (AHE) in single-layer graphene/single-layer WSe2 heterostructures that persists +up to room temperature. The magnitude and the sign of the AHE can be tuned using an +external perpendicular electric field. Our joint experimental and theoretical study establishes +that the observed anomalous Hall signal arises from the combined effect of strain and spin- +orbit coupling in graphene, which induces time-reversal symmetry breaking and manifests +1 +arXiv:2301.01912v1 [cond-mat.mes-hall] 5 Jan 2023 + +as a valley asymmetry. Our observation broadens the prospects of realizing high-temperature +anomalous Hall effects in a completely new system, namely graphene-transition metal dichalcogenide- +based heterostructures. +Introduction +Topological and band geometric effects in two-dimensional systems have attracted significant +attention due to their fascinating physics and potential applications in spintronics and novel electronic +devices1–5. Graphene-based heterostructures offer one such exciting platform for studying band +geometric effects. The coupling to the charge, spin, and valley degrees of freedom in graphene +gives rise to, among other things, a multitude of Hall effects such as the spin Hall6–9, and the +valley Hall effects10–15. A possible common origin of these effects is the emergence of a non- +trivial Berry curvature on breaking the inversion symmetry, which induces opposite anomalous +velocity in the two valleys of graphene16–18. Note that in the absence of exchange interactions, +time-reversal symmetry (TRS) forces the Berry curvatures at the K and K′ valleys to be equal and +opposite Ωz(K) = −Ωz(K′), causing signatures of the anomalous Hall effect (AHE) in the charge +sector to vanish19. +Several other unconventional Hall effects have been predicted and explored in graphene. Some +prominent examples include the nonlinear anomalous Hall effect20–23, layer contrasted Hall effect3,24, +and linear Hall effect in corrugated systems25. The study in corrugated systems is particularly +fascinating as it demonstrates the appearance of a linear Hall response even under time-reversal +symmetric conditions for systems with tilted bands in a reduced-symmetry scenario. More recently, +AHE has been observed in graphene-based moiré heterostructures at half- or quarter-filling of the +bands owing to the spontaneously broken time-reversal symmetry and magnetization arising from +the enhancement of the exchange interactions by the large density of states of the flat bands26–33. 
+Several studies have reported extrinsic AHE in graphene where suitable dopants or magnetic +substrate induce an exchange interaction (see for example 15,34,35). However, despite being a testbed +2 + +for band geometric effects, the observation of intrinsic AHE in graphene-based non-magnetic +heterostructures remains rare. +In this letter, we report the observation of a large linear AHE originating from lifting the valley- +degeneracy in the high-mobility heterostructures of single-layer graphene (SLG) with proximity- +induced spin-orbit coupling (SOC) from single-layer WSe2. We find that the dependence of the +transverse resistance at a zero magnetic field Rxy(B = 0) on the charge carrier density mimics the +finite B-field classical Hall signal in graphene and is observed up to room temperature. +Single-layer WSe2 used as a substrate influences the graphene bands in two significant ways. The +first of these is well studied: Graphene on WSe2 possesses spin-split bands owing to the Ising-like +SOC, which gives rise to the spin Hall effect36–38. The second effect, equally vital for our purposes +but ill-explored to date, is the appearance of a substantial lateral strain in the graphene layer. We +propose that the combined effect of this proximity-induced SOC and lattice-induced strain lifts the +valley-degeneracy in graphene, leading to the appearance of the AHE signal near the Dirac point. +We establish that the AHE is zero in the absence of the WSe2 layer. Note that previous studies +on the SLG-WSe2 heterostructure (or graphene on transition metal dichalcogenides in general) +focused primarily on the spin aspects of the transport36,37,39–41 where a non-local signal is measured +as a signature of the spin Hall effect and weak (anti-) localization measurements were used to +quantify the spin-orbit coupling strength38,42–47. Interestingly, these studies did not probe the finite +Hall effect without a magnetic field. This makes our observation of AHE in this system unique. +Results +Device characteristics +Heterostructures of SLG and single-layer WSe2, encapsulated by crystalline hexagonal boron +nitrate (hBN), were fabricated using a dry transfer technique48,49. +One-dimensional electrical +contacts were formed by electron beam lithography, followed by etching (using a mixture of CHF3 +3 + +and O2) and deposition of 5 nm/60 nm Cr/Au contacts and top-gate electrode (see Section S3 +Supplementary Information for details). A schematic of the device structure is shown in Fig. 1(a), +and an optical image of the device is shown in Fig. 1(b). The dual-gated architecture of the devices +allows independent control of the charge-carrier density n and the vertical displacement field D; +n= (CtgVtg + CbgVbg)/e − n0 and D = (CtgVtg − CbgVbg)/2ϵ0 − D0. Here Cbg (Ctg) is the +capacitance per unit area of the back-gate (top-gate), Vbg (Vtg) is the back-gate (top-gate) bias. +n0 and D0 are the residual charge carrier density and residual vertical displacement field induced +by impurities in the device channel. +Electrical transport measurements were performed at 10 nA source-drain current using low-frequency +lock-in detection techniques. All data were obtained at 20 mK unless specified otherwise. The +measurements were performed on multiple devices; the results were similar. In the main manuscript, +we present the data from a single device, SW1. The data from another device, SW2, are shown in +the Supplementary Information. 
A map of the measured longitudinal conductance Gxx as a function of charge carrier density n and perpendicular magnetic field B is shown in Fig. 1(c). The appearance of broken-symmetry quantum Hall states at low B-fields implies a complete lifting of the spin and valley degeneracies in the SLG bands. The splitting of the spin-degenerate bands in SLG (shown schematically in Fig. 1(f)) is also evident from the beating pattern seen in the Shubnikov-de Haas oscillations [Fig. 1(d)] and the double periodicity in the corresponding Fourier spectrum [Fig. 1(e)]. Fig. 1(g) is a representation of the lifting of the valley degeneracy; the valley-splitting energy scale is marked as ∆vs. The lifting of the spin- and valley-degeneracies in the band dispersion (along with the high field-effect mobility µ ∼ 140,000 cm²V⁻¹s⁻¹ of the device) shows that the graphene-WSe2 interface is atomically clean, with significant interfacial coupling and minimal random potential fluctuations.

Room temperature anomalous Hall effect at B = 0 T
In Fig. 2(a), we present the data for the longitudinal resistance, Rxx (left axis, red line), and the transverse resistance, Rxy (right axis, blue line), measured at B = 0 T. We observe a finite Rxy signal in a narrow range of charge carrier densities ∆n = ±10¹⁵ m⁻² centered about the charge neutrality point, a feature conspicuously absent in hBN/graphene/hBN heterostructures. Rxy changes sign across the charge neutrality point: it is positive for n < 0 (hole band) and negative for n > 0 (electron band). The current independence of Rxy establishes it to be a linear anomalous Hall effect (see Fig. 2(c) for the data at two representative values of current, 30 nA and 120 nA). The finite Rxy(B = 0) survives at least to room temperature with diminished amplitude, as shown in Figs. 2(b) and (d). This observation of a room temperature B = 0 anomalous Hall effect in hBN/graphene/WSe2/hBN heterostructures is the central result of this letter.
We find the nonlinear anomalous Hall resistance (quantified by the second-harmonic R2ω_xy signal) to be negligibly small for our device (Fig. S5 of the Supplementary Information). To establish that the absence of the second-harmonic signal is not an experimental artifact, we present in the same figure data from similar measurements on hBN/graphene moiré devices, where a small but finite nonlinear signal does show up in the measured R2ω_xy near the primary Dirac point, as per previous reports50. Note also that the data for Rxy(B = 0) were reproduced in cryostats without a superconducting magnet, ruling out the remnant field of a magnet as the origin of the AHE.
We attribute the observed zero-field anomalous Hall effect (AHE) to an effective time-reversal symmetry breaking of the system captured by valley splitting. In the presence of time-reversal symmetry, the anomalous Hall conductivity, defined as σxy = −(e²/ℏ) ∫ [d²k/(2π)²] Ωz f(k), vanishes. Here f(k) is the Fermi distribution function. The vanishing of the AHE can be understood by recalling that, since Ωz(K) = −Ωz(K′) in the presence of time-reversal symmetry, the contributions of the two valleys to the AHE are equal and opposite, making the total AHE zero. However, on breaking the valley degeneracy, the valleys have different fillings, as shown in Fig. 2(e). In this case, the resulting total anomalous Hall response is finite.
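This cancellation argument can be made concrete with a toy model. The Python sketch below is not the calculation used in this work (that calculation is described in the Supplementary Information); it takes a gapped Dirac Hamiltonian for each valley, evaluates the Berry curvature of its bands numerically, and sums Ωz over the occupied states. With equal valley fillings the K and K′ contributions cancel, while rigidly shifting one valley in energy, as a crude stand-in for valley splitting, leaves a finite σxy. All parameters are illustrative.

# Toy illustration: sigma_xy of two gapped Dirac valleys with and without valley splitting.
# Parameters are illustrative (arbitrary units), not fitted to the experiment.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

hv = 1.0        # hbar * v_F (energy x length)
gap = 0.05      # Dirac mass term (half of the band gap)
mu = 0.08       # chemical potential, placed in the conduction band

def sigma_xy(delta_vs, nk=201, kmax=0.6):
    """sigma_xy in units of e^2/h; delta_vs rigidly shifts the K' valley in energy."""
    ks = np.linspace(-kmax, kmax, nk)
    dk = ks[1] - ks[0]
    total = 0.0
    for xi, shift in ((+1, 0.0), (-1, delta_vs)):        # K and K' valleys
        dHx, dHy = hv * xi * sx, hv * sy                  # dH/dkx and dH/dky
        for kx in ks:
            for ky in ks:
                H = hv * (xi * kx * sx + ky * sy) + gap * sz + shift * np.eye(2)
                E, U = np.linalg.eigh(H)
                vx = U.conj().T @ dHx @ U
                vy = U.conj().T @ dHy @ U
                # Berry curvature of the lower band; the upper band carries the opposite value
                om = -2.0 * np.imag(vx[0, 1] * vy[1, 0]) / (E[0] - E[1]) ** 2
                occ = (E < mu).astype(float)              # zero-temperature occupations
                total += (om * occ[0] - om * occ[1]) * dk * dk / (2 * np.pi) ** 2
    return -2.0 * np.pi * total                           # -(e^2/hbar)*sum, expressed in e^2/h

print("equal valley filling :", round(sigma_xy(0.00), 4))   # close to zero by symmetry
print("valley-split filling :", round(sigma_xy(0.02), 4))   # finite anomalous Hall response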
We calculate this non-zero AHE explicitly for the graphene-WSe2 heterostructure (see the Supplementary Information for the details of the calculation), and the theoretical results for the Hall conductivity (which has the opposite sign to the Hall resistivity) are shown in Fig. 2(f). Our calculations capture the existence of a zero-field AHE in the valley-split graphene-WSe2 device, along with the sign reversal of the AHE on going from the hole (valence) band to the electron (conduction) band. We emphasize that here we aim for a qualitative match with the experimental data, as the microscopic origin of the valley splitting (and hence the magnitude of the split) is not evident.
The valley polarization can arise from different physical mechanisms such as enhanced impurity-induced inter-valley scattering, selective exchange coupling of the two valleys, or non-periodic lattice deformations51–54. However, we do not find evidence of valley splitting or a finite AHE in hBN/graphene/hBN devices without the intervening WSe2 layer. Thus, the valley-specific asymmetry must be induced by the WSe2-graphene combination. The lattice constant of graphene is ∼ 2.46 Å, while that of WSe2 is ∼ 3.27 Å. The large lattice mismatch generates a significant strain across the graphene flake as the heterostructure relaxes to its stable ground state. From Raman spectroscopy, we estimate the magnitude of the strain on the SLG layer in our hBN/SLG/WSe2/hBN heterostructure to be ≈ 0.15%–0.20% (see Section S6 of the Supplementary Information). This combination of strain and spin-orbit coupling plausibly lifts the valley degeneracy. While the microscopic origin of the valley splitting is not completely clear, we model it by shifting the two valleys in energy, as indicated in Fig. 1(f).

Hall response with vertical displacement and magnetic field
Having demonstrated the AHE, we now focus on the dependence of the AHE on a perpendicular displacement field D (Fig. 3). It is illuminating to map the transverse zero-B-field resistance Rxy(B = 0) in the n − D plane (Fig. 3(a)). The plot shows Rxy(B = 0) to be finite only at the band edges, consistent with the idea of the Berry curvature hot spots lying in the vicinity of the band edges. This can be seen clearly in the line plots of Rxy(B = 0) for different values of D shown in Fig. 3(b). Note that the plots are vertically offset by 200 Ω for clarity. The measured Rxy(B = 0) has an intriguing D dependence; it changes its sign as the direction of D flips [Fig. 3(a-b)]. To understand this, we analyze the dependence of the Berry curvature near the band edges on the direction of D. Our theoretical calculations show that as the polarity of D changes, the Berry curvature near the band edges changes sign. Consequently, the sign of the anomalous Hall voltage (determined by the sign of the Berry curvature) in the SLG/WSe2 heterostructure flips. This is reminiscent of the change in the sign of the gap in bilayer graphene on flipping the direction of D, which changes the sign of the Berry curvature.
Measurements in a finite magnetic field B applied perpendicular to the device interface (see Section S5 of the Supplementary Information) reveal the interplay between the classical Hall effect and the B = 0 AHE. The data smoothly crosses over from the anomalous Hall phase at B = 0 to the conventional Hall phase at finite B-field, with an anti-crossing feature. This feature resembles the planar Hall effect in corrugated bilayer graphene25.
A non-zero intercept of the plot of Rxy versus B [shown for a fixed n in Fig. 3(c)] on the B-axis captures the AHE. We note that Rxy is non-hysteretic in the presence of a small non-quantizing magnetic field (see Section S7 of the Supplementary Information), ruling out emergent ferromagnetism in the system.
In Fig. 4(a), we present a plot of Rxx in the n − D plane measured at B = 0. We observe that, with increasing D, the resistance peak at the charge neutrality point splits into two maxima. This feature can be better appreciated from Fig. 4(b), where we show individual plots of Rxx(B = 0) versus n at several representative values of D. At higher values of |D|, we find two distinct peaks in Rxx separated by a shallow valley. Such a displacement-field-dependent dispersion of the bands near the Dirac point is not captured by the existing models for graphene/WSe2 heterostructures42,55–61. To remedy this, we construct a new model Hamiltonian for the graphene/WSe2 system, retaining both the WSe2 and the graphene Hamiltonian blocks, which allows us to include the impact of a vertical displacement field systematically (see Sections S1 and S2 of the Supplementary Information for details). Fig. 4(c) is a plot of the theoretically calculated σxx as a function of the chemical potential: the panels show the splitting of the conductivity minimum into two asymmetric conductivity minima at finite D. Our model thus reproduces the prominent features of σxx both at zero displacement field55,57 and at a finite D, along with the observed AHE.

Discussion
To summarize, we report the first observation of a room temperature anomalous Hall effect in graphene/WSe2 heterostructures. Such heterostructures are primarily known for their promising spintronic properties, and their charge Hall response was expected to be relatively mundane. Contrary to this, we show that the dual effect of spin-orbit coupling and strain in the system gives rise to time-reversal symmetry breaking through valley splitting. Combined with a finite Berry curvature, this results in a finite anomalous Hall effect in the system. The anomalous Hall response persists at least to room temperature and features a unique perpendicular electric field tunability. Our work establishes the graphene-WSe2 heterostructure as an excellent platform for further exploration of the band geometry-induced interplay of charge, spin, and valley responses in two-dimensional systems.

AUTHOR INFORMATION
Author Contributions
A.B., P.T., and D.S. conceptualized the study, performed the measurements, and analyzed the data. A.A., A.C., and K.D. performed the theoretical analysis. K.W. and T.T. grew the hBN single crystals. All the authors contributed to preparing the manuscript.
Notes
The authors declare no competing financial interest.

Acknowledgement
A.B. acknowledges funding from the DST FIST program, DST fellowship (DST/SJF/PSA01/2016-17), and US Army DECVCOM and ITC IPAC (project: FA520922P0166/2232). A.C. acknowledges the Indian Institute of Technology, Kanpur, and the Science and Engineering Research Board (SERB) National Postdoctoral Fellowship (PDF/2021/000346), India, for financial support. A.A. acknowledges the Science and Engineering Research Board for Project No. MTR/2019/001520, and the Department of Science and Technology for Project No. DST/NM/TUE/QM-6/2019(G)-IIT Kanpur of the Government of India, for funding.
K.W. and T.T. acknowledge support from JSPS KAKENHI (Grant Numbers 19H05790, 20H00354, and 21H05233).

Supporting Information Available
Supporting information contains detailed discussions of (a) the model Hamiltonian of the graphene/WSe2 heterostructure, (b) the anomalous Hall effect and Drude conductivity, (c) data from other devices, and (d) device fabrication and characterization details.

Figure 1: Device characteristics and band dispersion: (a) Schematic of the graphene/WSe2 layers encapsulated in hBN illustrating the sequence of crystal stacking. (b) Optical image of the device. (c) Map of the longitudinal conductance Gxx(B) with varying carrier density n and perpendicular magnetic field B at T ∼ 20 mK. The thicker dashed lines correspond to the signature plateaus of single-layer graphene. Thinner lines mark the broken-symmetry phases indicating complete lifting of the spin and valley degeneracies at low B. (d) SdH oscillations versus 1/B at Vbg = −40 V. (e) Fourier spectrum of the SdH oscillations; two peaks are distinctly visible, establishing the presence of two Fermi surfaces. (f) Schematic of the band dispersion of the K valley of monolayer graphene (left panel) and of the graphene-on-WSe2 heterostructure (right panel). The WSe2 layer essentially lifts the spin degeneracy of the low-lying energy bands and opens up a gap at the Fermi energy. (g) The impact of valley splitting (denoted by ∆vs) on the band structure of the K (left) and the K′ (right) valleys of the graphene/WSe2 heterostructure. The color map of the lines indicates the Berry curvature, which is concentrated near the band edges.

Figure 2: Anomalous Hall effect: (a) Plots of the zero magnetic-field longitudinal resistance Rxx(B = 0) (left axis, red line) and zero magnetic-field transverse resistance Rxy(B = 0) (right axis, blue line) versus n; the data were measured at T = 20 mK. (b) Rxy(B = 0) response as a function of n at a few representative values of temperature; the AHE persists up to 300 K. (c) Plot of Rxy(B = 0) as a function of n for two different values of electrical current; the data were taken at T = 142 K. (d) Plot of the peak value of Rxy(B = 0) versus T. The dotted line is a guide to the eye. (e) The bell-shaped surface represents the opposite Berry curvatures of the two valleys. The positions of the Fermi surfaces for the K and K′ valleys (indicated by the black circle) differ due to the valley population imbalance. The top insets show a schematic of the Dirac crossing for the K and K′ valleys of the effective graphene sector. The valley splitting introduces a population imbalance between the two valleys of the Dirac cones. (f) Theoretically calculated anomalous Hall conductivity (σxy ∝ −ρxy) in the absence (black dashed line) and in the presence (solid lines) of valley splitting (∆vs ∼ 4 meV). The y-axis is scaled with respect to σ0 ≡ e²/h. The increase in temperature diminishes the height of the σxy peak.
Figure 3: Dependence of the transverse resistance Rxy on D and B. (a) A two-dimensional contour map of Rxy(B = 0) plotted in the n − D plane. (b) Plots of Rxy(B = 0) versus n for different values of D. The data have been vertically shifted by 200 Ω for clarity. The dashed horizontal line for each plot marks the zero of Rxy(B = 0). (c) A representative plot of Rxy versus B measured at n = −0.18 × 10¹⁶ m⁻²; an arrow marks the value of the anomalous Hall resistance.

Figure 4: Dependence of Rxx(B = 0) on D. (a) A two-dimensional contour map of Rxx(B = 0) plotted in the n − D plane. (b) Plots of Rxx(B = 0) versus n for different values of D. The data have been vertically shifted by 1 kΩ for clarity. The dashed horizontal line for each plot is the zero of the y-axis. (c) Variation of the calculated Drude conductivity σxx with energy µ for three different values of the interlayer potential induced by the applied electric field, ∆ = 300 meV (red line), 0 meV (blue line), and −300 meV (green line), respectively. The values of σxx have been scaled by σv, where σv = e²τ/4π²ℏ².

Supplementary Information

Model Hamiltonian of the Graphene/WSe2 heterostructure
In this section, we construct the low energy model Hamiltonian of monolayer graphene on a WSe2 layer. Going beyond the effective graphene model reported in recent literature55,57,62, we explicitly solve for the composite low energy Hamiltonian of the graphene-WSe2 heterostructure to capture the effect of the perpendicular electric field correctly. We solve the following low-energy Hamiltonian,

H_{\mathrm{tot}} = \begin{pmatrix} H^{g}_{k} & H_{t} \\ H^{\dagger}_{t} & H^{ws}_{\mathrm{tot}} \end{pmatrix} + H_{\perp} .   (1)

Here, H^g_k and H^ws_tot are the onsite Hamiltonians for graphene and WSe2, respectively.
The interaction between the graphene and WSe2 layers has been included through the spin- and valley-conserving off-diagonal hopping H_t. The effect of the perpendicular electric field is captured through the diagonal matrix H_⊥.
We consider the monolayer of WSe2 in the x-y plane in the presence of intrinsic spin-orbit coupling (SOC), H^ws_sym, and a spin Zeeman field, ∆^ws_0. In addition, a finite Rashba SOC term, H^ws_R, is also considered within the WSe2 sector. Including all these effects, the two-dimensional extended Dirac Hamiltonian H^ws_tot of the WSe2 monolayer can be written as

H^{ws}_{\mathrm{tot}} = H^{ws}_{k} + H^{ws}_{\mathrm{sym}} + H^{ws}_{R} .   (2)

The explicit forms of each term are as follows,

H^{ws}_{k} = v^{ws}_{F}\,[\xi \sigma_x k_x + \sigma_y k_y] + \Delta^{ws}_{0}\,\sigma_z ,
H_{\mathrm{sym}} = \tfrac{1}{2}\,[\lambda_c(\sigma_z + \sigma_0) + \lambda_v(\sigma_z - \sigma_0)] ,
H^{ws}_{R} = \lambda_R\,[\xi \sigma_x S_y - \sigma_y S_x] ,   (3)

where ξ = ±1 for the K and K′ valleys, respectively. As the two degenerate but inequivalent valleys (K and K′) of monolayer WSe2 are separated by a large momentum, we can split the total Hamiltonian into two valley-specific parts. Here, we have considered v^ws_F ≡ 1.83 eV·Å as the Fermi velocity of WSe2. ∆0 represents the mass term that breaks the inversion symmetry. Here, λc and λv correspond to the SOC strengths of the conduction and valence bands. In general, the valence band of WSe2 (λv ∼ 112.5 meV) possesses a larger SOC strength than the conduction band (λc ∼ 7.5 meV), promoting a relatively larger splitting in the valence band63. For simplicity of the calculation, we choose the SOC strengths of both the conduction and valence bands to be equal, λc = λv = 7.5 meV. We set ∆0 = 250 meV, which induces a large gap between the conduction and valence bands of WSe2. To model the low energy physics of graphene, we choose a valley-specific Hamiltonian of the following form,

H^{g}_{k} = v^{g}_{F}\,[\xi \sigma_x k_x + \sigma_y k_y] .   (4)

Here, v^g_F = 3.46 eV·Å is the Fermi velocity of graphene. Equation (4) represents a gapless Dirac dispersion for the graphene sector. The coupling between the two layers is captured by

H_{t} = t \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \sigma_0 .   (5)

For our calculation, we set the hopping strength t = 50 meV. The proximity effect of the WSe2 layer essentially opens up a gap at the Dirac crossing of the graphene bands. The induced band gap of graphene gets enhanced with an increase in the hopping strength. The effect of the external perpendicular electric field is introduced by adding a diagonal Hamiltonian,

H_{\perp} = \begin{pmatrix} \Delta I & 0 \\ 0 & -\Delta I \end{pmatrix} .   (6)

Figure 5 shows the evolution of the band dispersion with a perpendicular electric field. The band dispersion essentially undergoes an insulator-to-metal transition with the electric field (see Fig. 5).

Figure 5: Impact of the electric field on the band structure of the graphene/WSe2 heterostructure. (a), (b) and (c) show the band dispersion in the presence of electric field values ∆ = 300 meV, 0 meV, and −300 meV, respectively. The external electric field changes the low energy band dispersion of the composite graphene-WSe2 heterostructure, inducing a metal-insulator transition.

Anomalous Hall effect and Drude conductivity
We attribute the observed Hall effect to the anomalous Hall effect induced by the Berry curvature. The anomalous Hall conductivity of the system is defined as

\sigma_{xy} = -\frac{e^2}{\hbar} \sum_{n,\xi} \int \frac{dk_x\, dk_y}{(2\pi)^2}\, \Omega^{n,\xi}_{z}\, f^{n,\xi} ,   (7)

where n is the band index.
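To make the structure of Eqs. (1)-(6) concrete, the following minimal Python sketch assembles a simplified, spinless version of H_tot and diagonalizes it at the Dirac point. The spin-dependent terms of Eqs. (2)-(3) (the λc, λv and Rashba pieces) are deliberately dropped, so this is only an illustration of the block construction and of the proximity-induced gap; it is not intended to reproduce the full band structure or the transport quantities of Eqs. (7)-(8).

# Simplified, spinless sketch of the composite Hamiltonian of Eq. (1).
# Parameter values follow the text; the SOC and Rashba terms are omitted here.
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

vg, vws = 3.46, 1.83    # hbar*v_F of graphene and WSe2 (eV.Angstrom), as quoted above
D0 = 0.250              # WSe2 mass term Delta_0 (eV)
t = 0.050               # interlayer hopping of Eq. (5) (eV)

def H_tot(kx, ky, xi=+1, Delta=0.0):
    """Spinless 4x4 analogue of Eq. (1): graphene block, WSe2 block, coupling, field."""
    Hg = vg * (xi * kx * sx + ky * sy)               # Eq. (4)
    Hws = vws * (xi * kx * sx + ky * sy) + D0 * sz   # H^ws_k of Eq. (3); H_sym and Rashba dropped
    Ht = t * s0                                      # off-diagonal block, cf. Eq. (5)
    Hperp = Delta * np.kron(sz, s0)                  # Eq. (6): +Delta on graphene, -Delta on WSe2
    return np.block([[Hg, Ht], [Ht.conj().T, Hws]]) + Hperp

for Delta in (-0.3, 0.0, 0.3):
    E = np.linalg.eigvalsh(H_tot(0.0, 0.0, xi=+1, Delta=Delta))
    print(f"Delta = {Delta:+.1f} eV, spectrum at the Dirac point (eV): {np.round(E, 3)}")
# For Delta = 0, the two middle eigenvalues are split by a few tens of meV: the WSe2 layer
# opens a gap at the otherwise gapless Dirac crossing of Eq. (4), as stated in the text.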
As observed in our experiments, a Hall current can be generated only through a population imbalance arising from the difference in the valley gaps. The van der Waals stacking of graphene onto hexagonal boron nitride offers a natural platform for valley control. To induce a finite valley splitting, we have incorporated a term ∆vs = 10 meV between the two valleys, as shown in Fig. 1(f) of the main manuscript. It is important to note that ϵ_K ≠ ϵ_K′ even without external perturbations like an electric field. As a result of this valley splitting, a finite anomalous Hall conductivity σxy is generated within the system (see Fig. 2(f) in the main manuscript).
We calculate σxx using the expression for the Drude conductivity,

\sigma_{xx} = e^2 \tau \sum_{n,\xi} \int \frac{dk_x\, dk_y}{4\pi^2}\, v^{n,\xi}_{x} v^{n,\xi}_{x} \left(-\frac{\partial f}{\partial \epsilon}\right)_{\epsilon = \epsilon_n(k)} .   (8)

The band velocity is defined as ℏ v^{n,ξ}_x = ∂ϵ^{n,ξ}/∂k_x, where n is the band index. The longitudinal conductivity σxx, which follows the density of states (DOS), shows a W-like pattern with an increase in the electric field. The calculated σxx captures the qualitative behavior of the inverse of the experimental resistivity (Rxx) plot of Fig. 4(a) of the main manuscript. The pseudogap between the first and second valence (conduction) bands produces the low-conductivity dips below (above) the Fermi energy, whereas for a finite electric field the substantial DOS at the Fermi energy promotes the metallic behavior indicated by the peak in σxx in Fig. 4(c) of the main manuscript.

Device fabrication
Thin flakes of WSe2, hBN, and graphene were mechanically exfoliated on Si/SiO2 substrates. The thickness of the flakes was initially estimated from the color contrast under an optical microscope and later confirmed using Raman spectroscopy. This was followed by the sequential pickup of each flake using a polycarbonate (PC) film at 90 °C. The assembled heterostructure was transferred onto a new Si/SiO2 substrate. The heterostructure was then cleaned in chloroform, acetone, and IPA to remove the PC residue. The heterostructure was then annealed at 250 °C for 3 hours. Electron beam lithography was used to define the contact and top gate electrodes. We used reactive ion etching (a mixture of CHF3 and O2 gas) to etch the top hBN and make one-dimensional edge contacts to graphene. For the electrical contacts, Cr/Au (5 nm/60 nm) was deposited, followed by liftoff in hot acetone and cleaning in IPA. The unwanted hBN and graphene were removed using e-beam lithography and dry etching to define the Hall bar. We then transferred an hBN flake on top of the device and fabricated a metallic top gate using lithography and thermal deposition.

Figure 6: Data on device SW2. (a) Plot of the longitudinal and transverse resistivity versus number density for device SW2. (b) Plot of the transverse resistance versus number density in two different configurations for device SW2. Configuration 1 measures Rxy(B = 0) and configuration 2 measures Ryx(B = 0).

Data on device SW2
Fig. 6(a) shows the data for the zero-field longitudinal and transverse resistance in device SW2; one can see the appearance of a finite Rxy(B = 0) that changes its sign near the Dirac point.
Fig. 6(b) presents the B = 0 transverse signal measured in two different configurations: configuration 1 measures Rxy(B = 0), while configuration 2 measures Ryx(B = 0). The two signals overlap exactly with each other. Note that this is what one expects from the Onsager relation Rxy(B) = Ryx(−B) at B = 0.

Low-field magnetoresistance
Fig. 7(a) shows the line plots of the transverse signal measured in device SW2 in the presence of a small perpendicular magnetic field. The data show the smooth evolution of the anomalous Hall signal into the classical Hall signal. This can be better appreciated from Fig. 7(b), which is a 2D map of the transverse signal in the n-B plane.

Figure 7: Dependence of Rxy on B. (a) Plot of Rxy at small magnetic field values measured for device SW2. (b) A 2D map of the transverse resistance Rxy(B) in the n − B plane; the data show a finite Hall signal at B = 0 T.

Raman shift and strain
We used low-temperature Raman spectroscopy on the graphene/WSe2 stack to estimate the strain in graphene. High-quality single-layer graphene has two prominent Raman active modes, the G-mode (1580 cm⁻¹) and the 2D-mode (2690 cm⁻¹). In the presence of a uniaxial strain ϵ, the shift in the 2D peak has been measured to be δω^SLG_2D/ϵ ∼ −64 cm⁻¹/%. Fig. 8(a) shows a comparison of the temperature dependence of the Raman shift of the 2D band measured for graphene, ω^SLG_2D, and for graphene on WSe2, ω^SLG/WSe2_2D. In Fig. 8(b), we show a plot of the T-dependence of δω_2D = ω^SLG/WSe2_2D − ω^SLG_2D. One can see that the difference in the Raman shift of the 2D peak increases rapidly with a decrease in T; the positive value of δω_2D indicates that the strain is compressive. The temperature dependence of the strain in graphene was extracted from the data in Fig. 8(b); its magnitude is plotted in Fig. 8(c). The data show that SLG on single-layer WSe2 undergoes a significant compressive strain of about 0.2% at 4 K (see the short numerical estimate below).

Figure 8: Raman shift in the 2D band of graphene. (a) Temperature variation of the measured Raman shift of the 2D peak of graphene (blue filled circles) and of graphene on single-layer WSe2 (red filled circles). (b) Plot of δω_2D versus T. (c) Plot of the T-dependence of the magnitude of the strain |ϵ| in SLG on single-layer WSe2.

Absence of ferromagnetism and nonlinear AHE
The measured magnetoresistance in our devices is non-hysteretic (Fig. 9(a)). This is clear evidence of the absence of ferromagnetism in the system. We also find the second-harmonic R2ω_xy signal to be negligibly small for our device (Fig. 9(b)). This establishes that one does not have a nonlinear anomalous Hall effect in this system. To establish that the absence of the second-harmonic signal is real and not an experimental artifact, we plot for comparison in Fig. 9(b) the data from similar measurements on hBN/graphene moiré devices. In the moiré device, we measure a finite nonlinear signal R2ω_xy near the primary Dirac point (as expected from previous reports50).
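For reference, the strain magnitude quoted in the Raman analysis above follows from simple arithmetic; a minimal numerical restatement is given below, with the value of δω_2D read only approximately from Fig. 8(b).

# Strain estimate from the 2D-band Raman shift, as described in the Raman section above.
# delta_omega_2D is an approximate low-temperature value read off Fig. 8(b).
delta_omega_2D = 13.0    # cm^-1, shift of graphene-on-WSe2 relative to bare graphene
shift_per_strain = 64.0  # |d(omega_2D)/d(eps)| ~ 64 cm^-1 per % of uniaxial strain (quoted above)
strain_percent = delta_omega_2D / shift_per_strain
print(f"|strain| ~ {strain_percent:.2f} %")   # ~0.20 %, consistent with the value quoted at 4 K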
Figure 9: Nonlinear AHE and MR: (a) Plot of the magnetoresistance in a small magnetic field at a displacement field D = −0.3 V/nm. The data were taken at n = −2 × 10¹⁶ m⁻². (b) Plot of the nonlinear AHE R2ω_xy(B = 0) for SLG/WSe2 (red line). The data are contrasted with those obtained for a graphene/hBN moiré device (black line).

References
(1) Xiao, D.; Chang, M.-C.; Niu, Q. Berry phase effects on electronic properties. Rev. Mod. Phys. 2010, 82, 1959–2007.
(2) Ahn, J.; Guo, G.-Y.; Nagaosa, N.; Vishwanath, A. Riemannian geometry of resonant optical responses. Nature Physics 2022, 18, 290–295.
(3) Gao, A. et al. Layer Hall effect in a 2D topological axion antiferromagnet. Nature 2021, 595, 521–525.
(4) Bhalla, P.; Das, K.; Culcer, D.; Agarwal, A. Resonant Second-Harmonic Generation as a Probe of Quantum Geometry. Phys. Rev. Lett. 2022, 129, 227401.
(5) Han, W.; Kawakami, R. K.; Gmitra, M.; Fabian, J. Graphene spintronics. Nature Nanotechnology 2014, 9, 794–807.
(6) Sinova, J.; Valenzuela, S. O.; Wunderlich, J.; Back, C.; Jungwirth, T. Spin Hall effects. Reviews of Modern Physics 2015, 87, 1213.
(7) Hirsch, J. Spin Hall effect. Physical Review Letters 1999, 83, 1834.
(8) Bernevig, B. A.; Zhang, S.-C. Quantum spin Hall effect. Physical Review Letters 2006, 96, 106802.
(9) Tiwari, P.; Jat, M. K.; Udupa, A.; Narang, D. S.; Watanabe, K.; Taniguchi, T.; Sen, D.; Bid, A. Experimental observation of spin-split energy dispersion in high-mobility single-layer graphene/WSe2 heterostructures. npj 2D Materials and Applications 2022, 6, 68.
(10) Xiao, D.; Liu, G.-B.; Feng, W.; Xu, X.; Yao, W. Coupled Spin and Valley Physics in Monolayers of MoS2 and Other Group-VI Dichalcogenides. Phys. Rev. Lett. 2012, 108, 196802.
(11) Cresti, A.; Nikolić, B. K.; García, J. H.; Roche, S. Charge, spin and valley Hall effects in disordered graphene. La Rivista del Nuovo Cimento 2016, 39, 587–667.
(12) Mak, K. F.; McGill, K. L.; Park, J.; McEuen, P. L. The valley Hall effect in MoS2 transistors. Science 2014, 344, 1489–1492.
(13) Lee, J.; Mak, K. F.; Shan, J. Electrical control of the valley Hall effect in bilayer MoS2 transistors. Nature Nanotechnology 2016, 11, 421–425.
(14) Liu, J.; Ma, Z.; Gao, J.; Dai, X. Quantum valley Hall effect, orbital magnetism, and anomalous Hall effect in twisted multilayer graphene systems. Physical Review X 2019, 9, 031021.
(15) Qiao, Z.; Yang, S. A.; Feng, W.; Tse, W.-K.; Ding, J.; Yao, Y.; Wang, J.; Niu, Q. Quantum anomalous Hall effect in graphene from Rashba and exchange effects. Phys. Rev. B 2010, 82, 161414.
(16) Shimazaki, Y.; Yamamoto, M.; Borzenets, I. V.; Watanabe, K.; Taniguchi, T.; Tarucha, S. Generation and detection of pure valley current by electrically induced Berry curvature in bilayer graphene. Nature Physics 2015, 11, 1032–1036.
(17) Sui, M.; Chen, G.; Ma, L.; Shan, W.-Y.; Tian, D.; Watanabe, K.; Taniguchi, T.; Jin, X.; Yao, W.; Xiao, D.; Zhang, Y. Gate-tunable topological valley transport in bilayer graphene. Nature Physics 2015, 11, 1027–1031.
(18) Wallbank, J. R. et al. Tuning the valley and chiral quantum state of Dirac electrons in van der Waals heterostructures. Science 2016, 353, 575–579.
(19) Xiao, D.; Yao, W.; Niu, Q. Valley-Contrasting Physics in Graphene: Magnetic Moment and Topological Transport. Phys. Rev. Lett. 2007, 99, 236809.
(20) Sodemann, I.; Fu, L. Quantum Nonlinear Hall Effect Induced by Berry Curvature Dipole in Time-Reversal Invariant Materials. Phys. Rev. Lett. 2015, 115, 216806.
(21) Du, Z. Z.; Wang, C. M.; Li, S.; Lu, H.-Z.; Xie, X. C. Disorder-induced nonlinear Hall effect with time-reversal symmetry. Nature Communications 2019, 10, 3047.
(22) Sinha, S.; Adak, P. C.; Chakraborty, A.; Das, K.; Debnath, K.; Sangani, L. D. V.; Watanabe, K.; Taniguchi, T.; Waghmare, U. V.; Agarwal, A.; Deshmukh, M. M. Berry curvature dipole senses topological transition in a moiré superlattice. Nature Physics 2022, 18, 765–770.
(23) Chakraborty, A.; Das, K.; Sinha, S.; Adak, P. C.; Deshmukh, M. M.; Agarwal, A. Nonlinear anomalous Hall effects probe topological phase-transitions in twisted double bilayer graphene. 2D Materials 2022, 9, 045020.
(24) Zhai, D.; Chen, C.; Xiao, C.; Yao, W. Layer-Contrasted Hall Effect in Twisted Bilayers with Time Reversal Symmetry. 2022; https://arxiv.org/abs/2207.14644.
(25) Ho, S.-C.; Chang, C.-H.; Hsieh, Y.-C.; Lo, S.-T.; Huang, B.; Vu, T.-H.-Y.; Ortix, C.; Chen, T.-M. Hall effects in artificially corrugated bilayer graphene without breaking time-reversal symmetry. Nature Electronics 2021, 4, 116–125.
(26) Sharpe, A. L.; Fox, E. J.; Barnard, A. W.; Finney, J.; Watanabe, K.; Taniguchi, T.; Kastner, M. A.; Goldhaber-Gordon, D. Emergent ferromagnetism near three-quarters filling in twisted bilayer graphene. Science 2019, 365, 605–608.
(27) Serlin, M.; Tschirhart, C. L.; Polshyn, H.; Zhang, Y.; Zhu, J.; Watanabe, K.; Taniguchi, T.; Balents, L.; Young, A. F. Intrinsic quantized anomalous Hall effect in a moiré heterostructure. Science 2020, 367, 900–903.
(28) Li, T.; Jiang, S.; Shen, B.; Zhang, Y.; Li, L.; Tao, Z.; Devakul, T.; Watanabe, K.; Taniguchi, T.; Fu, L.; Shan, J.; Mak, K. F. Quantum anomalous Hall effect from intertwined moiré bands. Nature 2021, 600, 641–646.
(29) Lin, J.-X.; Zhang, Y.-H.; Morissette, E.; Wang, Z.; Liu, S.; Rhodes, D.; Watanabe, K.; Taniguchi, T.; Hone, J.; Li, J. I. A. Spin-orbit-driven ferromagnetism at half moiré filling in magic-angle twisted bilayer graphene. Science 2022, 375, 437–441.
(30) Kuiri, M.; Coleman, C.; Gao, Z.; Vishnuradhan, A.; Watanabe, K.; Taniguchi, T.; Zhu, J.; MacDonald, A. H.; Folk, J. Spontaneous time-reversal symmetry breaking in twisted double bilayer graphene. Nature Communications 2022, 13, 6468.
(31) Xie, Y.-M.; Zhang, C.-P.; Hu, J.-X.; Mak, K. F.; Law, K. T. Valley-Polarized Quantum Anomalous Hall State in Moiré MoTe2/WSe2 Heterobilayers. Phys. Rev. Lett. 2022, 128, 026402.
(32) Kang, J.; Vafek, O. Strong Coupling Phases of Partially Filled Twisted Bilayer Graphene Narrow Bands. Phys. Rev. Lett. 2019, 122, 246401.
(33) Liu, J.; Dai, X. Anomalous Hall effect, magneto-optical properties, and nonlinear optical properties of twisted graphene systems. npj Computational Materials 2020, 6, 57.
(34) Qiao, Z.; Ren, W.; Chen, H.; Bellaiche, L.; Zhang, Z.; MacDonald, A.; Niu, Q. Quantum anomalous Hall effect in graphene proximity coupled to an antiferromagnetic insulator. Physical Review Letters 2014, 112, 116404.
(35) Song, G.; Ranjbar, M.; Daughton, D. R.; Kiehl, R. A. Nanoparticle-induced anomalous Hall effect in graphene. Nano Letters 2019, 19, 7112–7118.
(36) Avsar, A.; Tan, J. Y.; Taychatanapat, T.; Balakrishnan, J.; Koon, G.; Yeo, Y.; Lahiri, J.; Carvalho, A.; Rodin, A.; O'Farrell, E., et al. Spin-orbit proximity effect in graphene. Nature Communications 2014, 5, 1–6.
(37) Ghiasi, T. S.; Kaverzin, A. A.; Blah, P. J.; van Wees, B. J. Charge-to-spin conversion by the Rashba–Edelstein effect in two-dimensional van der Waals heterostructures up to room temperature. Nano Letters 2019, 19, 5959–5966.
(38) Tiwari, P.; Srivastav, S. K.; Ray, S.; Das, T.; Bid, A. Observation of Time-Reversal Invariant Helical Edge-Modes in Bilayer Graphene/WSe2 Heterostructure. ACS Nano 2021, 15, 916–922, PMID: 33378173.
(39) Herling, F.; Safeer, C. K.; Ingla-Aynés, J.; Ontoso, N.; Hueso, L. E.; Casanova, F. Gate tunability of highly efficient spin-to-charge conversion by spin Hall effect in graphene proximitized with WSe2. APL Materials 2020, 8, 071103.
(40) Dastgeer, G.; Afzal, A. M.; Jaffery, S. H. A.; Imran, M.; Assiri, M. A.; Nisar, S. Gate modulation of the spin current in graphene/WSe2 van der Waals heterostructure at room temperature. Journal of Alloys and Compounds 2022, 919, 165815.
(41) Lee, S.; de Sousa, D. J. P.; Kwon, Y.-K.; de Juan, F.; Chi, Z.; Casanova, F.; Low, T. Charge-to-spin conversion in twisted graphene/WSe2 heterostructures. Phys. Rev. B 2022, 106, 165420.
(42) Wang, Z.; Ki, D.-K.; Chen, H.; Berger, H.; MacDonald, A. H.; Morpurgo, A. F. Strong interface-induced spin–orbit interaction in graphene on WS2. Nature Communications 2015, 6, 8339.
(43) Wang, Z.; Ki, D.-K.; Khoo, J. Y.; Mauro, D.; Berger, H.; Levitov, L. S.; Morpurgo, A. F. Origin and Magnitude of 'Designer' Spin-Orbit Interaction in Graphene on Semiconducting Transition Metal Dichalcogenides. Phys. Rev. X 2016, 6, 041020.
(44) Völkl, T.; Rockinger, T.; Drienovsky, M.; Watanabe, K.; Taniguchi, T.; Weiss, D.; Eroms, J. Magnetotransport in heterostructures of transition metal dichalcogenides and graphene. Phys. Rev. B 2017, 96, 125405.
(45) Wakamura, T.; Reale, F.; Palczynski, P.; Zhao, M. Q.; Johnson, A. T. C.; Guéron, S.; Mattevi, C.; Ouerghi, A.; Bouchiat, H. Spin-orbit interaction induced in graphene by transition metal dichalcogenides. Phys. Rev. B 2019, 99, 245402.
(46) Fülöp, B.; Márffy, A.; Zihlmann, S.; Gmitra, M.; Tóvári, E.; Szentpéteri, B.; Kedves, M.; Watanabe, K.; Taniguchi, T.; Fabian, J.; Schönenberger, C.; Makk, P.; Csonka, S. Boosting proximity spin–orbit coupling in graphene/WSe2 heterostructures via hydrostatic pressure. npj 2D Materials and Applications 2021, 5, 82.
(47) Tiwari, P.; Srivastav, S. K.; Bid, A. Electric-Field-Tunable Valley Zeeman Effect in Bilayer Graphene Heterostructures: Realization of the Spin-Orbit Valve Effect. Phys. Rev. Lett. 2021, 126, 096801.
(48) Pizzocchero, F.; Gammelgaard, L.; Jessen, B. S.; Caridad, J. M.; Wang, L.; Hone, J.; Bøggild, P.; Booth, T. J. The hot pick-up technique for batch assembly of van der Waals heterostructures. Nature Communications 2016, 7, 1–10.
(49) Wang, L.; Meric, I.; Huang, P.; Gao, Q.; Gao, Y.; Tran, H.; Taniguchi, T.; Watanabe, K.; Campos, L.; Muller, D., et al. One-dimensional electrical contact to a two-dimensional material. Science 2013, 342, 614–617.
(50) He, P.; Koon, G. K. W.; Isobe, H.; Tan, J. Y.; Hu, J.; Neto, A. H. C.; Fu, L.; Yang, H. Graphene moiré superlattices with giant quantum nonlinearity of chiral Bloch electrons. Nature Nanotechnology 2022, 17, 378–383.
(51) Nakamura, M.; Castro, E. V.; Dóra, B. Valley Symmetry Breaking in Bilayer Graphene: A Test of the Minimal Model. Phys. Rev. Lett. 2009, 103, 266804.
(52) Yang, Z.; Han, J. H. Hierarchy of spin and valley symmetry breaking in quantum Hall single-layer graphene. Phys. Rev. B 2010, 81, 115405.
(53) Farajollahpour, T.; Phirouznia, A. The role of the strain induced population imbalance in Valley polarization of graphene: Berry curvature perspective. Scientific Reports 2017, 7, 17878.
(54) Freitag, N. M.; Reisch, T.; Chizhova, L. A.; Nemes-Incze, P.; Holl, C.; Woods, C. R.; Gorbachev, R. V.; Cao, Y.; Geim, A. K.; Novoselov, K. S.; Burgdörfer, J.; Libisch, F.; Morgenstern, M. Large tunable valley splitting in edge-free graphene quantum dots on boron nitride. Nature Nanotechnology 2018, 13, 392–397.
(55) Gmitra, M.; Kochan, D.; Högl, P.; Fabian, J. Trivial and inverted Dirac bands and the emergence of quantum spin Hall states in graphene on transition-metal dichalcogenides. Phys. Rev. B 2016, 93, 155104.
(56) Offidani, M.; Milletarì, M.; Raimondi, R.; Ferreira, A. Optimal Charge-to-Spin Conversion in Graphene on Transition-Metal Dichalcogenides. Phys. Rev. Lett. 2017, 119, 196801.
(57) Cummings, A. W.; Garcia, J. H.; Fabian, J.; Roche, S. Giant Spin Lifetime Anisotropy in Graphene Induced by Proximity Effects. Phys. Rev. Lett. 2017, 119, 206601.
(58) Garcia, J. H.; Vila, M.; Cummings, A. W.; Roche, S. Spin transport in graphene/transition metal dichalcogenide heterostructures. Chemical Society Reviews 2018, 47, 3359–3379.
(59) Li, Y.; Koshino, M. Twist-angle dependence of the proximity spin-orbit coupling in graphene on transition-metal dichalcogenides. Phys. Rev. B 2019, 99, 075438.
(60) Zubair, M.; Vasilopoulos, P.; Tahir, M. Influence of interface induced valley-Zeeman and spin-orbit couplings on transport in heterostructures of graphene on WSe2. Phys. Rev. B 2020, 101, 165436.
(61) Kumar, A.; Maiti, S.; Maslov, D. L. Zero-field spin resonance in graphene with proximity-induced spin-orbit coupling. Phys. Rev. B 2021, 104, 155138.
(62) Gmitra, M.; Fabian, J. Graphene on transition-metal dichalcogenides: A platform for proximity spin-orbit physics and optospintronics. Phys. Rev. B 2015, 92, 155403.
(63) Tahir, M.; Vasilopoulos, P. Magneto-optical transport properties of monolayer WSe2. Phys. Rev. B 2016, 94, 045415.
+28 + diff --git a/19AzT4oBgHgl3EQf8_5O/content/tmp_files/load_file.txt b/19AzT4oBgHgl3EQf8_5O/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..01a71a4132f04ea39cf668b583806c9e0392470b --- /dev/null +++ b/19AzT4oBgHgl3EQf8_5O/content/tmp_files/load_file.txt @@ -0,0 +1,1439 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf,len=1438 +page_content='Observation of room temperature anomalous Hall effect in graphene-WSe2 heterostructures Priya Tiwari1†,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Divya Sahani1†,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Atasi Chakraborty2,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Kamal Das2,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Kenji Watanabe3,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Takashi Taniguchi4,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Amit Agarwal2∗,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' and Aveek Bid1∗ 1Department of Physics,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Indian Institute of Science,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Bangalore 560012,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' India 2 Department of Physics,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Indian Institute of Technology Kanpur,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Kanpur-208016,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' India 3 Research Center for Functional Materials,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' National Institute for Materials Science,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' 1-1 Namiki,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Tsukuba 305-0044,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Japan 4 International Center for Materials Nanoarchitectonics,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' National Institute for Materials Science,' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' 1-1 Namiki,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Tsukuba 305-0044,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Japan † These authors contributed equally.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' E-mail: amitag@iitk.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='in,aveek@iisc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='in Abstract Proximity-induced spin–orbit coupling in graphene offers an exciting platform to probe spin-based effects in chiral Dirac fermionic systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' These systems are believed to be intrinsically time-reversal symmetric, which should ensure that the charge Hall response vanishes without a magnetic field.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' In contrast to this expectation, we report the first observation of anomalous Hall effect (AHE) in single-layer graphene/single-layer WSe2 heterostructures that persists up to room temperature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' The magnitude and the sign of the AHE can be tuned using an external perpendicular electric field.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Our joint experimental and theoretical study establishes that the observed anomalous Hall signal arises from the combined effect of strain and spin- orbit coupling in graphene, which induces time-reversal symmetry breaking and manifests 1 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='01912v1 [cond-mat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='mes-hall] 5 Jan 2023 as a valley asymmetry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Our observation broadens the prospects of realizing high-temperature anomalous Hall effects in a completely new system, namely graphene-transition metal dichalcogenide- based heterostructures.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Introduction Topological and band geometric effects in two-dimensional systems have attracted significant attention due to their fascinating physics and potential applications in spintronics and novel electronic devices1–5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Graphene-based heterostructures offer one such exciting platform for studying band geometric effects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' The coupling to the charge, spin, and valley degrees of freedom in graphene gives rise to, among other things, a multitude of Hall effects such as the spin Hall6–9, and the valley Hall effects10–15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' A possible common origin of these effects is the emergence of a non- trivial Berry curvature on breaking the inversion symmetry, which induces opposite anomalous velocity in the two valleys of graphene16–18.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Note that in the absence of exchange interactions, time-reversal symmetry (TRS) forces the Berry curvatures at the K and K′ valleys to be equal and opposite Ωz(K) = −Ωz(K′), causing signatures of the anomalous Hall effect (AHE) in the charge sector to vanish19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Several other unconventional Hall effects have been predicted and explored in graphene.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Some prominent examples include the nonlinear anomalous Hall effect20–23, layer contrasted Hall effect3,24, and linear Hall effect in corrugated systems25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' The study in corrugated systems is particularly fascinating as it demonstrates the appearance of a linear Hall response even under time-reversal symmetric conditions for systems with tilted bands in a reduced-symmetry scenario.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' More recently, AHE has been observed in graphene-based moiré heterostructures at half- or quarter-filling of the bands owing to the spontaneously broken time-reversal symmetry and magnetization arising from the enhancement of the exchange interactions by the large density of states of the flat bands26–33.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Several studies have reported extrinsic AHE in graphene where suitable dopants or magnetic substrate induce an exchange interaction (see for example 15,34,35).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' However, despite being a testbed 2 for band geometric effects, the observation of intrinsic AHE in graphene-based non-magnetic heterostructures remains rare.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' In this letter, we report the observation of a large linear AHE originating from lifting the valley- degeneracy in the high-mobility heterostructures of single-layer graphene (SLG) with proximity- induced spin-orbit coupling (SOC) from single-layer WSe2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' We find that the dependence of the transverse resistance at a zero magnetic field Rxy(B = 0) on the charge carrier density mimics the finite B-field classical Hall signal in graphene and is observed up to room temperature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Single-layer WSe2 used as a substrate influences the graphene bands in two significant ways.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' The first of these is well studied: Graphene on WSe2 possesses spin-split bands owing to the Ising-like SOC, which gives rise to the spin Hall effect36–38.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' The second effect, equally vital for our purposes but ill-explored to date, is the appearance of a substantial lateral strain in the graphene layer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' We propose that the combined effect of this proximity-induced SOC and lattice-induced strain lifts the valley-degeneracy in graphene, leading to the appearance of the AHE signal near the Dirac point.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' We establish that the AHE is zero in the absence of the WSe2 layer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Note that previous studies on the SLG-WSe2 heterostructure (or graphene on transition metal dichalcogenides in general) focused primarily on the spin aspects of the transport36,37,39–41 where a non-local signal is measured as a signature of the spin Hall effect and weak (anti-) localization measurements were used to quantify the spin-orbit coupling strength38,42–47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Interestingly, these studies did not probe the finite Hall effect without a magnetic field.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' This makes our observation of AHE in this system unique.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Results Device characteristics Heterostructures of SLG and single-layer WSe2, encapsulated by crystalline hexagonal boron nitrate (hBN), were fabricated using a dry transfer technique48,49.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' One-dimensional electrical contacts were formed by electron beam lithography, followed by etching (using a mixture of CHF3 3 and O2) and deposition of 5 nm/60 nm Cr/Au contacts and top-gate electrode (see Section S3 Supplementary Information for details).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' A schematic of the device structure is shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' 1(a), and an optical image of the device is shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' 1(b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' The dual-gated architecture of the devices allows independent control of the charge-carrier density n and the vertical displacement field D;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' n= (CtgVtg + CbgVbg)/e − n0 and D = (CtgVtg − CbgVbg)/2ϵ0 − D0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Here Cbg (Ctg) is the capacitance per unit area of the back-gate (top-gate), Vbg (Vtg) is the back-gate (top-gate) bias.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' n0 and D0 are the residual charge carrier density and residual vertical displacement field induced by impurities in the device channel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Electrical transport measurements were performed at 10 nA source-drain current using low-frequency lock-in detection techniques.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' All data were obtained at 20 mK unless specified otherwise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' The measurements were performed on multiple devices;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' the results were similar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' In the main manuscript, we present the data from a single device, SW1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' The data from another device, SW2, are shown in the Supplementary Information.' 
A map of the measured longitudinal conductance Gxx as a function of charge carrier density n and perpendicular magnetic field B is shown in Fig. 1(c). The appearance of broken-symmetry quantum Hall states at low B-fields implies a complete lifting of the spin and valley degeneracies in the SLG bands. The splitting of the spin-degenerate bands in SLG (shown schematically in Fig. 1(f)) is also evident from the beating pattern seen in the Shubnikov-de Haas oscillations [Fig. 1(d)] and the double periodicity in the corresponding Fourier spectrum [Fig. 1(e)]. Fig. 1(g) is a representation of the lifting of the valley degeneracy; the valley-splitting energy scale is marked as ∆vs. The lifting of spin- and valley-degeneracies in the band dispersion (along with the high field-effect mobility µ ∼ 140,000 cm2 V−1 s−1 of the device) shows that the graphene and WSe2 interface is atomically clean, with significant interfacial coupling and minimal random potential fluctuations.

Room temperature anomalous Hall effect at B = 0 T

In Fig. 2(a), we present the data for the longitudinal resistance, Rxx (left axis, red line), and transverse resistance, Rxy (right axis, blue line), measured at B = 0 T. We observe a finite Rxy signal in a narrow range of charge carrier densities ∆n = ±10^15 m−2 centered about the charge neutrality point, a feature conspicuously absent in hBN/graphene/hBN heterostructures.
Rxy features an evident change in sign about the charge neutrality point: it is positive for n < 0 (hole band) and negative for n > 0 (electron band). The current independence of Rxy establishes it to be a linear anomalous Hall effect [see Fig. 2(c) for the data for two representative values of current, 30 nA and 120 nA]. The finite Rxy(B = 0) survives at least to room temperature with diminished amplitude, as shown in Figs. 2(b) and (d). This observation of a room temperature B = 0 anomalous Hall effect in hBN/graphene/WSe2/hBN heterostructures is the central result of this letter. We find the nonlinear anomalous Hall resistance (quantified by the second-harmonic R^2ω_xy signal) to be negligibly small for our device (Fig. S5 of the Supplementary Information). To establish that the absence of the second-harmonic signal is not an experimental artifact, we present in the same figure data from similar measurements on hBN/graphene moiré devices, where a small but finite nonlinear signal does show up in the measured R^2ω_xy near the primary Dirac point, as per previous reports50. Note also that the data for Rxy(B = 0) were reproduced in cryostats without a superconducting magnet, ruling out the remnant field of a magnet as the origin of the AHE. We attribute the observed zero-field anomalous Hall effect (AHE) to an effective time-reversal symmetry breaking of the system captured by valley splitting. In the presence of time-reversal symmetry, the anomalous Hall effect, defined as σ_xy = −(e²/ℏ) ∫ d²k/(2π)² Ω_z f(k), vanishes. Here f(k) is the Fermi distribution function.
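To make the role of the valley degree of freedom in this integral concrete (the cancellation argument is developed in the next paragraph), here is a minimal numerical sketch of σ_xy for two gapped Dirac valleys with opposite Berry curvature. The gap, valley splitting, temperature, and chemical potential are illustrative values, not parameters extracted from the device.

```python
import numpy as np

# Minimal two-valley gapped-Dirac sketch (illustrative parameters, not fitted to the device).
v   = 3.46    # Fermi velocity (eV*Angstrom), taken from the graphene block of the model
gap = 0.005   # proximity-induced gap Delta (eV), assumed for illustration
dvs = 0.004   # valley splitting Delta_vs (eV): rigid energy shift between the two valleys
T   = 0.002   # temperature (eV)
mu  = 0.006   # chemical potential (eV) measured from the Dirac point

def fermi(E, mu, T):
    return 1.0 / (np.exp((E - mu) / T) + 1.0)

def sigma_xy(mu, dvs, nk=400, kmax=0.05):
    """sigma_xy in units of e^2/h from sigma_xy = -(e^2/hbar) * Int d^2k/(2pi)^2 Omega_z f."""
    k = np.linspace(-kmax, kmax, nk)
    kx, ky = np.meshgrid(k, k)
    eps = np.sqrt(v**2 * (kx**2 + ky**2) + gap**2)   # |E| of conduction/valence bands
    total = 0.0
    for xi, shift in [(+1, +dvs / 2), (-1, -dvs / 2)]:
        # Berry curvature of the conduction (and opposite for the valence) band of valley xi
        omega_c = -xi * gap * v**2 / (2.0 * eps**3)
        omega_v = -omega_c
        occ_c = fermi(+eps + shift, mu, T)           # valley splitting modelled as rigid shift
        occ_v = fermi(-eps + shift, mu, T)
        total += np.sum(omega_c * occ_c + omega_v * occ_v)
    dk = k[1] - k[0]
    integral = total * dk * dk                       # Sum_valleys Int d^2k Omega_z f
    return -integral / (2.0 * np.pi)                 # -(e^2/hbar)/(2pi)^2 -> -(e^2/h)/(2pi)

print("sigma_xy / (e^2/h), no valley splitting  :", sigma_xy(mu, 0.0))
print("sigma_xy / (e^2/h), with valley splitting:", sigma_xy(mu, dvs))
```

With dvs = 0 the two valleys carry exactly opposite Berry curvature at identical fillings and the output is zero; a finite dvs unbalances the fillings and yields a finite σ_xy, mirroring the argument below.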
The vanishing of the AHE can be understood by recalling that, since Ω_z(K) = −Ω_z(K′) in the presence of time-reversal symmetry, the contributions of the two valleys to the AHE are equal and opposite, making the total AHE zero. However, on breaking the valley degeneracy, the valleys have different fillings, as shown in Fig. 2(e). In this case, the resulting total anomalous Hall response is finite. We calculate this non-zero AHE explicitly for the graphene-WSe2 heterostructure (see Supplementary Information for the details of the calculation), and the theoretical results for the Hall conductivity (which has the opposite sign to the Hall resistivity) are shown in Fig. 2(f). Our calculations capture the existence of the zero-field AHE in the valley-split graphene-WSe2 device, along with the sign reversal in the AHE on going from the hole (valence) band to the electron (conduction) band. We emphasize that here we aim for a qualitative match with the experimental data, as the microscopic origin of the valley splitting (and hence the magnitude of the split) is not evident. The valley polarization can arise from different physical mechanisms such as enhanced impurity-induced inter-valley scattering, selective exchange coupling of the two valleys, or non-periodic lattice deformations51–54. However, we do not find evidence of valley splitting or a finite AHE in hBN/graphene/hBN devices without the intervening WSe2 layer. Thus, the valley-specific asymmetry is evidently induced by the WSe2-graphene combination. The lattice constant of graphene is ∼ 2.46 Å while that of WSe2 is ∼ 3.27 Å.
The large lattice mismatch generates a significant strain across the graphene flake as the heterostructure relaxes to its stable ground state. From Raman spectroscopy, we estimate the magnitude of the strain on the SLG layer in our hBN/SLG/WSe2/hBN heterostructure to be ≈ 0.15%–0.20% (see Section S6 of the Supplementary Information). This combination of strain and spin-orbit coupling feasibly lifts the valley degeneracy. While the microscopic origin of the valley splitting is not completely clear, we model it by shifting the two valleys in energy, as indicated in Fig. 1(f).

Hall response with vertical displacement and magnetic field

Having demonstrated the AHE, we now focus on the dependence of the AHE on a perpendicular displacement field D (Fig. 3). It is illuminating to map the zero-B-field transverse resistance Rxy(B = 0) data in the n − D plane [Fig. 3(a)]. The plot shows Rxy(B = 0) to be finite only at the band edges, consistent with the idea of the Berry curvature hot spots lying in the vicinity of the band edges. This can be seen clearly in the line plots of Rxy(B = 0) for different values of D shown in Fig. 3(b). Note that the plots are vertically offset by 200 Ω for clarity.
The measured Rxy(B = 0) has an intriguing D dependence; it changes sign as the direction of D flips [Fig. 3(a-b)]. To understand this, we analyze the dependence of the Berry curvature near the band edges on the direction of D. Our theoretical calculations show that as the polarity of D changes, the Berry curvature near the band edges changes sign. Consequently, the sign of the anomalous Hall voltage (determined by the sign of the Berry curvature) in the SLG/WSe2 heterostructure flips. This is reminiscent of the change in the sign of the gap in bilayer graphene on flipping the direction of D, which changes the sign of the Berry curvature. Measurements in a finite magnetic field B applied perpendicular to the device interface (see Section S5 of the Supplementary Information) reveal the interplay between the classical Hall effect and the B = 0 AHE. The data smoothly cross over from the anomalous Hall phase at B = 0 to the conventional Hall phase at finite B-field, with an anti-crossing feature. This feature resembles the planar Hall effect in corrugated bilayer graphene25. A non-zero intercept on the B-axis of the plot of Rxy versus B [shown for a fixed n in Fig. 3(c)] captures the AHE. We note that Rxy is non-hysteretic in the presence of a small non-quantizing magnetic field (see Section S7 of the Supplementary Information), ruling out emergent ferromagnetism in the system. In Fig. 4(a), we present a plot of Rxx in the n − D plane measured at B = 0.
We observe that with increasing D, the resistance peak at the charge neutrality point splits into two maxima. This feature can be better appreciated from Fig. 4(b), where we show individual plots of Rxx(B = 0) versus n at several representative values of D. At higher values of |D|, we find two distinct peaks in Rxx separated by a shallow valley. Such a displacement-field-dependent dispersion of the bands near the Dirac point is not captured by the existing models for graphene/WSe2 heterostructures42,55–61. To remedy this, we construct a new model Hamiltonian for the graphene/WSe2 system, retaining both the WSe2 and the graphene Hamiltonian blocks, which allows us to include the impact of a vertical displacement field systematically (see Sections S1 and S2 of the Supplementary Information for details). Fig. 4(c) is a plot of the theoretically calculated σxx as a function of the chemical potential; the panels show the splitting of the conductivity minimum into two asymmetric minima at finite D. Our model thus reproduces the prominent features of σxx both at zero displacement field55,57 and at finite D, along with the observed AHE.

Discussion

To summarize, we report the first observation of a room temperature anomalous Hall effect in heterostructures of graphene/WSe2. Primarily known for their promising spintronic aspects, the charge Hall response of such a heterostructure was expected to be relatively mundane. Contrary to this, we show that the dual effect of spin-orbit coupling and strain in the system gives rise to time-reversal symmetry breaking through valley splitting. Combined with a finite Berry curvature, this results in a finite anomalous Hall effect in the system.
The anomalous Hall response persists at least up to room temperature and features a unique perpendicular electric field tunability. Our work establishes the graphene-WSe2 heterostructure as an excellent platform for further exploration of the band geometry-induced interplay of charge, spin, and valley responses in two-dimensional systems.

AUTHOR INFORMATION

Author Contributions

A.B., P.T., and D.S. conceptualized the study, performed the measurements, and analyzed the data. A.A., A.C., and K.D. performed the theoretical analysis. K.W. and T.T. grew the hBN single crystals.
All the authors contributed to preparing the manuscript.

Notes

The authors declare no competing financial interest.

Acknowledgement

A.B. acknowledges funding from the DST FIST program, DST fellowship (DST/SJF/PSA01/2016-17), and US Army DECVCOM and ITC IPAC (project: FA520922P0166/2232). A.C. acknowledges the Indian Institute of Technology, Kanpur, and the Science and Engineering Research Board (SERB) National Postdoctoral Fellowship (PDF/2021/000346), India, for financial support. A.A. acknowledges the Science and Engineering Research Board for Project No. MTR/2019/001520, and the Department of Science and Technology for Project No. DST/NM/TUE/QM-6/2019(G)-IIT Kanpur of the Government of India for funding. K.W. and T.T. acknowledge support from JSPS KAKENHI (Grant Numbers 19H05790, 20H00354, and 21H05233).

Supporting Information Available

Supporting information contains detailed discussions of (a) the model Hamiltonian of the graphene/WSe2 heterostructure, (b) the anomalous Hall effect and Drude conductivity, (c) data from other devices, and (d) device fabrication and characterization details.
Figure 1: Device characteristics and band dispersion. (a) Schematic of the graphene/WSe2 layers encapsulated in hBN, illustrating the sequence of crystal stacking. (b) Optical image of the device. (c) Map of the longitudinal conductance Gxx(B) with varying carrier density n and perpendicular magnetic field B at T ∼ 20 mK. The thicker dashed lines correspond to the signature plateaus of single-layer graphene; thinner lines mark the broken-symmetry phases, indicating complete lifting of the spin and valley degeneracies at low B. (d) SdH oscillations versus 1/B at Vbg = −40 V. (e) Fourier spectrum of the SdH oscillations; two peaks are distinctly visible, establishing the presence of two Fermi surfaces. (f) Schematic of the band dispersion of the K valley of monolayer graphene (left panel) and of graphene on WSe2 (right panel). The WSe2 layer essentially lifts the spin degeneracy of the low-lying energy bands and opens up a gap at the Fermi energy. (g) The impact of valley splitting (denoted by ∆vs) on the band structure of the K (left) and K′ (right) valleys of the graphene/WSe2 heterostructure. The color map of the lines indicates the Berry curvature, which is concentrated near the band edges.
Figure 2: Anomalous Hall effect. (a) Plots of the zero magnetic-field longitudinal resistance Rxx(B = 0) (left axis, red line) and zero magnetic-field transverse resistance Rxy(B = 0) (right axis, blue line) versus n; the data were measured at T = 20 mK. (b) Rxy(B = 0) response as a function of n at a few representative values of temperature; the AHE persists up to 300 K. (c) Plot of Rxy(B = 0) as a function of n for two different values of electrical current; the data were taken at T = 142 K. (d) Plot of the peak value of Rxy(B = 0) versus T. The dotted line is a guide to the eye. (e) The bell-shaped surface represents the opposite Berry curvatures of the two valleys. The positions of the Fermi surfaces for the K and K′ valleys (indicated by the black circles) differ due to the valley population imbalance.
The top insets show schematics of the Dirac crossings for the K and K′ valleys of the effective graphene sector. The valley splitting introduces a population imbalance between the two valleys of the Dirac cones. (f) Theoretically calculated anomalous Hall conductivity (σxy ∝ −ρxy) in the absence (black dashed line) and presence (solid lines) of valley splitting (∆vs ∼ 4 meV). The y-axis is scaled with respect to σ_0 ≡ e²/h. The increase in temperature diminishes the height of the σxy peak.
Figure 3: Dependence of the transverse resistance Rxy on D and B.
(a) A two-dimensional contour map of Rxy(B = 0) plotted in the n − D plane. (b) Plots of Rxy(B = 0) versus n for different values of D. The data have been vertically shifted by 200 Ω for clarity. The dashed horizontal line for each plot marks the zero of Rxy(B = 0). (c) A representative plot of Rxy versus B measured at n = −0.18 × 10^16 m−2; an arrow marks the value of the anomalous Hall resistance.
Figure 4: Dependence of Rxx(B = 0) on D. (a) A two-dimensional contour map of Rxx(B = 0) plotted in the n − D plane. (b) Plots of Rxx(B = 0) versus n for different values of D. The data have been vertically shifted by 1 kΩ for clarity. The dashed horizontal line for each plot is the zero of the y-axis. (c) Variation of the calculated Drude conductivity σxx with energy µ for three different values of the interlayer potential induced by the applied electric field: ∆ = 300 meV (red line), 0 meV (blue line), and −300 meV (green line), respectively. The values of σxx have been scaled by σ_v, where σ_v = e²τ/(4π²ℏ²).

Supplementary Information

Model Hamiltonian of the graphene/WSe2 heterostructure

In this section, we construct the low-energy model Hamiltonian of monolayer graphene on a WSe2 layer. Going beyond the effective graphene model reported in recent literature55,57,62, we explicitly solve the composite low-energy Hamiltonian of the graphene-WSe2 heterostructure to capture the effect of the perpendicular electric field correctly.
We solve the following low-energy Hamiltonian:

H_{tot} = \begin{pmatrix} H^{g}_{k} & H_{t} \\ H^{\dagger}_{t} & H^{ws}_{tot} \end{pmatrix} + H_{\perp} .    (1)

Here, H^g_k and H^ws_tot are the onsite Hamiltonians of graphene and WSe2, respectively. The interaction between the graphene and WSe2 layers is included through the spin- and valley-conserving off-diagonal hopping H_t. The effect of the perpendicular electric field is captured through the diagonal matrix H_⊥. We consider the monolayer of WSe2 in the x-y plane in the presence of intrinsic spin-orbit coupling (SOC), H^ws_sym, and a spin Zeeman field, Δ^ws_0. In addition, a finite Rashba SOC term, H^ws_R, is also considered within the WSe2 sector [?]. Including all these effects, the two-dimensional extended Dirac Hamiltonian H^ws_tot of the WSe2 monolayer can be written as

H^{ws}_{tot} = H^{ws}_{k} + H^{ws}_{sym} + H^{ws}_{R} .    (2)

The explicit forms of each term are

H^{ws}_{k} = v^{ws}_{F} [\xi \sigma_x k_x + \sigma_y k_y] + \Delta^{ws}_{0} \sigma_z ,
H^{ws}_{sym} = \tfrac{1}{2} [\lambda_c (\sigma_z + \sigma_0) + \lambda_v (\sigma_z - \sigma_0)] ,
H^{ws}_{R} = \lambda_R [\xi \sigma_x S_y - \sigma_y S_x] ,    (3)

where ξ = ±1 for the K and K′ valleys, respectively. Since the two degenerate but inequivalent valleys (K and K′) of monolayer WSe2 are separated by a large momentum, we can split the total Hamiltonian into two valley-specific parts. Here, we take v^ws_F = 1.83 eV·Å as the Fermi velocity of WSe2. Δ_0 represents the mass term that breaks inversion symmetry.
Here, λ_c and λ_v are the SOC strengths of the conduction and valence bands. In general, the valence band of WSe2 (λ_v ∼ 112.5 meV) possesses a larger SOC strength than the conduction band (λ_c ∼ 7.5 meV), promoting a relatively larger splitting in the valence band [63, ?]. For simplicity of the calculation, we choose the SOC strengths of the conduction and valence bands to be equal, λ_c = λ_v = 7.5 meV. We set Δ_0 = 250 meV, which induces a large gap between the conduction and valence bands of WSe2. To model the low-energy physics of graphene, we choose a valley-specific Hamiltonian of the form

H^{g}_{k} = v^{g}_{F} [\xi \sigma_x k_x + \sigma_y k_y] .    (4)

Here, v^g_F = 3.46 eV·Å is the Fermi velocity of graphene. Equation (4) represents a gapless Dirac dispersion for the graphene sector. The coupling between the two layers is captured by

H_{t} = t \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \sigma_0 .    (5)

For our calculation, we set the hopping strength t = 50 meV. The proximity effect of the WSe2 layer opens up a gap at the Dirac crossing of the graphene bands, and the induced band gap of graphene is enhanced as the hopping strength increases.
The effect of the external perpendicular electric field is introduced by adding the diagonal Hamiltonian

H_{\perp} = \begin{pmatrix} \Delta I & 0 \\ 0 & -\Delta I \end{pmatrix} .    (6)

Figure 5 shows the evolution of the band dispersion with a perpendicular electric field; the band dispersion essentially undergoes an insulator-to-metal transition with the electric field (see Fig. 5).

Figure 5: Impact of the electric field on the band structure of the graphene/WSe2 heterostructure. (a), (b), and (c) show the band dispersion in the presence of electric field values Δ = 300 meV, 0 meV, and −300 meV, respectively. The external electric field changes the low-energy band dispersion of the composite graphene-WSe2 heterostructure, inducing a metal-insulator transition.
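To make the construction above concrete, the following minimal NumPy sketch (ours, not the authors' code) assembles the valley-specific 8×8 Bloch Hamiltonian of Eqs. (1)-(6) and diagonalizes it along k_x, which is one way to reproduce the qualitative band evolution shown in Fig. 5. The basis ordering (graphene block first, then WSe2, each written as sublattice ⊗ spin), the reading of σ as sublattice and S as spin Pauli matrices, and the numerical value of the Rashba strength λ_R (not quoted in this section) are our assumptions.

import numpy as np
import matplotlib.pyplot as plt

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

vg, vws = 3.46, 1.83                           # Fermi velocities, eV*Angstrom
delta0, lam_c, lam_v = 0.250, 0.0075, 0.0075   # WSe2 mass and SOC strengths, eV
t_hop = 0.050                                  # interlayer hopping, eV
lam_R = 0.005                                  # Rashba strength, eV (assumed; not given here)

def h_total(kx, ky, xi=+1, delta=0.0):
    # graphene block, Eq. (4): sublattice (sigma) x spin (identity)
    hg = vg * (xi * kx * np.kron(sx, s0) + ky * np.kron(sy, s0))
    # WSe2 block, Eqs. (2)-(3): Dirac term + mass + SOC + Rashba
    hws = (vws * (xi * kx * np.kron(sx, s0) + ky * np.kron(sy, s0))
           + delta0 * np.kron(sz, s0)
           + 0.5 * np.kron(lam_c * (sz + s0) + lam_v * (sz - s0), s0)
           + lam_R * (xi * np.kron(sx, sy) - np.kron(sy, sx)))
    # interlayer coupling, Eq. (5)
    ht = t_hop * np.kron(sx, s0)
    # assemble Eq. (1), adding the interlayer potential of Eq. (6)
    h = np.zeros((8, 8), dtype=complex)
    h[:4, :4] = hg + delta * np.eye(4)
    h[4:, 4:] = hws - delta * np.eye(4)
    h[:4, 4:] = ht
    h[4:, :4] = ht.conj().T
    return h

ks = np.linspace(-0.05, 0.05, 301)             # k_x sweep near the K point, 1/Angstrom
for delta, style in [(0.300, 'r-'), (0.0, 'b-'), (-0.300, 'g-')]:
    bands = np.array([np.linalg.eigvalsh(h_total(k, 0.0, +1, delta)) for k in ks])
    plt.plot(ks, 1e3 * bands, style, lw=0.8)
plt.xlabel('k_x (1/Angstrom)')
plt.ylabel('Energy (meV)')
plt.ylim(-400, 400)
plt.show()

Sweeping Δ over 0 and ±300 meV in this toy setup shows the gap closing and reopening near the Dirac point that underlies the metal-insulator behavior discussed around Fig. 5.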
Anomalous Hall effect and Drude conductivity

We attribute the observed Hall effect to the anomalous Hall effect induced by the Berry curvature. The anomalous Hall conductivity of the system is defined as

\sigma_{xy} = -\frac{e^2}{\hbar} \sum_{n,\xi} \iint \frac{dk_x \, dk_y}{(2\pi)^2} \, \Omega^{n,\xi}_{z} \, f^{n,\xi} ,    (7)

where n is the band index. As observed in our experiment, a Hall current can only be generated through a population imbalance due to the valley gap difference. The van der Waals stacking of graphene onto hexagonal boron nitride offers a natural platform for valley control [?]. To induce a finite valley splitting, we have incorporated a term Δ_vs = 10 meV between the two valleys, as shown in Fig. 1(f) of the main manuscript. It is important to note that ϵ_K ≠ ϵ_K′ even without external perturbations such as an electric field. As a result of this valley splitting, a finite anomalous Hall conductivity σxy is generated in the system (see Fig. 2(f) in the main manuscript).

We calculate σxx using the expression for the Drude conductivity,

\sigma_{xx} = e^2 \tau \sum_{n,\xi} \iint \frac{dk_x \, dk_y}{4\pi^2} \, v^{n,\xi}_{x} v^{n,\xi}_{x} \left( -\frac{\partial f}{\partial \epsilon} \right)_{\epsilon = \epsilon_n(k)} .    (8)

The band velocity is defined as ℏ v^{n,ξ}_x = ∂ϵ^{n,ξ}/∂k_x, where n is the band index. The longitudinal conductivity σxx, which follows the density of states (DOS), shows a W-like pattern with an increase in the electric field. The calculated σxx captures the qualitative behavior of the inverse of the experimental resistivity (Rxx) plot in Fig. 4(a) of the main manuscript. The pseudogap between the first and second valence (conduction) bands produces the low-conductance dips below (above) the Fermi energy, whereas for a finite electric field the substantial DOS at the Fermi energy promotes the metallic behavior indicated by the peak in σxx in Fig. 4(c) of the main manuscript.
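As a rough illustration of how Eq. (8) can be evaluated numerically (our sketch, not the calculation behind Fig. 4(c)), the snippet below computes the ratio σxx/σv, with σv = e²τ/(4π²ℏ²), for a single gapped Dirac cone using the graphene Fermi velocity quoted above; in these units (energies in eV, momenta in 1/Å) the ratio comes out as a number in eV. The 10 meV gap and the 50 K temperature are placeholder assumptions, spin is ignored, and the full analysis in the text uses the eight-band Hamiltonian instead.

import numpy as np

vF   = 3.46            # graphene Fermi velocity, eV*Angstrom
gap  = 0.010           # assumed proximity-induced gap, eV
kBT  = 8.617e-5 * 50   # k_B * T at an assumed T = 50 K, eV

k = np.linspace(-0.05, 0.05, 401)        # k-grid around one Dirac point, 1/Angstrom
kx, ky = np.meshgrid(k, k)
dk2 = (k[1] - k[0]) ** 2
ek = np.sqrt((vF * kx) ** 2 + (vF * ky) ** 2 + gap ** 2)   # |E(k)| of the two bands

def sigma_xx_over_sigma_v(mu):
    # Eq. (8) for the bands +/- ek; the factor 2 counts the two valleys
    total = 0.0
    for sgn in (+1.0, -1.0):
        e = sgn * ek
        hbar_vx = sgn * vF ** 2 * kx / ek                   # hbar*v_x = dE/dk_x, eV*Angstrom
        x = np.clip((e - mu) / (2.0 * kBT), -250.0, 250.0)
        mdfde = 1.0 / (4.0 * kBT * np.cosh(x) ** 2)         # -df/dE, 1/eV
        total += 2.0 * np.sum(hbar_vx ** 2 * mdfde) * dk2
    return total

for mu in (-0.05, -0.02, 0.0, 0.02, 0.05):                  # chemical potential sweep, eV
    print(f"mu = {1e3 * mu:+5.0f} meV  ->  sigma_xx / sigma_v (eV) = {sigma_xx_over_sigma_v(mu):.4f}")

The suppression of σxx when μ lies inside the gap and its rise once μ enters the bands is the single-cone analogue of the conductivity dips and peaks discussed above.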
Device fabrication

Thin flakes of WSe2, hBN, and graphene were mechanically exfoliated on Si/SiO2 substrates. The thickness of the flakes was initially estimated from the color contrast under an optical microscope and later confirmed using Raman spectroscopy. This was followed by sequential pickup of each flake using a polycarbonate (PC) film at 90 °C. The assembled heterostructure was transferred onto a new Si/SiO2 substrate. The heterostructure was then cleaned in chloroform, acetone, and IPA to remove the PC residue, and subsequently annealed at 250 °C for 3 hours. Electron-beam lithography was used to define the contact and top-gate electrodes. We used reactive ion etching (a mixture of CHF3 and O2 gases) to etch the top hBN and make one-dimensional edge contacts to graphene. For the electrical contacts, Cr/Au (5 nm/60 nm) was deposited, followed by liftoff in hot acetone and cleaning in IPA. The unwanted hBN and graphene were removed using e-beam lithography and dry etching to define the Hall bar. We then transferred an hBN flake on top of the device and fabricated a metallic top gate using lithography and thermal deposition.

Figure 6: Data on device SW2. (a) Plot of the longitudinal and transverse resistivity versus number density for device SW2. (b) Plot of the transverse resistance versus number density in two different configurations for device SW2. Configuration 1 measures Rxy(B = 0) and configuration 2 measures Ryx(B = 0).

Data on device SW2

Fig. 6(a) shows the data for the zero-field longitudinal and transverse resistance in device SW2; one can see the appearance of a finite Rxy(B = 0) that changes its sign near the Dirac point.
Fig. 6(b) presents the B = 0 transverse signal measured in two different configurations: configuration 1 measures Rxy(B = 0), while configuration 2 measures Ryx(B = 0). The two signals overlap exactly with each other. Note that this is what one expects from the Onsager relation Rxy(B) = Ryx(−B) at B = 0.

Low-field magnetoresistance

Fig. 7(a) shows line plots of the transverse signal measured in device SW2 in the presence of a small perpendicular magnetic field. The data show a smooth evolution of the anomalous Hall signal into the classical Hall signal. This can be better appreciated from Fig. 7(b), which is a 2D map of the transverse signal in the n-B plane.

Figure 7: Dependence of Rxy on B. (a) Plot of Rxy at small magnetic field values measured for device SW2. (b) A 2D map of the transverse resistance Rxy(B) in the n-B plane; the data show a finite Hall signal at B = 0 T.

Raman shift and strain

We used low-temperature Raman spectroscopy on the graphene-WSe2 stack to estimate the strain in graphene.
High-quality single-layer graphene has two prominent Raman-active modes, the G mode (1580 cm−1) and the 2D mode (2690 cm−1). In the presence of a uniaxial strain ϵ, the shift in the 2D peak has been measured to be δω^SLG_2D/ϵ ∼ −64 cm−1/% [?]. Fig. 8(a) shows a comparison of the temperature dependence of the Raman shift of the 2D band measured for graphene, ω^SLG_2D, and for graphene on WSe2, ω^SLG/WSe2_2D. In Fig. 8(b), we show a plot of the T dependence of δω_2D = ω^SLG/WSe2_2D − ω^SLG_2D. One can see that the difference in the Raman shift of the 2D peak increases rapidly with decreasing T; the positive value of δω_2D indicates that the strain is compressive. The temperature dependence of the strain in graphene was extracted from the data in Fig. 8(b); its magnitude is plotted in Fig. 8(c).

Figure 8: Raman shift in the 2D band of graphene. (a) Temperature variation of the measured Raman shift of the 2D peak of graphene (blue filled circles) and of graphene on single-layer WSe2 (red filled circles). (b) Plot of δω_2D versus T. (c) Plot of the T dependence of the magnitude of the strain |ϵ| in SLG on single-layer WSe2.
The data show that SLG on single-layer WSe2 undergoes a significant compressive strain of about 0.2% at 4 K.
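As a quick consistency check (our estimate, not part of the measurement): taking a low-temperature shift δω_2D of order 13 cm−1, consistent with the range shown in Fig. 8(b), and the calibration quoted above,

|\epsilon| \approx \frac{|\delta\omega_{2D}|}{64\ \mathrm{cm^{-1}/\%}} \approx \frac{13}{64}\,\% \approx 0.2\,\% ,

in agreement with the value quoted at 4 K; the strain is identified as compressive because the measured δω_2D is positive while the calibration slope δω_2D/ϵ is negative.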
Absence of ferromagnetism and nonlinear AHE

The measured magnetoresistance in our devices is non-hysteretic (Fig. 9(a)). This is clear evidence of the absence of ferromagnetism in the system. We also find the second-harmonic signal R^2ω_xy to be negligibly small for our device (Fig. 9(b)). This establishes that there is no nonlinear anomalous Hall effect in this system. To establish that the absence of the second-harmonic signal is real and not an experimental artifact, we plot for comparison in Fig. 9(b) the data from similar measurements on hBN/graphene moiré devices. In the moiré device, we measure a finite nonlinear signal R^2ω_xy near the primary Dirac point (as expected from previous reports [50]).

Figure 9: Nonlinear AHE and MR. (a) Plot of the magnetoresistance in a small magnetic field at a displacement field D = −0.3 V/nm. The data were taken at n = −2 × 10^16 m−2. (b) Plot of the nonlinear AHE R^2ω_xy(B = 0) for SLG/WSe2 (red line). The data are contrasted with those obtained for a graphene/hBN moiré device (black line).

References

(1) Xiao, D.; Chang, M.-C.; Niu, Q. Berry phase effects on electronic properties. Rev. Mod. Phys. 2010, 82, 1959–2007.
(2) Ahn, J.; Guo, G.-Y.; Nagaosa, N.; Vishwanath, A. Riemannian geometry of resonant optical responses. Nature Physics 2022, 18, 290–295.
(3) Gao, A. et al. Layer Hall effect in a 2D topological axion antiferromagnet. Nature 2021, 595, 521–525.
(4) Bhalla, P.; Das, K.; Culcer, D.; Agarwal, A. Resonant Second-Harmonic Generation as a Probe of Quantum Geometry. Phys. Rev. Lett. 2022, 129, 227401.
(5) Han, W.; Kawakami, R. K.; Gmitra, M.; Fabian, J. Graphene spintronics. Nature Nanotechnology 2014, 9, 794–807.
(6) Sinova, J.; Valenzuela, S. O.; Wunderlich, J.; Back, C.; Jungwirth, T. Spin Hall effects. Reviews of Modern Physics 2015, 87, 1213.
(7) Hirsch, J. Spin Hall effect. Physical Review Letters 1999, 83, 1834.
(8) Bernevig, B. A.; Zhang, S.-C. Quantum spin Hall effect. Physical Review Letters 2006, 96, 106802.
(9) Tiwari, P.; Jat, M. K.; Udupa, A.; Narang, D. S.; Watanabe, K.; Taniguchi, T.; Sen, D.; Bid, A. Experimental observation of spin-split energy dispersion in high-mobility single-layer graphene/WSe2 heterostructures. npj 2D Materials and Applications 2022, 6, 68.
(10) Xiao, D.; Liu, G.-B.; Feng, W.; Xu, X.; Yao, W. Coupled Spin and Valley Physics in Monolayers of MoS2 and Other Group-VI Dichalcogenides. Phys. Rev. Lett. 2012, 108, 196802.
(11) Cresti, A.; Nikolić, B. K.; García, J. H.; Roche, S. Charge, spin and valley Hall effects in disordered graphene. La Rivista del Nuovo Cimento 2016, 39, 587–667.
(12) Mak, K. F.; McGill, K. L.; Park, J.; McEuen, P. L. The valley Hall effect in MoS2 transistors. Science 2014, 344, 1489–1492.
(13) Lee, J.; Mak, K. F.; Shan, J. Electrical control of the valley Hall effect in bilayer MoS2 transistors. Nature Nanotechnology 2016, 11, 421–425.
(14) Liu, J.; Ma, Z.; Gao, J.; Dai, X. Quantum valley Hall effect, orbital magnetism, and anomalous Hall effect in twisted multilayer graphene systems. Physical Review X 2019, 9, 031021.
(15) Qiao, Z.; Yang, S. A.; Feng, W.; Tse, W.-K.; Ding, J.; Yao, Y.; Wang, J.; Niu, Q. Quantum anomalous Hall effect in graphene from Rashba and exchange effects. Phys. Rev. B 2010, 82, 161414.
(16) Shimazaki, Y.; Yamamoto, M.; Borzenets, I. V.; Watanabe, K.; Taniguchi, T.; Tarucha, S. Generation and detection of pure valley current by electrically induced Berry curvature in bilayer graphene. Nature Physics 2015, 11, 1032–1036.
(17) Sui, M.; Chen, G.; Ma, L.; Shan, W.-Y.; Tian, D.; Watanabe, K.; Taniguchi, T.; Jin, X.; Yao, W.; Xiao, D.; Zhang, Y. Gate-tunable topological valley transport in bilayer graphene. Nature Physics 2015, 11, 1027–1031.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Xiao, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Zhang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Gate-tunable topological valley transport in bilayer graphene.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Nature Physics 2015, 11, 1027–1031.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (18) Wallbank, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Tuning the valley and chiral quantum state of Dirac electrons in van der Waals heterostructures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Science 2016, 353, 575–579.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (19) Xiao, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Yao, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Niu, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Valley-Contrasting Physics in Graphene: Magnetic Moment and Topological Transport.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' 2007, 99, 236809.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (20) Sodemann, I.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Fu, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Quantum Nonlinear Hall Effect Induced by Berry Curvature Dipole in Time-Reversal Invariant Materials.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' 2015, 115, 216806.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' 23 (21) Du, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Wang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Li, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Lu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='-Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Xie, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Disorder-induced nonlinear Hall effect with time-reversal symmetry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Nature Communications 2019, 10, 3047.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (22) Sinha, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Adak, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Chakraborty, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Das, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Debnath, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Sangani, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Watanabe, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Taniguchi, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Waghmare, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Agarwal, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Deshmukh, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Berry curvature dipole senses topological transition in a moiré superlattice.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Nature Physics 2022, 18, 765–770.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (23) Chakraborty, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Das, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Sinha, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Adak, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Deshmukh, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Agarwal, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Nonlinear anomalous Hall effects probe topological phase-transitions in twisted double bilayer graphene.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' 2D Materials 2022, 9, 045020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (24) Zhai, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Chen, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Xiao, C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Yao, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Layer-Contrasted Hall Effect in Twisted Bilayers with Time Reversal Symmetry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' 2022;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='org/abs/2207.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='14644.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (25) Ho, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Chang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Hsieh, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Lo, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='-T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Huang, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Vu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='-Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Ortix, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Chen, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='- M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Hall effects in artificially corrugated bilayer graphene without breaking time-reversal symmetry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Nature Electronics 2021, 4, 116–125.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (26) Sharpe, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Fox, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Barnard, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Finney, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Watanabe, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Taniguchi, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Kastner, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Goldhaber-Gordon, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Emergent ferromagnetism near three-quarters filling in twisted bilayer graphene.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Science 2019, 365, 605–608.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (27) Serlin, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Tschirhart, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Polshyn, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Zhang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Zhu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Watanabe, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Taniguchi, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Balents, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Young, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Intrinsic quantized anomalous Hall effect in a moiré heterostructure.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Science 2020, 367, 900–903.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (28) Li, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Jiang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Shen, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Zhang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Li, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Tao, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Devakul, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Watanabe, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Taniguchi, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Fu, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Shan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Mak, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' F.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Quantum anomalous Hall effect from intertwined moiré bands.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Nature 2021, 600, 641–646.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' 24 (29) Lin, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='-X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Zhang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Morissette, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Liu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Rhodes, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Watanabe, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Taniguchi, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Hone, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Li, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' I.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Spin-orbit-driven ferromagnetism at half moiré filling in magic-angle twisted bilayer graphene.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Science 2022, 375, 437–441.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (30) Kuiri, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Coleman, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Gao, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Vishnuradhan, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Watanabe, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Taniguchi, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Zhu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' MacDonald, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Folk, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Spontaneous time-reversal symmetry breaking in twisted double bilayer graphene.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Nature Communications 2022, 13, 6468.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (31) Xie, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='-M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Zhang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='-P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Hu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content='-X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Mak, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Law, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Valley-Polarized Quantum Anomalous Hall State in Moiré MoTe2/WSe2 Heterobilayers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' 2022, 128, 026402.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (32) Kang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Vafek, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Strong Coupling Phases of Partially Filled Twisted Bilayer Graphene Narrow Bands.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Phys.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' 2019, 122, 246401.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (33) Liu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Dai, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Anomalous Hall effect, magneto-optical properties, and nonlinear optical properties of twisted graphene systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' npj Computational Materials 2020, 6, 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (34) Qiao, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Ren, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Chen, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Bellaiche, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Zhang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' MacDonald, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Niu, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Quantum anomalous Hall effect in graphene proximity coupled to an antiferromagnetic insulator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Physical review letters 2014, 112, 116404.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (35) Song, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Ranjbar, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Daughton, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Kiehl, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Nanoparticle-induced anomalous Hall effect in graphene.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Nano Letters 2019, 19, 7112–7118.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (36) Avsar, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Tan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Taychatanapat, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Balakrishnan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Koon, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Yeo, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Lahiri, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Carvalho, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Rodin, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' O’Farrell, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Spin-orbit proximity effect in graphene.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Nature communications 2014, 5, 1–6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (37) Ghiasi, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Kaverzin, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Blah, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' van Wees, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Charge-to-spin conversion by the Rashba–Edelstein effect in two-dimensional van der Waals heterostructures up to room temperature.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Nano letters 2019, 19, 5959–5966.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' 25 (38) Tiwari, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Srivastav, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Ray, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Das, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Bid, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Observation of Time-Reversal Invariant Helical Edge-Modes in Bilayer Graphene/WSe2 Heterostructure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' ACS Nano 2021, 15, 916– 922, PMID: 33378173.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' (39) Herling, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Safeer, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Ingla-Aynés, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Ontoso, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' Hueso, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AzT4oBgHgl3EQf8_5O/content/2301.01912v1.pdf'} +page_content=' E.' 
diff --git a/1tFIT4oBgHgl3EQf4CvP/vector_store/index.pkl b/1tFIT4oBgHgl3EQf4CvP/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..0c66449e2b16b2ba2b3b5f3b2a6138a3ffc6d398
--- /dev/null
+++ b/1tFIT4oBgHgl3EQf4CvP/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7dfaa2ba8e03efae28dbad253816baf5565e382534a570e0bbf009160b3bb56f
+size 135915
diff --git a/2NE1T4oBgHgl3EQfAAJE/content/2301.02833v1.pdf b/2NE1T4oBgHgl3EQfAAJE/content/2301.02833v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e968544a460bfea20d848964818f26ebfd6ae47a
--- /dev/null
+++ b/2NE1T4oBgHgl3EQfAAJE/content/2301.02833v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:94c02748995b5161a15c2061277c9df8166cf274cffab335c76f42a89e83f05d
+size 653017
diff --git a/2NE1T4oBgHgl3EQfAAJE/vector_store/index.pkl b/2NE1T4oBgHgl3EQfAAJE/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..0eac3e6d2c3cc6dc154ba0799442afcfe7afd8c5
--- /dev/null
+++ b/2NE1T4oBgHgl3EQfAAJE/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ba60bd80d7b9a55129f9593fb323927a5e666a6b60501dd9700a7dfe57fdf27
+size 180108
diff --git a/2tE0T4oBgHgl3EQfdwDZ/content/tmp_files/2301.02382v1.pdf.txt b/2tE0T4oBgHgl3EQfdwDZ/content/tmp_files/2301.02382v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..21eb036996a0a4a1b5dc9fee941a869eb236db4c
--- /dev/null
+++ b/2tE0T4oBgHgl3EQfdwDZ/content/tmp_files/2301.02382v1.pdf.txt
@@ -0,0 +1,1049 @@
+ReVoLT: Relational Reasoning and Voronoi Local Graph Planning
+for Target-driven Navigation
+Junjia Liu13, Jianfei Guo23, Zehui Meng3, Jingtao Xue3
+1 Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong
+2 School of Automation Science and Engineering, Xi’an Jiaotong University
+3 Application Innovate Laboratory (2012 Laboratories), Huawei Technologies Co., Ltd.
+Beijing, 100038, China
+jjliu@mae.cuhk.edu.hk, ventus@stu.xjtu.edu.cn, {mengzehui, xuejingtao}@huawei.com
+Abstract—Embodied AI is an inevitable trend that emphasizes
+the interaction between intelligent entities and the real world,
+with broad applications in Robotics, especially target-driven
+navigation. This task requires the robot to find an object of a
+certain category efficiently in an unknown domestic environment.
+Recent works focus on exploiting layout relationships by graph
+neural networks (GNNs). However, most of them obtain robot
+actions directly from observations in an end-to-end manner
+via an incomplete relation graph, which is not interpretable
+and reliable. We decouple this task and propose ReVoLT, a
+hierarchical framework: (a) an object detection visual front-
+end, (b) a high-level reasoner (infers semantic sub-goals), (c) an
+intermediate-level planner (computes geometrical positions), and
+(d) a low-level controller (executes actions). ReVoLT operates with
+a multi-layer semantic-spatial topological graph. The reasoner
+uses multiform structured relations as priors, which are obtained
+from combinatorial relation extraction networks composed of
+unsupervised GraphSAGE, GCN, and GraphRNN-based Region
+Rollout.
+The reasoner runs Upper Confidence Bound for Trees (UCT) to infer semantic
+sub-goals, trading off exploitation (depth-first searching) against
+exploration (regretting). The lightweight intermediate-level planner
+generates instantaneous spatial sub-goal locations via an online-constructed
+Voronoi local graph. Simulation experiments demonstrate that our framework
+achieves better performance on target-driven navigation tasks and
+generalizes well, with an 80% improvement over the existing
+state-of-the-art method. The code and result video will be released at
+https://ventusff.github.io/ReVoLT-website/.
+Index Terms—Relational reasoning, combinatorial relation graph neural
+networks, UCT bandit, online Voronoi local graph
+I. INTRODUCTION
+Finding objects in complex houses efficiently is a prerequisite for
+domestic service robots. Robots need to reason and make dynamic decisions
+while interacting with the real-world environment. Embodied AI, proposed by
+Matej Hoffmann and Rolf Pfeifer [1], suggests that to truly understand how
+the human brain works, a brain should be embedded in a physical body and
+allowed to explore and interact with the real world. Among the work
+practicing Embodied AI in recent years, target-driven navigation (TDN) is
+one of the most feasible and essential tasks; it combines techniques from
+both machine learning and robotics and is widely applicable to scenarios
+such as domestic service robots. It typically requires the robot to find a
+target object of a certain category in an unknown scene, demanding both
+high efficiency and a high success rate. Hence, the key problems of the TDN
+task are generalizing across unknown domains and exploring efficiently.
+The traditional Simultaneous Localization and Mapping (SLAM) pipeline has
+already handled TDN to some extent [2], but numerous problems remain in its
+major modules. First, it is troublesome for SLAM-based methods to acquire
+and maintain a lifelong-updating semantic map, which demands accurate
+sensors and semantic information. Second, SLAM-based methods are inherently
+less adaptive to posterior information, which keeps them from generalizing
+well in complicated environments, especially indoor scenes. Last but not
+least, SLAM-based methods are not specially designed for searching objects
+in unknown environments, which requires keeping a balance between
+exploitation (depth-first searching) and exploration (regretting).
+Recently, learning-based methods have emerged and shown powerful
+capabilities for solving complicated tasks. However, these methods
+generally have problems with interpretability and generalization,
+especially in the TDN task, which requires robots to operate in unseen
+domains. We argue that it is more natural and effective to introduce priors
+[3] into the learning model instead of training from scratch, considering
+how humans teach infants. Introducing priors enables algorithms to achieve
+higher data efficiency, better model interpretability, and better
+generalization. In indoor TDN tasks, one of the most useful priors is the
+relationship among objects and rooms of different categories. Some recent
+works reason about the target direction using object relationships as
+priors in single-room environments [4]–[6].
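To make the idea of an object-relationship prior concrete, the short Python sketch below scores how promising the currently observed objects are for a given target using embedding similarity, which is the role such relational priors play in these methods. It is only an illustration: the three-dimensional vectors and the max-similarity rule are invented for the example and are not the learned embeddings or the exact scoring used in this paper.

import numpy as np

# Toy object embeddings (GloVe-like vectors); in a real system these would
# come from a learned relation model. The values here are made up.
EMBED = {
    "bed":          np.array([0.9, 0.1, 0.0]),
    "pillow":       np.array([0.8, 0.2, 0.1]),
    "wardrobe":     np.array([0.7, 0.3, 0.2]),
    "refrigerator": np.array([0.1, 0.9, 0.2]),
    "oven":         np.array([0.0, 0.8, 0.3]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def exploration_value(target, observed_objects):
    """Score how promising the observed clique is for finding `target`:
    the maximum embedding similarity to any observed object."""
    return max(cosine(EMBED[target], EMBED[o]) for o in observed_objects)

# A bedroom-like observation scores high for "pillow" and low for "oven".
print(exploration_value("pillow", ["bed", "wardrobe"]))
print(exploration_value("oven", ["bed", "wardrobe"]))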
+However, common domestic scenes are composed of multiple rooms, so more
+prior information, such as room connections, object-in-room membership, and
+other implicitly structured relationships, could be exploited; these are
+typically ignored in such works.
+In this paper, we propose a hierarchical navigation framework, Relational
+Reasoning and Voronoi Local graph planning (ReVoLT), which comprises a
+combinatorial graph neural network for multiform domestic relation
+extraction, a UCT-based reasoning exploration, and an online Voronoi local
+graph for the semantic-spatial transition. The detailed contributions are
+as follows:
+• The TDN task is concisely decomposed, allowing separate, specialized
+designs for different modules instead of operating in a mixed-up
+end-to-end manner. We focus our efforts on designing the reasoner and the
+planner.
+• To extract multiform structural relations for reasoning, we propose
+combining unsupervised GraphSAGE [7], self-supervised GCN, and c-GraphRNN
+methods for learning object embedding, region embedding, and region
+rollout, respectively.
+• Based on the relation priors, the high-level reasoner (semantic
+reasoning) is abstracted as a bandit problem and adopts UCT to balance
+exploitation (depth-first searching) and exploration (regretting).
+• We construct Voronoi local graphs online using RGB-D observations and
+convert semantic sub-goals to spatial locations. We term this an
+intermediate-level planning process.
+• Test results show that the proposed framework outperforms
+state-of-the-art methods, achieving a higher success rate and success
+weighted by path length (SPL) with good generalization.
+arXiv:2301.02382v1 [cs.RO] 6 Jan 2023
+Fig. 1. The main hierarchical framework of the ReVoLT method, which
+contains a high-level reasoner (infers semantic sub-goals), an
+intermediate-level planner (computes spatial sub-goal locations), and a
+low-level controller (computes actions). The combinatorial relation
+extraction module provides priors on the exploration value of observed
+objects and regions through embedding similarity. In particular, the Region
+Rollout model provides Monte Carlo simulation for UCT in a conditional
+GraphRNN (c-GraphRNN) manner.
+II. RELATED WORKS
+Recently, many TDN solutions have been based on relational reasoning. They
+have the advantage of replacing the explicit metric map of SLAM-based
+methods, inferring the approximate position of the target object from
+observed objects. Most of these methods use GNNs to learn object-object
+proximity relationships but ignore the relationships between regions/rooms,
+which limits their task scenarios to a single room (using the AI2-THOR
+dataset [8] in simulation for training). For example, Yang et al. [4]
+propose using a Graph Convolutional Network (GCN) to incorporate prior
+knowledge about object relationships into a Deep Reinforcement Learning
+(DRL) framework as part of a joint embedding.
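The contribution list and Fig. 1 above describe the high-level reasoner as a bandit that trades off exploiting high-value semantic sub-goals against exploring rarely visited ones. A minimal, generic UCB1-style sketch of such a selection rule is shown below; the candidate labels, prior values, and exploration constant are invented for illustration and do not reproduce the paper's actual UCT reasoner.

import math

def uct_score(prior_value, visits, parent_visits, c=1.0):
    """UCB1-style score: exploit sub-goals with high prior value, but keep
    an exploration bonus for rarely visited ones."""
    if visits == 0:
        return float("inf")  # always try an unvisited sub-goal once
    return prior_value + c * math.sqrt(math.log(parent_visits) / visits)

def select_subgoal(candidates, parent_visits):
    """candidates: dicts with a semantic label, a prior value in [0, 1]
    (e.g., from relation-embedding similarity), and a visit count."""
    return max(candidates,
               key=lambda s: uct_score(s["value"], s["visits"], parent_visits))

candidates = [
    {"label": "kitchen (ghost node)", "value": 0.8, "visits": 3},
    {"label": "hallway (ghost node)", "value": 0.4, "visits": 0},
    {"label": "current room clique",  "value": 0.6, "visits": 5},
]
# The unvisited hallway wins here, illustrating the "regretting" behaviour.
print(select_subgoal(candidates, parent_visits=8)["label"])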
II. RELATED WORKS

Recently, many TDN solutions based on relational reasoning have been proposed. They have the advantage of replacing the explicit metric map used by SLAM-based methods, inferring the approximate position of the target object from observed objects. Most of these methods use GNNs to learn object-object proximity relationships but ignore the relationships between regions/rooms, which limits their task scenarios to a single room (using the AI2-THOR dataset [8] in simulation for training). For example, Yang et al. [4] propose using a Graph Convolutional Network (GCN) to incorporate prior knowledge about object relationships into a Deep Reinforcement Learning (DRL) framework as part of a joint embedding. Their priors are obtained from large-scale scene understanding datasets and updated according to the current observation. Qiu et al. [6] share the same idea but extract observations as context vectors, which integrate the relationship strength between connected objects and their spatial information.

For navigation tasks in houses with multiple rooms, it is necessary to first reach the room that may contain the target object (e.g., refrigerator-kitchen) and then search for the target among object cliques. Therefore, the learning of prior knowledge should consider more relationships, including room-to-room connection and object-in-room membership. Wu et al. [9] propose a memory structure based on a Bayesian graph model. It uses a probabilistic relationship graph to obtain the prior house layout from the training set and estimates its posterior in the test set. However, this work does not combine object-level reasoning to achieve a complete TDN task. Chaplot et al. [10] build a topological representation with associated semantic features and learn a prior semantic score function to evaluate the probability of potential graph nodes in various directions. However, they provide target images, which is impractical in domestic scenarios, while our method only uses target labels. They subsequently extend the Active Neural SLAM system [2] to learn semantic priors using a semantically aware long-term policy for the label-target navigation task [11] and won the CVPR 2020 Habitat ObjectNav Challenge1 [12]. It is worth mentioning that they also point out that end-to-end learning-based methods suffer from large sample complexity and poor generalization, as they memorize object locations and appearance in training environments [11], which prompts us to consider a hierarchical framework with a topological graph. Table I only lists TDN methods with label targets and relational reasoning.

1 https://aihabitat.org/challenge/2020/

TABLE I
PERFORMANCE OF EXISTING TDN METHODS WITH VARIOUS EXPERIMENT SETTINGS

Method | Room Scale | Dataset | SR(%) | SPL(%)
Scene-prior [4] | Single | AI2-THOR | 35.4 | 10.9
SAVN [13] | Single | AI2-THOR | 35.7 | 9.3
MJOLNIR [6] | Single | AI2-THOR | 65.3 | 21.1
BRM [9] | Multiple | House3D | - | -
SemExp† [11] | Multiple | Matterport3D | 36.0 | 14.4
† SemExp won first place in the CVPR 2020 Habitat competition.
III. REVOLT REASONING & PLANNING WITH A HIERARCHICAL FRAMEWORK

This task needs to be re-examined from the perspective of bionics. Imagine a human facing such a task upon entering an unknown house. He will not feel confused, thanks to the prior knowledge he has about domestic scenes. It is natural to first roughly determine the type of the current room based on the categories of the multiple objects observed in it (e.g., a bedroom). According to the object-in-room membership, the exploration value V(t | cur_room) of the target object t in the current room can be obtained. At the same time, some potential but unexplored passages (e.g., a door or hallway) can be marked as ghost nodes, as in [10]. The structural relationships of the house layout and room connections help us predict the categories and values V(t | next_room) of the next rooms connected by ghost nodes.

Beyond these priors, dynamic decisions also have to be made within a specific task, rather than just applying experience mechanically. A reasoning procedure that combines intelligent exploration and exploitation is one of the winning strategies. Thus, our approach focuses on solving the following two problems:
• How to obtain a more effective prior conditional exploration value in a structured form?
• How to make efficient decisions among multiple feasible paths based on exploration values?

The remainder of this section is organized as follows. In subsections III-A, III-B, and III-C, we present a combinatorial relation extraction module (Fig. 2) using GNNs, which learns three different relationships in a unified paradigm. A UCT-based online reasoner is described in subsection III-D. In III-E, we consider coarse spatial information and build an intermediate-level planner through online Voronoi construction. Finally, the whole ReVoLT hierarchical framework is summarized in subsection III-F (Fig. 1).

Fig. 2. Combinatorial relation extraction module. (a) Object embeddings are obtained via unsupervised weighted GraphSAGE; (b) region embeddings are obtained by passing a sub-graph with object embeddings through GCN layers; (c) according to the house structure of region connectivity, a GraphRNN-based model learns the structure distribution and generates possible features of future regions node by node.

A. Object Embedding learning

As illustrated in Fig. 2 (a), the object-to-object relationship consists not only of pair-wise semantic similarity but also of distances and the number of hops between object pairs. We first extract an object-level graph Go(Vo, Eo) from object positions pos and categories Co in the Matterport3D dataset. Objects in the same room are fully connected. For object pairs in different rooms, only those closest to a common door are connected by an edge. This helps the robot infer objects that are strongly related to the target using only object-level embeddings.

GraphSAGE [7] is a popular node embedding model. We adopt it to obtain an embedding for each object category that fuses semantics and proximity relationships with other categories. Our node embedding procedure uses GloVe [14] as the initial node semantic features {x_v, ∀v ∈ Vo} and employs an unsupervised form of GraphSAGE with a loss that penalizes embedding similarity between two objects far apart and rewards it for adjacent ones. Different from the original GraphSAGE, edge features {ω_{e:u→v}, ∀e ∈ Eo} are also taken into account in the aggregation and loss calculations. For each search depth k with weight matrices W^k, ∀k ∈ {1, . . . , K}, we employ an edge-weighted mean aggregator, which simply takes the element-wise mean of the vectors in {h_u^{k−1}, ∀u ∈ N(v)} to aggregate information from node neighbors:

h_v^0 \leftarrow x_v, \quad \forall v \in V_o
h_v^k \leftarrow \sigma\big( W^k \cdot \mathrm{mean}(\{ h_v^{k-1} \} \cup \{ \omega_{u \to v} \cdot h_u^{k-1}, \forall u \in N(v) \}) \big)    (1)

Then an edge-weighted loss function is applied to the outputs {z_v, ∀v ∈ Vo} to tune the weight matrices W^k:

L_{G_o}(z_v) = -\log\big( \sigma( \omega_{u \to v} z_v^\top z_u ) \big) - Q \cdot \mathbb{E}_{u_n \sim P_n(v)} \log\big( \sigma( -\omega_{u \to v} z_v^\top z_{u_n} ) \big)    (2)

where P_n is a negative sampling distribution, Q defines the number of negative samples, and σ is the sigmoid function.

Since object embeddings of the same category {z_c, ∀c ∈ Co} should have a consistent representation, another mean aggregation is performed over the embeddings of each category between the final GraphSAGE aggregation and the loss function. This overwrites the original value with the final embedding for each category: {z_c ← mean(h_v^K), if Co(v) = c}.
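As a concrete illustration of Eqs. (1)-(2), the following is a minimal PyTorch-style sketch of the edge-weighted mean aggregation and the unsupervised embedding loss. The per-node neighbor lists, scalar edge weights, and negative-sampling interface are our own simplifications for illustration, not the authors' implementation.

```python
# Sketch of Eq. (1) (edge-weighted mean aggregation) and Eq. (2) (unsupervised loss).
# Shapes and variable names are illustrative assumptions.
import torch

def edge_weighted_mean_aggregate(h, neighbors, edge_w, W_k):
    """One aggregation hop, Eq. (1).
    h:         (N, D) node features h^{k-1}
    neighbors: list of LongTensors, neighbors[v] = indices of N(v)
    edge_w:    list of FloatTensors, edge_w[v] = weights omega_{u->v}
    W_k:       (D, D) learnable weight matrix for hop k
    """
    out = torch.empty_like(h)
    for v in range(h.size(0)):
        w = edge_w[v].unsqueeze(1)                       # (deg, 1)
        nbr_feats = w * h[neighbors[v]]                  # omega_{u->v} * h_u^{k-1}
        pooled = torch.cat([h[v].unsqueeze(0), nbr_feats], dim=0).mean(dim=0)
        out[v] = torch.sigmoid(pooled @ W_k)             # sigma(W^k . mean(...))
    return out

def unsupervised_edge_weighted_loss(z_v, z_u, w_uv, z_neg, Q=5):
    """Eq. (2): pull a connected pair together, push Q negative samples apart."""
    pos = -torch.log(torch.sigmoid(w_uv * z_v.dot(z_u)) + 1e-8)
    neg = -torch.log(torch.sigmoid(-w_uv * (z_neg @ z_v)) + 1e-8).mean()
    return pos + Q * neg
```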
B. Region Embedding learning

Apart from the pairwise relationships between objects, the many-to-one relationship between an object and a room or region is also indispensable for inferring how likely the target object is to exist in a certain room or among multiple observed objects. Besides, to evaluate similarity, relationships at different levels should follow a unified paradigm so that their representations are comparable under consistent metrics. Therefore, for region-level sub-graphs, we opt for the same embedding representation procedure. This part is shown in Fig. 2 (b).

Region embedding is carried out in a self-supervised form. We take the sub-graph Gr(Vr, Er) as input, with the embeddings of objects in the same region {z_c, ∀c ∈ Co} as nodes and weighted spatial distances as edges. The batch composed of these sub-graphs is passed through the GCN [15], and the corresponding region embeddings {r_v, ∀v ∈ Vr} are obtained. Similarly to the previous procedure, a mean aggregation is performed over region embeddings with the same label to obtain a uniform vector representation {r_l, ∀l ∈ Lr}. Since there is no need for multi-hop aggregation at the region level, a simple GCN layer is applied rather than GraphSAGE.

To enable membership calculation between a region embedding r_l and an object embedding z_c, and to distinguish regions with different labels, we use a combined loss comprising two parts: the classification loss of the embedding label and the membership loss of object-in-region:

L_{G_r}(r_v) = -\log\big( \sigma( r_v^\top z_u ) \big) - Q \cdot \mathbb{E}_{u_n \sim P_n(v)} \log\big( \sigma( -r_v^\top z_{u_n} ) \big) - \frac{1}{n} \sum_{i=1}^{n} l_v \log( \hat{l}(r_v) )    (3)

where P_n(v) represents objects not in region v, and l̂(·) is a multi-layer perceptron (MLP) network.
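The sketch below illustrates the combined loss in Eq. (3): a membership term over objects inside and outside the region plus a label classification term. The MLP architecture, embedding size, and batching are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch of Eq. (3): object-in-region membership loss + region-label classification loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionLoss(nn.Module):
    def __init__(self, emb_dim=16, num_labels=31, Q=5):
        super().__init__()
        # \hat{l}(.) in Eq. (3): an assumed MLP classifying the region embedding's label
        self.classifier = nn.Sequential(
            nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, num_labels))
        self.Q = Q

    def forward(self, r_v, z_members, z_negatives, label):
        """r_v: (D,) region embedding; z_members: (M, D) objects inside the region;
        z_negatives: (Q, D) objects sampled from other regions; label: scalar LongTensor."""
        member = -torch.log(torch.sigmoid(z_members @ r_v) + 1e-8).mean()
        non_member = -torch.log(torch.sigmoid(-(z_negatives @ r_v)) + 1e-8).mean()
        cls = F.cross_entropy(self.classifier(r_v).unsqueeze(0), label.view(1))
        return member + self.Q * non_member + cls
```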
C. Region Rollout learning

As the third and most important part of relation extraction, the ability to reason about structural relationships plays a crucial role in identifying the correct navigation direction and shortening the exploration period. To achieve this, the joint probability p(Gh) of houses needs to be learned so that a probable house layout memory Gh ∼ p(Gh | Gsub) can be conceived conditioned on the observed regions Gsub. However, its sample space cannot be easily characterized. Thus, house graphs are modeled as sequences following the idea of GraphRNN [16], with some concepts redefined to make it more suitable for conditional reasoning with embeddings. This part is shown in Fig. 2 (c).

S^\pi = f_s(G_h, \pi) = (A_1^\pi, \ldots, A_n^\pi)    (4)

where π represents the node order, and each element A_i^π ∈ {0, 1}^{(i−1)×(i−1)}, i ∈ {1, . . . , n}, is an adjacency matrix encoding the edges between node π(v_i) and the previous nodes π(v_j), j ∈ {1, . . . , i − 1}, already in the graph.

Since each A_i^π has variable dimensions, we first pad it up to the maximum dimension n and then repeat the 2D matrix 16 times to form a 3D matrix of dimensions n × n × 16 as an edge mask, where 16 is the embedding length. Therefore, a featured graph can be expressed as the element-wise product of the region embedding matrix X^π under the corresponding order and the sequence matrix {S^π}_{3D}:

p(G) = \prod_{i=1}^{n+1} p\big( x_i^\pi \mid ( \{ S_1^\pi \}_{3D}, \ldots, \{ S_{i-1}^\pi \}_{3D} ) \odot X_{i-1}^\pi \big)    (5)

where X_{i−1}^π is the embedding matrix of dimensions (i − 1) × (i − 1) × 16 containing the region embeddings before region π(v_i), and x_i^π refers to the embedding of π(v_i).

Passing {S^π}_{3D} ⊙ X^π as a sequence into a GRU or LSTM, we can learn the structure distribution of houses. This allows us to predict the next region embedding and label conditioned on the observed subgraph. The loss function of the Region Rollout network is the cross-entropy between the generated embedding label and the real label:

L_{G_h}(x_i^\pi) = -\frac{1}{n} \sum_{i=1}^{n} l_i \cdot \text{log-softmax}\big[ (x_i^\pi)^\top r_j \big], \quad \forall j \in L_r    (6)
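To make the sequence construction in Eqs. (4)-(5) concrete, here is a minimal sketch that builds the masked embedding sequence for one node ordering and feeds it to a GRU. The padding scheme, hidden size, and single-GRU readout are our own simplifications of the c-GraphRNN idea, not the authors' exact architecture.

```python
# Sketch of turning a house graph into the masked sequence of Eqs. (4)-(5)
# and rolling it out with a GRU. Shapes and the readout head are assumptions.
import torch
import torch.nn as nn

def build_masked_sequence(adj, region_emb, n_max, emb_dim=16):
    """adj: (n, n) 0/1 adjacency under a node order pi; region_emb: (n, emb_dim)."""
    n = adj.size(0)
    steps = []
    for i in range(1, n):                                       # step i sees nodes 0..i-1
        mask = adj[i, :i].float().unsqueeze(1).expand(i, emb_dim)  # S_i^pi repeated along emb axis
        x = mask * region_emb[:i]                                # {S^pi}_3D elementwise-times X^pi
        pad = torch.zeros(n_max - i, emb_dim)                    # fill up to the maximum dimension
        steps.append(torch.cat([x, pad], dim=0).flatten())
    return torch.stack(steps)                                    # (n-1, n_max * emb_dim)

class RegionRollout(nn.Module):
    def __init__(self, n_max, emb_dim=16, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_max * emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, emb_dim)                   # predicts the next region embedding

    def forward(self, seq):                                      # seq: (T, n_max * emb_dim)
        out, _ = self.rnn(seq.unsqueeze(0))
        return self.head(out[0])                                 # (T, emb_dim)
```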
In conclusion, by combining the unsupervised edge-weighted GraphSAGE object embedding learning of III-A, the self-supervised GCN region embedding learning of III-B, and the c-GraphRNN conditional region rollout of III-C, we can extract multiform structural relationships. Meanwhile, embedding serves as a unified paradigm for representation, and the similarity between object or region embeddings (either observed or predicted) and the target object embedding is used as a prior to guide exploration in an unknown domain.

D. Reasoning and Exploring as a Bandit Problem

A prior alone cannot lead to success. Inspired by [10], a posterior topological representation is also constructed in each specific task to combine experience with practice. Specifically, we build a multi-layer posterior topological graph covering the object level, clique level, and vertex level. A clique divides rooms into small clustered regions and reduces the burden on the visual front-end. Each vertex governs the three nearest cliques. The Object Embedding network provides the object node features, and the Region Embedding network generates the features of both cliques and vertices from their attached objects. The Region Rollout network evaluates ghost nodes. However, reality always contains situations that contradict experience. In other words, robots must be able to balance exploration and exploitation online.

Fig. 3. In a specific task, a multi-layer topological graph is constructed based on the visual front-end, and a tree with the birthplace as the root node is abstracted from the graph. A clique refers to a collection of adjacent objects or a bunch of non-semantic obstacles, and a vertex refers to an observed navigable location. Each gray ghost node connects two vertices and only stores the relative position of the connected vertices to assist localization, without being used as a navigation sub-goal. The black ghost nodes refer to unknown areas and promote exploration.

We adopt the Upper Confidence Bound for Tree (UCT) method [17] to set an online bonus. The simulation procedure of UCT is supported by the Region Rollout network, so the robot not only obtains a bonus from visit counts but also estimates the future exploration value inductive bias ω_i of the selected path. This effectively prevents the robot from being trapped in a useless area. The combined effect of the inductive bias ω and the bonus discourages repetitive search near negative (non-success) sub-goals and drives the robot to return to parent nodes for back-tracking, which we term Revolt Reasoning. The word "Revolt" summarizes the characteristic of our method vividly: it allows robots to regret at nodes with low exploration value, discarding them and returning to previous paths. To avoid robots wandering between two goals, it is necessary to introduce a navigation loss term L_dis to penalize node distances. Hence, we finally obtain the exploration value V of node i as:

V(t \mid i) = \frac{\sum_{i \to j}^{m} \omega_j}{m} + c_1 \sqrt{\frac{\ln N_i}{n_i}} - c_2 L_{dis}    (7)

where the factors c_1 and c_2 are set to 1 and 0.5, j refers to one of node i's descendants in the tree, and m is their total number. N_i is the total number of arrivals at node i and its descendants, while n_i counts arrivals at node i alone.
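A minimal sketch of Eq. (7) follows, computing a node's exploration value from its descendants' inductive biases, its visit counts, and a distance penalty. The bookkeeping interface (how biases, counts, and distances are stored) is an assumption for illustration; c1 = 1 and c2 = 0.5 follow the values stated above.

```python
# Sketch of the node exploration value in Eq. (7).
import math

def exploration_value(descendant_bias, N_i, n_i, dist_loss, c1=1.0, c2=0.5):
    """descendant_bias: inductive biases omega_j of node i's descendants;
    N_i: arrivals of node i and its descendants; n_i: arrivals of node i itself;
    dist_loss: navigation distance penalty L_dis for reaching this node."""
    prior = sum(descendant_bias) / max(len(descendant_bias), 1)  # mean omega term
    bonus = c1 * math.sqrt(math.log(max(N_i, 1)) / max(n_i, 1))  # UCT bonus
    return prior + bonus - c2 * dist_loss

# e.g. a node with two promising descendants, rarely visited, moderately far away
v = exploration_value([0.7, 0.5], N_i=10, n_i=2, dist_loss=1.2)
```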
E. Online constructed Voronoi local graph

The reasoner only gives a semantic node id in a graph as a sub-goal. If the low-level controller used it directly as a navigation goal, it would inevitably lead to over-coupling and make navigation success harder. We can refer to the hierarchical human central nervous system composed of the brain, cerebellum, brain-stem, and spinal cord [18]: if the high-level reasoner is compared to the brain, then the skeletal muscles are the low-level motor controller. The brain does not transmit motion instructions directly to the skeletal muscles; it passes them through the brain-stem, spinal cord, and other lower-level nervous structures for information conversion [19]. Besides, the brain does not actually support high-speed, low-latency information interaction while controlling a motion [20]. Therefore, it is necessary to use an RGB-D camera and an odometer to construct a local Voronoi graph, offering approximate relative coordinates of the sub-goal within a reachable range as input to the low-level controller. The Voronoi graph records the relationship between the robot and obstacles and provides an available path. Since the TDN task is map-less, we construct a local Voronoi graph online within a fixed number of steps.

Fig. 4. Combining the depth information with the robot's pose over a short period yields a simple 3D reconstruction. A Voronoi local graph can be constructed through DBSCAN clustering after projecting the 3D map onto a 2D obstacle scatter plot.

Conditioned on the depth information, the (intrinsic and extrinsic) camera parameters, and the odometer information, obstacles in depth images can easily be converted into coordinates in a world coordinate system, which is anchored at the birth pose of the robot. Projecting this partially reconstructed 3D map onto a 2D plane along the vertical axis forms a scatter diagram of obstacles. We can then construct a Voronoi diagram online by segmenting navigable paths and explorable cliques with multiple related objects.

Different from traditional methods [21], we use DBSCAN [22], [23] (a density-based clustering algorithm) to first cluster the scattered points of adjacent obstacles into convex hulls and then filter out noise points. This is followed by constructing a Delaunay triangulation with the centers of the scattered points in each convex hull, thereby generating a Voronoi diagram.
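The following is a minimal sketch of this construction under common library assumptions: obstacle points already expressed in the world frame are projected to 2D, clustered with scikit-learn's DBSCAN, and a Delaunay/Voronoi structure is built over the cluster centers with SciPy. Parameter values (eps, min_samples) and the assumption that enough non-noise clusters exist for triangulation are illustrative, not the paper's settings.

```python
# Sketch of the intermediate-level planner's local graph construction:
# 2D projection -> DBSCAN clustering of obstacle points -> Delaunay/Voronoi over cluster centers.
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import Delaunay, Voronoi

def voronoi_local_graph(points_3d, eps=0.3, min_samples=10):
    """points_3d: (N, 3) obstacle points in the world frame (from depth + odometry).
    Assumes at least 4 non-noise clusters are found so Qhull can triangulate."""
    pts2d = points_3d[:, :2]                                  # drop the vertical axis
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts2d)
    centers = np.stack([pts2d[labels == k].mean(axis=0)
                        for k in set(labels) if k != -1])     # label -1 = noise, filtered out
    tri = Delaunay(centers)                                   # triangulate obstacle clusters
    vor = Voronoi(centers)                                    # dual structure: navigable ridges
    return centers, tri, vor                                  # vor.vertices ~ candidate waypoints
```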
F. Hierarchical reasoning and planning for navigation

In this section, we summarize how the proposed reasoner and planner cooperate to complete navigation tasks. The curves in Fig. 5 show the correspondence of concepts between the topological graph in the reasoner and the Voronoi diagram in the planner. An aggregation of obstacles is regarded as a clique, each of which attaches and records all objects in its convex hull, and its inductive bias value is evaluated according to the object-in-region membership via the Region Embedding network. The position of a vertex is generated by the Voronoi diagram. The multiple cliques and their subordinate objects surrounding a vertex jointly determine its general room label, which is used for the inductive bias evaluation. Relative directions and distances between two adjacent vertex nodes are stored in gray ghost nodes. Since the robot exploits relative coordinates and directions, it effectively avoids the influence of odometer and depth camera errors and is thus insensitive to cumulative error. Besides, thanks to the Voronoi local diagram, only short-period scatter data need to be saved, and there is no need to consider the loop-closure matching problem as in SLAM.

Fig. 5. The semantic sub-goal is converted into relative coordinates by the Voronoi-based intermediate-level planner.

With the construction of the Voronoi diagram and its transformation into a hierarchical topology, we can conduct reasoning at the vertex/clique level and the object level, searching for the best vertex position and the most likely clique based on the exploration value. After selecting a clique, the robot navigates towards it and explores it more explicitly with object-level reasoning. Besides, the Voronoi diagram provides the evidence for choosing the next best view of a clique. By changing between multiple perspectives, the robot can find the target object in a clique more efficiently.

IV. EXPERIMENTS

A. Experiment Setup

We use the Habitat simulator [24] with the Matterport3D [25] environment as our experiment platform. Habitat is a 3D simulator with configurable agents, multiple sensors, and generic 3D dataset handling. The Matterport3D dataset contains 90 houses with 40 categories of objects and 31 region labels, and it provides detailed object and region segmentation information. Here we focus on the 21 target object categories required by the task: chair, table, picture, cabinet, cushion, sofa, bed, chest of drawers, plant, sink, toilet, stool, towel, tv monitor, shower, bathtub, counter, fireplace, gym equipment, seating, and clothes, and we ignore some meaningless room labels, such as outdoor, no label, other room, and empty room. We use YOLOv4 [26] as our object detection module, fine-tuned on objects in the Matterport3D dataset. Because the aim of the low-level controller is the same as in the PointNav task [27], we adapt a pre-trained state-of-the-art PointNav method, occupancy anticipation [28], as our controller.

During a specific TDN task, the robot is spawned at a random location in a certain house and is required to find an object of a given category as quickly as possible. The task is evaluated with three commonly used indicators: Success Rate (SR), Success weighted by Path Length (SPL), and Distance to Success (DTS). SR represents how often the target is found over multiple episodes and is defined as (1/N) Σ_{i=1}^{N} Su_i, where N is the number of episodes and Su_i is a binary value representing the success or failure of the i-th episode. SPL accounts for both success and the optimal path length and is defined as (1/N) Σ_{i=1}^{N} S_i · L_i / max(P_i, L_i), where L_i is the shortest path length provided by the simulator and P_i is the path length of the robot in episode i. DTS is the distance of the agent from the success threshold boundary when the episode ends. The boundary is set to 1 m and the maximum episode length is 500 steps, the same as in [11].

Furthermore, our navigation task has two modes: independent (ReVoLT-i) and continuous (ReVoLT-c). The independent mode is the traditional one: the environment is reset after each episode and the robot clears its memory. The continuous mode allows the robot to keep its topological graph if it is reset in the same house. It is used for evaluating the robot's capability of keeping and updating an environment memory.
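As a small illustration of how the three metrics defined above can be computed from episode logs, here is a sketch under the assumption that each episode record stores its success flag, the simulator's shortest path length, the robot's path length, and the final distance to the success boundary; this record format is ours, not part of the Habitat API.

```python
# Sketch of SR, SPL, and DTS computed over a list of episode records.
def evaluate(episodes):
    N = len(episodes)
    sr = sum(ep["success"] for ep in episodes) / N
    spl = sum(ep["success"] * ep["shortest_len"] / max(ep["path_len"], ep["shortest_len"])
              for ep in episodes) / N
    dts = sum(ep["dist_to_boundary"] for ep in episodes) / N   # distance at episode end
    return sr, spl, dts

# e.g. evaluate([{"success": 1, "shortest_len": 4.2, "path_len": 6.0, "dist_to_boundary": 0.0}])
```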
B. Baselines

Random: At each step, the agent randomly samples an action from the action space with a uniform distribution.

RGBD + DD-PPO: This baseline is provided by the ObjectNav Challenge 2020 [24]. RGB-D observations are passed directly to an end-to-end DD-PPO policy, which outputs an action.

Active Neural SLAM: This baseline uses an exploration policy trained to maximize coverage, from [2], followed by the heuristic-based local policy described above.

SemExp: Since [11] has not open-sourced their code, we directly use the results reported in their paper as the state-of-the-art reference.

C. Results

1) Results of combinatorial relation embeddings: The Object Embedding network achieves a classification accuracy of 91%. The Region Embedding network achieves a membership accuracy of 78% and a classification accuracy of 75%. The Region Rollout network reaches a prediction accuracy of 45% on the test set, which is acceptable since room relationships are not inherently strong.

2) Results of the whole TDN task: The results of the baseline methods and ReVoLT are shown in Table II. Both of our models significantly outperform the current state-of-the-art. ReVoLT-i small achieves an ≈ 80% increase in SR and nearly twice the SPL of SemExp. This confirms our hypothesis that separating prior learning from the control policy in a hierarchical framework is a wiser approach than directly learning a semantically-aware policy. Besides, the standard ReVoLT-i with 19 target categories still achieves a higher SR and SPL. By applying the continuous mode, the robot retains a memory of the same house, which allows it to find observed targets with a higher SR.

TABLE II
PERFORMANCE COMPARISON

Method | SR(%) | SPL | DTS (m)
Random | 0 | 0 | 10.3298
RGBD + DD-PPO | 6.2 | 0.021 | 9.3162
Active Neural SLAM | 32.1 | 0.119 | 7.056
SemExp1 | 36.0 | 0.144 | 6.733
ReVoLT-i small* | 66.7 | 0.265 | 0.9762
ReVoLT-i* | 62.5 | 0.102 | 1.0511
ReVoLT-c* | 85.7 | 0.070 | 0.0253
1 The first prize of AI Habitat 2020.
* These three rows refer to the small mode with only 6 target categories (as in SemExp), the independent mode (-i), and the continuous mode (-c) of ReVoLT.

Fig. 6. Top-down maps of four successful tasks using ReVoLT-i. The blue squares are the starting positions, the blue curves are the robot trajectories, and arrows represent the robot's current positions. Targets are highlighted with green boxes, and pink areas mark the success threshold boundary. The trajectory color is a gradient from dark to light; the brighter the end, the longer the path.
V. ABLATION STUDY

The success of ReVoLT is attributed to the relationship priors provided by the combinatorial graph neural networks, the online bonus from UCT, and the distance penalty. Therefore, we set up three additional experiments with the same Voronoi-based planner and low-level controller to reveal their respective impacts. Moreover, the results of the continuous mode are also presented below. The performance of all variants is listed in Table III.

TABLE III
PERFORMANCE OF ABLATION EXPERIMENTS

Method | SR(%) | SPL | DTS (m)
ReVoLT-i | 62.5 | 0.102 | 1.0511
ReVoLT-c | 85.7 | 0.070 | 0.0253
ReVoLT w/o priors | 25.0 | 0.003 | 1.4129
ReVoLT w/o bonus | 60.0 | 0.034 | 0.8139
ReVoLT w/o distance | 54.5 | 0.030 | 1.2689

Fig. 7. In response to the three parts of the exploration value function, we conduct ablation experiments respectively and illustrate them in top-down maps.

ReVoLT w/o relationship priors. Without priors, the sub-goal is generated according to the distance of the observed cliques. Comparing Fig. 7 (a) with Fig. 6, we find that the lack of semantic relationships profoundly affects the robot's path decisions, making it uninterested in a region containing the target even when it is nearby. Besides, the lack of region classification and region rollout makes the robot unable to use the observed semantic information to reason about relationships, resulting in longer paths.

ReVoLT w/o UCT bonus. The bonus is replaced with a fixed threshold: if the robot reaches the same clique or vertex node more than twice, that node is no longer selected as a sub-goal. The corresponding top-down maps are illustrated in Fig. 7 (b). Without the UCT bonus, the robot falls into an unpromising local region until the threshold is reached.

ReVoLT w/o distance penalty. As shown in Fig. 7 (c), using only priors and bonuses can also complete tasks, but the paths are longer due to fluctuating decisions.

ReVoLT with continuous mode. The left figure of Fig. 7 (d) is the same as the one in Fig. 6. However, when searching for the second target in this house, once the robot associates current observations with its memory, it can find the target with a higher success rate. However, this also makes the robot focus more on exploitation and reduces exploration, which may cause it to ignore closer targets and lead to a lower SPL.

To sum up, relationship priors are essential for robots to understand environment semantics, and they are the major factor affecting SR. The UCT bonus and distance penalty contribute to the improvement of SPL. ReVoLT-c maintains a long-term scene memory and achieves outstanding performance.

VI. CONCLUSION

We propose ReVoLT, a hierarchical reasoning target-driven navigation framework that combines combinatorial graph relation extraction and online UCT decision-making, operating with a multi-layer topological graph. ReVoLT shows better performance in exploiting prior relationships, and its bandit reasoning is more reasonable and efficient. To bridge the gap between existing point-goal controllers and our reasoner, we adopt the Voronoi local graph for the semantic-spatial transition. However, significant challenges remain in this field. Our future directions include using representation learning techniques to introduce richer object information such as shape, color, and size; using scene graph detection to introduce richer semantic relation information such as furniture layout; and tackling richer tasks such as object instance navigation.

REFERENCES
[1] M. Hoffmann and R. Pfeifer, "The implications of embodiment for behavior and cognition: animal and robotic case studies," arXiv preprint arXiv:1202.0440, 2012.
[2] D. S. Chaplot, D. Gandhi, S. Gupta, A. Gupta, and R. Salakhutdinov, "Learning to explore using active neural SLAM," in International Conference on Learning Representations, 2019.
[3] K. Chatzilygeroudis, V. Vassiliades, F. Stulp, S. Calinon, and J.-B. Mouret, "A survey on policy search algorithms for learning robot controllers in a handful of trials," IEEE Transactions on Robotics, vol. 36, no. 2, pp. 328–347, 2019.
[4] W. Yang, X. Wang, A. Farhadi, A. Gupta, and R. Mottaghi, "Visual semantic navigation using scene priors," arXiv preprint arXiv:1810.06543, 2018.
[5] H. Du, X. Yu, and L. Zheng, "Learning object relation graph and tentative policy for visual navigation," in European Conference on Computer Vision, pp. 19–34, Springer, 2020.
[6] Y. Qiu, A. Pal, and H. I. Christensen, "Learning hierarchical relationships for object-goal navigation," 2020.
[7] W. L. Hamilton, R. Ying, and J. Leskovec, "Inductive representation learning on large graphs," in Advances in Neural Information Processing Systems (NeurIPS), 2017.
[8] E. Kolve, R. Mottaghi, W. Han, E. VanderBilt, L. Weihs, A. Herrasti, D. Gordon, Y. Zhu, A. Gupta, and A. Farhadi, "AI2-THOR: An interactive 3D environment for visual AI," arXiv preprint arXiv:1712.05474, 2017.
[9] Y. Wu, Y. Wu, A. Tamar, S. Russell, G. Gkioxari, and Y. Tian, "Bayesian relational memory for semantic visual navigation," in Proceedings of the IEEE International Conference on Computer Vision, pp. 2769–2779, 2019.
[10] D. S. Chaplot, R. Salakhutdinov, A. Gupta, and S. Gupta, "Neural topological SLAM for visual navigation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12875–12884, 2020.
[11] D. S. Chaplot, D. P. Gandhi, A. Gupta, and R. R. Salakhutdinov, "Object goal navigation using goal-oriented semantic exploration," Advances in Neural Information Processing Systems (NeurIPS), vol. 33, 2020.
[12] D. Batra, A. Gokaslan, A. Kembhavi, O. Maksymets, R. Mottaghi, M. Savva, A. Toshev, and E. Wijmans, "ObjectNav revisited: On evaluation of embodied agents navigating to objects," arXiv:2006.13171, 2020.
[13] M. Wortsman, K. Ehsani, M. Rastegari, A. Farhadi, and R. Mottaghi, "Learning to learn how to learn: Self-adaptive visual navigation using meta-learning," in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6743–6752, 2019.
[14] J. Pennington, R. Socher, and C. D. Manning, "GloVe: Global vectors for word representation," in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543, 2014.
[15] T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," in International Conference on Learning Representations (ICLR), 2017.
[16] J. You, R. Ying, X. Ren, W. Hamilton, and J. Leskovec, "GraphRNN: Generating realistic graphs with deep auto-regressive models," in International Conference on Machine Learning, pp. 5708–5717, 2018.
[17] P.-A. Coquelin and R. Munos, "Bandit algorithms for tree search," in Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence, pp. 67–74, 2007.
[18] D. Purves, R. Cabeza, S. A. Huettel, K. S. LaBar, M. L. Platt, M. G. Woldorff, and E. M. Brannon, Cognitive Neuroscience. Sunderland: Sinauer Associates, Inc., 2008.
[19] E. Bizzi, M. C. Tresch, P. Saltiel, and A. d'Avella, "New perspectives on spinal motor systems," Nature Reviews Neuroscience, vol. 1, no. 2, pp. 101–108, 2000.
[20] D. A. Rosenbaum, Human Motor Control. Academic Press, 2009.
[21] R. Mahkovic and T. Slivnik, "Generalized local Voronoi diagram of visible region," in Proceedings of the 1998 IEEE International Conference on Robotics and Automation (Cat. No. 98CH36146), vol. 1, pp. 349–355, IEEE, 1998.
[22] K. Khan, S. U. Rehman, K. Aziz, S. Fong, and S. Sarasvady, "DBSCAN: Past, present and future," in The Fifth International Conference on the Applications of Digital Information and Web Technologies (ICADIWT 2014), pp. 232–238, IEEE, 2014.
[23] E. Schubert, J. Sander, M. Ester, H. P. Kriegel, and X. Xu, "DBSCAN revisited, revisited: Why and how you should (still) use DBSCAN," ACM Transactions on Database Systems (TODS), vol. 42, no. 3, pp. 1–21, 2017.
[24] M. Savva, A. Kadian, O. Maksymets, Y. Zhao, E. Wijmans, B. Jain, J. Straub, J. Liu, V. Koltun, J. Malik, D. Parikh, and D. Batra, "Habitat: A platform for embodied AI research," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
[25] A. Chang, A. Dai, T. Funkhouser, M. Halber, M. Niessner, M. Savva, S. Song, A. Zeng, and Y. Zhang, "Matterport3D: Learning from RGB-D data in indoor environments," International Conference on 3D Vision (3DV), 2017.
[26] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "YOLOv4: Optimal speed and accuracy of object detection," arXiv preprint arXiv:2004.10934, 2020.
[27] A. Kadian, J. Truong, A. Gokaslan, A. Clegg, E. Wijmans, S. Lee, M. Savva, S. Chernova, and D. Batra, "Sim2Real predictivity: Does evaluation in simulation predict real-world performance?," 2019.
[28] S. K. Ramakrishnan, Z. Al-Halah, and K. Grauman, "Occupancy anticipation for efficient exploration and navigation," in European Conference on Computer Vision, pp. 400–418, Springer, 2020.
+ diff --git a/2tE0T4oBgHgl3EQfdwDZ/content/tmp_files/load_file.txt b/2tE0T4oBgHgl3EQfdwDZ/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..0651f5612225f872d123f314489f3b7920b19812 --- /dev/null +++ b/2tE0T4oBgHgl3EQfdwDZ/content/tmp_files/load_file.txt @@ -0,0 +1,663 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf,len=662 +page_content='ReVoLT: Relational Reasoning and Voronoi Local Graph Planning for Target-driven Navigation Junjia Liu13, Jianfei Guo23, Zehui Meng3, Jingtao Xue3 1 Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong 2 School of Automation Science and Engineering, Xi’an Jiaotong University 3 Application Innovate Laboratory (2012 Laboratories), Huawei Technologies Co.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=', Ltd.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Beijing, 100038, China jjliu@mae.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='cuhk.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='hk, ventus@stu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='xjtu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='cn, {mengzehui, xuejingtao}@huawei.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='com Abstract—Embodied AI is an inevitable trend that emphasizes the interaction between intelligent entities and the real world, with broad applications in Robotics, especially target-driven navigation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' This task requires the robot to find an object of a certain category efficiently in an unknown domestic environment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Recent works focus on exploiting layout relationships by graph neural networks (GNNs).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' However, most of them obtain robot actions directly from observations in an end-to-end manner via an incomplete relation graph, which is not interpretable and reliable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' We decouple this task and propose ReVoLT, a hierarchical framework: (a) an object detection visual front- end, (b) a high-level reasoner (infers semantic sub-goals), (c) an intermediate-level planner (computes geometrical positions), and (d) a low-level controller (executes actions).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' ReVoLT operates with a multi-layer semantic-spatial topological graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' The reasoner uses multiform structured relations as priors, which are obtained from combinatorial relation extraction networks composed of unsupervised GraphSAGE, GCN, and GraphRNN-based Region Rollout.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' The reasoner performs with Upper Confidence Bound for Tree (UCT) to infer semantic sub-goals, accounting for trade-offs between exploitation (depth-first searching) and ex- ploration (regretting).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' The lightweight intermediate-level planner generates instantaneous spatial sub-goal locations via an online constructed Voronoi local graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' The simulation experiments demonstrate that our framework achieves better performance in the target-driven navigation tasks and generalizes well, which has an 80% improvement compared to the existing state-of- the-art method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' The code and result video will be released at https://ventusff.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='io/ReVoLT-website/.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Index Terms—Relational reasoning, combinatorial relation graph neural networks, UCT bandit, online Voronoi local graph I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' INTRODUCTION Finding objects in complex houses efficiently is a prereq- uisite for domestic service robots.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Robots need to reason and make dynamic decisions along with interacting with the real- world environment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Embodied AI, proposed by Matej Hoffman and Rolf Pfiefer [1], suggests that to truly understand how the human brain works, a brain should be embedded into a physical body, and let it explore and interact with the real world.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Among all the work practicing Embodied AI in recent years, target-driven navigation (TDN) is one of the most feasible and essential tasks, which combines techniques in both machine learning and robotics, and is widely applicable for scenarios such as domestic service robots.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' It typically requires the robot to find a target object of a certain category in an unknown scene, demanding both high efficiency and success rate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Hence, the key problems of the TDN task are generalizing across unknown domains and exploring efficiently.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Traditional Simultaneous Localization and Mapping (SLAM) pipeline has already handled TDN to some extent [2], but there are still numerous problems lying in its major modules.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' First, it remains troublesome for SLAM-based methods to acquire and maintain a lifelong updating semantic map, which demands accurate sensors and semantic information.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Second, SLAM-based methods are inherently less adaptive to posterior information, which causes them not generalizing well in complicated environments, especially in indoor scenes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Last but not least, SLAM-based methods are not specially designed for searching objects in unknown environments, which requires keeping balance between exploitation (depth-first searching) and exploration (regretting).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Recently, learning-based methods emerge and show power- ful capabilities of solving complicated tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' However, these methods generally have problems of interpretability and gen- eralization, especially in the TDN task which require robots to operate in unseen domain.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' We argue that it is more natural and empirical to introduce a priori [3] to the learning model instead of training from scratch, considering how human teach ignorant babies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Introducing a priori enables algorithms to achieve higher data efficiency, better model interpretability, and generalization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' In indoor TDN tasks, one of the most useful prior information is the relationship among objects and rooms of different categories.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Some recent works reason about the target direction using object relationships as a priori in single-room environments [4]–[6].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' However, common domestic scenes are composed of multiple rooms, thus more prior information such as room connection, object-in-room membership, and other implicitly structured relationships could be exploited, which are typically ignored in these works.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' In this paper, we propose a hierarchical navigation frame- work, Relational Reasoning and Voronoi Local graph plan- ning (ReVoLT), which comprises a combinatorial graph neural network for multiform domestic relations extraction, an UCT- based reasoning exploration, and an online Voronoi local graph for the semantic-spatial transition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' The detailed contributions are as follows: The TDN task is concisely decomposed, allowing for separate and special designs for different modules, instead of operating in a mixed-up end-to-end manner.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' We focus our efforts on designing the reasoner and the planner.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' To extract multiform structural relations for reasoning, we arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='02382v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='RO] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='6 Jan 2023 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='������������ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='��������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='���������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='��������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='����������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='���������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='������������ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='������������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='����������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='� ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='�� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='��������������������������������������������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='�������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='������������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='������������������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='�������������������������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='���������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='�������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='���������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='��������������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='����������������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='����������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='������������������������ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='������������������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='���������������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='���������������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='�������������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='������ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='�������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='�� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='�� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='� ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='�� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='�� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='�� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='������ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' The main hierarchical framework of ReVoLT method, which contains a high-level reasoner (infers semantic sub-goals), an intermediate-level planner (computes spatial location sub-goal), and a low-level controller (computes actions).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' The combinatorial relation extraction module provides a priori of the exploration value about the observed objects and regions through embedding similarity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Especially, Region Rollout model provides Monte Carlo simulation for UCT in a conditional GraphRNN (c-GraphRNN) way.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' propose combining unsupervised GraphSAGE [7], self- supervised GCN, and c-GraphRNN methods for learning object embedding, region embedding, and region rollout, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Based on the relation priors, the high-level reasoner (semantic reasoning) is abstracted as a bandit problem and adopts UCT to balance exploitation (depth-first searching) and exploration (regretting).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' We construct Voronoi local graphs online using RGB- D observations and convert semantic sub-goals to spatial locations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' We term this an intermediate-level planning process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' It is found in the test results that the proposed framework is superior to state-of-the-art methods and achieves a higher success rate and success weighted by path length (SPL) with good generalization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' II.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' RELATED WORKS Recently, there are many TDN solutions based on relational reasoning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' They have the advantage of replacing an explicit metric map like SLAM-based methods, inferring the approxi- mate position of the target object based on observed objects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Most of these methods use GNNs to learn object-object proximity relationships but ignore the relationship between regions/rooms, thus it limits their task scenarios to a single room (using AI2Thor data set [8] in simulation for training).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' For example, Yang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [4] propose to use Graph Convo- lutional Network (GCN) to incorporate the prior knowledge about object relationship into a Deep Reinforcement Learning (DRL) framework as part of joint embedding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Their priors are obtained from large-scale scene understanding datasets and updated according to the current observation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Qiu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [6] share the same idea, but extract observations as context vectors, which integrates relationship strength between the connected objects and their spatial information.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' For navigation tasks in houses with multiple rooms, it is necessary to first reach the room that may contain the target object (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' refrigerator-kitchen), then search the target in object cliques.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Therefore, the learning of prior knowledge should consider more relationships, including room-to-room connection and object-in-room membership.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Wu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [9] propose a memory structure based on the Bayesian graph model.' 
It uses a probability relationship graph to learn a prior house layout from the training set and estimates its posterior in the test set. However, this work does not combine object-level reasoning to achieve a complete TDN task. Chaplot et al. [10] build a topological representation with associated semantic features and learn a prior semantic score function to evaluate the probability of potential nodes in a graph along various directions. However, they provide target images, which is impractical in domestic scenarios, while our method only uses target labels. They subsequently extend the Active Neural SLAM system [2] to learn semantic priors using a semantically aware long-term policy for the label target navigation task [11] and won the CVPR 2020 Habitat ObjectNav Challenge1 [12]. It is worth mentioning that they also point out that end-to-end learning-based methods suffer from large sample complexity and poor generalization, as they memorize object locations and appearance in training environments [11], which prompts us to consider a hierarchical framework with a topological graph. Table I only lists TDN methods with label targets and relational reasoning.
III. REVOLT REASONING & PLANNING WITH HIERARCHICAL FRAMEWORK
This task needs to be re-examined from the perspective of bionics. Imagine a human facing such a task when he enters an unknown house. He will not feel confused, owing to the prior knowledge about domestic scenes he has. It is natural for us to first roughly determine the type of room based on the categories of multiple observed objects in the current room (e.g. a bedroom).
According to the object-in-room membership, the exploration value V(t|cur room) of the target object t in the current room can be obtained. At the same time, some potential but unexplored passages (e.g. a door or hallway) can be determined as ghost nodes like [10].
1 https://aihabitat.org/challenge/2020/
Fig. 2. Combinatorial relation extraction module. (a) Obtain object embedding via unsupervised weighted-GraphSAGE; (b) Region embedding is received by passing a sub-graph with object embeddings to GCN layers; (c) According to the house structure of region connectivity, a GraphRNN-based model is used to learn the structure distribution and generate possible features of future regions node by node.
TABLE I
PERFORMANCE OF EXISTING TDN METHODS WITH VARIOUS EXPERIMENT SETTINGS
Method            Room Scale   Dataset        SR(%)   SPL(%)
Scene-prior [4]   Single       AI2-THOR       35.4    10.9
SAVN [13]         Single       AI2-THOR       35.7    9.3
MJOLNIR [6]       Single       AI2-THOR       65.3    21.1
BRM [9]           Multiple     House3D        -       -
SemExp† [11]      Multiple     Matterport3D   36.0    14.4
† SemExp won first place in the CVPR Habitat 2020 competition.
The structural relationship of the house layout and room connections can help us predict the categories and the value V(t|next room) of the next rooms connected by ghost nodes. Beyond these priors, dynamic decisions also need to be made within a specific task, rather than just applying experience mechanically. A reasoning procedure that combines intelligent exploration and exploitation is one of the winning strategies. Thus, our approach focuses on solving the following two problems: how to obtain a more effective prior conditional exploration value in a structured form, and how to make efficient decisions among multiple feasible paths based on exploration values.
The remainder of this section is organized as follows. In subsections III-A, III-B, and III-C, we present a combinatorial relation extraction module (Fig. 2) using GNNs, which learns three different relationships in a unified paradigm. A UCT-based online reasoner is described in subsection III-D. In III-E, we consider coarse spatial information and build an intermediate-level planner through online Voronoi construction. Finally, the whole ReVoLT hierarchical framework is summarized in subsection III-F (Fig. 1).
A. Object Embedding learning
As illustrated in Fig. 2 (a), the object-to-object relationship consists of not only pair-wise semantic similarity, but also distances and the number of hops between object pairs.
We first extract an object-level graph G_o(V_o, E_o) from object positions pos and categories C_o in the Matterport3D dataset. Objects in the same room are fully connected. For object pairs in different rooms, only those closest to a common door have a connecting edge. This is useful for the robot to infer objects that are strongly related to the target using only object-level embedding. GraphSAGE [7] is a popular model in the node embedding field. We adopt it to obtain the embedding of each object category, fusing semantics and proximity relationships with other categories. Our node embedding procedure uses GloVe [14] as the initial node semantic feature {x_v, ∀v ∈ V_o}, and employs an unsupervised form of GraphSAGE with a loss that penalizes the embedding similarity between two objects far apart and rewards two adjacent ones. Different from the original GraphSAGE, edge features {ω_{e:u→v}, ∀e ∈ E_o} are also taken into account in the aggregation and loss calculations.
For each search depth k, with weight matrices W_k, ∀k ∈ {1, . . . , K}, we employ an edge-weighted mean aggregator which simply takes the element-wise mean of the vectors in {h_u^{k-1}, ∀u ∈ N(v)} to aggregate information from node neighbors:
h_v^0 \leftarrow x_v, \forall v \in V, \qquad h_v^k \leftarrow \sigma\big(W_k \cdot \mathrm{mean}(\{h_v^{k-1}\} \cup \{\omega_{u \to v} \cdot h_u^{k-1}\})\big)    (1)
Then an edge-weighted loss function is applied to the output {z_v, ∀v ∈ V_o} to tune the weight matrices W_k:
L_{G_o}(z_v) = -\log\big(\sigma(\omega_{u\to v} z_v^\top z_u)\big) - Q \cdot \mathbb{E}_{u_n \sim P_n(v)} \log\big(\sigma(-\omega_{u\to v} z_v^\top z_{u_n})\big)    (2)
where P_n is a negative sampling distribution, Q defines the number of negative samples, and σ is the sigmoid function. Since object embeddings of the same category {z_c, ∀c ∈ C_o} should have a consistent representation, another mean aggregation is performed on the embeddings of the same category between the final GraphSAGE aggregation and the loss function. This overwrites the original value with the final embedding for each category: z_c ← mean(h_v^K) if C_o(v) = c.
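To make the edge-weighted aggregation in Eq. (1) concrete, the following is a minimal sketch of one such layer in Python/PyTorch, assuming a simple adjacency-list graph representation; the class and variable names (neighbors, edge_w) are illustrative assumptions and not the authors' implementation.

import torch
import torch.nn as nn

class EdgeWeightedMeanAggregator(nn.Module):
    """One GraphSAGE-style layer with edge-weighted mean aggregation, cf. Eq. (1)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # weight matrix W_k

    def forward(self, h, neighbors, edge_w):
        # h:         (N, in_dim) node features h^{k-1}
        # neighbors: list of neighbor-index lists, one per node
        # edge_w:    list of edge-weight lists, aligned with `neighbors`
        out = []
        for v, (nbrs, ws) in enumerate(zip(neighbors, edge_w)):
            msgs = [h[v]]                        # the node's own feature h_v^{k-1}
            for u, w in zip(nbrs, ws):
                msgs.append(w * h[u])            # edge-weighted neighbor feature
            agg = torch.stack(msgs).mean(dim=0)  # element-wise mean over the set
            out.append(torch.sigmoid(self.W(agg)))  # sigma(W_k . mean(...))
        return torch.stack(out)                  # h^k for all nodes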
B. Region Embedding learning
Apart from the pairwise relationship between objects, the many-to-one relationship between an object and a room or region is also indispensable for inferring the possibility that the target object exists in a certain room or among multiple observed objects. Besides, to evaluate similarity, relationships of different levels should share a unified paradigm so that their representations have consistent metrics. Therefore, for region-level sub-graphs, we still opt for the same embedding representation procedure. This part is shown in Fig. 2 (b).
Region embedding is carried out in a self-supervised form. We take the sub-graph G_r(V_r, E_r) as input, with the embeddings of objects in the same region {z_c, ∀c ∈ C_o} as nodes and weighted spatial distances as edges. The batch composed of these sub-graphs is passed into the GCN [15], and the corresponding region embeddings {r_v, ∀v ∈ V_r} are obtained. As in the previous procedure, for region embeddings with the same label, a mean aggregation is performed to obtain a uniform vector representation {r_l, ∀l ∈ L_r}. Since there is no need for multi-hop aggregations at the region level, a simple GCN layer is applied rather than GraphSAGE. To enable membership calculation between a region embedding r_l and an object embedding z_c and to distinguish regions with different labels, we use a combined loss which comprises two parts: the classification loss of the embedding label and the membership loss of object-in-region:
L_{G_r}(r_v) = -\log\big(\sigma(r_v^\top z_u)\big) - Q \cdot \mathbb{E}_{u_n \sim P_n(v)} \log\big(\sigma(-r_v^\top z_{u_n})\big) - \frac{1}{n}\sum_{i=1}^{n} l_v \log(\hat{l}(r_v))    (3)
where P_n(v) represents objects not in region v, and \hat{l}(·) is a multi-layer perceptron (MLP) network.
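The combined objective in Eq. (3) could be sketched as follows in PyTorch, assuming one region embedding, batches of positive and negative object embeddings, and an MLP classification head; all tensor names and shapes are illustrative assumptions rather than the authors' code.

import torch
import torch.nn.functional as F

def region_loss(r_v, z_pos, z_neg, label, mlp, Q=5):
    """Combined region loss, cf. Eq. (3): object-in-region membership + label classification.
    r_v:   (d,) region embedding, z_pos: (P, d) objects in the region,
    z_neg: (M, d) objects sampled from other regions, label: scalar room-label index."""
    # membership: pull objects inside the region towards r_v ...
    member = -torch.log(torch.sigmoid(r_v @ z_pos.T)).mean()
    # ... and push away Q-weighted negative samples drawn from other regions
    negative = -Q * torch.log(torch.sigmoid(-r_v @ z_neg.T)).mean()
    # classification: cross-entropy between the MLP prediction l_hat(r_v) and the room label
    logits = mlp(r_v)
    classify = F.cross_entropy(logits.unsqueeze(0), label.unsqueeze(0))
    return member + negative + classify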
C. Region Rollout learning
As the third and most important part of relation extraction, the structural relationship reasoning ability plays a crucial role in understanding the correct direction of navigation and shortening the exploration period. To achieve this, the joint probability p(G_h) of houses needs to be learned so as to conceive a probable house layout memory G_h ∼ p(G_h|G_sub) conditioned on the observed regions G_sub. However, its sample space might not be easily characterized. Thus, the house graphs are modeled as sequences following the idea of GraphRNN [16], and some concepts are redefined to make it more suitable for conditional reasoning with embeddings. This part is shown in Fig. 2 (c).
S^\pi = f_s(G_h, \pi) = (A_1^\pi, \ldots, A_n^\pi)    (4)
where π represents the node order, and each element A_i^\pi ∈ {0, 1}^{(i-1)×(i-1)}, i ∈ {1, . . . , n}, is an adjacency matrix describing the edges between node π(v_i) and the previous nodes π(v_j), j ∈ {1, . . . , i − 1}, already in the graph. Since each A_i^\pi has variable dimensions, we first pad them up to the maximum dimension n and then repeat the 2D matrix 16 times to form a 3D matrix of dimensions n × n × 16 as an edge mask, where 16 is the embedding length. Therefore, a featured graph can be expressed as the element-wise product of the region embedding matrix X^\pi under the corresponding order and the sequence matrix {S^\pi}_{3D}:
p(G) = \prod_{i=1}^{n+1} p\big(x_i^\pi \mid (\{S_1^\pi\}_{3D}, \ldots, \{S_{i-1}^\pi\}_{3D}) \odot X_{i-1}^\pi\big)    (5)
where X_{i-1}^\pi is the embedding matrix with dimensions (i − 1) × (i − 1) × 16 referring to the region embeddings before region π(v_i), and x_i^\pi refers to the embedding of π(v_i).
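As a rough illustration of how the padded edge masks of Eqs. (4)-(5) might be assembled with NumPy: the n × n padding and the 16-dimensional embedding length follow the text, while the broadcasting of the embedding matrix and all names are assumptions made for this sketch.

import numpy as np

def build_masked_sequence(adj_list, X, emb_len=16):
    """adj_list: list of (i-1) x (i-1) 0/1 adjacency matrices A_i^pi.
    X: (n, emb_len) region embeddings in the same node order.
    Returns the sequence of masked embedding tensors {S^pi}_3D (*) X^pi."""
    n = len(adj_list)
    seq = []
    for A in adj_list:
        S = np.zeros((n, n))                              # pad A_i^pi up to dimension n
        S[: A.shape[0], : A.shape[1]] = A
        S3d = np.repeat(S[:, :, None], emb_len, axis=2)   # n x n x 16 edge mask
        Xpi = np.repeat(X[None, :, :], n, axis=0)         # embeddings broadcast to n x n x 16
        seq.append(S3d * Xpi)                             # element-wise product for this step
    return seq                                            # fed step by step into the rollout RNN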
Passing {S^\pi}_{3D} ⊙ X^\pi as a sequence into a GRU or LSTM, we can learn the structure distribution of houses. This allows us to predict the next region embedding and label conditioned on the observed subgraph. The loss function of the Region Rollout network is a cross-entropy between the generated embedding label and the real label:
L_{G_h}(x_i^\pi) = -\frac{1}{n}\sum_{i=1}^{n} l_i \,\mathrm{log\text{-}softmax}\big[(x_i^\pi)^\top r_j\big], \; \forall j \in L_r    (6)
In conclusion, with the combination of the unsupervised edge-weighted GraphSAGE object embedding learning (III-A), the self-supervised GCN region embedding learning (III-B), and the c-GraphRNN conditional region rollout (III-C), we can now extract multiform structural relationships. Meanwhile, embedding is used as a unified paradigm for representation, and the similarity between object or region embeddings (either observed or predicted) and the target object embedding is used as a prior to guide exploration in an unknown domain.
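A minimal sketch of the rollout loss in Eq. (6), assuming the generated embeddings are scored against the per-label region prototypes r_j by dot product; the function and argument names are illustrative assumptions.

import torch
import torch.nn.functional as F

def rollout_loss(x_pred, region_prototypes, labels):
    """x_pred: (n, d) embeddings generated by the GRU/LSTM rollout.
    region_prototypes: (L, d) one prototype embedding r_l per region label.
    labels: (n,) ground-truth region label indices."""
    logits = x_pred @ region_prototypes.T      # (x_i^pi)^T r_j for every label j
    # cross_entropy applies log-softmax internally, matching the log-softmax in Eq. (6)
    return F.cross_entropy(logits, labels)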
D. Reasoning and Exploring as a Bandit Problem
A prior alone cannot lead to success. Inspired by [10], a posterior topological representation is also constructed in each specific task to combine experience with practice. Specifically, we build a multi-layer posterior topological graph covering the object level, clique level, and vertex level. A clique divides rooms into small clustered regions and reduces the burden of the visual front-end. Each vertex governs the three nearest cliques. The Object Embedding network provides the object node features, and the Region Embedding network generates the features of both cliques and vertices from their attached objects. The Region Rollout network gives an evaluation of ghost nodes. However, there are always situations contrary to experience in reality. In other words, robots must have the ability to balance exploration and exploitation online. We adopt the Upper Confidence Bound for Trees (UCT) method [17] to set an online bonus. The simulation procedure of UCT is supported by the Region Rollout network; thus the robot not only obtains the bonus from the visit count, but also estimates the future exploration value inductive bias ω_i of the selected path. It can effectively prevent the robot from being trapped in a useless area. The combined effect of the inductive bias ω and the bonus discourages repetitive search near negative (non-success) sub-goals and drives the robot to return to parent nodes for back-tracking, which we term Revolt Reasoning. The word Revolt summarizes the characteristics of our method vividly: it allows robots to regret at nodes with low exploration value, discarding them and returning to previous paths.
Fig. 3. In a specific task, a multi-layer topological graph is constructed based on the visual front-end, and a tree with the birthplace as the root node is abstracted from the graph. The clique refers to a collection of adjacent objects or a bunch of non-semantic obstacles, and the vertex refers to an observed navigable location. Each gray ghost node connects two vertices and only stores the relative position of the connected vertices to assist localization, without being used as a navigation sub-goal. The black ghost nodes refer to unknown areas and promote exploration.
To avoid the robot wandering between two goals, it is necessary to introduce a navigation loss term L_dis to penalize node distances. Hence, we can finally obtain the exploration value V of node i as:
V(t|i) = \frac{\sum_{i\to j}^{m} \omega_j}{m} + c_1 \sqrt{\frac{\ln N_i}{n_i}} - c_2 L_{dis}    (7)
where the factors c_1 and c_2 are set to 1 and 0.5. Here j refers to one of node i's descendants in the tree, and m is their total number. N_i is the total number of arrivals at node i and its descendants, while n_i represents arrivals at node i alone.
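As a sketch, Eq. (7) could be evaluated per tree node as below (plain Python); the fixed values c1 = 1 and c2 = 0.5 follow the text, while the node attributes are assumed names for illustration only.

import math

def exploration_value(node, c1=1.0, c2=0.5):
    """Exploration value V(t|i) of a tree node, cf. Eq. (7).
    node.descendant_bias: inductive biases w_j of the node's descendants
    node.N: arrivals at the node and all of its descendants
    node.n: arrivals at the node itself (assumed > 0 here)
    node.dist_loss: navigation distance penalty L_dis"""
    m = len(node.descendant_bias)
    prior = sum(node.descendant_bias) / m if m > 0 else 0.0   # mean inductive bias of descendants
    bonus = c1 * math.sqrt(math.log(node.N) / node.n)          # UCT exploration bonus
    return prior + bonus - c2 * node.dist_loss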
E. Online constructed Voronoi local graph
The reasoner only gives a semantic node id in a graph as a sub-goal. If the low-level controller used it directly as a navigation goal, it would inevitably lead to over-coupling and increase the difficulty of navigation success. We can refer to the hierarchical human central nervous system composed of the brain, cerebellum, brain-stem, and spinal cord [18]: if the high-level reasoner is compared to the brain, then the skeletal muscle is the low-level motor controller. The brain does not directly transmit motion instructions to the skeletal muscles, but passes them through the brain-stem, spinal cord, and other lower-level parts of the central nervous system for information conversion [19]. Besides, the brain does not actually support high-speed, low-latency information interaction while controlling a motion [20]. Therefore, it is necessary to use an RGB-D camera and an odometer to construct a local Voronoi graph, offering approximate relative coordinates of the sub-goal within a reachable range as an input to the low-level controller. The Voronoi graph records the relationship between the robot and obstacles and provides an available path. Since the TDN task is map-less, we construct a local Voronoi graph online within a fixed number of steps.
Conditioning on the depth information, the (internal and external) camera parameters, and the odometer information, obstacles in depth images can easily be converted into coordinates in a world coordinate system. This system is derived from the birth pose of the robot. Projecting this partially reconstructed 3D map onto a 2D plane along the vertical axis forms a scatter diagram depicting obstacles. We can then construct a Voronoi diagram online by segmenting navigable paths and explorable cliques with multiple related objects. Different from traditional methods [21], we use DBSCAN [22], [23] (a density-based clustering algorithm) to first cluster the scattered points of adjacent obstacles into convex hulls and filter out noise points, followed by constructing a Delaunay triangulation with the centers of the scattered points in the convex hulls, thereby generating a Voronoi diagram.
Fig. 4. Combining the depth information with the robot's pose over a short period, we can get a simple 3D reconstruction result. A Voronoi local graph can be constructed through DBSCAN clustering after projecting the 3D map as a 2D obstacle scatter plot.
Fig. 5. The semantic sub-goal is converted into relative coordinates by the Voronoi-based intermediate-level planner.
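A rough sketch of this clustering-then-Voronoi step, assuming a 2D array of projected obstacle points and using scikit-learn's DBSCAN and SciPy's Voronoi (the dual of the Delaunay triangulation described above); the eps/min_samples values are illustrative assumptions, not taken from the paper.

import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import Voronoi

def local_voronoi(obstacle_xy, eps=0.25, min_samples=5):
    """obstacle_xy: (N, 2) obstacle points projected onto the ground plane."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(obstacle_xy)
    centers = []
    for k in set(labels):
        if k == -1:          # DBSCAN marks noise points with label -1; drop them
            continue
        centers.append(obstacle_xy[labels == k].mean(axis=0))  # one center per obstacle cluster
    centers = np.array(centers)
    vor = Voronoi(centers)   # ridges between cluster centers trace candidate navigable paths
    return labels, centers, vor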
F. Hierarchical reasoning and planning for navigation
In this section, we summarize how the proposed reasoner and planner cooperate to complete navigation tasks. The curves in Fig. 5 show the correspondence of concepts between the topological graph in the reasoner and the Voronoi diagram in the planner. An aggregation of obstacles is regarded as a clique, each of which attaches and records all objects in its convex hull and evaluates its inductive bias value according to the object-in-region membership via the Region Embedding network. The position of a vertex is generated by the Voronoi diagram. The multiple cliques and their subordinate objects surrounding a vertex jointly determine its general room label, and that label is used for the inductive bias evaluation. Relative directions and distances between two adjacent vertex nodes are stored in gray ghost nodes. Since the robot exploits relative coordinates and directions, it effectively avoids the influence of odometer and depth camera errors and is thus insensitive to cumulative error. Besides, thanks to the local Voronoi diagram, only short-period scatter data need to be saved, and there is no need to consider the closed-loop matching problem as in SLAM.
With the construction of the Voronoi diagram and its transformation into a hierarchical topology, we can conduct reasoning at the vertex/clique level and the object level, searching for the best vertex position and the most likely clique based on the exploration value. After selecting a clique, the robot navigates towards it and explores it more explicitly with object-level reasoning. Besides, the Voronoi diagram provides the evidence for choosing the next best view of a clique.
By changing multiple perspectives, the robot can find the target object in a clique more efficiently.
IV. EXPERIMENTS
A. Experiment Setup
We use the Habitat simulator [24] with the Matterport3D [25] environment as our experiment platform. The Habitat simulator is a 3D simulator with configurable agents, multiple sensors, and generic 3D dataset handling. The Matterport3D dataset contains 90 houses with 40 categories of objects and 31 region labels. It also provides detailed object and region segmentation information. Here we focus on the 21 categories of target objects required by the task: chair, table, picture, cabinet, cushion, sofa, bed, chest of drawers, plant, sink, toilet, stool, towel, tv monitor, shower, bathtub, counter, fireplace, gym equipment, seating, clothes, and ignore some meaningless room labels such as outdoor, no label, other room, and empty room. We use YOLOv4 [26] as our object detection module, fine-tuned on objects in the Matterport3D dataset. Because the aim of the low-level controller is the same as in the PointNav task [27], we adapt a pre-trained state-of-the-art PointNav method, occupancy anticipation [28], as our controller.
During a specific TDN task, the robot is spawned at a random location in a certain house and is required to find an object of a given category as quickly as possible. The task is evaluated with three commonly used indicators: Success Rate (SR), Success weighted by Path Length (SPL), and Distance to Success (DTS). SR represents the fraction of episodes in which the target was found and is defined as \frac{1}{N}\sum_{i=1}^{N} Su_i, where N is the number of total episodes and Su_i is a binary value representing the success or failure of the i-th episode. SPL reflects both success and the optimal path length; it is defined as \frac{1}{N}\sum_{i=1}^{N} S_i \frac{L_i}{\max(P_i, L_i)}, where we use the shortest path length provided by the simulator as L_i and the path length of the robot as P_i in episode i. DTS is the distance of the agent from the success threshold boundary when the episode ends.
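These metrics can be computed directly from per-episode logs; the snippet below is a simple illustration assuming lists of per-episode successes, shortest path lengths, traveled path lengths, and final distances (all names are assumptions, and reporting DTS as a mean over episodes is likewise an assumption).

def evaluate(success, shortest, traveled, final_dist):
    """success: 0/1 per episode; shortest: L_i; traveled: P_i; final_dist: distance to the
    success boundary when the episode ends."""
    N = len(success)
    sr = sum(success) / N
    spl = sum(s * l / max(p, l) for s, l, p in zip(success, shortest, traveled)) / N
    dts = sum(final_dist) / N
    return sr, spl, dts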
The boundary is set to 1 m and the maximum episode length is 500 steps, which are the same as in [11]. Furthermore, our navigation task has two modes: independent (ReVoLT-i) and continuous (ReVoLT-c). The independent mode is the traditional one: the environment is reset after each episode and the robot clears its memory. The continuous mode allows the robot to keep its topological graph if it is reset in the same house. It is used for evaluating the robot's capability of keeping and updating the environment memory.
B. Baselines
Random: At each step, the agent randomly samples an action from the action space with a uniform distribution.
RGBD + DD-PPO: This baseline is provided by the ObjectNav Challenge 2020 [24]. RGB-D observations are passed directly to an end-to-end DD-PPO policy, which outputs an action.
Active Neural SLAM: This baseline uses an exploration policy trained to maximize coverage from [2], followed by the heuristic-based local policy as described above.
SemExp: Since [11] has not open-sourced their code, we directly use the results in their paper as a state-of-the-art reference.
C. Results
1) Results of combinatorial relation embeddings: The Object Embedding network obtains a classification accuracy of 91%. The Region Embedding network obtains a membership accuracy of 78% and a classification accuracy of 75%. The Region Rollout network reaches a prediction accuracy of 45% on the test set, which is acceptable since room relationships are not inherently strong.
2) Results of the whole TDN task: The results of the baseline methods and ReVoLT are shown in Table II. It can be seen that both of our models significantly outperform the current state-of-the-art. ReVoLT-i small has an ≈ 80% increase in SR and nearly twice the SPL of SemExp. This confirms our hypothesis that separating prior learning and control policy in a hierarchical framework is indeed a wiser approach than directly learning a semantically-aware policy. Besides, the standard ReVoLT-i with 19 categories of targets still achieves a higher SR and SPL. By applying the continuous mode, the robot retains a memory belonging to the same house, which allows it to find observed targets with a higher SR.
Fig. 6. Top-down maps of four successful tasks using ReVoLT-i. The blue squares are the beginning positions, the blue curves are the robot trajectories, and arrows represent the robot's current positions. Targets are highlighted with green boxes, and pink areas refer to the success threshold boundary. The color of the trajectory is a gradient from dark to light, and the brighter the end, the longer the path.
TABLE II
PERFORMANCE COMPARISON
Method               SR(%)   SPL     DTS (m)
Random               0       0       10.3298
RGBD + DD-PPO        6.2     0.021   9.3162
Active Neural SLAM   32.1    0.119   7.056
SemExp1              36.0    0.144   6.733
ReVoLT-i small*      66.7    0.265   0.9762
ReVoLT-i*            62.5    0.102   1.0511
ReVoLT-c*            85.7    0.070   0.0253
1 The 1st prize of AI Habitat 2020.
* These three refer to the small mode with only 6 target categories like SemExp, the independent mode (-i), and the continuous mode (-c) of ReVoLT.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' ABLATION STUDY The success of ReVoLT is attributed to the relationship priors provided by the combinatorial graph neural networks, the online bonus by UCT, and the distance penalty.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Therefore, we set three extra experiments with the same Voronoi-based planner and low-level controller to reveal their impacts, respec- tively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Moreover, the results of the continuous mode are also presented below.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' The performance of all varieties is listed in Table III.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' ReVoLT w/o relationship priors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Sub-goal in the navigation without priors can be generated according to the distance of the observed cliques.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Compared to Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 7 (a) with Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 6, we find that the lack of semantic relationship profoundly affects the robot’s path decision, making it not interested in the region with a target even though it is just nearby.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Besides, the lack ���������������������������������� ������������������������ ������������������������������� ������������������������������� ���������������� ���������� ���������������� ������������������� ������������������� ������������������� ��������� ������������������� ��������� ��������� Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' In response to the three parts of exploration value function, we conduct ablation experiments respectively and illustrate them in top-down maps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' TABLE III PERFORMANCE OF ABLATION EXPERIMENTS Method SR(%) SPL DTS (m) ReVoLT-i 62.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='102 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='0511 ReVoLT-c 85.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='7 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='070 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='0253 ReVoLT w/o priors 25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='003 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='4129 ReVoLT w/o bonus 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='034 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='8139 ReVoLT w/o distance 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='030 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='2689 of region classification and region rollout makes the robot unable to use the observed semantic information to reason about relationships, resulting in longer paths.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' ReVoLT w/o UCT bonus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' The bonus is replaced with a fixed threshold.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' If the robot reaches the same clique or vertex node more than twice, then this node will no longer be selected as 105T105a sub-goal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' The corresponding top-down maps are illustrated in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 7 (b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Without a UCT bonus, the robot falls into an impossible local region until the threshold is reached.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' ReVoLT w/o distance penalty.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' In Fig.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 7 (c), using only priors and bonuses can also complete tasks, but their paths are longer due to the fluctuating thoughts while making decisions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' ReVoLT with continuous mode.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' The left figure of Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 7 (d) is the same as the one in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' However, when searching for the second target in this house, once the robot associates current observations with the memory, it can find the target with a higher success rate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' However, this also makes the robot more focused on exploitation and reduces exploration, which may cause it to ignore closer targets and lead to a lower SPL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' To sum up, relationship priors are essential for robots to understand the environment semantics, and it is also the major factor affecting the SR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' The UCT bonus and distance penalty contribute to the improvement of SPL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' ReVoLT-c maintains a long-term scene memory and can get outstanding performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' VI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' CONCLUSION We propose ReVoLT, a hierarchical reasoning target-driven navigation framework that combines combinatorial graph re- lation extraction and online UCT decision operating with a multi-layer topological graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' ReVoLT shows better perfor- mance on exploiting the prior relationships, and its bandit reasoning is more reasonable and efficient.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' To bridge the gap between existing point-goal controllers and our reasoner, we adopt the Voronoi local graph for the semantic-spatial transition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' However, some significant challenges remain in this field.' 
Our future direction lies in using representation learning techniques to introduce richer object information like shape, color, and size, using scene graph detection to introduce richer semantic relation information like furniture layout, and achieving more abundant tasks like object instance navigation.

REFERENCES
[1] M. Hoffmann and R. Pfeifer, "The implications of embodiment for behavior and cognition: animal and robotic case studies," arXiv preprint arXiv:1202.0440, 2012.
[2] D. S. Chaplot, D. Gandhi, S. Gupta, A. Gupta, and R. Salakhutdinov, "Learning to explore using active neural slam," in International Conference on Learning Representations, 2019.
[3] K. Chatzilygeroudis, V. Vassiliades, F. Stulp, S. Calinon, and J.-B. Mouret, "A survey on policy search algorithms for learning robot controllers in a handful of trials," IEEE Transactions on Robotics, vol. 36, no. 2, pp. 328–347, 2019.
[4] W. Yang, X. Wang, A. Farhadi, A. Gupta, and R. Mottaghi, "Visual semantic navigation using scene priors," arXiv preprint arXiv:1810.06543, 2018.
[5] H. Du, X. Yu, and L. Zheng, "Learning object relation graph and tentative policy for visual navigation," in European Conference on Computer Vision, pp. 19–34, Springer, 2020.
[6] Y. Qiu, A. Pal, and H. I. Christensen, "Learning hierarchical relationships for object-goal navigation," 2020.
[7] W. L. Hamilton, R. Ying, and J. Leskovec, "Inductive representation learning on large graphs," in Advances in Neural Information Processing Systems (NeurIPS), 2017.
[8] E. Kolve, R. Mottaghi, W. Han, E. VanderBilt, L. Weihs, A. Herrasti, D. Gordon, Y. Zhu, A. Gupta, and A. Farhadi, "Ai2-thor: An interactive 3d environment for visual ai," arXiv preprint arXiv:1712.05474, 2017.
[9] Y. Wu, Y. Wu, A. Tamar, S. Russell, G. Gkioxari, and Y. Tian, "Bayesian relational memory for semantic visual navigation," in Proceedings of the IEEE International Conference on Computer Vision, pp. 2769–2779, 2019.
[10] D. S. Chaplot, R. Salakhutdinov, A. Gupta, and S. Gupta, "Neural topological slam for visual navigation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12875–12884, 2020.
[11] D. S. Chaplot, D. P. Gandhi, A. Gupta, and R. R. Salakhutdinov, "Object goal navigation using goal-oriented semantic exploration," Advances in Neural Information Processing Systems (NeurIPS), vol. 33, 2020.
[12] D. Batra, A. Gokaslan, A. Kembhavi, O. Maksymets, R. Mottaghi, M. Savva, A. Toshev, and E. Wijmans, "ObjectNav revisited: On evaluation of embodied agents navigating to objects," arXiv preprint arXiv:2006.13171, 2020.
[13] M. Wortsman, K. Ehsani, M. Rastegari, A. Farhadi, and R. Mottaghi, "Learning to learn how to learn: Self-adaptive visual navigation using meta-learning," in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6743–6752, 2019.
[14] J. Pennington, R. Socher, and C. D. Manning, "GloVe: Global vectors for word representation," in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543, 2014.
[15] T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," in International Conference on Learning Representations (ICLR), 2017.
[16] J. You, R. Ying, X. Ren, W. Hamilton, and J. Leskovec, "GraphRNN: Generating realistic graphs with deep auto-regressive models," in International Conference on Machine Learning, pp. 5708–5717, 2018.
[17] P.-A. Coquelin and R. Munos, "Bandit algorithms for tree search," in Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence, pp. 67–74, 2007.
[18] D. Purves, R. Cabeza, S. A. Huettel, K. S. LaBar, M. L. Platt, M. G. Woldorff, and E. M. Brannon, Cognitive Neuroscience. Sunderland: Sinauer Associates, Inc., 2008.
[19] E. Bizzi, M. C. Tresch, P. Saltiel, and A. d'Avella, "New perspectives on spinal motor systems," Nature Reviews Neuroscience, vol. 1, no. 2, pp. 101–108, 2000.
[20] D. A. Rosenbaum, Human Motor Control. Academic Press, 2009.
[21] R. Mahkovic and T. Slivnik, "Generalized local voronoi diagram of visible region," in Proceedings of the 1998 IEEE International Conference on Robotics and Automation, vol. 1, pp. 349–355, IEEE, 1998.
[22] K. Khan, S. U. Rehman, K. Aziz, S. Fong, and S. Sarasvady, "DBSCAN: Past, present and future," in The Fifth International Conference on the Applications of Digital Information and Web Technologies (ICADIWT 2014), pp. 232–238, IEEE, 2014.
[23] E. Schubert, J. Sander, M. Ester, H. P. Kriegel, and X. Xu, "DBSCAN revisited, revisited: why and how you should (still) use DBSCAN," ACM Transactions on Database Systems (TODS), vol. 42, no. 3, pp. 1–21, 2017.
[24] M. Savva, A. Kadian, O. Maksymets, Y. Zhao, E. Wijmans, B. Jain, J. Straub, J. Liu, V. Koltun, J. Malik, D. Parikh, and D. Batra, "Habitat: A platform for embodied AI research," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
[25] A. Chang, A. Dai, T. Funkhouser, M. Halber, M. Niessner, M. Savva, S. Song, A. Zeng, and Y. Zhang, "Matterport3D: Learning from RGB-D data in indoor environments," International Conference on 3D Vision (3DV), 2017.
[26] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "Yolov4: Optimal speed and accuracy of object detection," arXiv preprint arXiv:2004.10934, 2020.
[27] A. Kadian, J. Truong, A. Gokaslan, A. Clegg, E. Wijmans, S. Lee, M. Savva, S. Chernova, and D. Batra, "Sim2real predictivity: Does evaluation in simulation predict real-world performance?," 2019.
[28] S. K. Ramakrishnan, Z. Al-Halah, and K. Grauman, "Occupancy anticipation for efficient exploration and navigation," in European Conference on Computer Vision, pp. 400–418, Springer, 2020.
diff --git a/39E3T4oBgHgl3EQfQAlh/content/2301.04408v1.pdf b/39E3T4oBgHgl3EQfQAlh/content/2301.04408v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..49db574ed8da8eaacbe151b7b41a25702cfa163c
--- /dev/null
+++ b/39E3T4oBgHgl3EQfQAlh/content/2301.04408v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:951bd9bc89217cb496e7f9b4d64a8f16b01c43f14e5c8352e1e72cafeef4a045
+size 222924
diff --git a/39E3T4oBgHgl3EQfQAlh/vector_store/index.faiss b/39E3T4oBgHgl3EQfQAlh/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..9b085268c9732795c2cccc146339917a07e086c6
--- /dev/null
+++ b/39E3T4oBgHgl3EQfQAlh/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9237b06040f0468c198f7fc043ef0a089b7ffa148d0242e7af0f4c6d1f33998c
+size 2490413
diff --git a/39E3T4oBgHgl3EQfQAlh/vector_store/index.pkl b/39E3T4oBgHgl3EQfQAlh/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..d4caa2165bef49388a4c9b223bbdc25bb0f24380
--- /dev/null
+++ b/39E3T4oBgHgl3EQfQAlh/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67093fffd9c4ec2720489de53113f242425f7ecbd42adbf3c2d6ffb0a5469b8f
+size 103685
diff --git a/3NAzT4oBgHgl3EQf9P6r/vector_store/index.faiss b/3NAzT4oBgHgl3EQf9P6r/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..0a88eebd2430ca4d9269e68e6c87e7220c6a1853
--- /dev/null
+++ b/3NAzT4oBgHgl3EQf9P6r/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8b96c03cdc0d6f4bd82b5b51529cfb9b6ff0a5125e7c53626e3581855e14d270
+size 3604525
diff --git a/4tE2T4oBgHgl3EQfOQbV/content/2301.03747v1.pdf b/4tE2T4oBgHgl3EQfOQbV/content/2301.03747v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f4be6a2f29afacfb87636585cab09b4f7f9a913e
--- /dev/null
+++ b/4tE2T4oBgHgl3EQfOQbV/content/2301.03747v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8fbb34cca9ed18d4bb687d691b8e307578cb9d8554c646db4773fd4eb5d6e0e0
+size 1190020
diff --git a/4tE2T4oBgHgl3EQfOQbV/vector_store/index.pkl b/4tE2T4oBgHgl3EQfOQbV/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..e3c4059237f146c1f2ca280be82ef997129cfbf6
--- /dev/null
+++ b/4tE2T4oBgHgl3EQfOQbV/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:537df5ca6bff95fd77a61132eaa398abe5eafede2bb8cab340251d47e4ec6380
+size 203671
diff --git a/5NE2T4oBgHgl3EQfOgaA/content/tmp_files/2301.03749v1.pdf.txt b/5NE2T4oBgHgl3EQfOgaA/content/tmp_files/2301.03749v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f8c0d9d059dfcdb6c85eb3d36800bd9c06db7e62
--- /dev/null
+++ b/5NE2T4oBgHgl3EQfOgaA/content/tmp_files/2301.03749v1.pdf.txt
@@ -0,0 +1,2470 @@
Markovian Sliced Wasserstein Distances: Beyond Independent Projections
Khai Nguyen
Tongzheng Ren
Nhat Ho
The University of Texas at Austin
January 11, 2023

Abstract

Sliced Wasserstein (SW) distance suffers from redundant projections due to independent uniform random projecting directions.
To partially overcome this issue, the max-K sliced Wasserstein (Max-K-SW) distance (K ≥ 1) seeks the best discriminative orthogonal projecting directions. Despite being able to reduce the number of projections, the metricity of Max-K-SW cannot be guaranteed in practice due to the non-optimality of the optimization. Moreover, the orthogonality constraint is also computationally expensive and might not be effective. To address these problems, we introduce a new family of SW distances, named Markovian sliced Wasserstein (MSW) distance, which imposes a first-order Markov structure on the projecting directions. We discuss various members of MSW obtained by specifying the Markov structure, including the prior distribution, the transition distribution, and the burning and thinning technique. Moreover, we investigate the theoretical properties of MSW, including topological properties (metricity, weak convergence, and connection to other distances), statistical properties (sample complexity and Monte Carlo estimation error), and computational properties (computational complexity and memory complexity). Finally, we compare MSW distances with previous SW variants in various applications such as gradient flows, color transfer, and deep generative modeling to demonstrate the favorable performance of MSW.1

1 Code for the experiments will be published at https://github.com/UT-Austin-Data-Science-Group/MSW.

1 Introduction

The sliced Wasserstein (SW) distance [7] is well known as an appealing alternative statistical distance to the Wasserstein distance [60, 52]. In short, the SW takes the average of the Wasserstein distances between corresponding pairs of one-dimensional projected measures as the distance between the two original measures. Because of this, the SW has a low computational complexity compared to the conventional Wasserstein distance, thanks to the closed-form solution of optimal transport in one dimension. When the probability measures have at most n supports, the computational complexity of the SW is only O(n log n). This complexity is much lower than the computational complexity O(n^3 log n) of the Wasserstein distance and the complexity O(n^2) [1, 34, 35, 33] of the entropic Wasserstein distance [11] (Sinkhorn divergence). Moreover, the memory complexity of the SW is O(n), which is lower than the O(n^2) memory complexity of the Wasserstein (Sinkhorn) distance. The reason is that the SW does not need to store the cost matrix between supports, which costs O(n^2). An additional appealing property of the SW is that it does not suffer from the curse of dimensionality: its sample complexity is O(n^{-1/2}) [40, 49] compared to O(n^{-1/d}) [19] for the Wasserstein distance (d is the number of dimensions).

Due to this scalability, the SW has been applied to almost all applications where the Wasserstein distance is used.
Spherical sliced Wasserstein which is defined between +distributions that have their supports on the hyper-sphere is introduced in [4]. A sliced Wasserstein +variant between probability measures over images with convolution is defined in [43]. +Despite having a lot of improvements, one common property in previous variants of the SW is +that they use independent projecting directions that are sampled from a distribution over a space +of projecting direction e.g., the unit-hypersphere. Those projecting directions are further utilized +to project two interested measures to corresponding pairs of one-dimensional measures. Due to +the independence, practitioners have reported that many projections do not have the power to +discriminative between two input probability measures [26, 15]. Moreover, having a lot of projections +leads to redundancy and losing computation for uninformative pairs of projected measures. This +problem is known as the projection complexity limitation of the SW. +To partially address the issue, the max sliced Wasserstein (Max-SW) distance is introduced in [14]. +Max-SW seeks the best projecting direction that can maximize the projected Wasserstein distance. +Since the Max-SW contains a constraint optimization problem, the projected subgradient ascent +algorithm is performed. Since the algorithm only guarantees to obtain local maximum [49], the +performance of empirical estimation Max-SW is not stable in practice [42] since the metricity of +Max-SW can be only obtained at the global optimum. Another approach is to force the orthogonality +between projecting directions. In particular, K-sliced Wasserstein [53] (K-SW) uses K > 1 orthogonal +projecting directions. Moreover, to generalize the Max-SW and the K-SW, max-K sliced Wasserstein +(Max-K-SW) distance (K > 1) appears in [12] to find the best K projecting directions that +are orthogonal to each other via the projected sub-gradient ascent algorithm. Nevertheless, the +orthogonality constraint is computationally expensive and might not be good in terms of reflecting +discrepancy between general measures. Moreover, Max-K-SW also suffers from the non-optimality +problem which leads to losing the metricity property in practice. +To avoid the independency and to satisfy the requirement of creating informative projecting directions +efficiently, we propose to impose a sequential structure on projecting directions. Namely, we choose +a new projecting direction based on the previously chosen directions. For having more efficiency +in computation, we consider first-order Markovian structure in the paper which means that a +projecting direction can be sampled by using only the previous direction. For the first projecting +direction, it can follow any types of distributions on the unit-hypersphere that were used in the +literature e.g., uniform distribution [7] and von Mises-Fisher distribution [23, 47] to guarantee the +metricity. For the transition distribution on the second projecting direction and later, we propose +three types of family which are random walk transition, orthogonal-based transition, and input-awared +transition. For the random walk transition, we use the von Mises-Fisher with the mean as the +previous projecting direction as the conditional distribution. For the orthogonal-based transition, we +choose the projecting direction uniformly on the unit hypersphere such that it is orthogonal to the +previous direction. 
In contrast to the previous two transitions, which do not use any information from the two input measures, the input-awared transition uses the sub-gradient, with respect to the previous projecting direction, of the corresponding projected Wasserstein distance between the two measures to design the transition. In particular, a projected sub-gradient update is used to create the new projecting direction. Moreover, we further improve the computational time and memory by introducing the burning and thinning technique to reduce the number of random projecting directions.

Contribution: In summary, our contributions are two-fold:

1. We propose a novel family of distances on the space of probability measures, named Markovian sliced Wasserstein (MSW) distances. MSW considers a first-order Markovian structure on random projecting directions. Moreover, we derive three variants of MSW that use three different types of conditional transition distributions: random walk, orthogonal-based, and input-awared. We investigate the theoretical properties of MSW including topological properties (metricity, weak convergence, and connection to other distances), statistical properties (sample complexity and Monte Carlo estimation error), and computational properties (computational complexity and memory complexity). Moreover, we introduce a burning and thinning approach to further reduce the computational and memory complexity, and we discuss the properties of the resulting distances.

2. We conduct experiments to compare MSW with SW, Max-SW, K-SW, and Max-K-SW in various applications, namely, gradient flows, color transfer, and deep generative models on standard image datasets: CIFAR10 and CelebA. We show that the input-awared MSW can yield better qualitative and quantitative performance while consuming less computation than previous distances in gradient flows and color transfer, and comparable computation in deep generative modeling. Finally, we investigate the role of the hyper-parameters of the distances, e.g., the number of projections, the number of time steps, and so on, in these applications.

Organization. We first provide background on the Wasserstein distance, the sliced Wasserstein distance, and the max sliced Wasserstein distance in Section 2. In Section 3, we propose Markovian sliced Wasserstein distances and derive their theoretical properties. Section 4 contains the comparison of MSW to previous SW variants in gradient flows, color transfer, and deep generative modeling. We then conclude the paper in Section 5. Finally, proofs of key results and supplementary materials are deferred to the Appendices.

Notation. For p ≥ 1, P_p(R^d) is the set of all probability measures on R^d that have finite p-th moments. For any d ≥ 2, we denote by U(S^{d−1}) the uniform measure over the unit hypersphere S^{d−1} := {θ ∈ R^d : ‖θ‖_2^2 = 1}. For any two sequences a_n and b_n, the notation a_n = O(b_n) means that a_n ≤ C b_n for all n ≥ 1, where C is some universal constant. We denote by θ♯µ the push-forward measure of µ through the function f : R^d → R defined by f(x) = θ^⊤x.

2 Background

We start by reviewing the background on the Wasserstein distance, sliced Wasserstein distances, their computation techniques, and their limitations.
Wasserstein distance: Given two probability measures µ ∈ P_p(R^d) and ν ∈ P_p(R^d), the Wasserstein distance [60, 51] between µ and ν is:

W_p^p(µ, ν) = inf_{π ∈ Π(µ,ν)} ∫_{R^d × R^d} ‖x − y‖_p^p dπ(x, y),   (1)

where Π(µ, ν) is the set of all couplings whose marginals are µ and ν, respectively. The computational complexity and memory complexity of the Wasserstein distance are O(n^3 log n) and O(n^2), respectively, when µ and ν have at most n supports. When d = 1, the Wasserstein distance has a closed form: W_p^p(µ, ν) = ∫_0^1 |F_µ^{−1}(z) − F_ν^{−1}(z)|^p dz, where F_µ and F_ν are the cumulative distribution functions (CDFs) of µ and ν, respectively.

Sliced Wasserstein distance: By randomly projecting two high-dimensional measures of interest to corresponding pairs of one-dimensional measures, the sliced Wasserstein (SW) distance can exploit the closed-form benefit of the Wasserstein distance in one dimension. The definition of the sliced Wasserstein distance [7] between two probability measures µ ∈ P_p(R^d) and ν ∈ P_p(R^d) is:

SW_p^p(µ, ν) = E_{θ ∼ U(S^{d−1})} W_p^p(θ♯µ, θ♯ν).   (2)

Monte Carlo samples are often used to approximate the intractable expectation unbiasedly: \widehat{SW}_p^p(µ, ν) = (1/L) Σ_{l=1}^L W_p^p(θ_l♯µ, θ_l♯ν), where θ_1, . . . , θ_L are drawn randomly from U(S^{d−1}). When µ and ν are discrete measures that have at most n supports in d dimensions, the computational complexity of SW is O(Ln log_2 n + Ldn) and the memory complexity for storing the projecting directions and the projected supports of SW is O(L(d + n)). Here, Ln log_2 n is for sorting L sets of projected supports and Ldn is for projecting the supports to L sets of scalars.
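To make the estimator above concrete, the following is a minimal NumPy sketch of the Monte Carlo approximation of SW_p^p, assuming both measures are empirical with the same number of uniformly weighted support points so that the one-dimensional optimal transport reduces to sorting; the function names are ours.

import numpy as np

def wasserstein_1d_pp(u, v, p=2):
    # Closed-form W_p^p between two 1-D empirical measures with n uniformly weighted atoms each.
    return np.mean(np.abs(np.sort(u) - np.sort(v)) ** p)

def sliced_wasserstein_pp(X, Y, L=100, p=2, rng=None):
    # Monte Carlo estimate of SW_p^p(mu, nu) where mu, nu are uniform over the rows of X and Y.
    rng = np.random.default_rng() if rng is None else rng
    d = X.shape[1]
    total = 0.0
    for _ in range(L):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)                       # theta ~ U(S^{d-1})
        total += wasserstein_1d_pp(X @ theta, Y @ theta, p)  # projected 1-D Wasserstein distance
    return total / L

rng = np.random.default_rng(0)
X, Y = rng.standard_normal((500, 10)), rng.standard_normal((500, 10)) + 1.0
print(sliced_wasserstein_pp(X, Y, L=100) ** 0.5)             # estimate of SW_2(mu, nu)

The cost of the sketch is dominated by the L projection and sorting steps, matching the O(Ln log_2 n + Ldn) computational complexity stated above.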
Max sliced Wasserstein distance: To select the best discriminative projecting direction, the max sliced Wasserstein (Max-SW) distance [14] between µ ∈ P_p(R^d) and ν ∈ P_p(R^d) is introduced as follows:

Max-SW_p(µ, ν) = max_{θ ∈ S^{d−1}} W_p(θ♯µ, θ♯ν).   (3)

Computing Max-SW requires solving a constrained optimization problem. In practice, a projected sub-gradient ascent algorithm with T > 1 iterations is often used to obtain a surrogate projecting direction θ̂_T for the global optimum. Hence, the empirical Max-SW distance is \widehat{Max-SW}_p(µ, ν) = W_p(θ̂_T♯µ, θ̂_T♯ν). The details of the projected sub-gradient ascent algorithm are given in Algorithm 1 in Appendix A.1. The computational complexity of Max-SW is O(Tn log_2 n + Tdn) and the memory complexity of Max-SW is O(d + n). It is worth noting that projected sub-gradient ascent can only yield a local maximum [49]. Therefore, the empirical Max-SW might not be a distance even when T → ∞, since the metricity of Max-SW is only obtained at the global maximum.

K sliced Wasserstein distance: The authors in [53] propose to estimate the sliced Wasserstein distance based on orthogonal projecting directions. We refer to this distance as the K sliced Wasserstein distance (K-SW). The definition of K-SW between two probability measures µ ∈ P_p(R^d) and ν ∈ P_p(R^d) is:

K-SW_p^p(µ, ν) = E[ (1/K) Σ_{i=1}^K W_p^p(θ_i♯µ, θ_i♯ν) ],   (4)

where the expectation is with respect to (θ_1, . . . , θ_K) ∼ U(V_K(R^d)), with V_K(R^d) = {(θ_1, . . . , θ_K) ∈ (S^{d−1})^K : ⟨θ_i, θ_j⟩ = 0 for all i ≠ j} the Stiefel manifold. The expectation can be approximated with Monte Carlo samples (θ_{1l}, . . . , θ_{Kl})_{l=1}^L from U(V_K(R^d)). In the original paper, L is set to 1. Sampling from the uniform distribution over the Stiefel manifold U(V_K(R^d)) requires the Gram-Schmidt orthogonalization process, which has computational complexity O(K^2 d) (quadratic in K). Therefore, the total computational complexity of K-SW is O(LKn log_2 n + LKdn + LK^2 d) and the memory complexity of K-SW is O(LK(d + n)). More details related to K-SW, including the Gram-Schmidt process and sampling uniformly from the Stiefel manifold, are given in Appendix A.1.

Max K sliced Wasserstein distance: To generalize both Max-SW and K-SW, the max K sliced Wasserstein distance is introduced in [12]. Its definition between µ ∈ P_p(R^d) and ν ∈ P_p(R^d) is:

Max-K-SW_p^p(µ, ν) = max_{(θ_1,...,θ_K) ∈ V_K(R^d)} [ (1/K) Σ_{i=1}^K W_p^p(θ_i♯µ, θ_i♯ν) ].   (5)

Similar to Max-SW, a projected sub-gradient ascent algorithm with T > 1 iterations is used to approximate Max-K-SW. We refer the reader to Algorithm 4 in Appendix A.1 for greater detail. Since the projection operator onto the Stiefel manifold is the Gram-Schmidt process, the computational complexity of Max-K-SW is O(TKn log_2 n + TKdn + TK^2 d). The memory complexity of Max-K-SW is O(K(d + n)). Similar to Max-SW, the metricity of Max-K-SW is only obtained at the global optimum; hence, the empirical estimation might not be stable. Moreover, the orthogonality constraint is also computationally expensive, i.e., quadratic in the number of orthogonal projections K.

3 Markovian Sliced Wasserstein distances

As discussed, the limitations of the previous works are independent projecting directions, computationally expensive dependency, and the loss of asymptotic metricity. To address those limitations, we propose to impose dependency between projecting directions via a first-order Markov chain. By doing so, a new projecting direction can be created efficiently while being dependent on the previous projecting directions. In this section, we first define the Markovian sliced Wasserstein (MSW) distance and discuss its theoretical properties, including topological properties, statistical properties, and computational properties, in Section 3.1. In Section 3.2, we discuss some choices in designing the Markov chain, including the prior distribution and the transition distribution. Finally, we discuss the burning and thinning variant of MSW, which can reduce the computational and memory complexity, in Section 3.3.

3.1 Definitions, Topological, Statistical, and Computational Properties

We first start with a general definition of the Markovian sliced Wasserstein distance in Definition 1.

Definition 1. For any p ≥ 1, T ≥ 1, and dimension d ≥ 1, the Markovian sliced Wasserstein distance of order p between two probability measures µ ∈ P_p(R^d) and ν ∈ P_p(R^d) is:

MSW_{p,T}^p(µ, ν) = E[ (1/T) Σ_{t=1}^T W_p^p(θ_t♯µ, θ_t♯ν) ],   (6)

where T is the number of time steps, the expectation is under the projecting distribution θ_{1:T} ∼ σ(θ_{1:T}) with σ(θ_{1:T}) = σ(θ_1, . . . , θ_T) = σ_1(θ_1) Π_{t=2}^T σ_t(θ_t|θ_{t−1}), and σ_1(θ_1), σ_t(θ_t|θ_{t−1}) ∈ P(S^{d−1}) for all t = 1, . . . , T.

The first projecting direction θ_1 follows the distribution σ_1(θ_1), where σ_1(θ_1) can be any distribution on the unit hypersphere, e.g., the uniform distribution, a von Mises-Fisher distribution, and so on. By designing the transition distribution σ_t(θ_t|θ_{t−1}), we can obtain various variants of MSW. Before going to the specific design of those distributions, we first discuss the empirical estimation of MSW and investigate its theoretical properties, including topological properties, statistical properties, and computational properties.

Monte Carlo estimation: Similar to SW, we also need to use Monte Carlo samples to approximate the expectation in Definition 1.
We first sample θ_{11}, . . . , θ_{L1} ∼ σ_1(θ_1) for L ≥ 1, then we sample θ_{lt} ∼ σ_t(θ_t|θ_{l(t−1)}) for t = 2, . . . , T and l = 1, . . . , L. After that, we can form an unbiased empirical estimation of MSW as follows:

\widehat{MSW}_{p,T}^p(µ, ν) = (1/(LT)) Σ_{l=1}^L Σ_{t=1}^T W_p^p(θ_{lt}♯µ, θ_{lt}♯ν).

Topological Properties: We first state the following assumption. A1: In MSW, the prior distribution σ_1(θ_1) is supported on the whole unit hypersphere, or there exists a transition distribution σ_t(θ_t|θ_{t−1}) that is supported on the whole unit hypersphere. Assumption A1 is easy to satisfy and it holds for all later choices of the prior distribution and transition distribution. We now consider the metricity of the Markovian sliced Wasserstein distance.

Theorem 1 (Metricity). For any p ≥ 1, T ≥ 1, and dimension d ≥ 1, if A1 holds, the Markovian sliced Wasserstein MSW_{p,T}(·, ·) is a valid metric on the space of probability measures P_p(R^d), namely, it satisfies (i) non-negativity, (ii) symmetry, (iii) the triangle inequality, and (iv) identity.

The proof of Theorem 1 is in Appendix B.1. Next, we show that convergence in MSW implies the weak convergence of probability measures and that the reverse also holds.

Theorem 2 (Weak Convergence). For any p ≥ 1, T ≥ 1, and dimension d ≥ 1, if A1 holds, the convergence of probability measures in P_p(R^d) under the Markovian sliced Wasserstein distance MSW_{p,T}(·, ·) implies weak convergence of probability measures and vice versa.

Theorem 2 means that for any sequence of probability measures (µ_k)_{k∈N} and µ in P_p(R^d), we have lim_{k→+∞} MSW_{p,T}(µ_k, µ) = 0 if and only if for any continuous and bounded function f : R^d → R, lim_{k→+∞} ∫ f dµ_k = ∫ f dµ. The proof of Theorem 2 is in Appendix B.2. Next, we discuss the connection of MSW to previous sliced Wasserstein variants.

Proposition 1. For any p ≥ 1 and dimension d ≥ 1,
(i) For any T ≥ 1 and µ, ν ∈ P_p(R^d), MSW_{p,T}(µ, ν) ≤ Max-SW_p(µ, ν) ≤ W_p(µ, ν).
(ii) If T = 1 and the prior σ_1(θ_1) := U(S^{d−1}), then MSW_{p,T}(µ, ν) = SW_p(µ, ν).

The proof of Proposition 1 is in Appendix B.3.

Statistical Properties: We first investigate the sample complexity, or the empirical estimation rate, of MSW.

Proposition 2 (Sample Complexity). Let X_1, X_2, . . . , X_n be i.i.d. samples from a probability measure µ supported on a compact set of R^d. We denote the empirical measure µ_n = (1/n) Σ_{i=1}^n δ_{X_i}. Then, for any p ≥ 1 and T ≥ 1, there exists a universal constant C > 0 such that

E[MSW_{p,T}(µ_n, µ)] ≤ C √((d + 1) log n / n),

where the outer expectation is taken with respect to the data X_1, X_2, . . . , X_n.

The proof of Proposition 2 is in Appendix B.4. The above sample complexity suggests that MSW does not suffer from the curse of dimensionality. Next, we investigate the Monte Carlo approximation error for MSW.

Proposition 3 (Monte Carlo error). For any p ≥ 1, T ≥ 1, dimension d ≥ 1, and µ, ν ∈ P_p(R^d), we have:

E|\widehat{MSW}_{p,T}^p(µ, ν) − MSW_{p,T}^p(µ, ν)| ≤ (1/√(TL)) Σ_{l=1}^L Var[ Σ_{t=1}^T W_p^p(θ_t♯µ, θ_t♯ν) ]^{1/2},

where the variance is with respect to σ(θ_1, . . . , θ_T).

The proof of Proposition 3 is in Appendix B.5. From the above proposition, we know that increasing the number of projections L reduces the approximation error.
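To make the estimator \widehat{MSW}_{p,T}^p concrete, below is a minimal NumPy sketch that runs L independent first-order Markov chains of projecting directions and averages the projected Wasserstein distances; the function names and the generic transition interface are ours, not the paper's code.

import numpy as np

def wasserstein_1d_pp(u, v, p=2):
    # Closed-form W_p^p between two 1-D empirical measures with uniform weights.
    return np.mean(np.abs(np.sort(u) - np.sort(v)) ** p)

def msw_pp(X, Y, transition, T=5, L=2, p=2, rng=None):
    # Monte Carlo estimate of MSW_{p,T}^p with prior sigma_1 = U(S^{d-1}).
    rng = np.random.default_rng() if rng is None else rng
    d = X.shape[1]
    total = 0.0
    for _ in range(L):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)                     # theta_{l1} ~ sigma_1
        for _ in range(T):
            total += wasserstein_1d_pp(X @ theta, Y @ theta, p)
            theta = transition(theta, X, Y, rng)           # theta_{l,t+1} ~ sigma_t(. | theta_{lt})
    return total / (L * T)

def uniform_transition(theta_prev, X, Y, rng):
    # Independent uniform resampling (ignores the Markov structure): recovers the plain SW estimator.
    g = rng.standard_normal(theta_prev.shape[0])
    return g / np.linalg.norm(g)

Transitions that actually exploit the Markov structure, such as the orthogonal-based and input-awared ones, are sketched in Section 3.2 below.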
Computational Properties: When µ and ν are two discrete probability measures in P_p(R^d) that have at most n supports, the computational complexity of the Monte Carlo approximation of MSW is O(TLn log_2 n + TLdn), where O(TLn log_2 n) is for the computation of TL one-dimensional Wasserstein distances and O(TLdn) is the projection complexity for TL projections from d dimensions to one dimension. The memory complexity of MSW is O(TL(d + n)) for storing the projecting directions and the projections.

3.2 Specific Choices of the Projecting Distribution

Designing the projecting distribution σ(θ_1, . . . , θ_T) is the central task in using MSW since it controls the projecting behavior. For each choice of σ(θ_1, . . . , θ_T), we obtain a variant of MSW. Since we impose the first-order Markov structure σ(θ_1, . . . , θ_T) = σ_1(θ_1) Π_{t=2}^T σ_t(θ_t|θ_{t−1}), there are two types of distributions that we need to choose: the prior distribution σ_1(θ_1) and the transition distributions σ_t(θ_t|θ_{t−1}) for t = 2, . . . , T.

Prior distribution: The simplest choice of σ_1(θ_1), when we know nothing about the probability measures that we want to compare, is the uniform distribution over the unit hypersphere U(S^{d−1}). Moreover, with this choice the metricity of MSW is guaranteed regardless of the transition distribution. Therefore, the uniform distribution is the choice that we use in our experiments in the paper. It is worth noting that we could also use a distribution that is estimated from the two probability measures of interest [44]; however, this approach costs more computation.

Now, we discuss some specific choices of the transition distributions σ_t(θ_t|θ_{t−1}). Detailed algorithms for computing MSW with specific transitions are given in Appendix A.3.

Random Walk transition: Motivated by the Gaussian random walk in the MCMC literature [37], we use a version of the Gaussian on the unit hypersphere, which is the von Mises-Fisher (vMF) distribution [23]. Details about the vMF distribution, including its probability density function, its sampling procedure, and its properties, are given in Appendix A.2. In summary, the vMF distribution has two parameters: the location parameter ϵ ∈ S^{d−1}, which is the mean, and the concentration parameter κ ∈ R_+, which plays the role of the variance. Therefore, the transition distribution is σ_t(θ_t|θ_{t−1}) = vMF(θ_t|ϵ = θ_{t−1}, κ), where κ is a hyperparameter.

Orthogonal-based transition: Motivated by the orthogonality constraint in Max-K-SW and K-SW, we can design a transition distribution that gives us a projecting direction orthogonal to the previous one. In particular, given a previous projecting direction θ_{t−1}, we want θ_t such that ⟨θ_t, θ_{t−1}⟩ = 0, namely, we want to sample from the subsphere S^{d−1}_{θ_{t−1}} := {θ_t ∈ S^{d−1} : ⟨θ_t, θ_{t−1}⟩ = 0}. To the best of our knowledge, there is no explicit form of distribution (with known pdf) defined on that set. However, we can still sample from the uniform distribution over that set, U(S^{d−1}_{θ_{t−1}}), since that distribution can be constructed by pushing the uniform distribution over the whole unit hypersphere U(S^{d−1}) through the projection operator Prod_{θ_{t−1}}(θ_t) = Prod_{S^{d−1}}(θ_t − (⟨θ_{t−1}, θ_t⟩/⟨θ_{t−1}, θ_{t−1}⟩) θ_{t−1}), where Prod_{S^{d−1}}(θ) = θ/‖θ‖_2 is the normalizing operator. In greater detail, we first sample θ′_t ∼ U(S^{d−1}) and then set θ_t = Prod_{θ_{t−1}}(θ′_t). Therefore, in this case, we have σ_t(θ_t|θ_{t−1}) = U(S^{d−1}_{θ_{t−1}}) = Prod_{θ_{t−1}}♯U(S^{d−1}).

Input-awared transition: The above two transition distributions do not take into account the information of the two probability measures µ and ν that we want to compare. Hence, they could be inefficient at exploring good projecting directions for comparing µ and ν. Motivated by the projected sub-gradient ascent [9] update for finding the "max" projecting direction, we can design the transition distribution as follows: σ_t(θ_t|θ_{t−1}) = δ_{f(θ_{t−1}|η,µ,ν)}, where δ denotes the Dirac delta function and the transition function is f(θ_{t−1}|η, µ, ν) = Prod_{S^{d−1}}(θ_{t−1} + η∇_{θ_{t−1}} W_p(θ_{t−1}♯µ, θ_{t−1}♯ν)), where η > 0 is the stepsize hyperparameter. As this choice is a deterministic transition, it requires the prior distribution to be supported on the whole S^{d−1} to obtain the metricity of MSW. A choice that guarantees the metricity regardless of the prior distribution is the vMF distribution, namely, σ_t(θ_t|θ_{t−1}) = vMF(θ_t|ϵ = f(θ_{t−1}|η, µ, ν), κ). Thanks to the interpolation properties of the vMF distribution, lim_{κ→0} vMF(θ|ϵ, κ) = U(S^{d−1}) and lim_{κ→∞} vMF(θ|ϵ, κ) = δ_ϵ, the transition distribution can balance between heading to the "max" projecting direction and exploring the space of directions.
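To make the last two transitions concrete, below is a minimal NumPy sketch; the function names are ours, and for the input-awared update we use the projected W_2^2 (rather than W_p itself) so that a closed-form sub-gradient is available by holding the sorted matching fixed, which is a simplification of the update described above.

import numpy as np

def orthogonal_transition(theta_prev, X, Y, rng):
    # Orthogonal-based transition: uniform on the subsphere orthogonal to theta_prev.
    theta = rng.standard_normal(theta_prev.shape[0])
    theta /= np.linalg.norm(theta)                              # theta' ~ U(S^{d-1})
    theta -= np.dot(theta, theta_prev) * theta_prev             # remove the component along theta_prev
    return theta / np.linalg.norm(theta)

def input_aware_transition(theta_prev, X, Y, rng, eta=0.1):
    # Input-awared (Dirac) transition: one projected sub-gradient ascent step on W_2^2(theta#mu, theta#nu).
    n = X.shape[0]
    ix, iy = np.argsort(X @ theta_prev), np.argsort(Y @ theta_prev)
    diff = X[ix] @ theta_prev - Y[iy] @ theta_prev               # matched 1-D displacements
    grad = (2.0 / n) * (diff @ (X[ix] - Y[iy]))                  # sub-gradient w.r.t. theta, matching held fixed
    theta = theta_prev + eta * grad
    return theta / np.linalg.norm(theta)                         # project back onto S^{d-1}

In viMSW, the output of input_aware_transition would be used as the location parameter ϵ of a vMF distribution rather than taken deterministically.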
Stationarity of σ_T(θ_T): A natural question arises: what is the distribution σ_T(θ_T) = ∫ · · · ∫ σ(θ_1, . . . , θ_T) dθ_1 . . . dθ_{T−1} when T → ∞? The answer to this question depends on the choice of the projecting distribution discussed in Section 3.2. For the random walk and orthogonal-based transitions with the uniform prior, it is unclear whether a stationary distribution exists. For the deterministic input-awared transition with the uniform prior, we have lim_{T→∞} σ_T(θ_T) = Σ_{a=1}^A α_a δ_{θ*_a} with Σ_{a=1}^A α_a = 1, where θ*_a (a = 1, . . . , A) are the local maxima of the optimization problem max_{θ∈S^{d−1}} W_p(θ♯µ, θ♯ν) and α_a are some unknown weights that depend on µ and ν. This property is due to the fact that projected sub-gradient ascent guarantees convergence to local maxima [49]. For the input-awared vMF transition, it is also unclear whether a stationary distribution exists when the parameter κ < ∞.

3.3 Burning and Thinning

In the definition of MSW in Definition 1, we take the expectation over the joint distribution over all time steps σ(θ_{1:T}), which makes the time and memory complexities of the Monte Carlo approximation linear in T. Therefore, we can adapt the practical techniques of burning in and thinning from MCMC methods to reduce the number of random variables while still having a dependency structure.

Definition 2. For any p ≥ 1, T ≥ 1, dimension d ≥ 1, the number of burned steps M ≥ 0, and the number of thinned steps N ≥ 1, the burned thinned Markovian sliced Wasserstein distance of order p between two probability measures µ ∈ P_p(R^d) and ν ∈ P_p(R^d) is:

MSW_{p,T,N,M}^p(µ, ν) = E[ (N/(T − M)) Σ_{t=1}^{(T−M)/N} W_p^p(θ′_t♯µ, θ′_t♯ν) ],   (7)

where the expectation is under the projection distribution θ′_{1:(T−M)/N} ∼ σ(θ′_{1:(T−M)/N}), with σ(θ′_{1:(T−M)/N}) being the marginal distribution obtained by integrating out the random projecting directions at the time steps t such that t ≤ M or t%N ≠ 0 from σ(θ_1, . . . , θ_T) = σ_1(θ_1) Π_{t=2}^T σ_t(θ_t|θ_{t−1}), and σ_1(θ_1), σ_t(θ_t|θ_{t−1}) ∈ P(S^{d−1}) for all t = 1, . . . , T.

Similar to MSW, the burned-thinned MSW is also a metric on P_p(R^d) when there exists a time step t that is not burned, is not thinned, and for which θ_t is a random variable supported on the whole S^{d−1}.
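As a small illustration of the bookkeeping implied by Definition 2, the Python sketch below (the helper name is ours) lists the time steps of a chain of length T that survive burning and thinning, i.e., the directions whose projected Wasserstein distances are averaged in Equation (7).

def kept_time_steps(T, M, N):
    # Keep a time step if it comes after the M burned steps and is a multiple of the thinning period N.
    return [t for t in range(1, T + 1) if t > M and t % N == 0]

print(kept_time_steps(12, 4, 2))   # [6, 8, 10, 12]: (T - M) / N = 4 retained directions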
We discuss more details about the burned-thinned MSW, including its topological and statistical properties, in Appendix A.4. The Monte Carlo estimation of the burned-thinned MSW is given in Equation (9) in Appendix A.4. The approximation is the average of the projected Wasserstein distances at the directions θ_{lt} with t ≥ M and t%N = 0. By reducing the number of random projecting directions, the computational complexity of the burned-thinned MSW is improved to O(((T − M)Ln log_2 n + (T − M)Ldn)/N) for the random walk and orthogonal-based transitions. In the case of the input-awared transition, the computational complexity is still O(TLn log_2 n + TLdn), since the transition requires computing the gradient of the projected Wasserstein distance. However, in all cases, the memory complexity is reduced to O((T − M)L(d + n)/N).

Burned thinned MSW is a generalization of Max-SW: the empirical computation of Max-SW [14] with projected sub-gradient ascent and uniform random initialization can be viewed as a special case of the burned thinned MSW with the input-awared transition and with the number of burned samples M = T − 1. The difference is that Max-SW uses only one local maximum to compute the distance, while the burned thinned MSW uses L ≥ 1 maxima (which might not be unique).

More discussions: We refer the reader to Appendix A.5 for other related discussions, e.g., "K-SW is an autoregressive decomposition of the projecting distribution", "a sequential generalization of Max-K-SW", and related literature.

4 Experiments

In this section, we refer to MSW with the random walk transition as rMSW, MSW with the orthogonal-based transition as oMSW, and MSW with the input-awared transition as iMSW (using the Dirac distribution) and viMSW (using the vMF distribution). We compare MSW variants to SW, Max-SW, K-SW, and Max-K-SW in standard applications, e.g., gradient flows, color transfer, and deep generative models. Moreover, we also investigate the role of hyperparameters, e.g., the concentration parameter κ, the number of projections L, the number of time steps T, the number of burning steps M, and the number of thinning steps N, in these applications.

4.1 Gradient Flows and Color Transfer

Gradient flows: We follow the same setting as in [17]. The gradient flow models a distribution µ(t) flowing with time t along the gradient flow of a loss functional µ(t) → D(µ(t), ν) that drives it towards a target distribution ν [56], where D is a given distance between probability measures. In this setup, we consider ν = (1/n) Σ_{i=1}^n δ_{Y_i} as a fixed empirical target distribution and the model distribution µ(t) = (1/n) Σ_{i=1}^n δ_{X_i(t)}. Here, the model distribution is parameterized by a time-varying point cloud X(t) = (X_i(t))_{i=1}^n ∈ (R^d)^n. Starting from an initial condition at time t = 0, we integrate the ordinary differential equation Ẋ(t) = −n∇_{X(t)} D((1/n) Σ_{i=1}^n δ_{X_i(t)}, ν). In the experiments, we utilize the Euler scheme with 300 time steps and step size 10^{−3} to move the empirical distribution over colorful points µ(0) to the distribution over S-shape points (ν) (see Figure 1).

[Figure 1 panels, reporting W_2 (×10^{−2}) and cumulative time at steps 0, 200, and 300: SW (L=30): 25.3149 (0s), 0.5913 (1.07s), 0.0099 (1.55s); Max-SW (T=30): 25.3149 (0s), 0.1091 (2.37s), 0.0098 (3.48s); iMSW (L=2, T=5): 25.3149 (0s), 0.0483 (0.99s), 0.0064 (1.41s); viMSW (L=2, T=5, κ=50): 25.3149 (0s), 0.0512 (2.05s), 0.0043 (2.94s).]

Figure 1: The figures show the gradient flows from the empirical distribution over the color points to the empirical distribution over the S-shape points produced by different distances. The corresponding Wasserstein-2 distance between the empirical distribution at the current step and the S-shape distribution, and the computational time (in seconds) to reach that step, are reported at the top of the figure.
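As a schematic illustration of this Euler scheme, the NumPy sketch below uses a plain Monte Carlo SW_2^2 functional for D and random point clouds as stand-ins for the colorful and S-shape point sets; it is not the exact loss, data, or implementation of the experiment, and all names are ours.

import numpy as np

def sw2_grad_X(X, Y, L=30, rng=None):
    # Monte Carlo (sub-)gradient of SW_2^2(mu_X, nu_Y) with respect to the support points X,
    # holding the sorted matching fixed for each sampled direction.
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    grad = np.zeros_like(X)
    for _ in range(L):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)
        ix, iy = np.argsort(X @ theta), np.argsort(Y @ theta)
        diff = X[ix] @ theta - Y[iy] @ theta
        grad[ix] += (2.0 / n) * diff[:, None] * theta[None, :] / L
    return grad

rng = np.random.default_rng(0)
Y = rng.standard_normal((300, 2))                 # stand-in target point cloud
X = rng.standard_normal((300, 2)) + 4.0           # initial source point cloud
n, step = X.shape[0], 1e-3
for _ in range(300):                              # Euler discretization of dX/dt = -n * grad_X D(mu_X, nu)
    X -= step * n * sw2_grad_X(X, Y, rng=rng)

Swapping sw2_grad_X for the gradient of an MSW variant (e.g., with the input-awared transition) corresponds to the flows compared in Table 1.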
For Max-SW, Max-K-SW, iMSW, and viMSW, we use the learning rate parameter for the projecting directions η = 0.1. We report the Wasserstein-2 distances between the empirical distribution µ(t) and the target empirical distribution ν, and the computational time, in Table 1. We also give the visualization of some obtained flows in Figure 1. We refer the reader to Figure 5 in Appendix C.1 for the full visualization of all flows and detailed algorithms. We observe that iMSW gives better flows than SW, Max-SW, K-SW, and Max-K-SW; namely, the empirical distribution µ(t) (t = 300) with iMSW is closer to ν in terms of Wasserstein distance. More importantly, iMSW consumes less computation than its competitors since it can use a smaller number of projections due to more informative projecting directions. Furthermore, viMSW gives better final results than iMSW; however, the trade-off is doubling the computational time due to the sampling step of the vMF distribution. We also observe that rMSW does not give good results in terms of either Wasserstein-2 distance or computational time due to the random walk transition. In this case, K-SW is equivalent to our oMSW with T = K = 2 since the dimension is d = 2. We refer the reader to Appendix C.1 for more discussion.

Studies on hyperparameters: From Table 3 in Appendix C.1, increasing the number of projections L yields better performance for SW, K-SW, and iMSW. Similarly, increasing the number of time steps T also helps Max-SW and iMSW. Moreover, we find that for the same total number of projections, e.g., L = 5, T = 2 and L = 2, T = 5, a larger number of time steps T might lead to a better result for iMSW. For burning and thinning, we see that they help to reduce the computation while the performance stays comparable or even better when choosing the right values of M and N. Also, iMSW with M = T − 1 burning steps is still better than Max-SW with T time steps. For the concentration parameter κ in rMSW and viMSW, a larger value of κ leads to faster computation due to faster sampling. However, the performance of viMSW is not monotonic in κ.

[Figure 2 panels, each reporting the computational time and the Wasserstein-2 distance between the transferred and target color palettes: Source; SW (L=45), 37.97s, W_2 = 414.51; Max-SW (T=45), 57.48s, W_2 = 449.42; K-SW (L=15, K=3), 38.21s, W_2 = 411.74; Max-K-SW (K=3, T=15), 52.6s, W_2 = 479.43; rMSW (L=3, T=5, κ=50), 15.65s, W_2 = 444.35; oMSW (L=3, T=5), 14.17s, W_2 = 415.06; iMSW (L=3, T=5), 25.39s, W_2 = 16.97; viMSW (L=3, T=5, κ=50), 29.27s, W_2 = 16.48; Target.]

Figure 2: The figures show the source image, the target image, and the transferred images from different distances. The corresponding Wasserstein-2 distance between the empirical distribution over the transferred color palette and the empirical distribution over the target color palette, and the computational time (in seconds), are reported at the top of the figure.

Table 1: Summary of Wasserstein-2 scores and computational time in seconds (s) of different distances in gradient flow.
Distances | Wasserstein-2 (↓) | Time (↓)
SW (L=30) | 0.0099 × 10^{−2} | 1.55
Max-SW (T=30) | 0.0098 × 10^{−2} | 3.48
K-SW (L=15, K=2) | 0.0098 × 10^{−2} | 1.71
Max-K-SW (K=2, T=15) | 0.0146 × 10^{−2} | 3.35
rMSW (L=2, T=5, κ=50) (ours) | 0.0157 × 10^{−2} | 2.16
iMSW (L=2, T=5) (ours) | 0.0064 × 10^{−2} | 1.41
viMSW (L=2, T=5, κ=50) (ours) | 0.0043 × 10^{−2} | 2.94

Table 2: Summary of FID and IS scores of methods on CIFAR10 (32x32) and CelebA (64x64).

Method | CIFAR10 FID (↓) | CIFAR10 IS (↑) | CelebA FID (↓)
SW | 14.21±1.12 | 8.19±0.07 | 8.93±0.23
Max-SW | 14.38±0.08 | 8.15±0.02 | 8.94±0.35
K-SW | 15.24±0.02 | 8.15±0.03 | 9.41±0.16
Max-K-SW | 14.83±1.01 | 8.17±0.03 | 9.29±0.29
rMSW (ours) | 14.33±0.51 | 8.15±0.06 | 9.12±0.44
oMSW (ours) | 14.12±0.54 | 8.20±0.05 | 9.68±0.55
iMSW (ours) | 14.12±0.48 | 8.24±0.09 | 8.89±0.23
viMSW (ours) | 13.98±0.59 | 8.12±0.20 | 8.91±0.11

Color transfer: We aim to transfer the color palette (RGB) of a source image to the color palette (RGB) of a target image. Therefore, it is natural to build a gradient flow that starts from the empirical distribution over the color palette of the source image to the empirical distribution over the color palette of the target image. Since the values of the color palette lie in the set {0, . . . , 255}^3, we round the values of the supports of the empirical distribution at the final step of the Euler scheme, which uses 2000 steps and a 10^{−3} step size. Greater detail can be found in Appendix C.2. For Max-SW, Max-K-SW, iMSW, and viMSW, we use the learning rate parameter for the projecting directions η = 0.1. We show the transferred images, the corresponding Wasserstein-2 distances between the empirical distribution over the transferred color palette and the empirical distribution over the target color palette, and the corresponding computational times in Figure 2. From the figures, iMSW and viMSW give the best transferred images quantitatively and qualitatively. Moreover, oMSW and rMSW are comparable to SW, Max-SW, and K-SW, and are better than Max-K-SW while consuming much less computation. We refer the reader to Figure 6 in Appendix C.2 for the color palette visualization and to Figure 7 for another choice of the source and target images. We also conduct studies on hyperparameters in Appendix C.2, where we observe similar phenomena as in gradient flows.

[Figure 3 panels: FID score versus training epochs on CIFAR10, IS score versus training epochs on CIFAR10, and FID score versus training epochs on CelebA, each comparing SW, Max-SW, K-SW, Max-K-SW, rMSW, oMSW, iMSW, and viMSW.]

Figure 3: The FID scores over epochs of different distances.

4.2 Deep Generative Models

We follow the setup of sliced Wasserstein deep generative models in [15]. The full settings of the framework, including neural network architectures, training framework, and hyperparameters, are given in Appendix C.3. We compare MSW with previous baselines, including SW, Max-SW, K-SW, and Max-K-SW, on benchmark datasets: CIFAR10 (image size 32x32) [29] and CelebA (image size 64x64). The evaluation metrics are the FID score [21] and the Inception score (IS) [54] (except on CelebA, since the IS score poorly captures the perceptual quality of face images [21]). A notable change in computing Max-SW is that we do not use momentum in the optimization for the max projecting direction as in previous works [26, 42], which leads to a better result.
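To show how such a distance enters the training loop, below is a deliberately simplified, non-adversarial PyTorch sketch of a single generator update with a plain mini-batch SW_2^2 loss; the architectures, feature spaces, and adversarial components of the actual framework [15] are described in Appendix C.3 and are not reproduced here, and all names are ours.

import torch

def sw2_loss(x, y, L=100):
    # Differentiable Monte Carlo estimate of SW_2^2 between two mini-batches of equal size.
    theta = torch.randn(L, x.shape[1], device=x.device)
    theta = theta / theta.norm(dim=1, keepdim=True)
    px, py = x @ theta.T, y @ theta.T                         # (batch, L) projections
    return ((px.sort(dim=0).values - py.sort(dim=0).values) ** 2).mean()

G = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
opt = torch.optim.Adam(G.parameters(), lr=1e-3)
real = torch.randn(128, 2)                                    # stand-in for a real data mini-batch
fake = G(torch.randn(128, 16))                                # generated mini-batch
loss = sw2_loss(fake, real)
opt.zero_grad(); loss.backward(); opt.step()

An MSW-type loss is obtained by replacing the independent rows of theta with a Markov chain of directions, as described in Section 3.2.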
Summary of generative performance: We train generative models with SW (L ∈ {100, 1000, 10000}), Max-SW (T ∈ {10, 100, 1000}, learning rate for the projected gradient ascent algorithm η ∈ {0.01, 0.1}), K-SW (L ∈ {1, 10, 100}, K = 10), Max-K-SW (K = 10, η ∈ {0.01, 0.1}), MSW (all variants, L ∈ {10, 100}, T ∈ {10, 100}), iMSW and viMSW (η ∈ {0.01, 0.1}), and rMSW and viMSW (κ ∈ {10, 50}). We report the best FID score and the best IS score for each distance in Table 2. In addition, we show how the scores change with respect to the training epochs in Figure 3. Overall, we observe that viMSW and iMSW give the best generative performance in terms of final scores and fast convergence on CIFAR10 and CelebA. Other MSW variants, including rMSW and oMSW, give comparable results to the baselines. Since most of the computation in training deep generative models is spent updating the neural networks, the computational time of the different distances is almost the same. Furthermore, we show some generated images on CelebA in Figure 4, and all generated images on CIFAR10 and CelebA in Figure 8 and Figure 9 in Appendix C.3. We visually observe that the qualitative results are consistent with the quantitative results in Table 2.

[Figure 4 panels: SW; Max-K-SW; iMSW.]

Figure 4: Randomly generated images of distances on CelebA.

Studies on hyperparameters: We conduct experiments to understand the behavior of the burning and thinning technique, and to compare the roles of L and T, in Table 5 in Appendix C.3. Overall, burning (thinning) sometimes helps to improve the performance of training generative models. There is no clear sign of superiority between burning and thinning. We compare two settings with the same total number of projections (same complexities): L = 10, T = 100 and L = 100, T = 10. On CIFAR10, the first setting is better, while the reverse happens on CelebA.

5 Conclusion

We have introduced the Markovian sliced Wasserstein (MSW) distance, a novel family of sliced Wasserstein (SW) distances, which imposes a first-order Markov structure on projecting directions. We have investigated the theoretical properties of MSW, including topological properties, statistical properties, and computational properties. Moreover, we have discussed three types of transition distributions for MSW, namely random walk, orthogonal-based, and input-awared transitions. In addition, we have proposed a burning and thinning technique to improve the computational time and memory of MSW. Finally, we have compared MSW to previous variants of SW in gradient flows, color transfer, and generative modeling to show that MSW distances are both effective and efficient.

References

[1] J. Altschuler, J. Niles-Weed, and P. Rigollet. Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration. In Advances in Neural Information Processing Systems, pages 1964–1974, 2017. (Cited on page 1.)

[2] Y. Bai, B. Schmitzer, M. Thorpe, and S. Kolouri. Sliced optimal partial transport. arXiv preprint arXiv:2212.08049, 2022. (Cited on page 23.)

[3] V. I. Bogachev and M. A. S. Ruas. Measure theory, volume 1. Springer, 2007. (Cited on page 25.)

[4] C. Bonet, P. Berg, N. Courty, F. Septier, L.
Drumetz, and M.-T. Pham. Spherical sliced- +wasserstein. arXiv preprint arXiv:2206.08780, 2022. (Cited on page 2.) +13 + +[5] C. Bonet, N. Courty, F. Septier, and L. Drumetz. Efficient gradient flows in sliced-wasserstein +space. Transactions on Machine Learning Research, 2022. (Cited on page 2.) +[6] N. Bonneel and D. Coeurjolly. Spot: sliced partial optimal transport. ACM Transactions on +Graphics (TOG), 38(4):1–13, 2019. (Cited on page 23.) +[7] N. Bonneel, J. Rabin, G. Peyré, and H. Pfister. Sliced and Radon Wasserstein barycenters of +measures. Journal of Mathematical Imaging and Vision, 1(51):22–45, 2015. (Cited on pages 1, 2, +and 4.) +[8] N. Bonnotte. Unidimensional and evolution methods for optimal transportation. PhD thesis, +Paris 11, 2013. (Cited on pages 24 and 32.) +[9] S. Bubeck. Convex optimization: Algorithms and complexity. Foundations and Trends® in +Machine Learning, 8(3-4):231–357, 2015. (Cited on page 8.) +[10] X. Chen, Y. Yang, and Y. Li. Augmented sliced Wasserstein distances. International Conference +on Learning Representations, 2022. (Cited on page 23.) +[11] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in +Neural Information Processing Systems, pages 2292–2300, 2013. (Cited on page 1.) +[12] B. Dai and U. Seljak. Sliced iterative normalizing flows. In International Conference on Machine +Learning, pages 2352–2364. PMLR, 2021. (Cited on pages 2, 5, and 19.) +[13] T. R. Davidson, L. Falorsi, N. De Cao, T. Kipf, and J. M. Tomczak. Hyperspherical variational +auto-encoders. In 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, +pages 856–865. Association For Uncertainty in Artificial Intelligence (AUAI), 2018. (Cited on +page 21.) +[14] I. Deshpande, Y.-T. Hu, R. Sun, A. Pyrros, N. Siddiqui, S. Koyejo, Z. Zhao, D. Forsyth, and +A. G. Schwing. Max-sliced Wasserstein distance and its use for GANs. In Proceedings of the +IEEE Conference on Computer Vision and Pattern Recognition, pages 10648–10656, 2019. (Cited +on pages 2, 4, and 9.) +[15] I. Deshpande, Z. Zhang, and A. G. Schwing. Generative modeling using the sliced Wasserstein +distance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, +pages 3483–3491, 2018. (Cited on pages 2, 12, 34, and 35.) +[16] K. Fatras, Y. Zine, R. Flamary, R. Gribonval, and N. Courty. Learning with minibatch Wasser- +stein: asymptotic and gradient properties. In AISTATS 2020-23nd International Conference on +Artificial Intelligence and Statistics, volume 108, pages 1–20, 2020. (Cited on page 34.) +[17] J. Feydy, T. Séjourné, F.-X. Vialard, S.-i. Amari, A. Trouve, and G. Peyré. Interpolating +between optimal transport and MMD using Sinkhorn divergences. In The 22nd International +Conference on Artificial Intelligence and Statistics, pages 2681–2690, 2019. (Cited on page 9.) +[18] R. Flamary, N. Courty, A. Gramfort, M. Z. Alaya, A. Boisbunon, S. Chambon, L. Chapel, +A. Corenflos, K. Fatras, N. Fournier, L. Gautheron, N. T. Gayraud, H. Janati, A. Rakotoma- +monjy, I. Redko, A. Rolet, A. Schutz, V. Seguy, D. J. Sutherland, R. Tavenard, A. Tong, and +14 + +T. Vayer. Pot: Python optimal transport. Journal of Machine Learning Research, 22(78):1–8, +2021. (Cited on page 30.) +[19] N. Fournier and A. Guillin. On the rate of convergence in Wasserstein distance of the empirical +measure. Probability Theory and Related Fields, 162:707–738, 2015. (Cited on page 1.) +[20] A. Genevay, G. Peyré, and M. Cuturi. 
Learning generative models with Sinkhorn divergences. +In International Conference on Artificial Intelligence and Statistics, pages 1608–1617. PMLR, +2018. (Cited on page 34.) +[21] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs trained by a two +time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information +Processing Systems, pages 6626–6637, 2017. (Cited on page 12.) +[22] M. Huang, S. Ma, and L. Lai. A Riemannian block coordinate descent method for computing +the projection robust Wasserstein distance. In International Conference on Machine Learning, +pages 4446–4455. PMLR, 2021. (Cited on page 23.) +[23] P. E. Jupp and K. V. Mardia. Maximum likelihood estimators for the matrix von Mises-Fisher +and bingham distributions. The Annals of Statistics, 7(3):599–606, 1979. (Cited on pages 2, 7, +and 20.) +[24] O. Kallenberg and O. Kallenberg. Foundations of modern probability, volume 2. Springer, 1997. +(Cited on page 25.) +[25] D. P. Kingma and J. Ba. +Adam: A method for stochastic optimization. +arXiv preprint +arXiv:1412.6980, 2014. (Cited on page 36.) +[26] S. Kolouri, K. Nadjahi, U. Simsekli, R. Badeau, and G. Rohde. Generalized sliced Wasserstein +distances. In Advances in Neural Information Processing Systems, pages 261–272, 2019. (Cited +on pages 2, 12, 19, and 23.) +[27] S. Kolouri, P. E. Pope, C. E. Martin, and G. K. Rohde. Sliced Wasserstein auto-encoders. In +International Conference on Learning Representations, 2018. (Cited on page 2.) +[28] S. Kolouri, G. K. Rohde, and H. Hoffmann. Sliced Wasserstein distance for learning Gaussian +mixture models. In Proceedings of the IEEE Conference on Computer Vision and Pattern +Recognition, pages 3427–3436, 2018. (Cited on pages 2 and 24.) +[29] A. Krizhevsky, G. Hinton, et al. Learning multiple layers of features from tiny images. Master’s +thesis, Department of Computer Science, University of Toronto, 2009. (Cited on page 12.) +[30] C.-Y. Lee, T. Batra, M. H. Baig, and D. Ulbricht. Sliced Wasserstein discrepancy for unsuper- +vised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision +and Pattern Recognition, pages 10285–10295, 2019. (Cited on page 2.) +[31] J. Lezama, W. Chen, and Q. Qiu. Run-sort-rerun: Escaping batch size limitations in sliced +Wasserstein generative models. +In International Conference on Machine Learning, pages +6275–6285. PMLR, 2021. (Cited on page 24.) +15 + +[32] T. Lin, C. Fan, N. Ho, M. Cuturi, and M. Jordan. Projection robust Wasserstein distance and +Riemannian optimization. Advances in Neural Information Processing Systems, 33:9383–9397, +2020. (Cited on page 23.) +[33] T. Lin, N. Ho, X. Chen, M. Cuturi, and M. I. Jordan. Fixed-support Wasserstein barycenters: +Computational hardness and fast algorithm. In NeurIPS, pages 5368–5380, 2020. (Cited on +page 1.) +[34] T. Lin, N. Ho, and M. Jordan. On efficient optimal transport: An analysis of greedy and +accelerated mirror descent algorithms. In International Conference on Machine Learning, pages +3982–3991, 2019. (Cited on page 1.) +[35] T. Lin, N. Ho, and M. I. Jordan. On the efficiency of entropic regularized algorithms for optimal +transport. Journal of Machine Learning Research (JMLR), 23:1–42, 2022. (Cited on page 1.) +[36] A. Liutkus, U. Simsekli, S. Majewski, A. Durmus, and F.-R. Stöter. Sliced-Wasserstein flows: +Nonparametric generative modeling via optimal transport and diffusions. In International +Conference on Machine Learning, pages 4104–4113. PMLR, 2019. 
(Cited on page 2.) +[37] K. P. Murphy. Machine learning: a probabilistic perspective. MIT press, 2012. (Cited on page 7.) +[38] N. Naderializadeh, J. Comer, R. Andrews, H. Hoffmann, and S. Kolouri. Pooling by sliced- +Wasserstein embedding. Advances in Neural Information Processing Systems, 34, 2021. (Cited +on page 24.) +[39] K. Nadjahi, V. De Bortoli, A. Durmus, R. Badeau, and U. Şimşekli. Approximate Bayesian +computation with the sliced-Wasserstein distance. In ICASSP 2020-2020 IEEE International +Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5470–5474. IEEE, 2020. +(Cited on pages 2 and 24.) +[40] K. Nadjahi, A. Durmus, L. Chizat, S. Kolouri, S. Shahrampour, and U. Simsekli. Statistical +and topological properties of sliced probability divergences. Advances in Neural Information +Processing Systems, 33:20802–20812, 2020. (Cited on page 1.) +[41] K. Nadjahi, A. Durmus, U. Simsekli, and R. Badeau. Asymptotic guarantees for learning +generative models with the sliced-Wasserstein distance. In Advances in Neural Information +Processing Systems, pages 250–260, 2019. (Cited on pages 25 and 34.) +[42] K. Nguyen and N. Ho. Amortized projection optimization for sliced Wasserstein generative +models. Advances in Neural Information Processing Systems, 2022. (Cited on pages 2, 12, 19, +and 34.) +[43] K. Nguyen and N. Ho. Revisiting sliced Wasserstein on images: From vectorization to convolution. +Advances in Neural Information Processing Systems, 2022. (Cited on pages 2, 23, and 28.) +[44] K. Nguyen, N. Ho, T. Pham, and H. Bui. Distributional sliced-Wasserstein and applications to +generative modeling. In International Conference on Learning Representations, 2021. (Cited on +pages 2, 7, and 19.) +16 + +[45] K. Nguyen, D. Nguyen, Q. Nguyen, T. Pham, H. Bui, D. Phung, T. Le, and N. Ho. On +transportation of mini-batches: A hierarchical approach. In Proceedings of the 39th International +Conference on Machine Learning, 2022. (Cited on page 34.) +[46] K. Nguyen, D. Nguyen, T. Pham, and N. Ho. Improving mini-batch optimal transport via +partial transportation. In Proceedings of the 39th International Conference on Machine Learning, +2022. (Cited on page 34.) +[47] K. Nguyen, S. Nguyen, N. Ho, T. Pham, and H. Bui. Improving relational regularized au- +toencoders with spherical sliced fused Gromov-Wasserstein. In International Conference on +Learning Representations, 2021. (Cited on pages 2, 19, and 21.) +[48] K. Nguyen, T. Ren, H. Nguyen, L. Rout, T. Nguyen, and N. Ho. Hierarchical sliced wasserstein +distance. arXiv preprint arXiv:2209.13570, 2022. (Cited on page 23.) +[49] S. Nietert, R. Sadhu, Z. Goldfeld, and K. Kato. Statistical, robustness, and computational +guarantees for sliced wasserstein distances. Advances in Neural Information Processing Systems, +2022. (Cited on pages 1, 2, 4, and 8.) +[50] F.-P. Paty and M. Cuturi. Subspace robust Wasserstein distances. In International Conference +on Machine Learning, pages 5072–5081, 2019. (Cited on page 23.) +[51] G. Peyré and M. Cuturi. Computational optimal transport: With applications to data science. +Foundations and Trends® in Machine Learning, 11(5-6):355–607, 2019. (Cited on page 3.) +[52] G. Peyré and M. Cuturi. Computational optimal transport, 2020. (Cited on page 1.) +[53] M. Rowland, J. Hron, Y. Tang, K. Choromanski, T. Sarlos, and A. Weller. Orthogonal estimation +of Wasserstein distances. In The 22nd International Conference on Artificial Intelligence and +Statistics, pages 186–195. PMLR, 2019. 
(Cited on pages 2, 4, and 19.) +[54] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved +techniques for training GANs. Advances in Neural Information Processing Systems, 29, 2016. +(Cited on page 12.) +[55] T. Salimans, H. Zhang, A. Radford, and D. Metaxas. Improving GANs using optimal transport. +In International Conference on Learning Representations, 2018. (Cited on page 34.) +[56] F. Santambrogio. Optimal transport for applied mathematicians. Birkäuser, NY, 55(58-63):94, +2015. (Cited on page 10.) +[57] M. Sommerfeld and A. Munk. Inference for empirical wasserstein distances on finite spaces. +Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(1):219–238, 2018. +(Cited on page 34.) +[58] S. Sra. Directional statistics in machine learning: a brief review. arXiv preprint arXiv:1605.00316, +2016. (Cited on page 21.) +[59] N. M. Temme. Special functions: An introduction to the classical functions of mathematical +physics. John Wiley & Sons, 2011. (Cited on page 20.) +17 + +[60] C. Villani. Optimal transport: Old and New. Springer, 2008. (Cited on pages 1 and 3.) +[61] C. Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, +2008. (Cited on pages 25 and 27.) +[62] M. J. Wainwright. +High-dimensional statistics: A non-asymptotic viewpoint. +Cambridge +University Press, 2019. (Cited on page 29.) +[63] J. Wu, Z. Huang, D. Acharya, W. Li, J. Thoma, D. P. Paudel, and L. V. Gool. Sliced Wasserstein +generative models. In Proceedings of the IEEE Conference on Computer Vision and Pattern +Recognition, pages 3713–3722, 2019. (Cited on pages 2 and 24.) +[64] M. Yi and S. Liu. Sliced Wasserstein variational inference. In Fourth Symposium on Advances +in Approximate Bayesian Inference, 2021. (Cited on pages 2 and 24.) +18 + +Supplement to “Markovian Sliced Wasserstein Distances: Beyond +Independent Projections" +In this supplementary material, we present additional materials in Appendix A. In particular, we +provide additional background on sliced Wasserstein variants in Appendix A.1, background on von +Mises-Fisher distribution in Appendix A.2, algorithms for computing Markovian sliced Wasserstein +distances in Appendix A.3, additional information about burned thinned MSW in Appendix A.4, +and discussion on related works in Appendix A.5. We then provide skipped proofs in the main paper +in Appendix B. Additional experiments are presented in Appendix C. +A +Additional Materials +A.1 +Background on Sliced Wasserstein Variants +We review computational aspects of sliced Wasserstein variants. +Computation of Max sliced Wasserstein distance: We demonstrate the empirical estimation +of Max-SW via projected sub-gradient ascent algorithm in Algorithm 1. The initialization step for +ˆθ0 is rarely discussed in previous works. Normally, ˆθ0 is randomly initialized by drawing from the +uniform distribution over the unit-hypersphere. Many previous works [26, 44, 47, 42] use Adam +update instead of the standard gradient ascent update for Max-SW. In this work, we find out that +using the standard gradient ascent update is more stable and effective. +Algorithm 1 Max sliced Wasserstein distance +Input: Probability measures µ, ν, learning rate η, the order p, and the number of iterations T. +Initialize ˆθ0. +for t = 1 to T − 1 do +ˆθt = ˆθt−1 + η · ∇ˆθt−1Wp(ˆθt−1♯µ, ˆθt−1♯ν) +ˆθt = +ˆθt +||ˆθt||2 +end for +Return: Wp(ˆθT ♯µ, ˆθT ♯ν) +K sliced Wasserstein distance: We first review the Gram–Schmidt process in Algorithm 2. 
With the Gram–Schmidt process, sampling from U(V_K(R^d)) can be done by sampling θ_1, . . . , θ_K i.i.d. from N(0, I_d) and then applying the Gram–Schmidt process to them. Therefore, we present the computation of the K sliced Wasserstein distance in Algorithm 3. We recall that the original work on K-SW [53] uses only one set of orthogonal projecting directions; here, we generalize the original work by using L sets of orthogonal projecting directions.

Max K sliced Wasserstein distance: We now present the empirical estimation of Max-K-SW via a projected sub-gradient ascent algorithm in Algorithm 4. This algorithm is first discussed in the original paper on Max-K-SW [12]. The optimization of Max-K-SW could also be solved with Riemannian optimization, since the Stiefel manifold is a Riemannian manifold; however, to the best of our knowledge, Riemannian optimization has not been applied to Max-K-SW.

Algorithm 2 Gram–Schmidt process
Input: K vectors θ_1, . . . , θ_K
θ_1 = θ_1/‖θ_1‖_2
for k = 2 to K do
  for i = 1 to k − 1 do
    θ_k = θ_k − (⟨θ_i, θ_k⟩/⟨θ_i, θ_i⟩) θ_i
  end for
  θ_k = θ_k/‖θ_k‖_2
end for
Return: θ_1, . . . , θ_K

Algorithm 3 K sliced Wasserstein distance
Input: Probability measures µ, ν, the dimension d, the order p, the number of projections L, the number of orthogonal projections K.
for l = 1 to L do
  Draw θ_{l1}, . . . , θ_{lK} i.i.d. from N(0, I_d).
  θ_{l1}, . . . , θ_{lK} = Gram–Schmidt(θ_{l1}, . . . , θ_{lK})
end for
Return: ( (1/(LK)) Σ_{l=1}^L Σ_{k=1}^K W_p^p(θ_{lk}♯µ, θ_{lk}♯ν) )^{1/p}

A.2 Von Mises-Fisher Distribution

We first start with the definition of the von Mises-Fisher (vMF) distribution.

Definition 3. The von Mises–Fisher (vMF) distribution [23] is a probability distribution on the unit hypersphere S^{d−1} with density function:

f(x|ϵ, κ) := C_d(κ) exp(κ ϵ^⊤ x),   (8)

where ϵ ∈ S^{d−1} is the location vector, κ ≥ 0 is the concentration parameter, and C_d(κ) := κ^{d/2−1} / ((2π)^{d/2} I_{d/2−1}(κ)) is the normalization constant. Here, I_v is the modified Bessel function of the first kind at order v [59].

Algorithm 4 Max-K sliced Wasserstein distance
Input: Probability measures µ, ν, learning rate η, the dimension d, the order p, the number of iterations T > 1, and the number of orthogonal projections K > 1.
Initialize θ̂_{01}, . . . , θ̂_{0K} to be orthogonal.
for t = 1 to T − 1 do
  for k = 1 to K do
    θ̂_{tk} = θ̂_{(t−1)k} + η · ∇_{θ̂_{(t−1)k}} W_p(θ̂_{(t−1)k}♯µ, θ̂_{(t−1)k}♯ν)
  end for
  θ̂_{t1}, . . . , θ̂_{tK} = Gram-Schmidt(θ̂_{t1}, . . . , θ̂_{tK})
end for
Return: ( (1/K) Σ_{k=1}^K W_p^p(θ̂_{Tk}♯µ, θ̂_{Tk}♯ν) )^{1/p}

Algorithm 5 Sampling from the vMF distribution
Input: location ϵ, concentration κ, dimension d, unit vector e_1 = (1, 0, . . . , 0)
Draw v ∼ U(S^{d−2})
b ← (−2κ + √(4κ^2 + (d−1)^2)) / (d−1),  a ← ((d−1) + 2κ + √(4κ^2 + (d−1)^2)) / 4,  m ← 4ab/(1+b) − (d−1) log(d−1)
repeat
  Draw ψ ∼ Beta((d−1)/2, (d−1)/2)
  ω ← h(ψ, κ) = (1 − (1+b)ψ) / (1 − (1−b)ψ)
  t ← 2ab / (1 − (1−b)ψ)
  Draw u ∼ U([0, 1])
until (d−1) log(t) − t + m ≥ log(u)
h_1 ← (ω, √(1 − ω^2) v^⊤)^⊤
ϵ′ ← e_1 − ϵ
u ← ϵ′/‖ϵ′‖_2
U ← I − 2uu^⊤
Output: U h_1

The vMF distribution is a continuous distribution; its mass concentrates around the mean ϵ, and its density decreases as x moves away from ϵ. When κ → 0, the vMF distribution converges in distribution to U(S^{d−1}), and when κ → ∞, it converges in distribution to the Dirac distribution centered at ϵ [58].

Sampling: We review the sampling process in Algorithm 5 [13, 47]. The sampling process of the vMF distribution is based on a rejection sampling procedure. It is worth noting that the sampling algorithm performs reparameterization implicitly; however, we only use the algorithm to obtain random samples without estimating stochastic gradients.
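For reference, the following is a direct NumPy transcription of Algorithm 5 (the function name is ours); it returns a single draw from vMF(ϵ, κ) and assumes κ > 0 and d ≥ 2.

import numpy as np

def sample_vmf(eps, kappa, rng):
    d = eps.shape[0]
    b = (-2 * kappa + np.sqrt(4 * kappa**2 + (d - 1)**2)) / (d - 1)
    a = (d - 1 + 2 * kappa + np.sqrt(4 * kappa**2 + (d - 1)**2)) / 4
    m = 4 * a * b / (1 + b) - (d - 1) * np.log(d - 1)
    while True:                                   # rejection loop
        psi = rng.beta(0.5 * (d - 1), 0.5 * (d - 1))
        omega = (1 - (1 + b) * psi) / (1 - (1 - b) * psi)
        t = 2 * a * b / (1 - (1 - b) * psi)
        if (d - 1) * np.log(t) - t + m >= np.log(rng.uniform()):
            break
    v = rng.standard_normal(d - 1)
    v /= np.linalg.norm(v)                        # v ~ U(S^{d-2})
    h = np.concatenate(([omega], np.sqrt(max(1 - omega**2, 0.0)) * v))
    u = np.zeros(d); u[0] = 1.0; u -= eps         # Householder direction e_1 - eps
    if np.linalg.norm(u) < 1e-12:
        return h                                  # eps already equals e_1
    u /= np.linalg.norm(u)
    return h - 2 * np.dot(u, h) * u               # (I - 2 u u^T) h maps e_1 to eps

For example, the random walk transition can be realized as theta_next = sample_vmf(theta_prev, kappa=50.0, rng=np.random.default_rng(0)).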
A.3 Algorithms for Computing Markovian Sliced Wasserstein Distances

We first start with the general computation of MSW in Algorithm 6. For the random walk transition in rMSW, we replace the line θ_{lt} ∼ σ_t(θ_t|θ_{l(t−1)}) by θ_{lt} ∼ vMF(θ_t|ϵ = θ_{l(t−1)}, κ) (Algorithm 5) with the concentration hyperparameter κ. For the orthogonal-based transition in oMSW, we use θ_{lt} ∼ U(S^{d−1}_{θ_{l(t−1)}}) by first sampling θ′_{lt} ∼ U(S^{d−1}), then setting θ_{lt} = θ′_{lt} − (⟨θ_{l(t−1)}, θ′_{lt}⟩/⟨θ_{l(t−1)}, θ_{l(t−1)}⟩) θ_{l(t−1)}, and then normalizing θ_{lt} = θ_{lt}/‖θ_{lt}‖_2. For the deterministic input-awared transition, iMSW, we set θ_{lt} = θ_{l(t−1)} + η∇_{θ_{l(t−1)}} W_p(θ_{l(t−1)}♯µ, θ_{l(t−1)}♯ν) and then normalize θ_{lt} = θ_{lt}/‖θ_{lt}‖_2. For the probabilistic input-awared transition, viMSW, θ_{lt} ∼ vMF(θ_t|ϵ = Prod_{S^{d−1}}(θ′_{lt}), κ) with θ′_{lt} = θ_{l(t−1)} + η∇_{θ_{l(t−1)}} W_p(θ_{l(t−1)}♯µ, θ_{l(t−1)}♯ν).

Algorithm 6 Markovian sliced Wasserstein distance
Input: Probability measures µ, ν, the dimension d, the order p, the number of projections L, and the number of time steps T.
for l = 1 to L do
  Draw θ_{l0} ∼ σ_1(θ_0)
  for t = 1 to T − 1 do
    Draw θ_{lt} ∼ σ_t(θ_t|θ_{l(t−1)})
  end for
end for
Return: ( (1/(LT)) Σ_{l=1}^L Σ_{t=0}^{T−1} W_p^p(θ_{lt}♯µ, θ_{lt}♯ν) )^{1/p}

A.4 Burned Thinned Markovian Sliced Wasserstein Distance

We continue the discussion on the burned thinned MSW from Section 3.3. We first start with the Monte Carlo estimation of the burned thinned MSW.

Monte Carlo Estimation: We sample θ_{11}, . . . , θ_{L1} ∼ σ_1(θ_1) for L ≥ 1, then we sample θ_{lt} ∼ σ_t(θ_t|θ_{l(t−1)}) for t = 2, . . . , T and l = 1, . . . , L. We then obtain samples θ′_{lt} by filtering out the time steps t with t < M or t%N ≠ 0 from the set {θ_{lt}} for l = 1, . . . , L and t = 1, . . . , T. The Monte Carlo approximation of the burned-thinned Markovian sliced Wasserstein distance is:

\widehat{MSW}_{p,T,N,M}(µ, ν) = ( (N/(L(T − M))) Σ_{l=1}^L Σ_{t=1}^{(T−M)/N} W_p^p(θ′_{lt}♯µ, θ′_{lt}♯ν) )^{1/p}.   (9)

Theoretical properties: We first state the following assumption. A2: Given T > M ≥ 0 and N ≥ 1, the prior distribution σ_1(θ_1) and the transition distributions σ_t(θ_t|θ_{t−1}) are chosen such that there exists a time step t with t ≥ M and t%N = 0 whose marginal σ_t(θ_t) = ∫ σ(θ_1, . . . , θ_T) dθ_{−t}, with θ_{−t} := (θ_{t′})_{t′=1,...,T, t′≠t}, is supported on the whole unit hypersphere. Assumption A2 is easy to satisfy by using a vMF transition, e.g., in the random walk transition and the probabilistic input-awared transition. From this assumption, we can derive theoretical properties of the burned-thinned MSW, including topological properties and statistical complexity.

Proposition 4. For any p ≥ 1, T ≥ 1, M ≥ 0, N ≥ 1, and dimension d ≥ 1, if A2 holds, the burned thinned Markovian sliced Wasserstein distance MSW_{p,T,N,M}(·, ·) is a valid metric on the space of probability measures P_p(R^d), namely, it satisfies (i) non-negativity, (ii) symmetry, (iii) the triangle inequality, and (iv) identity.

The proof of Proposition 4 follows directly the proof of Theorem 1 in Appendix B.1.

Proposition 5 (Weak Convergence). For any p ≥ 1, T ≥ 1, M ≥ 0, N ≥ 1, and dimension d ≥ 1, if A2 holds, the convergence of probability measures in P_p(R^d) under the burned thinned Markovian sliced Wasserstein distance MSW_{p,T,N,M}(·, ·) implies weak convergence of probability measures and vice versa.

The proof of Proposition 5 follows directly the proof of Theorem 2 in Appendix B.2.

Proposition 6. For any p ≥ 1 and dimension d ≥ 1, for any T ≥ 1, M ≥ 0, N ≥ 1 and µ, ν ∈ P_p(R^d), MSW_{p,T,N,M}(µ, ν) ≤ Max-SW_p(µ, ν) ≤ W_p(µ, ν).
The proof of Proposition 6 follows directly the proof of Proposition 1 in Appendix B.3.

Proposition 7 (Sample Complexity). Let X_1, X_2, . . . , X_n be i.i.d. samples from a probability measure µ supported on a compact set of R^d. We denote the empirical measure µ_n = (1/n) Σ_{i=1}^n δ_{X_i}. Then, for any p ≥ 1, T ≥ 1, M ≥ 0, and N ≥ 1, there exists a universal constant C > 0 such that

E[MSW_{p,T,N,M}(µ_n, µ)] ≤ C √((d + 1) log n / n),

where the outer expectation is taken with respect to the data X_1, X_2, . . . , X_n.

The proof of Proposition 7 follows directly the proof of Proposition 2 in Appendix B.4.

Proposition 8 (Monte Carlo error). For any p ≥ 1, T ≥ 1, M ≥ 0, N ≥ 1, dimension d ≥ 1, and µ, ν ∈ P_p(R^d), we have:

E|\widehat{MSW}_{p,T,N,M}^p(µ, ν) − MSW_{p,T,N,M}^p(µ, ν)| ≤ (√N / √(TL(T − M))) Σ_{l=1}^L Var[ Σ_{t=1}^{(T−M)/N} W_p^p(θ′_t♯µ, θ′_t♯ν) ]^{1/2},

where the variance is with respect to σ(θ′_1, . . . , θ′_{(T−M)/N}).

The proof of Proposition 8 follows directly the proof of Proposition 3 in Appendix B.5.

A.5 Discussions on Related Works

K-SW is an autoregressive decomposition: In MSW, we assume that the joint distribution over projecting directions has the first-order Markov structure σ(θ_1, . . . , θ_T) = σ_1(θ_1) Π_{t=2}^T σ_t(θ_t|θ_{t−1}). However, we could also consider the full autoregressive decomposition σ(θ_1, . . . , θ_T) = σ_1(θ_1) Π_{t=2}^T σ_t(θ_t|θ_1, . . . , θ_{t−1}). Letting T = K in K-SW, the transition distribution used in K-SW is σ_t(θ_t|θ_1, . . . , θ_{t−1}) = Gram-Schmidt_{θ_1,...,θ_{t−1}}♯U(S^{d−1}), where Gram-Schmidt_{θ_1,...,θ_{t−1}}(θ_t) denotes the Gram-Schmidt update applied to θ_t given θ_1, . . . , θ_{t−1}.

Generalization of Max-K-SW: Similar to Max-SW, we can derive a Markovian-based K-sliced Wasserstein distance that generalizes the idea of the projected gradient ascent update in Max-K-SW. However, this distance considers transitions on the Stiefel manifold instead of the unit hypersphere; hence, it will be more computationally expensive. Moreover, orthogonality might not be a good constraint. Therefore, this generalization of Max-K-SW might not have many advantages.

Beyond the projected sub-gradient ascent update: In the input-awared transition for MSW, we utilize the projected sub-gradient update as the transition function to create a new projecting direction. Therefore, we could use other optimization techniques, such as momentum, adaptive step sizes, and so on, to create the transition function. We leave the investigation of this direction to future work.

Applications to other sliced Wasserstein variants: The Markovian approach can be applied to other variants of sliced Wasserstein distances, e.g., generalized sliced Wasserstein [26], augmented sliced Wasserstein [10], projected robust Wasserstein (PRW) [50, 32, 22] (k > 1 dimensional projections), convolution sliced Wasserstein [43], sliced partial optimal transport [6, 2], hierarchical sliced Wasserstein [48], and so on.

Markovian sliced Wasserstein distances in other applications: We can apply MSW to the setting in [31], which is an implementation technique that utilizes both RAM and GPU memory for training sliced Wasserstein generative models. MSW can also replace the sliced Wasserstein distance in pooling in [38]. Similarly, MSW can be used in applications where the sliced Wasserstein distance is used, e.g., clustering [28], Bayesian inference [39, 64], domain adaptation [63], and so on.
+B +Proofs +B.1 +Proof of Theorem 1 +(i), (ii): the MSW is an expectation of the one-dimensional Wasserstein distance hence the non- +negativity and symmetry properties of the MSW follow directly by the non-negativity and symmetry +of the Wasserstein distance. +(iii) From the definition of MSW in Definition 1, given three probability measures µ1, µ2, µ3 ∈ Pp(Rd) +we have: +MSWp,T (µ1, µ3) = +� +E(θ1:T )∼σ(θ1:T ) +� +1 +T +T +� +t=1 +W p +p (θt♯µ1, θt♯µ3) +�� 1 +p +≤ +� +E(θ1:T )∼σ(θ1:T ) +� +1 +T +T +� +t=1 +(Wp (θt♯µ1, θt♯µ2) + Wp (θt♯µ2, θt♯µ3))p +�� 1 +p +≤ +� +E(θ1:T )∼σ(θ1:T ) +� +1 +T +T +� +t=1 +W p +p (θt♯µ1, θt♯µ2) +�� 1 +p ++ +� +E(θ1:T )∼σ(θ1:T ) +� +1 +T +T +� +t=1 +W p +p (θt♯µ2, θt♯µ3) +�� 1 +p += MSWp,T (µ1, µ2) + MSWp,T (µ2, µ3), +where the first inequality is due to the triangle inequality of Wasserstein distance and the second +inequality is due to the Minkowski inequality. We complete the triangle inequality proof. +(iv) We need to show that MSWp,T (µ, ν) = 0 if and only if µ = ν. First, from the definition of MSW, +we obtain directly µ = ν implies MSWp,T (µ, ν) = 0. For the reverse direction, we use the same proof +technique in [8]. If MSWp,T (µ, ν) = 0, we have +� +S(d−1)⊗T 1 +T +�T +t=1 Wp (θt♯µ, θt♯ν) dσ(θ1:T ) = 0. If A1 +holds, namely, the prior distribution σ1(θ1) is supported on all the unit-hypersphere or exists a +transition distribution σt(θt|θt−1) is supported on all the unit-hypersphere, we have Wp(θ♯µ, θ♯ν) = 0 +for all θ ∈ Sd−1 where σ denotes the prior or the transition distribution that satisfies the assumption +A1. From the identity property of the Wasserstein distance, we obtain θ♯µ = θ♯ν for σ-a.e θ ∈ Sd−1. +Therefore, for any t ∈ R and θ ∈ Sd−1, we have: +F[µ](tθ) = +� +Rd e−it⟨θ,x⟩dµ(x) = +� +R +e−itzdθ♯µ(z) = F[θ♯µ](t) += F[θ♯ν](t) = +� +R +e−itzdθ♯ν(z) = +� +Rd e−it⟨θ,x⟩dν(x) = F[ν](tθ), +24 + +where F[γ](w) = +� +Rd′ e−i⟨w,x⟩dγ(x) denotes the Fourier transform of γ ∈ P(Rd′). By the injectivity +of the Fourier transform, we obtain µ = ν which concludes the proof. +B.2 +Proof of Theorem 2 +Our goal is to show that for any sequence of probability measures (µk)k∈N and µ in Pp(Rd), +limk→+∞ MSWp,T (µk, µ) = 0 if and only if for any continuous and bounded function f : Rd → R, +limk→+∞ +� +f dµk = +� +f dµ. The proof follows the techniques in [41]. We first state the following +lemma. +Lemma 1. For any p ≥ 1, T ≥ 1, and dimension d ≥ 1, if A1 holds and a sequence of probability +measures (µk)k∈N satisfies limk→+∞ MSWp,T (µk, µ) = 0 with µ in Pp(Rd), there exists an increasing +function φ : N → N such that the subsequence +� +µφ(k) +� +k∈N converges weakly to µ. +Proof. We are given that limk→+∞ MSWp,T (µk, µ) = 0, therefore +limk→∞ +� +S(d−1)⊗T 1 +T +�T +t=1 Wp (θt♯µk, θt♯µ) dσ(θ1:T ) = 0. If A1 holds, namely, the prior distribution +σ1(θ1) is supported on all the unit-hypersphere or exists a transition distribution σt(θt|θt−1) is +supported on all the unit-hypersphere, we have +lim +k→∞ +� +Sd−1 Wp (θ♯µk, θ♯µ) dσ(θ) = 0, +where σ denotes the prior or the transition distribution that satisfies the assumption A1. From Theo- +rem 2.2.5 in [3], there exists an increasing function φ : N → N such that limk→∞ Wp(θ♯µφ(k), θ♯ν) = 0 +for σ-a.e θ ∈ Sd−1. Since the Wasserstein distance of order p implies weak convergence in Pp(Rd) [61], +� +θ♯µφ(k) +� +k∈N converges weakly to θ♯µ for σ-a.e θ ∈ Sd−1. 
+Let Φµ = +� +Rd ei⟨v,w⟩dµ(w) be the characteristic function of µ ∈ Pp(Rd), we have the weak conver- +gence implies the convergence of characteristic function (Theorem 4.3 [24]): limk→∞ Φθ♯µφ(k)(s) = +Φθ♯µ(s), +∀s ∈ R, for σ-a.e θ ∈ Sd−1. Therefore, limk→∞ Φµφ(k)(z) = Φµ(z), for almost most every +z ∈ Rd. +For any γ > 0 and a continuous function f : Rd → R with compact support, we denote fγ(x) = +f ∗ gγ(x) = +� +2πγ2�−d/2 � +Rd f(x − z) exp +� +−∥z∥2/ +� +2γ2�� +dz where gγ is the density function of +25 + +N(0, γId). We have: +� +Rd fγ(z)dµφ(k)(z) = +� +Rd +� +Rd f(w)gγ(z − w)dw dµφ(k)(z) += +� +Rd +� +Rd f(w) +� +2πγ2�−d/2 exp(−||z − w||2/(2γ2))dw dµφ(k)(z) += +� +2πγ2�−d/2 � +Rd +� +Rd f(w) +� +Rd ei⟨z−w,x⟩g1/γ(x)dx dw dµφ(k)(z) += +� +2πγ2�−d/2 � +Rd +� +Rd f(w) +� +Rd e−i⟨w,x⟩ei⟨z,x⟩g1/γ(x)dx dw dµφ(k)(z) += +� +2πγ2�−d/2 � +Rd +� +Rd f(w)e−i⟨w,x⟩g1/γ(x) +� +Rd ei⟨z,x⟩ dµφ(k)(z)dx dw += +� +2πγ2�−d/2 � +Rd +� +Rd f(w)e−i⟨w,x⟩g1/γ(x)Φµφ(k)(x)dx dw += +� +2πγ2�−d/2 � +Rd F[f](x)g1/γ(x)Φµφ(k)(x)dx, +where the third equality is due to the fact that +� +Rd ei⟨z−w,x⟩g1/γ(x)dx = exp(−||z − w||2/(2γ2)) and +F[f](w) = +� +Rd′ f(x)e−i⟨w,x⟩dx denotes the Fourier transform of the bounded function f. Similarly, +we have +� +Rd fγ(z)dµ(z) = +� +Rd +� +Rd f(w)gγ(z − w)dw dµ(z) += +� +Rd +� +Rd f(w) +� +2πγ2�−d/2 exp(−||z − w||2/(2γ2))dw dµ(z) += +� +2πγ2�−d/2 � +Rd +� +Rd f(w) +� +Rd ei⟨z−w,x⟩g1/γ(x)dx dw dµ(z) += +� +2πγ2�−d/2 � +Rd +� +Rd f(w) +� +Rd e−i⟨w,x⟩ei⟨z,x⟩g1/γ(x)dx dw dµ(z) += +� +2πγ2�−d/2 � +Rd +� +Rd f(w)e−i⟨w,x⟩g1/γ(x) +� +Rd ei⟨z,x⟩ dµ(z)dx dw += +� +2πγ2�−d/2 � +Rd +� +Rd f(w)e−i⟨w,x⟩g1/γ(x)Φµ(x)dx dw += +� +2πγ2�−d/2 � +Rd F[f](x)g1/γ(x)Φµ(x)dx. +Since f is assumed to have compact support, F[f] exists and is bounded by +� +Rd |f(w)|dw < +∞. +Hence, for any k ∈ R and x ∈ Rd, we have +���F[f](x)g1/γ(x)Φµφ(k)(x) +��� ≤ g1/γ(x) +� +Rd |f(w)|dw and +��F[f](x)g1/γ(x)Φµ(x) +�� ≤ g1/γ(x) +� +Rd |f(w)|dw. Using the proved result of limk→∞ Φµφ(k)(z) = Φµ(z) +and Lebesgue’s Dominated Convergence Therefore, we obtain +lim +k→∞ +� +Rd fγ(z)dµφ(k)(z) = lim +k→∞ +� +2πγ2�−d/2 � +Rd F[f](x)g1/γ(x)Φµφ(k)(x)dx += +� +2πγ2�−d/2 � +Rd F[f](x)g1/γ(x)Φµφ(k)(x)dx += +� +Rd fγ(z)dµ(z). +26 + +Moreover, we have: +lim +γ→0 lim sup +k→+∞ +���� +� +Rd f(z)dµφ(k)(z) − +� +Rd f(z)dµ(z) +���� +≤ lim +γ→0 lim sup +k→+∞ +� +2 sup +z∈Rd |f(z) − fγ(z)| + +���� +� +Rd fγ(z)dµφ(k)(z) − +� +Rd fγ(z)dµ(z) +���� +� += lim +γ→0 2 sup +z∈Rd |f(z) − fγ(z)| = 0, +which implies +� +µφ(k) +� +k∈N converges weakly to µ. +We now continue the proof of Theorem 2. We first show that if limk→∞ MSWp,T (µk, µ) = 0, (µk)k∈N +converges weakly to µ. We consider a sequence +� +µφ(k) +� +k∈N such that limk→∞ MSWp,T (µk, µ) = 0 +and we suppose +� +µφ(k) +� +k∈N does not converge weakly to µ. Therefore, let dP be the Lévy-Prokhorov +metric, limk→∞ dP(µk,µ) ̸= 0 that implies there exists ε > 0 and a subsequence +� +µψ(k) +� +k∈N with an +increasing function ψ : N → N such that for any k ∈ N: dP(µψ(k), µ) ≥ ε. However, we have +MSWp,T (µ, ν) = +� +E(θ1:T )∼σ(θ1:T ) +� +1 +T +T +� +t=1 +W p +p (θt♯µ, θt♯ν) +�� 1 +p +≥ E(θ1:T )∼σ(θ1:T ) +� +1 +T +T +� +t=1 +Wp (θt♯µ, θt♯ν) +� +≥ E(θ1:T )∼σ(θ1:T ) +� +1 +T +T +� +t=1 +W1 (θt♯µ, θt♯ν) +� += MSW1,T (µ, ν), +by the Holder inequality with µ, ν ∈ Pp(Rd). Therefore, limk→∞ MSW1,T (µψ(k), µ) = 0 which +implies that there exists s a subsequence +� +µφ(ψ(k)) +� +k∈N with an increasing function φ : N → N such +that +� +µφ(ψ(k)) +� +k∈N converges weakly to µ by Lemma 1. 
Hence, limk→∞ dP +� +µφ(ψ(k)), µ +� += 0 which +contradicts our assumption. We conclude that if limk→∞ MSWp,T (µk, µ) = 0, (µk)k∈N converges +weakly to µ. +Now, we show that if (µk)k∈N converges weakly to µ, limk→∞ MSWp,T (µk, µ) = 0. By the con- +tinuous mapping theorem, we obtain (θ♯µk)k∈N converges weakly to θ♯µ for any θ ∈ Sd−1. Since +the weak convergence implies the convergence under the Wasserstein distance [61], we obtain +limk→∞ Wp(θ♯µk, µ) = 0. Moreover, the Wasserstein distance is also bounded, hence the bounded +convergence theorem: +lim +k→∞ MSWp +p,T (µk, µ) = E(θ1:T )∼σ(θ1:T ) +� +1 +T +T +� +t=1 +W p +p (θt♯µk, θt♯µ) +� += E(θ1:T )∼σ(θ1:T ) +� +1 +T +T +� +t=1 +0 +� += 0. +By the continuous mapping theorem with function x → x1/p, we obtain limk→∞ MSWp,T (µk, µ) → 0 +which completes the proof. +27 + +B.3 +Proof of Proposition 1 +(i) We recall the definition of Max-SW: +Max-SWp(µ, ν) = max +θ∈Sd−1 Wp(θ♯µ, θ♯ν). +Let θ∗ = argmaxθ∈Sd−1Wp(θ♯µ, θ♯ν), from Definition 1, for any p ≥ 1, T ≥ 1, dimension d ≥ 1, and +µ, ν ∈ Pp(Rd) we have: +MSWp,T (µ, ν) = +� +E(θ1:T )∼σ(θ1:T ) +� +1 +T +T +� +t=1 +W p +p (θt♯µ, θt♯ν) +�� 1 +p +≤ 1 +T +T +� +t=1 +W p +p (θ∗♯µ, θ∗♯ν) = W p +p (θ∗♯µ, θ∗♯ν) = Max-SWp(µ, ν). +Furthermore, by applying the Cauchy-Schwartz inequality, we have: +Max-SWp +p(µ, ν) = max +θ∈Sd−1 +� +inf +π∈Π(µ,ν) +� +Rd +���θ⊤x − θ⊤y +��� +p +dπ(x, y) +� +≤ max +θ∈Sd−1 +� +inf +π∈Π(µ,ν) +� +Rd×Rd ∥θ∥p∥x − y∥pdπ(x, y) +� += +inf +π∈Π(µ,ν) +� +Rd×Rd ∥θ∥p∥x − y∥pdπ(x, y) += +inf +π∈Π(µ,ν) +� +Rd×Rd ∥x − y∥pdπ(x, y) += W p +p (µ, ν), +which completes the proof. +(ii) This result can be directly obtained from the definitions of MSW and SW. +B.4 +Proof of Proposition 2 +In this proof, we denote Θ ⊂ Rd as the compact set of the probability measure P. From Proposition 1, +we find that +E[MSWp,T (µn, µ)] ≤ E [Max-SWp(µn, µ)] . +Therefore, the proposition follows as long as we can demonstrate that +E[Max-SWp(µn, µ)] ≤ C +� +(d + 1) log2 n/n +where C > 0 is some universal constant and the outer expectation is taken with respect to the data. +The proof for this result follows from the proof of Proposition 3 in [43]. Here, we provide the proof +for the completeness. By defining Fn,θ and Fθ as the cumulative distributions of θ♯µn and θ♯µ, the +28 + +closed-form expression of the Wasserstein distance in one dimension leads to the following equations +and inequalities: +Max-SWp +p(µn, µ) = max +θ∈Sd−1 +� 1 +0 +|F −1 +n,θ(u) − F −1 +θ +(u)|pdu += +max +θ∈Rd:∥θ∥=1 +� 1 +0 +|F −1 +n,θ(u) − F −1 +θ +(u)|pdu +≤ diam(Θ) +max +θ∈Rd:∥θ∥≤1 |Fn,θ(x) − Fθ(x)|p. +We can check that +max +θ∈Rd:∥θ∥≤1 |Fn,θ(x) − Fθ(x)| = sup +B∈B +|Pn(B) − P(B)|, +where B is the set of half-spaces {z ∈ Rd : θ⊤z ≤ x} for all θ ∈ Rd such that ∥θ∥ ≤ 1. From [62], +we can show that the Vapnik-Chervonenkis (VC) dimension of B is at most d + 1. Therefore, the +following inequality holds: +sup +B∈B +|Pn(B) − P(B)| ≤ +� +32 +n [(d + 1) log2(n + 1) + log2(8/δ)] +with probability at least 1 − δ. Putting the above results together leads to +E[Max-SWp(µn, µ)] ≤ C +� +(d + 1) log2 n/n, +where C > 0 is some universal constant. +As a consequence, we obtain the conclusion of the +proposition. 
B.5 Proof of Proposition 3

For any p ≥ 1, T ≥ 1, dimension d ≥ 1, and µ, ν ∈ P_p(R^d), using Hölder's inequality, we have:

E|\widehat{MSW}^p_{p,T}(µ, ν) − MSW^p_{p,T}(µ, ν)|
≤ ( E|\widehat{MSW}^p_{p,T}(µ, ν) − MSW^p_{p,T}(µ, ν)|^2 )^{1/2}
= ( E| (1/(TL)) Σ_{t=1}^{T} Σ_{l=1}^{L} W_p^p(θ_{tl}♯µ, θ_{tl}♯ν) − E_{θ_{1:T}∼σ(θ_{1:T})}[ (1/T) Σ_{t=1}^{T} W_p^p(θ_t♯µ, θ_t♯ν) ] |^2 )^{1/2}
= Var( (1/(TL)) Σ_{t=1}^{T} Σ_{l=1}^{L} W_p^p(θ_{tl}♯µ, θ_{tl}♯ν) )^{1/2}
= (1/√(TL)) Σ_{l=1}^{L} Var( Σ_{t=1}^{T} W_p^p(θ_t♯µ, θ_t♯ν) )^{1/2},

which completes the proof.

C Additional Experiments

In this section, we present the details of the experimental frameworks and additional experiments on gradient flows, color transfer, and deep generative modeling that are not in the main paper.

C.1 Gradient Flows

Framework: We have discussed the gradient flow framework in detail in Section 4.1 of the main paper. Here, we summarize the Euler scheme for solving the gradient flow in Algorithm 7.

Algorithm 7 Gradient flow with the Euler scheme
Input: the start distribution µ = (1/n) Σ_{i=1}^{n} δ_{X_i}, the target distribution ν = (1/n) Σ_{i=1}^{n} δ_{Y_i}, the number of Euler iterations T (abuse of notation), the Euler step size η (abuse of notation), and a metric D.
for t = 1 to T do
  X = X − n · η ∇_X D(P_X, P_Y)
end for
Output: µ = (1/n) Σ_{i=1}^{n} δ_{X_i}
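As an illustration of Algorithm 7, below is a short PyTorch sketch of the Euler scheme on a two-dimensional toy problem. For simplicity, the metric D here is a plain Monte Carlo sliced Wasserstein estimate with independent uniform directions; swapping in an MSW estimator would only change how the projecting directions are generated. The function name sw2_pp, the toy target, and the step size and iteration count are illustrative assumptions on our part, not the settings used in the paper.

```python
import torch

def sw2_pp(X, Y, L=50):
    # Monte Carlo estimate of SW_2^2 between two equal-size empirical measures,
    # with L independent uniform projecting directions.
    theta = torch.randn(L, X.shape[1])
    theta = theta / theta.norm(dim=1, keepdim=True)
    x_proj, _ = torch.sort(X @ theta.T, dim=0)   # (n, L) sorted projections
    y_proj, _ = torch.sort(Y @ theta.T, dim=0)
    return ((x_proj - y_proj) ** 2).mean()

torch.manual_seed(0)
n, d, eta, steps = 100, 2, 0.01, 300
X = torch.randn(n, d, requires_grad=True)                 # flowing source particles
Y = 0.5 * torch.randn(n, d) + torch.tensor([3.0, 0.0])    # fixed target particles

for _ in range(steps):
    loss = sw2_pp(X, Y)
    grad, = torch.autograd.grad(loss, X)
    with torch.no_grad():
        X -= n * eta * grad        # Euler update: X = X - n * eta * grad_X D(P_X, P_Y)

print(float(sw2_pp(X, Y)))         # the sliced distance should have decreased substantially
```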
Visualization of gradient flows: We show the visualization of the gradient flows obtained from all distances (Table 1) in Figure 5. Overall, we observe that the quality of the flows is consistent with the quantitative Wasserstein-2 scores, which are computed using [18]. From the figures, we see that iMSW and viMSW help the flows converge very fast; namely, the Wasserstein-2 scores of iMSW and viMSW at step 200 are much lower than those of the other distances. oMSW with L = 5, T = 2 achieves results comparable to SW, K-SW, and Max-SW while being faster. The random walk transition does not work well in rMSW with the concentration parameter κ = 50.

Studies on hyper-parameters: We run gradient flows with different values of the hyper-parameters and report the Wasserstein-2 scores and computational time in Table 3. From the table and Figure 5, we see that SW with L = 10 is worse than oMSW, iMSW, and viMSW with L = 2, T = 5 (10 total projections). Increasing the number of projections to 100 makes SW better; however, its Wasserstein-2 score is still higher than the scores of iMSW and viMSW while its computational time is larger. Similarly, Max-(K)-SW with T = 100 is better than Max-(K)-SW with T = 5 and T = 10; however, it is still worse than iMSW and viMSW in terms of both computation and performance. For burning and thinning, we see that the technique can improve the computation considerably and, more importantly, does not reduce the performance much. For iMSW, increasing L and T leads to a better flow. For the same total number of projections, e.g., 10, the setting L = 2, T = 5 is better than L = 5, T = 2. viMSW usually performs better than iMSW; however, its computation is more expensive due to the sampling complexity of the vMF distribution. We vary the concentration parameter κ ∈ {10, 50, 100} and find that κ = 50 is the best. Hence, a good balance between heading towards the "max" projecting direction and exploring the space of projecting directions appears to be the best strategy.

Table 3: Wasserstein-2 scores and computational time in seconds (s) of different distances in the gradient flow application.

Distance | Wasserstein-2 (↓) | Time in s (↓)
SW (L=10) | 0.0113 × 10^{-2} | 0.85
SW (L=100) | 0.0096 × 10^{-2} | 4.32
Max-SW (T=5) | 0.0231 × 10^{-2} | 1.02
Max-SW (T=100) | 0.0083 × 10^{-2} | 10.46
K-SW (L=5, K=2) | 0.0104 × 10^{-2} | 0.92
K-SW (L=20, K=2) | 0.0096 × 10^{-2} | 1.97
Max-K-SW (K=2, T=5) | 0.0152 × 10^{-2} | 1.41
Max-K-SW (K=2, T=100) | 0.0083 × 10^{-2} | 10.46
rMSW (L=2, T=5, κ=10) | 0.0109 × 10^{-2} | 2.11
rMSW (L=2, T=5, κ=100) | 0.0141 × 10^{-2} | 17.98
iMSW (L=1, T=5) | 0.0109 × 10^{-2} | 1.07
iMSW (L=5, T=5) | 0.0055 × 10^{-2} | 2.44
iMSW (L=2, T=10) | 0.0052 × 10^{-2} | 2.79
iMSW (L=5, T=2) | 0.0071 × 10^{-2} | 1.14
iMSW (L=2, T=5, M=4) | 0.0101 × 10^{-2} | 1.2
iMSW (L=2, T=5, M=2) | 0.0055 × 10^{-2} | 1.25
iMSW (L=2, T=5, M=0, N=2) | 0.0066 × 10^{-2} | 1.28
iMSW (L=2, T=5, M=2, N=2) | 0.0072 × 10^{-2} | 1.19
viMSW (L=2, T=5, κ=10) | 0.0052 × 10^{-2} | 3.12
viMSW (L=2, T=5, κ=100) | 0.0053 × 10^{-2} | 2.76

[Figure 5: gradient-flow panels for SW (L=30), Max-SW (T=30), K-SW (L=15, K=2), Max-K-SW (K=2, T=15), rMSW (L=2, T=5, κ=50), oMSW (L=5, T=2), iMSW (L=2, T=5), and viMSW (L=2, T=5, κ=50) at steps 0, 200, and 300. The Wasserstein-2 values at step 300 are 0.0099, 0.0098, 0.0098, 0.0146, 0.0157, 0.0104, 0.0064, and 0.0043 (× 10^{-2}), reached in 1.55s, 3.48s, 1.71s, 3.35s, 2.16s, 0.87s, 1.41s, and 2.94s, respectively.]
Figure 5: The figures show the gradient flows from the empirical distribution over the color points to the empirical distribution over the S-shape points produced by the different distances. The Wasserstein-2 distance between the empirical distribution at the current step and the S-shape distribution, and the computational time (in seconds) to reach that step, are reported at the top of each figure.

C.2 Color Transfer

Framework: In our experiments, we first compress the color palettes of the source image and the target image to 3000 colors by using K-Means clustering. After that, the color transfer is conducted by using Algorithm 8, which is a modified version of the gradient flow algorithm, since a color palette contains only non-negative integers in {0, …, 255}. The flow can be seen as an incomplete transportation map that maps the source color palette to a palette that is close to the target color palette. This is quite similar to the iterative distribution transfer algorithm [8]; however, the construction of the iterative map is different.

Algorithm 8 Color Transfer
Input: source color palette X ∈ {0, …, 255}^{n×3}, target color palette Y ∈ {0, …, 255}^{n×3}, the number of Euler iterations T (abuse of notation), the Euler step size η (abuse of notation), and a metric D.
for t = 1 to T do
  X = X − n · η ∇_X D(P_X, P_Y)
end for
X = round(X, {0, …, 255})
Output: X

Visualization of transferred images: We show the source image, the target image, and the corresponding transferred images from the different distances in Figure 6 and Figure 7. The color palettes are given below the corresponding images. The Wasserstein-2 distance between the empirical distribution over the transferred color palette and the empirical distribution over the target color palette, and the computational time (in seconds), are reported at the top of each figure. First, we observe that the qualitative comparison (transferred images and color palettes) is consistent with the Wasserstein-2 scores. iMSW and viMSW produce transferred images that are closer to the target image in terms of color than the other distances, and, more importantly, they are faster than the other distances. Max-SW and Max-K-SW do not perform well in this application; namely, they are slow and give high Wasserstein-2 distances. oMSW is comparable to SW and K-SW while being faster.

[Figure 6: source image, target image, and transferred images with reported scores: SW (L=45) 37.97s, W2 = 414.51; Max-SW (T=45) 57.48s, W2 = 449.42; K-SW (L=15, K=3) 38.21s, W2 = 411.74; Max-K-SW (K=3, T=15) 52.6s, W2 = 479.43; rMSW (L=3, T=5, κ=50) 15.65s, W2 = 444.35; oMSW (L=3, T=5) 14.17s, W2 = 415.06; iMSW (L=3, T=5) 25.39s, W2 = 16.97; viMSW (L=3, T=5, κ=50) 29.27s, W2 = 16.48.]
Figure 6: The figures show the source image, the target image, and the transferred images from the different distances. The Wasserstein-2 distance between the empirical distribution over the transferred color palette and the empirical distribution over the target color palette, and the computational time (in seconds), are reported at the top of each figure. The color palettes are given below the corresponding images.

[Figure 7: a second source/target pair with reported scores: SW (L=45) 38.0s, W2 = 68.09; Max-SW (T=45) 58.17s, W2 = 207.12; K-SW (L=15, K=3) 38.34s, W2 = 67.88; Max-K-SW (K=3, T=15) 52.72s, W2 = 65.52; rMSW (L=3, T=5, κ=50) 15.63s, W2 = 69.4; oMSW (L=3, T=5) 13.48s, W2 = 68.51; iMSW (L=3, T=5) 25.56s, W2 = 22.35; viMSW (L=3, T=5, κ=50) 28.42s, W2 = 22.1.]
Figure 7: The figures show the source image, the target image, and the transferred images from the different distances, with the same reporting convention as in Figure 6.

Studies on hyper-parameters: In addition to the results in Figure 6, we run color transfer with other settings of the distances and report them in Table 4. From the table, increasing the number of projections L leads to better results for SW and K-SW. However, they are still worse than iMSW and viMSW with a smaller number of projections.

Table 4: Wasserstein-2 scores and computational time in seconds (s) of different distances in the color transfer application.

Distance | Wasserstein-2 (↓) | Time in s (↓)
SW (L=45) | 414.51 | 37.97
SW (L=15) | 421.5 | 12.96
Max-SW (T=45) | 449.42 | 57.48
Max-SW (T=15) | 450.37 | 19.03
K-SW (L=15, K=3) | 411.74 | 38.21
K-SW (L=5, K=3) | 413.16 | 14.2
Max-K-SW (K=3, T=15) | 479.43 | 52.6
Max-K-SW (K=3, T=5) | 510.43 | 17.46
rMSW (L=3, T=5, κ=50) | 444.35 | 15.65
rMSW (L=3, T=5, κ=100) | 446.35 | 16.14
oMSW (L=3, T=5) | 415.06 | 14.17
oMSW (L=3, T=15) | 414.29 | 38.51
iMSW (L=3, T=5) | 16.97 | 25.39
iMSW (L=3, T=15) | 15.23 | 79.47
iMSW (L=5, T=5) | 21.63 | 39.82
iMSW (L=5, T=3) | 24.02 | 22.27
iMSW (L=3, T=15, M=14) | 26.23 | 48.08
iMSW (L=3, T=15, M=10) | 18.67 | 55.55
iMSW (L=3, T=15, M=0, N=2) | 16.6 | 62.66
iMSW (L=3, T=15, M=10, N=2) | 19.2 | 50.1
viMSW (L=3, T=5, κ=50) | 16.48 | 29.27
viMSW (L=3, T=5, κ=100) | 16.49 | 28.52
Similarly, increasing T helps Max-SW, Max-K-SW, and iMSW better. +As discussed in the main paper, the burning and thinning technique improves the computation and +sometimes enhances the performance. +C.3 +Deep Generative Models +Framework: We follow the generative modeling framework from [20, 42]. Here, we state an adaptive +formulation of the framework. We are given a data distribution µ ∈ P(X) through its random +samples (data). Our goal is to estimate a parametric distribution νφ that belongs to a family of +distributions indexed by parameters φ in a parameter space Φ. Deep generative modeling is interested +in constructing νφ via pushforward measure. In particular, νφ is implicitly represented by pushing +forward a random noise ν0 ∈ P(Z) e.g., standard multivariable Gaussian, through a parametric +function Gφ : Z → X (a neural network with weights φ). To estimate φ (νφ), the expected distance +estimator [57, 41] is used: +argminφ∈ΦE(X,Z)∼µ⊗m⊗ν⊗m +0 +[D(PX, PGφ(Z))], +where m ≥ 1, D can be any distance on space of probability measures, µ⊗ is the product measures, +namely, X = (x1, . . . , xm) ∼ µ⊗ is equivalent to xi ∼ µ for i = 1, . . . , m, and PX = +1 +m +�m +i=1 δxi. +Similarly, Z = (z1, . . . , zm) with zi ∼ ν0 for i = 1, . . . , m, and Gφ(Z) is the output of the neural +work given the input mini-batch Z. +By using Wasserstein distance, sliced Wasserstein distance, and their variants as the distance D, +we obtain the corresponding estimators. These estimators are sometimes known as mini-batch +Wasserstein losses [16, 45, 46] However, applying directly those estimators to natural image data +cannot give perceptually good results [20, 15]. The reason is that Wasserstein distance, sliced +Wasserstein distances, and their variants require a ground metric as input e.g., L2, however, those +ground metrics are not meaningful on images. Therefore, previous works propose using a function +that maps the original data space X to a feature space F where the L2 norm is meaningful [55]. We +denote the feature function Fγ : X → F. Now the estimator becomes: +argminφ∈ΦE(X,Z)∼µ⊗m⊗ν⊗m +0 +[D(PFγ(X), PFγ(Gφ(Z)))]. +34 + +The above optimization can be solved by stochastic gradient descent algorithm with the following +stochastic gradient estimator: +∇φE(X,Z)∼µ⊗m⊗ν⊗m +0 +[D(PFγ(X), PFγ(Gφ(Z)))] = E(X,Z)∼µ⊗m⊗ν⊗m +0 +[∇φD(PFγ(X), PFγ(Gφ(Z)))] +≈ 1 +K +K +� +k=1 +∇φD(PFγ(Xk), PFγ(Gφ(Zk))), +where X1, . . . , XK are drawn i.i.d from µ⊗m and Z1, . . . , ZK are drawn i.i.d from ν⊗m +0 +. There are +several ways to estimate the feature function Fγ in practice. In our experiments, we use the following +objective [15]: +min +γ +� +EX∼µ⊗m[min(0, −1 + H(Fγ(X)))] + EZ∼ν⊗m +0 +[min(0, −1 − H(Fγ(Gφ(Z)))))] +� +, +where H : F → R. The above optimization problem is also solved by the stochastic gradient descent +algorithm with the following gradient estimator: +∇γ +� +EX∼µ⊗m[min(0, −1 + H(Fγ(X)))] + EZ∼ν⊗m +0 +[min(0, −1 − H(Fγ(Gφ(Z)))))] +� += EX∼µ⊗m[∇γ min(0, −1 + H(Fγ(X)))] + EZ∼ν⊗m +0 +[∇γ min(0, −1 − H(Fγ(Gφ(Z)))))] +≈ 1 +K +K +� +k=1 +[∇γ min(0, −1 + H(Fγ(Xk)))] + 1 +K +K +� +k=1 +[∇γ min(0, −1 − H(Fγ(Gφ(Zk)))))], +where X1, . . . , XK are drawn i.i.d from µ⊗m and Z1, . . . , ZK are drawn i.i.d from ν⊗m +0 +. +Settings: We use the following neural networks for Gφ and Fγ: +• CIFAR10: +– Gφ: z ∈ R128(∼ ν0 : N(0, 1)) → 4 × 4 × 256(Dense, Linear) → ResBlock up 256 → +ResBlock up 256 → ResBlock up 256 → BN, ReLU, → 3 × 3 conv, 3 Tanh . 
– Fγ1: x ∈ [−1, 1]^{32×32×3} → ResBlock down 128 → ResBlock down 128 → ResBlock down 128 → ResBlock 128 → ResBlock 128.
– Fγ2: x ∈ R^{128×8×8} → ReLU → Global sum pooling (128) → 1 (Spectral normalization).
– Fγ(x) = (Fγ1(x), Fγ2(Fγ1(x))) and H(Fγ(x)) = Fγ2(Fγ1(x)).

For all datasets, the number of training iterations is set to 50000. We update the generator Gφ every 5 iterations, while we update the feature function Fγ every iteration. The mini-batch size m is set to 128 for all datasets. The learning rate for Gφ and Fγ is 0.0002, and the optimizer is Adam [25] with parameters (β1, β2) = (0, 0.9). We use the order p = 2 for all sliced Wasserstein variants. We use 50000 random samples from the estimated generative models Gφ for computing the FID scores and the Inception scores. In evaluating the FID scores, we use all training samples for computing the statistics of the datasets; we evaluate the scores based on the code from https://github.com/GongXinyuu/sngan.pytorch.

Generated images: We show images generated on CIFAR10 and CelebA by the generative models trained with the different distances in Figure 8 and Figure 9, respectively. Overall, the generated images are visually consistent with the quantitative FID scores in Table 2.

[Figure 8: random generated images on CIFAR10 from models trained with SW, Max-SW, K-SW, Max-K-SW, rMSW, oMSW, iMSW, and viMSW.]
[Figure 9: random generated images on CelebA from models trained with SW, Max-SW, K-SW, Max-K-SW, rMSW, oMSW, iMSW, and viMSW.]

Table 5: Summary of FID and IS scores of iMSW settings on CIFAR10 (32x32) and CelebA (64x64).

Method | CIFAR10 FID (↓) | CIFAR10 IS (↑) | CelebA FID (↓)
iMSW (L=100, T=10, M=0, N=1) | 14.61±0.72 | 8.15±0.15 | 9.73±0.33
iMSW (L=100, T=10, M=9, N=1) | 14.16±1.11 | 8.17±0.07 | 9.10±0.34
iMSW (L=100, T=10, M=5, N=1) | 13.93±0.21 | 8.15±0.05 | 9.49±0.52
iMSW (L=100, T=10, M=0, N=2) | 14.33±0.32 | 8.15±0.06 | 8.99±0.64
iMSW (L=10, T=100, M=0, N=1) | 14.26±0.74 | 8.15±0.07 | 8.89±0.23
iMSW (L=10, T=100, M=99, N=1) | 14.50±0.70 | 8.12±0.08 | 9.55±0.35
iMSW (L=10, T=100, M=50, N=1) | 14.41±0.58 | 8.12±0.06 | 9.46±0.73
iMSW (L=10, T=100, M=0, N=2) | 14.65±0.01 | 8.11±0.06 | 9.49±0.39

Studies on hyperparameters: We run some additional settings of iMSW to investigate the performance of the burning and thinning technique and to compare the roles of L and T in Table 5. First, we see that burning and thinning help to improve the FID and IS scores on CIFAR10 and CelebA in the settings with L = 100, T = 10. It is worth noting that the original purpose of burning and thinning is to reduce the computational and memory complexity; the side benefit of improved performance requires more investigation, which we leave for future work. In addition, we find that, for the same total number of 1000 projections without burning and thinning, the setting L = 10, T = 100 is better than the setting L = 100, T = 10 on CIFAR10, while the reverse holds on CelebA. Therefore, on different datasets, hyperparameter tuning might be required to find the best setting of the number of projections L and the number of time steps T.
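To make the mini-batch training procedure of Section C.3 concrete, below is a toy PyTorch sketch of the expected-distance estimator: the feature function is trained with a standard hinge critic objective and the generator is updated with a sliced distance between features of real and generated mini-batches. The tiny MLPs and the two-dimensional stand-in data distribution are illustrative assumptions on our part (the paper uses the ResNet generator and feature function listed above), and the plain SW estimate stands in for whichever sliced variant plays the role of D; the update schedule, mini-batch size, and Adam settings mirror the ones stated above.

```python
import torch
import torch.nn as nn

d_data, d_noise, m = 2, 8, 128
G = nn.Sequential(nn.Linear(d_noise, 64), nn.ReLU(), nn.Linear(64, d_data))   # generator G_phi
F = nn.Sequential(nn.Linear(d_data, 64), nn.ReLU(), nn.Linear(64, 16))        # feature function F_gamma
H = nn.Linear(16, 1)                                                          # H used in the hinge objective
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.0, 0.9))
opt_f = torch.optim.Adam(list(F.parameters()) + list(H.parameters()), lr=2e-4, betas=(0.0, 0.9))

def sw2_pp(A, B, L=100):
    # Monte Carlo SW_2^2 between two equal-size mini-batches of feature vectors.
    theta = torch.randn(L, A.shape[1])
    theta = theta / theta.norm(dim=1, keepdim=True)
    a, _ = torch.sort(A @ theta.T, dim=0)
    b, _ = torch.sort(B @ theta.T, dim=0)
    return ((a - b) ** 2).mean()

def sample_real():
    # Stand-in for the data distribution mu (a shifted 2-D Gaussian).
    return torch.randn(m, d_data) + torch.tensor([4.0, 0.0])

for it in range(2000):
    X, Z = sample_real(), torch.randn(m, d_noise)
    # Feature-function step (every iteration): standard hinge critic loss.
    f_loss = (torch.relu(1.0 - H(F(X))).mean()
              + torch.relu(1.0 + H(F(G(Z).detach()))).mean())
    opt_f.zero_grad(); f_loss.backward(); opt_f.step()
    # Generator step (every 5 iterations): sliced distance between real and generated features.
    if it % 5 == 0:
        g_loss = sw2_pp(F(X).detach(), F(G(torch.randn(m, d_noise))))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```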
diff --git a/5NE2T4oBgHgl3EQfOgaA/content/tmp_files/load_file.txt b/5NE2T4oBgHgl3EQfOgaA/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f4f4acd487224653050ac000b40724a7e37e6e09
--- /dev/null
+++ b/5NE2T4oBgHgl3EQfOgaA/content/tmp_files/load_file.txt
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In short, SW takes the average of Wasserstein distances between corresponding pairs of one-dimensional projected measures as the distance between the two original measures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Because of that, the SW has a low computational complexity compared to the conventional Wasserstein distance due to the closed-form solution of optimal transport in one dimension.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' When the probability measures have at most n supports, the computational complexity of the SW is only O(n log n).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' This complexity is much lower than the computational complexity O(n3 log n) of Wasserstein distance and the complexity O(n2) [1, 34, 35, 33] of entropic Wasserstein [11] (Sinkhorn divergence).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Moreover, the memory complexity of the SW which is O(n) which is lower than the memory complexity O(n2) of the Wasserstein (Sinkhorn) distance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' The reason is that SW does not need to store the cost matrix between supports which cost O(n2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' An additional appealing property of the SW is that it does not suffer from the curse of dimensionality, namely, its sample complexity is O(n−1/2) [40, 49] compared to O(n−1/d) [19] of the Wasserstein distance (d is the number of dimensions).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Due to the scalability, the SW has been applied to almost all applications where the Wasserstein distance is used.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' For example, we refer to some applications of the SW which are generative model- 1Code for the experiments will be published at https://github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='com/UT-Austin-Data-Science-Group/MSW.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' 1 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='03749v1 [stat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='ML] 10 Jan 2023 ing [63, 15, 27, 42], domain adaptation [30], clustering [28], approximate Bayesian computation [39], gradient flows [36, 5], and variational inference [64].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Moreover, there are many attempts to improve the SW.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' The generalized sliced Wasserstein (GSW) distance that uses non-linear projection is proposed in [26].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Distributional sliced Wasserstein distance is proposed in [44, 47] by replacing the uniform distribution on the projecting directions in SW with an estimated distribution that puts high probabilities for discriminative directions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Spherical sliced Wasserstein which is defined between distributions that have their supports on the hyper-sphere is introduced in [4].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' A sliced Wasserstein variant between probability measures over images with convolution is defined in [43].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Despite having a lot of improvements, one common property in previous variants of the SW is that they use independent projecting directions that are sampled from a distribution over a space of projecting direction e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=', the unit-hypersphere.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Those projecting directions are further utilized to project two interested measures to corresponding pairs of one-dimensional measures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Due to the independence, practitioners have reported that many projections do not have the power to discriminative between two input probability measures [26, 15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Moreover, having a lot of projections leads to redundancy and losing computation for uninformative pairs of projected measures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' This problem is known as the projection complexity limitation of the SW.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' To partially address the issue, the max sliced Wasserstein (Max-SW) distance is introduced in [14].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Max-SW seeks the best projecting direction that can maximize the projected Wasserstein distance.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Since the Max-SW contains a constraint optimization problem, the projected subgradient ascent algorithm is performed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Since the algorithm only guarantees to obtain local maximum [49], the performance of empirical estimation Max-SW is not stable in practice [42] since the metricity of Max-SW can be only obtained at the global optimum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Another approach is to force the orthogonality between projecting directions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In particular, K-sliced Wasserstein [53] (K-SW) uses K > 1 orthogonal projecting directions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Moreover, to generalize the Max-SW and the K-SW, max-K sliced Wasserstein (Max-K-SW) distance (K > 1) appears in [12] to find the best K projecting directions that are orthogonal to each other via the projected sub-gradient ascent algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nevertheless, the orthogonality constraint is computationally expensive and might not be good in terms of reflecting discrepancy between general measures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Moreover, Max-K-SW also suffers from the non-optimality problem which leads to losing the metricity property in practice.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' To avoid the independency and to satisfy the requirement of creating informative projecting directions efficiently, we propose to impose a sequential structure on projecting directions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Namely, we choose a new projecting direction based on the previously chosen directions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' For having more efficiency in computation, we consider first-order Markovian structure in the paper which means that a projecting direction can be sampled by using only the previous direction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' For the first projecting direction, it can follow any types of distributions on the unit-hypersphere that were used in the literature e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=', uniform distribution [7] and von Mises-Fisher distribution [23, 47] to guarantee the metricity.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' For the transition distribution on the second projecting direction and later, we propose three types of family which are random walk transition, orthogonal-based transition, and input-awared transition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' For the random walk transition, we use the von Mises-Fisher with the mean as the previous projecting direction as the conditional distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' For the orthogonal-based transition, we choose the projecting direction uniformly on the unit hypersphere such that it is orthogonal to the previous direction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In contrast to the previous two transitions which do not use the information from the two input measures, the input-awared transition uses the sub-gradient with respect to the previous projecting direction of the corresponding projected Wasserstein distance between the 2 two measures to design the transition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In particular, the projected sub-gradient update is used to create the new projecting direction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Moreover, we further improve the computational time and computational memory by introducing the burning and thinning technique to reduce the number of random projecting directions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Contribution: In summary, our contributions are two-fold: 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' We propose a novel family of distances on the space of probability measures, named Markovian sliced Wasserstein (MSW) distances.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' MSW considers a first-order Markovian structure on random projecting directions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Moreover, we derive three variants of MSW that use three different types of conditional transition distributions: random walk, orthogonal-based, and input-awared.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' We investigate the theoretical properties of MSW including topological properties (metricity, weak convergence, and connection to other distances), statistical properties (sample complexity, and Monte Carlo estimation error), and computational properties (computational complexity and memory complexity).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Moreover, we introduce a burning and thinning approach to further reduce computational and memory complexity, and we discuss the properties of the resulting distances.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' We conduct experiments to compare MSW with SW, Max-SW, K-SW, and Max-K-SW in various applications, namely, gradient flows, color transfer, and deep generative models on standard image datasets: CIFAR10 and CelebA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' We show that the input-awared MSW can yield better qualitative and quantitative performance while consuming less computation than previous distances in gradient flows and color transfer, and comparable computation in deep generative modeling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Finally, we investigate the role of hyper-parameters of distances e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=', the number of projections, the number of time-steps, and so on, in applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Organization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' We first provide background for Wasserstein distance, sliced Wasserstein distance, and max sliced Wasserstein distance in Section 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In Section 3, we propose Markovian sliced Wasserstein distances and derive their theoretical properties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Section 4 contains the comparison of MSW to previous SW variants in gradient flows, color transfer, and deep generative modeling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' We then conclude the paper in Section 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Finally, we defer the proofs of key results in the paper and supplementary materials to Appendices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Notation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' For p ≥ 1, Pp(Rd) is the set of all probability measures on Rd that have finite p- moments.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' For any d ≥ 2, we denote U(Sd−1) is the uniform measure over the unit hyper-sphere Sd−1 := {θ ∈ Rd | ||θ||2 2 = 1}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' For any two sequences an and bn, the notation an = O(bn) means that an ≤ Cbn for all n ≥ 1, where C is some universal constant.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' We denote θ♯µ is the push-forward measures of µ through the function f : Rd → R that is f(x) = θ⊤x.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' 2 Background We start with reviewing the background on Wasserstein distance, sliced Wasserstein distances, their computation techniques, and their limitations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Wasserstein distance: Given two probability measures µ ∈ Pp(Rd) and ν ∈ Pp(Rd), the Wasserstein distance [60, 51] between µ and ν is : Wp p(µ, ν) = inf π∈Π(µ,ν) � Rd×Rd ∥x − y∥p pdπ(x, y) (1) 3 where Π(µ, ν) is set of all couplings that have marginals are µ and ν respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' The computational complexity and memory complexity of Wasserstein distance are O(n3 log n) and O(n2) in turn when µ and ν have at most n supports.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' When d = 1, the Wasserstein distance can be computed with a closed form: Wp p(µ, ν) = � 1 0 |F −1 µ (z) − F −1 ν (z)|pdz, where Fµ and Fν are the cumulative distribution function (CDF) of µ and ν respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Sliced Wasserstein distance: By randomly projecting two interested high-dimensional measures to corresponding pairs of one-dimensional measures, sliced Wasserstein (SW) distance can exploit the closed-form benefit of Wasserstein distance in one dimension.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' The definition of sliced Wasserstein distance [7] between two probability measures µ ∈ Pp(Rd) and ν ∈ Pp(Rd) is: SWp p(µ, ν) = Eθ∼U(Sd−1)Wp p(θ♯µ, θ♯ν).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (2) Monte Carlo samples are often used to approximate the intractable expectation unbiasedly: � SW p p(µ, ν) = 1 L �L l=1 Wp p(θl♯µ, θl♯ν), where θ1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' , θL are drawn randomly from U(Sd−1).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' When µ and ν are dis- crete measures that have at most n supports in d dimension, the computational complexity of SW is O(Ln log2 n + Ldn) and the memory complexity for storing the projecting directions and the projected supports of SW is O(L(d + n)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Here, Ln log2 n is for sorting L sets of projected supports and Ld is for projecting supports to L sets of scalars.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Max sliced Wasserstein distance: To select the best discriminative projecting direction, the max sliced Wasserstein (Max-SW) distance [14] between µ ∈ Pp(Rd) and ν ∈ Pp(Rd) is introduced as follows: Max-SWp(µ, ν) = max θ∈Sd Wp(θ♯µ, θ♯ν).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (3) Computing Max-SW requires solving the constrained optimization problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In practice, the projected sub-gradient ascent algorithm with T > 1 iterations is often used to obtain a surrogate projecting direction ˆθT for the global optimum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Hence, the empirical Max-SW distance is � Max-SWp(µ, ν) = Wp(ˆθT ♯µ, ˆθT ♯ν).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' The detail of the projected sub-gradient ascent algorithm is given in Algorithm 1 in Appendix A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' The computational complexity of Max-SW is O(Tn log2 n + Tdn) and the memory complexity of Max-SW is O(d + n).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' It is worth noting that the projected sub-gradient ascent can only yield local maximum [49].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Therefore, the empirical Max-SW might not be distance even when T → ∞ since the metricity of Max-SW can be only obtained at the global maximum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' K sliced Wasserstein distance: The authors in [53] propose to estimate the sliced Wasserstein distance based on orthogonal projecting directions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' We refer the distance as K sliced Wasserstein distance (K-SW).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' The definition of K-SW between two probability measures µ ∈ Pp(Rd) and ν ∈ Pp(Rd) is: K-SWp(µ, ν) = E � 1 K K � i=1 Wp p(θi♯µ, θi♯ν) � , (4) where the expectation is with respect to (θ1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' , θK) ∼ U(Vk(Rd)) with VK(Rd) = {(θ1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' , θK) ∈ Sd−1|⟨θi, θj⟩ = 0 ∀i, j ≤ K} is the Stiefel manifold.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' The expectation can be approximated with Monte Carlo samples (θ1l, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' , θKl)L l=1 from U(VK(Rd)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In the original paper, L is set to 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' To sample from the uniform distribution over the Stiefel manifold U(Vk(Rd)), it requires using the 4 Gram-Schmidt orthogonality process which has the computational complexity O(K2d) (quadratic in K).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Therefore, the total computational complexity of K-SW is O(LKn log2 n + LKdn + LK2d) and the memory complexity of K-SW is O(LK(d + n)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' More detail related to K-SW including Gram-Smith process and sampling uniformly from Stiefel manifold is given in Appendix A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Max K sliced Wasserstein distance: To generalize both Max-SW and K-SW, Max K sliced Wasserstein is introduced in [12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Its definition between µ ∈ Pp(Rd) and ν ∈ Pp(Rd) is: Max-K-SWp p(µ, ν) = max (θ1,.' 
Max K sliced Wasserstein distance: To generalize both Max-SW and K-SW, the Max K sliced Wasserstein distance is introduced in [12]. Its definition between µ ∈ P_p(R^d) and ν ∈ P_p(R^d) is:

Max-K-SW_p^p(µ, ν) = max_{(θ_1,…,θ_K) ∈ V_K(R^d)} [ (1/K) Σ_{i=1}^{K} W_p^p(θ_i♯µ, θ_i♯ν) ].   (5)

Similar to Max-SW, a projected sub-gradient ascent algorithm with T > 1 iterations is used to approximate Max-K-SW. We refer the reader to Algorithm 4 in Appendix A.1 for greater detail. Since the projecting operator onto the Stiefel manifold is the Gram-Schmidt process, the computational complexity of Max-K-SW is O(TKn log₂ n + TKdn + TK²d). The memory complexity of Max-K-SW is O(K(d + n)). Similar to Max-SW, the metricity of Max-K-SW is only obtained at the global optimum; hence, the empirical estimation might not be stable. Moreover, the orthogonality constraint is also computationally expensive, i.e., quadratic in the number of orthogonal projections K.

3 Markovian Sliced Wasserstein distances

As discussed, the limitations of the previous works are independent projecting directions, computationally expensive dependency, and the loss of asymptotic metricity. In order to address those limitations, we propose to impose the dependency between projecting directions via a first-order Markov chain. By doing so, a new projecting direction can be created efficiently while being dependent on previous projecting directions. In this section, we first define the Markovian sliced Wasserstein (MSW) distance and discuss its theoretical properties, including topological properties, statistical properties, and computational properties, in Section 3.1.
In Section 3.2, we discuss some choices in designing the Markov chain, including the prior distribution and the transition distribution. Finally, we discuss the burning and thinning variant of MSW, which can reduce the computational and memory complexity, in Section 3.3.

3.1 Definitions, Topological, Statistical, and Computational Properties

We first start with a general definition of the Markovian sliced Wasserstein distance in Definition 1.

Definition 1. For any p ≥ 1, T ≥ 1, and dimension d ≥ 1, the Markovian sliced Wasserstein distance of order p between two probability measures µ ∈ P_p(R^d) and ν ∈ P_p(R^d) is:

MSW_{p,T}^p(µ, ν) = E[ (1/T) Σ_{t=1}^{T} W_p^p(θ_t♯µ, θ_t♯ν) ],   (6)

where T is the number of time steps, and the expectation is under the projecting distribution θ_{1:T} ∼ σ(θ_{1:T}) with σ(θ_{1:T}) = σ(θ_1, …, θ_T) = σ_1(θ_1) Π_{t=2}^{T} σ_t(θ_t|θ_{t−1}), and σ_1(θ_1), σ_t(θ_t|θ_{t−1}) ∈ P(S^{d−1}) for all t = 1, …, T.

The first projecting direction θ_1 follows the distribution σ_1(θ_1), where σ_1(θ_1) can be any distribution on the unit hypersphere, e.g., the uniform distribution, a von Mises-Fisher distribution, and so on.
By designing the transition distribution σ_t(θ_t|θ_{t−1}), we can obtain various variants of MSW. Before going to the specific design of those distributions, we first discuss the empirical estimation of MSW and investigate its theoretical properties, including topological properties, statistical properties, and computational properties.

Monte Carlo estimation: Similar to SW, we also need to use Monte Carlo samples to approximate the expectation in Definition 1. We first sample θ_{11}, …, θ_{L1} ∼ σ_1(θ_1) for L ≥ 1, then we sample θ_{lt} ∼ σ_t(θ_t|θ_{l,t−1}) for t = 2, …, T and l = 1, …, L. After that, we can form an unbiased empirical estimation of MSW as follows:

\widehat{MSW}_{p,T}^p(µ, ν) = (1/(LT)) Σ_{l=1}^{L} Σ_{t=1}^{T} W_p^p(θ_{lt}♯µ, θ_{lt}♯ν).
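The estimator above is straightforward to implement once a transition sampler is fixed; below is a minimal sketch for p = 2 with a uniform prior on S^{d−1}, written against a generic transition so that the specific choices of Section 3.2 can be plugged in; all names are illustrative.

import numpy as np

def w2_sq_proj(X, Y, theta):
    # Squared W_2 between the 1-D projections of two equal-size point clouds.
    return np.mean((np.sort(X @ theta) - np.sort(Y @ theta)) ** 2)

def msw2(X, Y, transition, T=5, L=2, seed=0):
    # Monte Carlo estimate of MSW_2^2 with L independent chains of length T.
    # `transition(theta_prev, X, Y, rng)` returns the next projecting direction;
    # the prior sigma_1 is the uniform distribution on S^{d-1}.
    rng = np.random.default_rng(seed)
    d, total = X.shape[1], 0.0
    for _ in range(L):                          # L independent Markov chains
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)          # theta_{l,1} ~ U(S^{d-1})
        total += w2_sq_proj(X, Y, theta)
        for _ in range(T - 1):                  # theta_{l,t} ~ sigma_t(. | theta_{l,t-1})
            theta = transition(theta, X, Y, rng)
            total += w2_sq_proj(X, Y, theta)
    return total / (L * T)

def uniform_transition(theta_prev, X, Y, rng):
    # Placeholder transition that ignores theta_prev: with it, the estimate
    # coincides with the plain SW Monte Carlo estimate using LT directions.
    v = rng.standard_normal(theta_prev.shape[0])
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(200, 5))
Y = rng.normal(1.0, 1.0, size=(200, 5))
print(msw2(X, Y, uniform_transition, T=5, L=2))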
Topological Properties: We first state the following assumption. A1: In MSW, the prior distribution σ_1(θ_1) is supported on the whole unit hypersphere, or there exists a transition distribution σ_t(θ_t|θ_{t−1}) supported on the whole unit hypersphere. Assumption A1 is easy to satisfy and it holds for all later choices of the prior distribution and transition distribution. We now consider the metricity properties of the Markovian sliced Wasserstein distance.

Theorem 1 (Metricity). For any p ≥ 1, T ≥ 1, and dimension d ≥ 1, if A1 holds, the Markovian sliced Wasserstein MSW_{p,T}(·, ·) is a valid metric on the space of probability measures P_p(R^d); namely, it satisfies (i) non-negativity, (ii) symmetry, (iii) the triangle inequality, and (iv) identity.

The proof of Theorem 1 is in Appendix B.1. Next, we show that convergence in MSW implies the weak convergence of probability measures and that the reverse also holds.

Theorem 2 (Weak Convergence). For any p ≥ 1, T ≥ 1, and dimension d ≥ 1, if A1 holds, the convergence of probability measures in P_p(R^d) under the Markovian sliced Wasserstein distance MSW_{p,T}(·, ·) implies weak convergence of probability measures, and vice versa.

Theorem 2 means that for any sequence of probability measures (µ_k)_{k∈N} and µ in P_p(R^d), we have lim_{k→+∞} MSW_{p,T}(µ_k, µ) = 0 if and only if, for any continuous and bounded function f : R^d → R, lim_{k→+∞} ∫ f dµ_k = ∫ f dµ. The proof of Theorem 2 is in Appendix B.2. Next, we discuss the connection of MSW to previous sliced Wasserstein variants.

Proposition 1. For any p ≥ 1 and dimension d ≥ 1, (i) for any T ≥ 1 and µ, ν ∈ P_p(R^d), MSW_{p,T}(µ, ν) ≤ Max-SW_p(µ, ν) ≤ W_p(µ, ν); (ii) if T = 1 and the prior σ_1(θ_1) := U(S^{d−1}), then MSW_{p,T}(µ, ν) = SW_p(µ, ν).

The proof of Proposition 1 is in Appendix B.3.
Statistical Properties: We first investigate the sample complexity, or the empirical estimation rate, of MSW.

Proposition 2 (Sample Complexity). Let X_1, X_2, …, X_n be i.i.d. samples from a probability measure µ supported on a compact set of R^d. We denote the empirical measure µ_n = (1/n) Σ_{i=1}^{n} δ_{X_i}. Then, for any p ≥ 1 and T ≥ 1, there exists a universal constant C > 0 such that

E[MSW_{p,T}(µ_n, µ)] ≤ C √((d + 1) log n / n),

where the outer expectation is taken with respect to the data X_1, X_2, …, X_n.

The proof of Proposition 2 is in Appendix B.4. The above sample complexity suggests that MSW does not suffer from the curse of dimensionality. Next, we investigate the Monte Carlo approximation error for MSW.
Proposition 3 (Monte Carlo error). For any p ≥ 1, T ≥ 1, dimension d ≥ 1, and µ, ν ∈ P_p(R^d), we have:

E| \widehat{MSW}_{p,T}^p(µ, ν) − MSW_{p,T}^p(µ, ν) | ≤ (1/(√T L)) Σ_{l=1}^{L} Var[ Σ_{t=1}^{T} W_p^p(θ_t♯µ, θ_t♯ν) ]^{1/2},

where the variance is with respect to σ(θ_1, …, θ_T).

The proof of Proposition 3 is in Appendix B.5. From the above proposition, we know that increasing the number of projections L reduces the approximation error.

Computational Properties: When µ and ν are two discrete probability measures in P_p(R^d) that have at most n supports, the computational complexity for the Monte Carlo approximation of MSW is O(TLn log₂ n + TLdn), where O(TLn log₂ n) is for the computation of TL one-dimensional Wasserstein distances and O(TLdn) is the projecting complexity for TL projections from d dimensions to one dimension. The memory complexity of MSW is O(TL(d + n)) for storing the projecting directions and the projections.

3.2 Specific Choices of the Projecting Distribution

Designing the projecting distribution σ(θ_1, …, θ_T) is the central task in using MSW since it controls the projecting behavior. For each choice of σ(θ_1, …, θ_T), we obtain a variant of MSW.
Since we impose the first-order Markov structure σ(θ_1, …, θ_T) = σ_1(θ_1) Π_{t=2}^{T} σ_t(θ_t|θ_{t−1}), there are two types of distributions that we need to choose: the prior distribution σ_1(θ_1) and the transition distribution σ_t(θ_t|θ_{t−1}) for all t = 2, …, T.

Prior distribution: The simplest choice of σ_1(θ_1), when we know nothing about the probability measures that we want to compare, is the uniform distribution over the unit hypersphere, U(S^{d−1}). Moreover, with this choice the metricity of MSW is guaranteed regardless of the transition distribution. Therefore, the uniform distribution is the choice that we use in our experiments in the paper. It is worth noting that we could also use a distribution that is estimated from the two probability measures of interest [44]; however, this approach costs more computation. Now, we discuss some specific choices of the transition distributions σ_t(θ_t|θ_{t−1}). Detailed algorithms for computing MSW with specific transitions are given in Appendix A.3.

Random Walk transition: Motivated by the Gaussian Random Walk in the MCMC literature [37], we use a version of the Gaussian on the unit hypersphere, namely the von Mises-Fisher (vMF) distribution [23]. The details about the vMF distribution, including its probability density function, its sampling procedure, and its properties, are given in Appendix A.2. In summary, the vMF distribution has two parameters: the location parameter ϵ ∈ S^{d−1}, which is the mean, and the concentration parameter κ ∈ R_+, which plays the role of the variance. Therefore, the transition distribution is σ_t(θ_t|θ_{t−1}) = vMF(θ_t|ϵ = θ_{t−1}, κ), where κ is a hyperparameter.
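A sketch of this Random Walk transition, in the interface of the msw2 sketch above, might look as follows, assuming SciPy ≥ 1.11, which provides scipy.stats.vonmises_fisher; the wrapper name and the default κ are illustrative.

import numpy as np
from scipy.stats import vonmises_fisher   # available in SciPy >= 1.11

def random_walk_transition(theta_prev, X, Y, rng, kappa=50.0):
    # theta_t ~ vMF(location = theta_{t-1}, concentration = kappa); a larger
    # kappa keeps the new direction closer to the previous one.
    sample = vonmises_fisher(mu=theta_prev, kappa=kappa).rvs(1, random_state=rng)
    return np.asarray(sample).reshape(-1)

# Use as the `transition` argument of the msw2 sketch above.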
Orthogonal-based transition: Motivated by the orthogonality constraint in Max-K-SW and K-SW, we can design a transition distribution that gives a projecting direction orthogonal to the previous one. In particular, given a previous projecting direction θ_{t−1}, we want θ_t such that ⟨θ_t, θ_{t−1}⟩ = 0; namely, we want to sample from the subsphere S^{d−1}_{θ_{t−1}} := {θ_t ∈ S^{d−1} | ⟨θ_t, θ_{t−1}⟩ = 0}. To the best of our knowledge, there is no distribution with an explicit (known) pdf defined on that set. However, we can still sample from the uniform distribution over that set, U(S^{d−1}_{θ_{t−1}}), since that distribution can be constructed by pushing the uniform distribution over the whole unit hypersphere U(S^{d−1}) through the projection operator Prod_{θ_{t−1}}(θ_t) = Prod_{S^{d−1}}(θ_t − (⟨θ_{t−1}, θ_t⟩/⟨θ_{t−1}, θ_{t−1}⟩) θ_{t−1}), where Prod_{S^{d−1}}(θ) = θ/||θ||_2 is the normalizing operator. In greater detail, we first sample θ′_t ∼ U(S^{d−1}) and then set θ_t = Prod_{θ_{t−1}}(θ′_t). Therefore, in this case, we have σ_t(θ_t|θ_{t−1}) = U(S^{d−1}_{θ_{t−1}}) = Prod_{θ_{t−1}}♯U(S^{d−1}).
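In code, this transition is a projection followed by renormalization; the sketch below keeps the same transition interface as before and assumes θ_{t−1} has unit norm, so the denominator ⟨θ_{t−1}, θ_{t−1}⟩ equals one.

import numpy as np

def orthogonal_transition(theta_prev, X, Y, rng):
    # Sample theta'_t ~ U(S^{d-1}), remove its component along theta_{t-1},
    # and renormalize; the result lies on the subsphere orthogonal to theta_{t-1}.
    v = rng.standard_normal(theta_prev.shape[0])
    v /= np.linalg.norm(v)
    v = v - (v @ theta_prev) * theta_prev
    return v / np.linalg.norm(v)

# Use as the `transition` argument of the msw2 sketch above.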
Input-awared transition: The above two transition distributions do not take into account the information of the two probability measures µ and ν that we want to compare. Hence, they could be inefficient at exploring projecting directions that are good for comparing µ and ν. Motivated by the projected sub-gradient ascent [9] update for finding the "max" projecting direction, we could design the transition distribution as σ_t(θ_t|θ_{t−1}) = δ_{f(θ_{t−1}|η,µ,ν)}, where δ denotes the Dirac delta function and the transition function is f(θ_{t−1}|η, µ, ν) = Prod_{S^{d−1}}(θ_{t−1} + η∇_{θ_{t−1}} W_p(θ_{t−1}♯µ, θ_{t−1}♯ν)), where η > 0 is the stepsize hyperparameter. As this choice is a deterministic transition, it requires the prior distribution to have support on all of S^{d−1} in order to obtain the metricity of MSW. A choice that guarantees the metricity regardless of the prior distribution is the vMF distribution, namely σ_t(θ_t|θ_{t−1}) = vMF(θ_t|ϵ = f(θ_{t−1}|η, µ, ν), κ). Thanks to the interpolation properties of the vMF distribution, lim_{κ→0} vMF(θ|ϵ, κ) = U(S^{d−1}) and lim_{κ→∞} vMF(θ|ϵ, κ) = δ_ϵ, this transition distribution can balance between heading to the "max" projecting direction and exploring the space of directions.
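The deterministic transition can be sketched as follows for two equal-size empirical measures; for simplicity the step ascends the squared projected distance W_2^2, which has the same ascent direction as W_2 up to a rescaling of η, and the vMF-perturbed variant would simply use the returned direction as the location parameter of a vMF draw. The function name and default stepsize are illustrative.

import numpy as np

def input_awared_transition(theta_prev, X, Y, rng, eta=0.1):
    # One projected gradient ascent step on theta -> W_2^2(theta#mu, theta#nu),
    # followed by renormalization onto S^{d-1} (the operator Prod_{S^{d-1}}).
    u, v = X @ theta_prev, Y @ theta_prev
    iu, iv = np.argsort(u), np.argsort(v)
    diff = u[iu] - v[iv]
    grad = 2.0 * (diff[:, None] * (X[iu] - Y[iv])).mean(axis=0)
    theta = theta_prev + eta * grad
    return theta / np.linalg.norm(theta)

# Use as the `transition` argument of the msw2 sketch above.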
Stationarity of σ_T(θ_T): A natural and important question arises: what is the distribution σ_T(θ_T) = ∫…∫ σ(θ_1, …, θ_T) dθ_1 … dθ_{T−1} when T → ∞? The answer depends on the choice of the projecting distribution, which is discussed in Section 3.2. For the Random Walk and the Orthogonal-based transitions with the uniform prior, it is unclear whether a stationary distribution exists. For the deterministic Input-awared transition and the uniform prior, we have lim_{T→∞} σ_T(θ_T) = Σ_{a=1}^{A} α_a δ_{θ*_a} with Σ_{a=1}^{A} α_a = 1, where θ*_a (a = 1, …, A) are local maxima of the optimization problem max_{θ∈S^{d−1}} W_p(θ♯µ, θ♯ν) and α_a are some unknown weights that depend on µ and ν. This property is due to the fact that projected sub-gradient ascent can guarantee convergence to local maxima [49]. For the Input-awared vMF transition, it is also unclear whether a stationary distribution exists when the parameter κ < ∞.

3.3 Burning and Thinning

In the definition of MSW in Definition 1, we take the expectation over the joint distribution of all time steps, σ(θ_{1:T}), which makes the time and memory complexities linear in T in the Monte Carlo approximation. Therefore, we can adapt practical techniques from MCMC methods, namely burning and thinning, to reduce the number of random variables while still having a dependency structure.
Definition 2. For any p ≥ 1, T ≥ 1, dimension d ≥ 1, number of burned steps M ≥ 0, and number of thinned steps N ≥ 1, the burned thinned Markovian sliced Wasserstein distance of order p between two probability measures µ ∈ P_p(R^d) and ν ∈ P_p(R^d) is:

MSW_{p,T,N,M}(µ, ν) = E[ (N/(T − M)) Σ_{t=1}^{(T−M)/N} W_p^p(θ′_t♯µ, θ′_t♯ν) ],   (7)

where the expectation is under the projection distribution θ′_{1:(T−M)/N} ∼ σ(θ′_{1:(T−M)/N}), with σ(θ′_{1:(T−M)/N}) being the marginal distribution obtained by integrating out the random projecting directions at the time steps t such that t ≤ M or t%N ≠ 0 from σ(θ_1, …, θ_T) = σ_1(θ_1) Π_{t=2}^{T} σ_t(θ_t|θ_{t−1}), and σ_1(θ_1), σ_t(θ_t|θ_{t−1}) ∈ P(S^{d−1}) for all t = 1, …, T.
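Operationally, the burned-thinned estimator simply discards the projected distances at burned or thinned time steps; below is a minimal sketch in the style of the msw2 sketch above, keeping the steps with t > M and t%N = 0 as in the definition; the names and defaults are illustrative.

import numpy as np

def burned_thinned_msw2(X, Y, transition, T=10, L=2, M=2, N=2, seed=0):
    # MSW_2 estimate that keeps only the time steps t > M with t % N == 0,
    # i.e., the first M steps are burned and the rest are thinned by a factor N.
    rng = np.random.default_rng(seed)
    d, total, kept = X.shape[1], 0.0, 0
    for _ in range(L):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)          # theta_{l,1} ~ U(S^{d-1})
        for t in range(1, T + 1):
            if t > 1:
                theta = transition(theta, X, Y, rng)
            if t > M and t % N == 0:            # burned / thinned steps are skipped
                total += np.mean((np.sort(X @ theta) - np.sort(Y @ theta)) ** 2)
                kept += 1
    return total / max(kept, 1)

Only the kept projections need to be stored, which is the source of the reduced memory complexity discussed below.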
Similar to MSW, the burned-thinned MSW is also a metric on P_p(R^d) when there exists a time step t that is not burned, is not thinned, and whose θ_t is a random variable supported on all of S^{d−1}. We discuss more details about the burned-thinned MSW, including its topological and statistical properties, in Appendix A.4. The Monte Carlo estimation of the burned-thinned MSW is given in Equation (9) in Appendix A.4. The approximation is the average of the projected Wasserstein distances from the θ_{tl} with t ≥ M and t%N = 0. By reducing the number of random projecting directions, the computational complexity of the burned-thinned MSW is improved to O(((T − M)Ln log₂ n + (T − M)Ldn)/N) for the random walk and the orthogonal-based transitions. In the case of the input-awared transition, the computational complexity is still O(TLn log₂ n + TLdn) since the transition requires computing the gradient of the projected Wasserstein distance. However, in all cases, the memory complexity is reduced to O((T − M)L(d + n)/N). Burned thinned MSW is a generalization of Max-SW: the empirical computation of Max-SW [14] with projected sub-gradient ascent and uniform random initialization can be viewed as a special case of burned thinned MSW with the input-awared transition and the number of burned samples M = T − 1. The difference is that Max-SW uses only one local maximum to compute the distance, while the burned thinned MSW uses L ≥ 1 maxima (which might not be unique).

More discussions: We refer the reader to Appendix A.5 for other related discussions, e.g.,
"K-SW is an autoregressive decomposition of the projecting distribution", "sequential generalization of Max-K-SW", and related literature.

4 Experiments

In this section, we refer to MSW with the random walk transition as rMSW, MSW with the orthogonal-based transition as oMSW, and MSW with the input-awared transition as iMSW (using the Dirac distribution) and viMSW (using the vMF distribution). We compare the MSW variants to SW, Max-SW, K-SW, and Max-K-SW in standard applications, e.g., gradient flows, color transfer, and deep generative models. Moreover, we also investigate the role of hyperparameters, e.g., the concentration parameter κ, the number of projections L, the number of time steps T, the number of burning steps M, and the number of thinning steps N, in these applications.

4.1 Gradient Flows and Color Transfer

Gradient flows: We follow the same setting as in [17]. The gradient flow models a distribution µ(t) flowing with time t along the gradient flow of a loss functional µ(t) → D(µ(t), ν) that drives it towards a target distribution ν [56], where D is a given distance between probability measures.
Figure 1: The figures show the gradient flows from the empirical distribution over the color points to the empirical distribution over the S-shape points produced by different distances. The corresponding Wasserstein-2 distance between the empirical distribution at the current step and the S-shape distribution, and the computational time (in seconds) to reach that step, are reported at the top of the figure. [Panel annotations at steps 0/200/300 — SW (L=30): W2 = 25.3149×10⁻² (0s), 0.5913×10⁻² (1.07s), 0.0099×10⁻² (1.55s); Max-SW (T=30): 25.3149×10⁻² (0s), 0.1091×10⁻² (2.37s), 0.0098×10⁻² (3.48s); iMSW (L=2,T=5): 25.3149×10⁻² (0s), 0.0483×10⁻² (0.99s), 0.0064×10⁻² (1.41s); viMSW (L=2,T=5,κ=50): 25.3149×10⁻² (0s), 0.0512×10⁻² (2.05s), 0.0043×10⁻² (2.94s).]

In this setup, we consider ν = (1/n) Σ_{i=1}^{n} δ_{Y_i} as a fixed empirical target distribution and the model distribution µ(t) = (1/n) Σ_{i=1}^{n} δ_{X_i(t)}. Here, the model distribution is parameterized by a time-varying point cloud X(t) = (X_i(t))_{i=1}^{n} ∈ (R^d)^n.
Starting from an initial condition at time t = 0, we integrate the ordinary differential equation Ẋ(t) = −n∇_{X(t)} D((1/n) Σ_{i=1}^{n} δ_{X_i(t)}, ν) at each iteration. In the experiments, we utilize the Euler scheme with 300 timesteps and a step size of 10⁻³ to move the empirical distribution over the colorful points, µ(0), to the distribution over the S-shape points, ν (see Figure 1). For Max-SW, Max-K-SW, iMSW, and viMSW, we use the learning rate η = 0.1 for the projecting directions. We report the Wasserstein-2 distances between the empirical distribution µ(t) and the target empirical distribution ν, and the computational time, in Table 1. We also give the visualization of some obtained flows in Figure 1. We refer the reader to Figure 5 in Appendix C.1 for the full visualization of all flows and detailed algorithms. We observe that iMSW gives better flows than SW, Max-SW, K-SW, and Max-K-SW; namely, the empirical distribution µ(t) at t = 300 with iMSW is closer to ν in terms of the Wasserstein distance. More importantly, iMSW consumes less computation than its competitors since it can use a smaller number of projections thanks to more informative projecting directions. Furthermore, viMSW gives better final results than iMSW; however, the trade-off is roughly doubling the computational time due to the sampling step of the vMF distribution. We also observe that rMSW does not give good results in terms of either the Wasserstein-2 distance or the computational time, due to the random walk transition. In this case, K-SW is equivalent to our oMSW with T = K = 2 since the dimension d = 2.
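To make the gradient-flow procedure above concrete, here is a minimal NumPy sketch of the Euler scheme driven, for brevity, by a plain SW_2^2 Monte Carlo estimate rather than the MSW variants used in the experiments; the gradient with respect to the point cloud follows in closed form from the sorted matching, and the names, number of projections, and toy data are illustrative.

import numpy as np

def sw2_value_and_grad(X, Y, thetas):
    # Monte Carlo SW_2^2 between two equal-size point clouds and its gradient
    # with respect to the source cloud X (matching by sorting per direction).
    n, total, grad = X.shape[0], 0.0, np.zeros_like(X)
    for theta in thetas:
        u, v = X @ theta, Y @ theta
        iu, iv = np.argsort(u), np.argsort(v)
        diff = u[iu] - v[iv]
        total += np.mean(diff ** 2)
        g = np.zeros_like(X)
        g[iu] = (2.0 / n) * diff[:, None] * theta[None, :]
        grad += g
    return total / len(thetas), grad / len(thetas)

def gradient_flow(X0, Y, n_steps=300, step_size=1e-3, n_proj=30, seed=0):
    # Euler discretization of Xdot(t) = -n * grad_X D(mu_X, nu) with D = SW_2^2.
    rng = np.random.default_rng(seed)
    X, n, d = X0.copy(), X0.shape[0], X0.shape[1]
    for _ in range(n_steps):
        thetas = rng.standard_normal((n_proj, d))
        thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
        _, grad = sw2_value_and_grad(X, Y, thetas)
        X -= step_size * n * grad               # Euler step of the flow ODE
    return X

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, size=(100, 2))        # toy 2-D source cloud
Y = np.stack([np.linspace(-1, 1, 100), np.sin(2 * np.linspace(-1, 1, 100))], axis=1)
print(gradient_flow(X0, Y).shape)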
We refer the reader to Appendix C.1 for more discussion.

Studies on hyperparameters: From Table 3 in Appendix C.1, increasing the number of projections L yields better performance for SW, K-SW, and iMSW. Similarly, increasing the number of time steps T also helps Max-SW and iMSW. Moreover, we find that for the same total number of projections, e.g., L = 5, T = 2 versus L = 2, T = 5, a larger number of time steps T might lead to a better result for iMSW. For burning and thinning, we see that they help to reduce the computation while the performance stays comparable, or even improves, if the right values of M and N are chosen. Also, iMSW
Figure 2: The figures show the source image, the target image, and the transferred images from different distances. The corresponding Wasserstein-2 distance between the empirical distribution over the transferred color palettes and the empirical distribution over the target color palette, and the computational time (in seconds), are reported at the top of the figure. [Panel annotations — Source; SW (L=45): 37.97s, W2 = 414.51; Max-SW (T=45): 57.48s, W2 = 449.42; K-SW (L=15,K=3): 38.21s, W2 = 411.74; Max-K-SW (K=3,T=15): 52.6s, W2 = 479.43; rMSW (L=3,T=5,κ=50): 15.65s, W2 = 444.35; oMSW (L=3,T=5): 14.17s, W2 = 415.06; iMSW (L=3,T=5): 25.39s, W2 = 16.97; viMSW (L=3,T=5,κ=50): 29.27s, W2 = 16.48; Target.]

Table 1: Summary of Wasserstein-2 scores and computational time in seconds (s) of different distances in the gradient flow experiment.

Distances | Wasserstein-2 (↓) | Time in seconds (↓)
SW (L=30) | 0.0099 × 10⁻² | 1.55
Max-SW (T=30) | 0.0098 × 10⁻² | 3.48
K-SW (L=15,K=2) | 0.0098 × 10⁻² | 1.71
Max-K-SW (K=2,T=15) | 0.0146 × 10⁻² | 3.35
rMSW (L=2,T=5,κ=50) (ours) | 0.0157 × 10⁻² | 2.16
iMSW (L=2,T=5) (ours) | 0.0064 × 10⁻² | 1.41
viMSW (L=2,T=5,κ=50) (ours) | 0.0043 × 10⁻² | 2.94
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='41 viMSW (L=2,T=5,κ=50)(ours) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='0043 × 10−2 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='94 Table 2: Summary of FID and IS scores of methods on CIFAR10 (32x32), and CelebA (64x64).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Method CIFAR10 (32x32) CelebA (64x64) FID (↓) IS (↑) FID (↓) SW 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='21±1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='12 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='19±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='07 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='93±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='23 Max-SW 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='38±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='08 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='15±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='02 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='94±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='35 KSW 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='24±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='02 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='15±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='03 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='41±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='16 Max-K-SW 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='83±1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='01 8.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='17±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='03 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='29±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='29 rMSW (ours) 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='33±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='51 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='15±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='06 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='12±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='44 oMSW (ours) 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='12±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='54 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='20±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='05 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='68±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='55 iMSW (ours) 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='12±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='48 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='24±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='09 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='89±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='23 viMSW (ours) 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='98±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='59 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='12±0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='20 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='91±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='11 the burning steps M = T − 1 is still better than Max-SW with T time steps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' For the concentration parameter κ in rMSW and viMSW, a larger value of κ leads to a faster computation due to faster sampling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' However, the performance of viMSW is not monotonic in terms of κ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Color transfer: We aim to transfer the color palate (RGB) of a source image to the color palette (RGB) target image.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Therefore, it is natural to build a gradient flow that starts from the empirical distribution over the color palette of the source image to the empirical distribution over the color palette of the target image.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Since the value of color palette is in the set {0, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' , 255}3, we round the 11 200 300 400 500 600 Epochs 14 16 18 20 22 24 26 28 FID Score CIFAR10 SW Max-SW K-SW Max-K-SW rMSW oMSW iMSW viMSW 200 300 400 500 600 Epochs 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='4 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='6 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='8 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='0 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='2 IS Score CIFAR10 SW Max-SW K-SW Max-K-SW rMSW oMSW iMSW viMSW 25 50 75 100 125 150 175 200 Epochs 10 15 20 25 30 35 40 FID Score CelebA SW Max-SW K-SW Max-K-SW rMSW oMSW iMSW viMSW Figure 3: The FID scores over epochs of different distances.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' value of the supports of the empirical distribution at the final step of the Euler scheme with 2000 steps and 10−3 step size.' 
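To make the gradient-flow construction concrete, the following is a minimal sketch (our illustration, not the authors' released code) of the Euler scheme described above: the source palette is treated as a particle set, moved by explicit Euler steps on a Monte Carlo estimate of SW_2^2 toward the target palette, and rounded back to {0, . . . , 255}^3 at the final step. The function names, the default number of projections, and the use of plain SW as the flow objective are illustrative assumptions; the MSW variants only change how the projecting directions are drawn.

import torch

def sw2_squared(x, y, n_projections=30):
    # Illustrative sketch, not the paper's implementation.
    # Monte Carlo estimate of SW_2^2 between two empirical distributions with the
    # same number of support points; x and y are (n, d) tensors.
    d = x.shape[1]
    theta = torch.randn(n_projections, d)
    theta = theta / theta.norm(dim=1, keepdim=True)    # random directions on the unit sphere
    x_sorted = torch.sort(x @ theta.T, dim=0).values   # sorted 1D projections, shape (n, L)
    y_sorted = torch.sort(y @ theta.T, dim=0).values
    return ((x_sorted - y_sorted) ** 2).mean()         # average over points and projections

def color_transfer(source_palette, target_palette, steps=2000, step_size=1e-3):
    # Explicit Euler scheme on the SW gradient flow; palettes are (n, 3) float RGB tensors.
    x = source_palette.clone().requires_grad_(True)
    for _ in range(steps):
        loss = sw2_squared(x, target_palette)
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x -= step_size * grad
    # color values live in {0, ..., 255}^3, so round the supports at the final step
    return x.detach().round().clamp(0, 255)

In this form the per-step cost is dominated by sorting the projected point sets, which is why the number of projections and timesteps directly drives the runtimes reported in Table 1 and Figure 2.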
Greater detail can be found in Appendix C.2. For Max-SW, Max-K-SW, iMSW, and viMSW, we use the learning rate parameter for projecting directions η = 0.1. We show the transferred images, the corresponding Wasserstein-2 distances between the empirical distribution over the transferred color palette and the empirical distribution over the target color palette, and the corresponding computational time in Figure 2. From the figures, iMSW and viMSW give the best transferred images both quantitatively and qualitatively. Moreover, oMSW and rMSW are comparable to SW, Max-SW, and K-SW, and are better than Max-K-SW while consuming much less computation. We refer the reader to Figure 6 in Appendix C.2 for the color palette visualization and to Figure 7 for another choice of the source and target images. We also conduct studies on hyperparameters in Appendix C.2, where we observe phenomena similar to those in gradient flow.

4.2 Deep Generative Models

We follow the setup of sliced Wasserstein deep generative models in [15]. The full settings of the framework, including neural network architectures, training framework, and hyperparameters, are given in Appendix C.3. We compare MSW with previous baselines, including SW, Max-SW, K-SW, and Max-K-SW, on benchmark datasets: CIFAR10 (image size 32x32) [29] and CelebA (image size 64x64).
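As a point of reference for these baselines, Max-SW replaces the average over many random projections with a single projecting direction optimised by projected gradient ascent, using the learning rate η mentioned above. The sketch below shows a typical implementation under our assumptions (names, defaults, and the plain-ascent loop are illustrative), not the code used in these experiments.

import torch

def w2_squared_1d(x_proj, y_proj):
    # Squared 1D Wasserstein-2 distance between two equal-size projected point sets.
    return ((torch.sort(x_proj).values - torch.sort(y_proj).values) ** 2).mean()

def max_sw(x, y, n_steps=30, eta=0.1):
    # Illustrative sketch, not the paper's implementation.
    # Max sliced Wasserstein distance: ascend on the projecting direction, then
    # re-normalise it onto the unit sphere after every update.
    d = x.shape[1]
    theta = torch.randn(d)
    theta = theta / theta.norm()
    theta.requires_grad_(True)
    for _ in range(n_steps):
        dist = w2_squared_1d(x.detach() @ theta, y.detach() @ theta)
        grad, = torch.autograd.grad(dist, theta)
        with torch.no_grad():
            theta += eta * grad     # plain gradient ascent step on the direction
            theta /= theta.norm()   # projection back onto the unit sphere
    theta_star = theta.detach()
    # evaluate along the optimised direction; gradients can still flow to x here
    return w2_squared_1d(x @ theta_star, y @ theta_star).sqrt()

When used as a training loss for a generative model, x would be a minibatch of generated samples and y a minibatch of real data.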
The evaluation metrics are the FID score [21] and the Inception score (IS) [54] (except on CelebA, since the IS score poorly captures the perceptual quality of face images [21]). A notable change in computing Max-SW is that we do not use momentum in the optimization of the max projecting direction as in previous works [26, 42], which leads to a better result.

Summary of generative performance: We train generative models with SW (L ∈ {100, 1000, 10000}), Max-SW (T ∈ {10, 100, 1000}, the learning rate for the projected gradient ascent algorithm η ∈ {0.01, 0.1}), K-SW (L ∈ {1, 10, 100}, K = 10), Max-K-SW (K = 10, η ∈ {0.01, 0.1}), MSW (all variants, L ∈ {10, 100}, T ∈ {10, 100}), iMSW and viMSW (η ∈ {0.01, 0.1}), and rMSW and viMSW (κ ∈ {10, 50}). We report the best FID score and the best IS score for each distance in Table 2. In addition, we show how the scores change with respect to the training epochs in Figure 3. Overall, we observe that viMSW and iMSW give the best generative performance in terms of the final scores and fast convergence on CIFAR10 and CelebA. Other MSW variants, including rMSW and oMSW, give results comparable to the baselines. Since most of the computation in training deep generative models is spent updating the neural networks, the computational time is almost the same for all distances. Furthermore, we show some generated images on CelebA in Figure 4 and all generated images on CIFAR10 and CelebA in Figure 8 and Figure 9 in Appendix C.3. We visually observe that the qualitative results are consistent with the quantitative results in Table 2.

[Figure 4: Random generated images of distances on CelebA. Panels: SW, Max-K-SW, iMSW.]

Studies on hyperparameters: We conduct experiments to understand the behavior of the burning and thinning technique and to compare the roles of L and T in Table 5 in Appendix C.3. Overall, burning (thinning) sometimes helps to improve the performance of training generative models. There is no clear sign of superiority between burning and thinning. We compare two settings with the same number of total projections (the same complexities): L = 10, T = 100 and L = 100, T = 10. On CIFAR10, the first setting is better, while the reverse holds on CelebA.

5 Conclusion

We have introduced the Markovian sliced Wasserstein (MSW), a novel family of sliced Wasserstein (SW) distances, which imposes a first-order Markov structure on projecting directions. We have investigated the theoretical properties of MSW, including topological, statistical, and computational properties. Moreover, we have discussed three types of transition distributions for MSW, namely, random walk, orthogonal-based, and input-awared transitions. In addition, we have proposed a burning and thinning technique to improve the computational time and memory of MSW. Finally, we have compared MSW to previous variants of SW in gradient flows, color transfer, and generative modeling to show that MSW distances are both effective and efficient.

References
[1] J. Altschuler, J. Niles-Weed, and P. Rigollet. Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration. In Advances in Neural Information Processing Systems, pages 1964–1974, 2017. (Cited on page 1.)
[2] Y. Bai, B. Schmitzer, M. Thorpe, and S. Kolouri. Sliced optimal partial transport. arXiv preprint arXiv:2212.08049, 2022. (Cited on page 23.)
[3] V. I. Bogachev and M. A. S. Ruas. Measure theory, volume 1. Springer, 2007. (Cited on page 25.)
[4] C. Bonet, P. Berg, N. Courty, F. Septier, L. Drumetz, and M.-T. Pham. Spherical sliced-Wasserstein. arXiv preprint arXiv:2206.08780, 2022. (Cited on page 2.)
[5] C. Bonet, N. Courty, F. Septier, and L. Drumetz. Efficient gradient flows in sliced-Wasserstein space. Transactions on Machine Learning Research, 2022. (Cited on page 2.)
[6] N. Bonneel and D. Coeurjolly. SPOT: sliced partial optimal transport. ACM Transactions on Graphics (TOG), 38(4):1–13, 2019. (Cited on page 23.)
[7] N. Bonneel, J. Rabin, G. Peyré, and H. Pfister. Sliced and Radon Wasserstein barycenters of measures. Journal of Mathematical Imaging and Vision, 1(51):22–45, 2015. (Cited on pages 1, 2, and 4.)
[8] N. Bonnotte. Unidimensional and evolution methods for optimal transportation. PhD thesis, Paris 11, 2013. (Cited on pages 24 and 32.)
[9] S. Bubeck. Convex optimization: Algorithms and complexity. Foundations and Trends® in Machine Learning, 8(3-4):231–357, 2015. (Cited on page 8.)
[10] X. Chen, Y. Yang, and Y. Li. Augmented sliced Wasserstein distances. In International Conference on Learning Representations, 2022. (Cited on page 23.)
[11] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, pages 2292–2300, 2013. (Cited on page 1.)
[12] B. Dai and U. Seljak. Sliced iterative normalizing flows. In International Conference on Machine Learning, pages 2352–2364. PMLR, 2021. (Cited on pages 2, 5, and 19.)
[13] T. R. Davidson, L. Falorsi, N. De Cao, T. Kipf, and J. M. Tomczak. Hyperspherical variational auto-encoders. In 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, pages 856–865. Association For Uncertainty in Artificial Intelligence (AUAI), 2018. (Cited on page 21.)
[14] I. Deshpande, Y.-T. Hu, R. Sun, A. Pyrros, N. Siddiqui, S. Koyejo, Z. Zhao, D. Forsyth, and A. G. Schwing. Max-sliced Wasserstein distance and its use for GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10648–10656, 2019. (Cited on pages 2, 4, and 9.)
[15] I. Deshpande, Z. Zhang, and A. G. Schwing. Generative modeling using the sliced Wasserstein distance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3483–3491, 2018. (Cited on pages 2, 12, 34, and 35.)
[16] K. Fatras, Y. Zine, R. Flamary, R. Gribonval, and N. Courty. Learning with minibatch Wasserstein: asymptotic and gradient properties. In AISTATS 2020 - 23rd International Conference on Artificial Intelligence and Statistics, volume 108, pages 1–20, 2020. (Cited on page 34.)
[17] J. Feydy, T. Séjourné, F.-X. Vialard, S.-i. Amari, A. Trouve, and G. Peyré. Interpolating between optimal transport and MMD using Sinkhorn divergences. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 2681–2690, 2019. (Cited on page 9.)
[18] R. Flamary, N. Courty, A. Gramfort, M. Z. Alaya, A. Boisbunon, S. Chambon, L. Chapel, A. Corenflos, K. Fatras, N. Fournier, L. Gautheron, N. T. Gayraud, H. Janati, A. Rakotomamonjy, I. Redko, A. Rolet, A. Schutz, V. Seguy, D. J. Sutherland, R. Tavenard, A. Tong, and T. Vayer. POT: Python optimal transport. Journal of Machine Learning Research, 22(78):1–8, 2021. (Cited on page 30.)
[19] N. Fournier and A. Guillin. On the rate of convergence in Wasserstein distance of the empirical measure. Probability Theory and Related Fields, 162:707–738, 2015. (Cited on page 1.)
[20] A. Genevay, G. Peyré, and M. Cuturi. Learning generative models with Sinkhorn divergences. In International Conference on Artificial Intelligence and Statistics, pages 1608–1617. PMLR, 2018. (Cited on page 34.)
[21] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pages 6626–6637, 2017. (Cited on page 12.)
[22] M. Huang, S. Ma, and L. Lai. A Riemannian block coordinate descent method for computing the projection robust Wasserstein distance. In International Conference on Machine Learning, pages 4446–4455. PMLR, 2021. (Cited on page 23.)
[23] P. E. Jupp and K. V. Mardia. Maximum likelihood estimators for the matrix von Mises-Fisher and Bingham distributions. The Annals of Statistics, 7(3):599–606, 1979. (Cited on pages 2, 7, and 20.)
[24] O. Kallenberg and O. Kallenberg. Foundations of modern probability, volume 2. Springer, 1997. (Cited on page 25.)
[25] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. (Cited on page 36.)
[26] S. Kolouri, K. Nadjahi, U. Simsekli, R. Badeau, and G. Rohde. Generalized sliced Wasserstein distances. In Advances in Neural Information Processing Systems, pages 261–272, 2019. (Cited on pages 2, 12, 19, and 23.)
[27] S. Kolouri, P. E. Pope, C. E. Martin, and G. K. Rohde. Sliced Wasserstein auto-encoders. In International Conference on Learning Representations, 2018. (Cited on page 2.)
[28] S. Kolouri, G. K. Rohde, and H. Hoffmann. Sliced Wasserstein distance for learning Gaussian mixture models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3427–3436, 2018. (Cited on pages 2 and 24.)
[29] A. Krizhevsky, G. Hinton, et al. Learning multiple layers of features from tiny images. Master’s thesis, Department of Computer Science, University of Toronto, 2009. (Cited on page 12.)
[30] C.-Y. Lee, T. Batra, M. H. Baig, and D. Ulbricht. Sliced Wasserstein discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10285–10295, 2019. (Cited on page 2.)
[31] J. Lezama, W. Chen, and Q. Qiu. Run-sort-rerun: Escaping batch size limitations in sliced Wasserstein generative models. In International Conference on Machine Learning, pages 6275–6285. PMLR, 2021. (Cited on page 24.)
[32] T. Lin, C. Fan, N. Ho, M. Cuturi, and M. Jordan. Projection robust Wasserstein distance and Riemannian optimization. Advances in Neural Information Processing Systems, 33:9383–9397, 2020. (Cited on page 23.)
[33] T. Lin, N. Ho, X.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Chen, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Cuturi, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Jordan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Fixed-support Wasserstein barycenters: Computational hardness and fast algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In NeurIPS, pages 5368–5380, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [34] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Lin, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Ho, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Jordan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' On efficient optimal transport: An analysis of greedy and accelerated mirror descent algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In International Conference on Machine Learning, pages 3982–3991, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [35] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Lin, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Ho, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Jordan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' On the efficiency of entropic regularized algorithms for optimal transport.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Journal of Machine Learning Research (JMLR), 23:1–42, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [36] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Liutkus, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Simsekli, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Majewski, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Durmus, and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='-R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Stöter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Sliced-Wasserstein flows: Nonparametric generative modeling via optimal transport and diffusions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In International Conference on Machine Learning, pages 4104–4113.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' PMLR, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [37] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Murphy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Machine learning: a probabilistic perspective.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' MIT press, 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [38] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Naderializadeh, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Comer, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Andrews, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Hoffmann, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Kolouri.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Pooling by sliced- Wasserstein embedding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Advances in Neural Information Processing Systems, 34, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [39] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nadjahi, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' De Bortoli, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Durmus, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Badeau, and U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Şimşekli.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Approximate Bayesian computation with the sliced-Wasserstein distance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5470–5474.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' IEEE, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on pages 2 and 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [40] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nadjahi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Durmus, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Chizat, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Kolouri, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Shahrampour, and U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Simsekli.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Statistical and topological properties of sliced probability divergences.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Advances in Neural Information Processing Systems, 33:20802–20812, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [41] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nadjahi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Durmus, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Simsekli, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Badeau.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Asymptotic guarantees for learning generative models with the sliced-Wasserstein distance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In Advances in Neural Information Processing Systems, pages 250–260, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on pages 25 and 34.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [42] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nguyen and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Ho.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Amortized projection optimization for sliced Wasserstein generative models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Advances in Neural Information Processing Systems, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on pages 2, 12, 19, and 34.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [43] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nguyen and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Ho.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Revisiting sliced Wasserstein on images: From vectorization to convolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Advances in Neural Information Processing Systems, 2022.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on pages 2, 23, and 28.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [44] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nguyen, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Ho, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Pham, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Bui.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Distributional sliced-Wasserstein and applications to generative modeling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In International Conference on Learning Representations, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on pages 2, 7, and 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') 16 [45] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nguyen, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nguyen, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nguyen, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Pham, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Bui, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Phung, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Le, and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Ho.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' On transportation of mini-batches: A hierarchical approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In Proceedings of the 39th International Conference on Machine Learning, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 34.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [46] K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nguyen, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nguyen, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Pham, and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Ho.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Improving mini-batch optimal transport via partial transportation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In Proceedings of the 39th International Conference on Machine Learning, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 34.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [47] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nguyen, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nguyen, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Ho, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Pham, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Bui.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Improving relational regularized au- toencoders with spherical sliced fused Gromov-Wasserstein.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In International Conference on Learning Representations, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on pages 2, 19, and 21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [48] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nguyen, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Ren, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nguyen, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Rout, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nguyen, and N.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Ho.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Hierarchical sliced wasserstein distance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' arXiv preprint arXiv:2209.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='13570, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [49] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Nietert, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Sadhu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Goldfeld, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Kato.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Statistical, robustness, and computational guarantees for sliced wasserstein distances.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Advances in Neural Information Processing Systems, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on pages 1, 2, 4, and 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [50] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='-P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Paty and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Cuturi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Subspace robust Wasserstein distances.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In International Conference on Machine Learning, pages 5072–5081, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [51] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Peyré and M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Cuturi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Computational optimal transport: With applications to data science.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Foundations and Trends® in Machine Learning, 11(5-6):355–607, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [52] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Peyré and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Cuturi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Computational optimal transport, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [53] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Rowland, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Hron, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Tang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Choromanski, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Sarlos, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Weller.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Orthogonal estimation of Wasserstein distances.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In The 22nd International Conference on Artificial Intelligence and Statistics, pages 186–195.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' PMLR, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on pages 2, 4, and 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [54] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Salimans, I.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Goodfellow, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Zaremba, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Cheung, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Radford, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Chen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Improved techniques for training GANs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Advances in Neural Information Processing Systems, 29, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [55] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Salimans, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Zhang, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Radford, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Metaxas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Improving GANs using optimal transport.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In International Conference on Learning Representations, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 34.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [56] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Santambrogio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Optimal transport for applied mathematicians.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Birkäuser, NY, 55(58-63):94, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [57] M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Sommerfeld and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Munk.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Inference for empirical wasserstein distances on finite spaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(1):219–238, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 34.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [58] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Sra.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Directional statistics in machine learning: a brief review.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' arXiv preprint arXiv:1605.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='00316, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [59] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Temme.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Special functions: An introduction to the classical functions of mathematical physics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' John Wiley & Sons, 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') 17 [60] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Villani.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Optimal transport: Old and New.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Springer, 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on pages 1 and 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [61] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Villani.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Optimal transport: old and new, volume 338.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Springer Science & Business Media, 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on pages 25 and 27.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [62] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Wainwright.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' High-dimensional statistics: A non-asymptotic viewpoint.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Cambridge University Press, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on page 29.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [63] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Wu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Huang, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Acharya, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Li, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Thoma, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Paudel, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Gool.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Sliced Wasserstein generative models.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3713–3722, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on pages 2 and 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') [64] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Yi and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Liu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Sliced Wasserstein variational inference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In Fourth Symposium on Advances in Approximate Bayesian Inference, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' (Cited on pages 2 and 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=') 18 Supplement to “Markovian Sliced Wasserstein Distances: Beyond Independent Projections" In this supplementary material, we present additional materials in Appendix A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' In particular, we provide additional background on sliced Wasserstein variants in Appendix A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='1, background on von Mises-Fisher distribution in Appendix A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='2, algorithms for computing Markovian sliced Wasserstein distances in Appendix A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='3, additional information about burned thinned MSW in Appendix A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='4, and discussion on related works in Appendix A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' We then provide skipped proofs in the main paper in Appendix B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Additional experiments are presented in Appendix C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' A Additional Materials A.' 
A.1 Background on Sliced Wasserstein Variants
We review computational aspects of sliced Wasserstein variants.
Computation of Max sliced Wasserstein distance: We demonstrate the empirical estimation of Max-SW via the projected sub-gradient ascent algorithm in Algorithm 1. The initialization step for θ̂_0 is rarely discussed in previous works. Normally, θ̂_0 is randomly initialized by drawing from the uniform distribution over the unit hypersphere. Many previous works [26, 44, 47, 42] use the Adam update instead of the standard gradient ascent update for Max-SW. In this work, we find that using the standard gradient ascent update is more stable and effective.
Algorithm 1 Max sliced Wasserstein distance
Input: Probability measures µ, ν, learning rate η, the order p, and the number of iterations T.
Initialize θ̂_0.
for t = 1 to T − 1 do
    θ̂_t = θ̂_{t−1} + η · ∇_{θ̂_{t−1}} W_p(θ̂_{t−1}♯µ, θ̂_{t−1}♯ν)
    θ̂_t = θ̂_t / ||θ̂_t||_2
end for
Return: W_p(θ̂_T♯µ, θ̂_T♯ν)
K sliced Wasserstein distance: We first review the Gram–Schmidt process in Algorithm 2. With the Gram–Schmidt process, sampling from U(V_K(R^d)) can be done by drawing θ_1, ..., θ_K i.i.d. from N(0, I_d) and then applying the Gram–Schmidt process to them. Therefore, we present the computation of the K sliced Wasserstein distance in Algorithm 3.
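To make these steps concrete, the following is a minimal NumPy sketch, written for this supplement rather than taken from any released implementation, of (i) the projected sub-gradient ascent of Algorithm 1 applied to empirical measures and (ii) the Gram–Schmidt-based draw from U(V_K(R^d)) used by Algorithm 3. The function names, the equal-sample-size assumption, and the choice to ascend W_p^p (which shares its maximizer with W_p) are our own assumptions.

```python
import numpy as np

def projected_wpp(theta, X, Y, p=2):
    # W_p^p between the 1-D projections theta^T X and theta^T Y
    # (empirical measures with the same number of points and uniform weights).
    u = np.sort(X @ theta)
    v = np.sort(Y @ theta)
    return np.mean(np.abs(u - v) ** p)

def max_sw(X, Y, p=2, eta=0.1, T=100, seed=0):
    # Projected (sub-)gradient ascent over the unit sphere, as in Algorithm 1.
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=X.shape[1])
    theta /= np.linalg.norm(theta)                     # random initialization on S^{d-1}
    for _ in range(T - 1):
        iu, iv = np.argsort(X @ theta), np.argsort(Y @ theta)
        diff = X[iu] @ theta - Y[iv] @ theta           # sorted projection differences
        grad = (p * np.abs(diff) ** (p - 1) * np.sign(diff)) @ (X[iu] - Y[iv]) / len(diff)
        theta = theta + eta * grad                     # ascent step on W_p^p
        theta /= np.linalg.norm(theta)                 # project back onto the sphere
    return projected_wpp(theta, X, Y, p) ** (1.0 / p), theta

def sample_stiefel(K, d, rng):
    # Draw K orthonormal directions: Gaussian vectors followed by Gram-Schmidt
    # (the sampling from U(V_K(R^d)) used by Algorithms 2-3).
    thetas = rng.normal(size=(K, d))
    for k in range(K):
        for i in range(k):
            thetas[k] -= (thetas[i] @ thetas[k]) * thetas[i]   # thetas[i] is already unit-norm
        thetas[k] /= np.linalg.norm(thetas[k])
    return thetas
```

For two point clouds X and Y of shape (n, d), max_sw(X, Y) returns an estimate of Max-SW together with the learned direction, while sample_stiefel(K, d, rng) supplies the orthogonal projections averaged over in Algorithm 3.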
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' We would like to recall that the original work of K-SW [53] uses only one set of orthogonal projecting directions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Here, we generalize the original work by using L sets of orthogonal projecting directions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' Max K sliced Wasserstein distance: We now present the empirical estimation of Max-K-SW via projected sub-gradient ascent algorithm in Algorithm 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' This algorithm is first discussed in the original paper of Max-K-SW [12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' The optimization of Max-K-SW can be solved by using Riemannian optimization since the Stiefel manifold is a Riemannian manifold.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' However, to the best of our knowledge, Riemannian optimization has not been applied to Max-K-SW.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' 19 Algorithm 2 Gram–Schmidt process Input: K vectors θ1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' , θK θ1 = θ1 ||θ1||2 for k = 2 to K do for i = 1 to k − 1 do θk = θk − ⟨θi,θk⟩ ⟨θi,θi⟩ θi end for θk = θk ||θk||2 end for Return: θ1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' , θK Algorithm 3 K sliced Wasserstein distance Input: Probability measures µ, ν, the dimension d, the order p, the number of projections L, the number of orthogonal projections K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' for l = 1 to L do Draw θl1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content=' , θlK i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE2T4oBgHgl3EQfOgaA/content/2301.03749v1.pdf'} +page_content='i.' 
A.2 Von Mises–Fisher Distribution

We first start with the definition of the von Mises–Fisher (vMF) distribution.

Definition 3. The von Mises–Fisher distribution (vMF) [23] is a probability distribution on the unit hypersphere $\mathbb{S}^{d-1}$ with density function
$$f(x \mid \epsilon, \kappa) := C_d(\kappa) \exp(\kappa \epsilon^\top x), \qquad (8)$$
where $\epsilon \in \mathbb{S}^{d-1}$ is the location vector, $\kappa \geq 0$ is the concentration parameter, and $C_d(\kappa) := \frac{\kappa^{d/2-1}}{(2\pi)^{d/2} I_{d/2-1}(\kappa)}$ is the normalization constant. Here, $I_v$ is the modified Bessel function of the first kind at order $v$ [59].

Algorithm 4 Max-K sliced Wasserstein distance
Input: Probability measures $\mu, \nu$, learning rate $\eta$, the dimension d, the order p, the number of iterations $T > 1$, and the number of orthogonal projections $K > 1$.
  Initialize $\hat{\theta}_{01}, \ldots, \hat{\theta}_{0K}$ to be orthogonal
  for t = 1 to T - 1 do
    for k = 1 to K do
      $\hat{\theta}_{tk} = \hat{\theta}_{(t-1)k} + \eta \cdot \nabla_{\hat{\theta}_{(t-1)k}} W_p(\hat{\theta}_{(t-1)k}\sharp\mu, \hat{\theta}_{(t-1)k}\sharp\nu)$
    end for
    $\hat{\theta}_{t1}, \ldots, \hat{\theta}_{tK} = \text{Gram–Schmidt}(\hat{\theta}_{t1}, \ldots, \hat{\theta}_{tK})$
  end for
Return: $\left( \frac{1}{K} \sum_{k=1}^{K} W_p^p(\hat{\theta}_{Tk}\sharp\mu, \hat{\theta}_{Tk}\sharp\nu) \right)^{1/p}$

Algorithm 5 Sampling from the vMF distribution
Input: location $\epsilon$, concentration $\kappa$, dimension d, unit vector $e_1 = (1, 0, \ldots, 0)$.
  Draw $v \sim \mathcal{U}(\mathbb{S}^{d-2})$
  $b \leftarrow \frac{-2\kappa + \sqrt{4\kappa^2 + (d-1)^2}}{d-1}$,  $a \leftarrow \frac{(d-1) + 2\kappa + \sqrt{4\kappa^2 + (d-1)^2}}{4}$,  $m \leftarrow \frac{4ab}{1+b} - (d-1)\log(d-1)$
  repeat
    Draw $\psi \sim \text{Beta}\left(\frac{1}{2}(d-1), \frac{1}{2}(d-1)\right)$
    $\omega \leftarrow h(\psi, \kappa) = \frac{1 - (1+b)\psi}{1 - (1-b)\psi}$
    $t \leftarrow \frac{2ab}{1 - (1-b)\psi}$
    Draw $u \sim \mathcal{U}([0, 1])$
  until $(d-1)\log(t) - t + m \geq \log(u)$
  $h_1 \leftarrow (\omega, \sqrt{1-\omega^2}\, v^\top)^\top$
  $\epsilon' \leftarrow e_1 - \epsilon$,  $u = \epsilon' / \|\epsilon'\|_2$,  $U = I - 2uu^\top$
Output: $U h_1$

The vMF distribution is a continuous distribution; its mass concentrates around the mean $\epsilon$, and its density decreases as $x$ moves away from $\epsilon$. When $\kappa \to 0$, the vMF distribution converges in distribution to $\mathcal{U}(\mathbb{S}^{d-1})$, and when $\kappa \to \infty$, it converges in distribution to the Dirac distribution centered at $\epsilon$ [58].

Sampling: We review the sampling process in Algorithm 5 [13, 47]. The sampling process of the vMF distribution is based on a rejection sampling procedure. It is worth noting that the sampling algorithm performs reparameterization implicitly; however, we only use the algorithm to obtain random samples without estimating stochastic gradients.
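As a companion to Algorithm 5, the following NumPy sketch implements the rejection sampler. The function name sample_vmf and the numerical guard for the degenerate case $\epsilon = e_1$ are our additions; the Householder reflection $U = I - 2uu^\top$ is used, as in the algorithm, to rotate a sample drawn around $e_1$ into a sample drawn around $\epsilon$.

    import numpy as np

    def sample_vmf(eps, kappa, rng=np.random.default_rng(0)):
        # Algorithm 5 (sketch): rejection sampling from vMF(eps, kappa) on S^{d-1}
        d = eps.shape[0]
        root = np.sqrt(4 * kappa**2 + (d - 1) ** 2)
        b = (-2 * kappa + root) / (d - 1)
        a = ((d - 1) + 2 * kappa + root) / 4
        m = 4 * a * b / (1 + b) - (d - 1) * np.log(d - 1)
        while True:
            psi = rng.beta(0.5 * (d - 1), 0.5 * (d - 1))
            omega = (1 - (1 + b) * psi) / (1 - (1 - b) * psi)
            t = 2 * a * b / (1 - (1 - b) * psi)
            if (d - 1) * np.log(t) - t + m >= np.log(rng.uniform()):
                break
        v = rng.normal(size=d - 1)
        v /= np.linalg.norm(v)                         # v ~ U(S^{d-2})
        h1 = np.concatenate(([omega], np.sqrt(1 - omega**2) * v))
        e1 = np.zeros(d); e1[0] = 1.0
        u = e1 - eps
        if np.linalg.norm(u) < 1e-12:                  # eps already equals e1
            return h1
        u /= np.linalg.norm(u)
        return (np.eye(d) - 2 * np.outer(u, u)) @ h1   # Householder map sending e1 to eps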
A.3 Algorithms for Computing Markovian Sliced Wasserstein Distances

We first start with the general computation of MSW in Algorithm 6. For the random walk transition in rMSW, we replace the line $\theta_{lt} \sim \sigma_t(\theta_t \mid \theta_{l(t-1)})$ by $\theta_{lt} \sim \text{vMF}(\theta_t \mid \epsilon = \theta_{l(t-1)}, \kappa)$ (Algorithm 5) with the concentration hyperparameter $\kappa$. For the orthogonal-based transition in oMSW, we use $\theta_{lt} \sim \mathcal{U}(\mathbb{S}^{d-1}_{\theta_{l(t-1)}})$ by first sampling $\theta'_{lt} \sim \mathcal{U}(\mathbb{S}^{d-1})$, then setting $\theta_{lt} = \theta'_{lt} - \frac{\langle \theta_{l(t-1)}, \theta'_{lt} \rangle}{\langle \theta_{l(t-1)}, \theta_{l(t-1)} \rangle} \theta_{l(t-1)}$, and then normalizing $\theta_{lt} = \theta_{lt} / \|\theta_{lt}\|_2$. For the deterministic input-awared transition in iMSW, we set $\theta_{lt} = \theta_{l(t-1)} + \eta \nabla_{\theta_{l(t-1)}} W_p(\theta_{l(t-1)}\sharp\mu, \theta_{l(t-1)}\sharp\nu)$ and then normalize $\theta_{lt} = \theta_{lt} / \|\theta_{lt}\|_2$. For the probabilistic input-awared transition in viMSW, $\theta_{lt} \sim \text{vMF}(\theta_t \mid \epsilon = \text{Proj}_{\mathbb{S}^{d-1}} \theta'_{lt}, \kappa)$ with $\theta'_{lt} = \theta_{l(t-1)} + \eta \nabla_{\theta_{l(t-1)}} W_p(\theta_{l(t-1)}\sharp\mu, \theta_{l(t-1)}\sharp\nu)$.

A.4 Burned Thinned Markovian Sliced Wasserstein Distance

We continue the discussion of the burned thinned MSW from Section 3.3. We first start with its Monte Carlo estimation.

Monte Carlo estimation: We sample $\theta_{11}, \ldots, \theta_{L1} \sim \sigma_1(\theta_1)$ for $L \geq 1$, then we sample $\theta_{lt} \sim \sigma_t(\theta_t \mid \theta_{l(t-1)})$ for $t = 2, \ldots, T$ and $l = 1, \ldots, L$. We then obtain the samples $\theta'_{lt}$ by filtering out $t < M$ and $t \% N \neq 0$ from the set $\{\theta_{lt}\}$ for $l = 1, \ldots, L$ and $t = 1, \ldots, T$.

Algorithm 6 Markovian sliced Wasserstein distance
Input: Probability measures $\mu, \nu$, the dimension d, the order p, the number of projections L, and the number of timesteps T.
  for l = 1 to L do
    Draw $\theta_{l0} \sim \sigma(\theta_0)$
    for t = 1 to T - 1 do
      Draw $\theta_{lt} \sim \sigma_t(\theta_t \mid \theta_{l(t-1)})$
    end for
  end for
Return: $\left( \frac{1}{LT} \sum_{l=1}^{L} \sum_{t=1}^{T} W_p^p(\theta_{lt}\sharp\mu, \theta_{lt}\sharp\nu) \right)^{1/p}$

The Monte Carlo approximation of the burned-thinned Markovian sliced Wasserstein distance is:
$$\widehat{\text{MSW}}_{p,T,N,M}(\mu, \nu) = \left( \frac{N}{L(T-M)} \sum_{l=1}^{L} \sum_{t=1}^{(T-M)/N} W_p^p\left(\theta'_{lt}\sharp\mu, \theta'_{lt}\sharp\nu\right) \right)^{\frac{1}{p}}. \qquad (9)$$
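To make Algorithm 6 and eq. (9) concrete, here is a minimal NumPy sketch of the burned-thinned estimator using the orthogonal-based (oMSW) transition. The implementation of the transition (projecting a fresh uniform draw orthogonally to the current direction and renormalizing) is one plausible reading of the description above, the simple average over the kept time indices stands in for the explicit N/(L(T-M)) factor, and all names and default values are illustrative choices rather than code from the original implementation.

    import numpy as np

    def w1d_pp(x, y, p=2):
        # Closed-form W_p^p in one dimension for equal-size empirical measures
        return np.mean(np.abs(np.sort(x) - np.sort(y)) ** p)

    def burned_thinned_msw(X, Y, L=2, T=10, M=2, N=2, p=2, rng=np.random.default_rng(0)):
        # Monte Carlo estimate in the spirit of eq. (9) with the oMSW transition:
        # each chain starts uniformly on S^{d-1}; the next direction is a fresh uniform
        # draw with its component along the current direction removed, then renormalized.
        d = X.shape[1]
        vals = []
        for _ in range(L):
            theta = rng.normal(size=d)
            theta /= np.linalg.norm(theta)
            for t in range(1, T + 1):
                if t > 1:
                    prop = rng.normal(size=d)
                    prop /= np.linalg.norm(prop)
                    theta = prop - np.dot(theta, prop) * theta
                    theta /= np.linalg.norm(theta)
                if t >= M and t % N == 0:              # burn-in M and thinning N
                    vals.append(w1d_pp(X @ theta, Y @ theta, p))
        return np.mean(vals) ** (1.0 / p)              # assumes M, N leave at least one index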
Theoretical properties: We first state the following assumption.

A2: Given $T > M \geq 0$ and $N \geq 1$, the prior distribution $\sigma_1(\theta_1)$ and the transition distribution $\sigma_t(\theta_t \mid \theta_{t-1})$ are chosen such that the marginals $\sigma_t(\theta_t) = \int \sigma(\theta_1, \ldots, \theta_T) \, d\theta_{t^-}$, with $t^- = \{t' = 1, \ldots, T \mid t' \neq t\}$, are supported on all of the unit hypersphere for $t \geq M$ and $t \% N = 0$.

The assumption A2 can easily be satisfied by using a vMF transition, e.g., in the random walk transition and the probabilistic input-awared transition. From this assumption, we can derive theoretical properties of the burned-thinned MSW, including topological properties and statistical complexity.

Proposition 4. For any $p \geq 1$, $T \geq 1$, $M \geq 0$, $N \geq 1$, and dimension $d \geq 1$, if A2 holds, the burned thinned Markovian sliced Wasserstein distance $\text{MSW}_{p,T,N,M}(\cdot, \cdot)$ is a valid metric on the space of probability measures $\mathcal{P}_p(\mathbb{R}^d)$; namely, it satisfies (i) non-negativity, (ii) symmetry, (iii) the triangle inequality, and (iv) identity.

The proof of Proposition 4 follows directly the proof of Theorem 1 in Appendix B.1.

Proposition 5 (Weak Convergence). For any $p \geq 1$, $T \geq 1$, $M \geq 0$, $N \geq 1$, and dimension $d \geq 1$, if A2 holds, convergence of probability measures in $\mathcal{P}_p(\mathbb{R}^d)$ under the burned thinned Markovian sliced Wasserstein distance $\text{MSW}_{p,T,N,M}(\cdot, \cdot)$ implies weak convergence of probability measures, and vice versa.

The proof of Proposition 5 follows directly the proof of Theorem 2 in Appendix B.2.

Proposition 6. For any $p \geq 1$ and dimension $d \geq 1$, for any $T \geq 1$, $M \geq 0$, $N \geq 1$ and $\mu, \nu \in \mathcal{P}_p(\mathbb{R}^d)$,
$$\text{MSW}_{p,T,N,M}(\mu, \nu) \leq \text{Max-SW}_p(\mu, \nu) \leq W_p(\mu, \nu).$$

The proof of Proposition 6 follows directly the proof of Proposition 1 in Appendix B.3.

Proposition 7 (Sample Complexity). Let $X_1, X_2, \ldots, X_n$ be i.i.d. samples from a probability measure $\mu$ supported on a compact set of $\mathbb{R}^d$. We denote the empirical measure $\mu_n = \frac{1}{n} \sum_{i=1}^{n} \delta_{X_i}$. Then, for any $p \geq 1$, $T \geq 1$, $M \geq 0$, $N \geq 1$, there exists a universal constant $C > 0$ such that
$$\mathbb{E}[\text{MSW}_{p,T,N,M}(\mu_n, \mu)] \leq C \sqrt{(d+1) \log n / n},$$
where the outer expectation is taken with respect to the data $X_1, X_2, \ldots, X_n$.

The proof of Proposition 7 follows directly the proof of Proposition 2 in Appendix B.4.

Proposition 8 (Monte Carlo error). For any $p \geq 1$, $T \geq 1$, $M \geq 0$, $N \geq 1$, dimension $d \geq 1$, and $\mu, \nu \in \mathcal{P}_p(\mathbb{R}^d)$, we have:
$$\mathbb{E}\left| \widehat{\text{MSW}}^p_{p,T,N,M}(\mu, \nu) - \text{MSW}^p_{p,T,N,M}(\mu, \nu) \right| \leq \frac{\sqrt{N}}{\sqrt{TL}(T-M)} \sum_{l=1}^{L} \text{Var}\left[ \sum_{t=1}^{(T-M)/N} W_p^p\left(\theta'_t\sharp\mu, \theta'_t\sharp\nu\right) \right]^{\frac{1}{2}},$$
where the variance is with respect to $\sigma(\theta'_1, \ldots, \theta'_{(T-M)/N})$.

The proof of Proposition 8 follows directly the proof of Proposition 3 in Appendix B.5.

A.5 Discussions on Related Works

K-SW is an autoregressive decomposition: In MSW, we assume that the joint distribution over projecting directions has the first-order Markov structure $\sigma(\theta_1, \ldots, \theta_T) = \sigma_1(\theta_1) \prod_{t=2}^{T} \sigma_t(\theta_t \mid \theta_{t-1})$. However, we can consider the full autoregressive decomposition $\sigma(\theta_1, \ldots, \theta_T) = \sigma_1(\theta_1) \prod_{t=2}^{T} \sigma_t(\theta_t \mid \theta_1, \ldots, \theta_{t-1})$. Let $T = K$ in K-SW; then the transition distribution used in K-SW is $\sigma_t(\theta_t \mid \theta_1, \ldots, \theta_{t-1}) = \text{Gram-Schmidt}_{\theta_1, \ldots, \theta_{t-1}} \sharp\, \mathcal{U}(\mathbb{S}^{d-1})$, where $\text{Gram-Schmidt}_{\theta_1, \ldots, \theta_{t-1}}(\theta_t)$ denotes the Gram–Schmidt update applied to $\theta_t$.

Generalization of Max-K-SW: Similar to Max-SW, we can derive a Markovian-based K-sliced Wasserstein distance that generalizes the idea of the projected gradient ascent update in Max-K-SW. However, such a distance considers the transition on the Stiefel manifold instead of the unit hypersphere; hence, it will be more computationally expensive. Moreover, orthogonality might not be a good constraint. Therefore, the generalization of Max-K-SW might not have many advantages.

Beyond the projected sub-gradient ascent update: In the input-awared transitions for MSW, we utilize the projected sub-gradient update as the transition function to create a new projecting direction. We could therefore use other optimization techniques, such as momentum and adaptive step sizes, to create the transition function. We leave the investigation of this direction to future work.

Applications to other sliced Wasserstein variants: The Markovian approach can be applied to other variants of sliced Wasserstein distances, e.g., generalized sliced Wasserstein [26], augmented sliced Wasserstein distance [10], projected robust Wasserstein (PRW) [50, 32, 22] ($k > 1$ dimensional projections), convolution sliced Wasserstein [43], sliced partial optimal transport [6, 2], hierarchical sliced Wasserstein [48], and so on.

Markovian sliced Wasserstein distances in other applications: We can apply MSW in the setting of [31], which is an implementation technique that utilizes both RAM and GPU memory for training sliced Wasserstein generative models.
MSW can also replace the sliced Wasserstein distance in pooling in [38]. Similarly, MSW can be used in applications where the sliced Wasserstein distance appears, e.g., clustering [28], Bayesian inference [39, 64], domain adaptation [63], and so on.

B Proofs

B.1 Proof of Theorem 1

(i), (ii): The MSW is an expectation of the one-dimensional Wasserstein distance; hence the non-negativity and symmetry of the MSW follow directly from the non-negativity and symmetry of the Wasserstein distance.

(iii) From the definition of MSW in Definition 1, given three probability measures $\mu_1, \mu_2, \mu_3 \in \mathcal{P}_p(\mathbb{R}^d)$, we have:
$$\text{MSW}_{p,T}(\mu_1, \mu_3) = \left( \mathbb{E}_{\theta_{1:T} \sim \sigma(\theta_{1:T})}\left[ \frac{1}{T} \sum_{t=1}^{T} W_p^p(\theta_t\sharp\mu_1, \theta_t\sharp\mu_3) \right] \right)^{\frac{1}{p}} \leq \left( \mathbb{E}_{\theta_{1:T} \sim \sigma(\theta_{1:T})}\left[ \frac{1}{T} \sum_{t=1}^{T} \big( W_p(\theta_t\sharp\mu_1, \theta_t\sharp\mu_2) + W_p(\theta_t\sharp\mu_2, \theta_t\sharp\mu_3) \big)^p \right] \right)^{\frac{1}{p}} \leq \left( \mathbb{E}_{\theta_{1:T} \sim \sigma(\theta_{1:T})}\left[ \frac{1}{T} \sum_{t=1}^{T} W_p^p(\theta_t\sharp\mu_1, \theta_t\sharp\mu_2) \right] \right)^{\frac{1}{p}} + \left( \mathbb{E}_{\theta_{1:T} \sim \sigma(\theta_{1:T})}\left[ \frac{1}{T} \sum_{t=1}^{T} W_p^p(\theta_t\sharp\mu_2, \theta_t\sharp\mu_3) \right] \right)^{\frac{1}{p}} = \text{MSW}_{p,T}(\mu_1, \mu_2) + \text{MSW}_{p,T}(\mu_2, \mu_3),$$
where the first inequality is due to the triangle inequality of the Wasserstein distance and the second inequality is due to the Minkowski inequality. This completes the triangle inequality proof.

(iv) We need to show that $\text{MSW}_{p,T}(\mu, \nu) = 0$ if and only if $\mu = \nu$. First, from the definition of MSW, $\mu = \nu$ directly implies $\text{MSW}_{p,T}(\mu, \nu) = 0$. For the reverse direction, we use the same proof technique as in [8]. If $\text{MSW}_{p,T}(\mu, \nu) = 0$, we have $\int_{(\mathbb{S}^{d-1})^{\otimes T}} \frac{1}{T} \sum_{t=1}^{T} W_p(\theta_t\sharp\mu, \theta_t\sharp\nu) \, d\sigma(\theta_{1:T}) = 0$. If A1 holds, namely, the prior distribution $\sigma_1(\theta_1)$ is supported on all of the unit hypersphere or there exists a transition distribution $\sigma_t(\theta_t \mid \theta_{t-1})$ supported on all of the unit hypersphere, we have $W_p(\theta\sharp\mu, \theta\sharp\nu) = 0$ for $\sigma$-a.e. $\theta \in \mathbb{S}^{d-1}$, where $\sigma$ denotes the prior or the transition distribution that satisfies assumption A1. From the identity property of the Wasserstein distance, we obtain $\theta\sharp\mu = \theta\sharp\nu$ for $\sigma$-a.e. $\theta \in \mathbb{S}^{d-1}$. Therefore, for any $t \in \mathbb{R}$ and $\theta \in \mathbb{S}^{d-1}$, we have:
$$\mathcal{F}[\mu](t\theta) = \int_{\mathbb{R}^d} e^{-it\langle\theta, x\rangle} d\mu(x) = \int_{\mathbb{R}} e^{-itz} d\theta\sharp\mu(z) = \mathcal{F}[\theta\sharp\mu](t) = \mathcal{F}[\theta\sharp\nu](t) = \int_{\mathbb{R}} e^{-itz} d\theta\sharp\nu(z) = \int_{\mathbb{R}^d} e^{-it\langle\theta, x\rangle} d\nu(x) = \mathcal{F}[\nu](t\theta),$$
where $\mathcal{F}[\gamma](w) = \int_{\mathbb{R}^{d'}} e^{-i\langle w, x\rangle} d\gamma(x)$ denotes the Fourier transform of $\gamma \in \mathcal{P}(\mathbb{R}^{d'})$. By the injectivity of the Fourier transform, we obtain $\mu = \nu$, which concludes the proof.
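As an informal numerical illustration of the symmetry and triangle-inequality claims above (not a substitute for the proof), the following sketch evaluates a Monte Carlo estimate of MSW with one shared, fixed set of projecting directions for three empirical measures; the argument only needs that the same directions are reused across pairs, so the specific Markovian transition is abstracted away here, and all names and toy data are ours.

    import numpy as np

    def msw_pp_fixed(X, Y, thetas, p=2):
        # MSW_p^p estimate with a fixed, shared set of projecting directions (rows of thetas)
        return np.mean([np.mean(np.abs(np.sort(X @ th) - np.sort(Y @ th)) ** p) for th in thetas])

    rng = np.random.default_rng(0)
    d, n = 2, 128
    mu1, mu2, mu3 = (rng.normal(size=(n, d)) + shift for shift in (0.0, 1.0, 3.0))
    thetas = rng.normal(size=(20, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)

    msw = lambda A, B: msw_pp_fixed(A, B, thetas) ** 0.5
    assert abs(msw(mu1, mu2) - msw(mu2, mu1)) < 1e-12              # symmetry
    assert msw(mu1, mu3) <= msw(mu1, mu2) + msw(mu2, mu3) + 1e-12  # triangle inequality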
B.2 Proof of Theorem 2

Our goal is to show that for any sequence of probability measures $(\mu_k)_{k\in\mathbb{N}}$ and $\mu$ in $\mathcal{P}_p(\mathbb{R}^d)$, $\lim_{k\to+\infty} \text{MSW}_{p,T}(\mu_k, \mu) = 0$ if and only if for any continuous and bounded function $f: \mathbb{R}^d \to \mathbb{R}$, $\lim_{k\to+\infty} \int f \, d\mu_k = \int f \, d\mu$. The proof follows the techniques in [41]. We first state the following lemma.

Lemma 1. For any $p \geq 1$, $T \geq 1$, and dimension $d \geq 1$, if A1 holds and a sequence of probability measures $(\mu_k)_{k\in\mathbb{N}}$ satisfies $\lim_{k\to+\infty} \text{MSW}_{p,T}(\mu_k, \mu) = 0$ with $\mu$ in $\mathcal{P}_p(\mathbb{R}^d)$, then there exists an increasing function $\varphi: \mathbb{N} \to \mathbb{N}$ such that the subsequence $(\mu_{\varphi(k)})_{k\in\mathbb{N}}$ converges weakly to $\mu$.

Proof. We are given that $\lim_{k\to+\infty} \text{MSW}_{p,T}(\mu_k, \mu) = 0$; therefore
$$\lim_{k\to\infty} \int_{(\mathbb{S}^{d-1})^{\otimes T}} \frac{1}{T} \sum_{t=1}^{T} W_p(\theta_t\sharp\mu_k, \theta_t\sharp\mu) \, d\sigma(\theta_{1:T}) = 0.$$
If A1 holds, namely, the prior distribution $\sigma_1(\theta_1)$ is supported on all of the unit hypersphere or there exists a transition distribution $\sigma_t(\theta_t \mid \theta_{t-1})$ supported on all of the unit hypersphere, we have
$$\lim_{k\to\infty} \int_{\mathbb{S}^{d-1}} W_p(\theta\sharp\mu_k, \theta\sharp\mu) \, d\sigma(\theta) = 0,$$
where $\sigma$ denotes the prior or the transition distribution that satisfies assumption A1. From Theorem 2.2.5 in [3], there exists an increasing function $\varphi: \mathbb{N} \to \mathbb{N}$ such that $\lim_{k\to\infty} W_p(\theta\sharp\mu_{\varphi(k)}, \theta\sharp\mu) = 0$ for $\sigma$-a.e. $\theta \in \mathbb{S}^{d-1}$. Since convergence in the Wasserstein distance of order $p$ implies weak convergence in $\mathcal{P}_p(\mathbb{R}^d)$ [61], $(\theta\sharp\mu_{\varphi(k)})_{k\in\mathbb{N}}$ converges weakly to $\theta\sharp\mu$ for $\sigma$-a.e. $\theta \in \mathbb{S}^{d-1}$. Let $\Phi_\mu(v) = \int_{\mathbb{R}^d} e^{i\langle v, w\rangle} d\mu(w)$ be the characteristic function of $\mu \in \mathcal{P}_p(\mathbb{R}^d)$. Weak convergence implies convergence of the characteristic functions (Theorem 4.3 in [24]): $\lim_{k\to\infty} \Phi_{\theta\sharp\mu_{\varphi(k)}}(s) = \Phi_{\theta\sharp\mu}(s)$ for all $s \in \mathbb{R}$, for $\sigma$-a.e. $\theta \in \mathbb{S}^{d-1}$.
Therefore, $\lim_{k\to\infty} \Phi_{\mu_{\varphi(k)}}(z) = \Phi_\mu(z)$ for almost every $z \in \mathbb{R}^d$. For any $\gamma > 0$ and any continuous function $f: \mathbb{R}^d \to \mathbb{R}$ with compact support, we denote $f_\gamma(x) = f * g_\gamma(x) = (2\pi\gamma^2)^{-d/2} \int_{\mathbb{R}^d} f(x - z) \exp\left(-\|z\|^2 / (2\gamma^2)\right) dz$, where $g_\gamma$ is the density function of $\mathcal{N}(0, \gamma I_d)$. We have:
$$\int_{\mathbb{R}^d} f_\gamma(z) \, d\mu_{\varphi(k)}(z) = \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} f(w) g_\gamma(z - w) \, dw \, d\mu_{\varphi(k)}(z) = \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} f(w) (2\pi\gamma^2)^{-d/2} \exp\left(-\|z - w\|^2 / (2\gamma^2)\right) dw \, d\mu_{\varphi(k)}(z) = (2\pi\gamma^2)^{-d/2} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} f(w) \int_{\mathbb{R}^d} e^{i\langle z - w, x\rangle} g_{1/\gamma}(x) \, dx \, dw \, d\mu_{\varphi(k)}(z) = (2\pi\gamma^2)^{-d/2} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} f(w) \int_{\mathbb{R}^d} e^{-i\langle w, x\rangle} e^{i\langle z, x\rangle} g_{1/\gamma}(x) \, dx \, dw \, d\mu_{\varphi(k)}(z) = (2\pi\gamma^2)^{-d/2} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} f(w) e^{-i\langle w, x\rangle} g_{1/\gamma}(x) \int_{\mathbb{R}^d} e^{i\langle z, x\rangle} \, d\mu_{\varphi(k)}(z) \, dx \, dw = (2\pi\gamma^2)^{-d/2} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} f(w) e^{-i\langle w, x\rangle} g_{1/\gamma}(x) \Phi_{\mu_{\varphi(k)}}(x) \, dx \, dw = (2\pi\gamma^2)^{-d/2} \int_{\mathbb{R}^d} \mathcal{F}[f](x) g_{1/\gamma}(x) \Phi_{\mu_{\varphi(k)}}(x) \, dx,$$
where the third equality is due to the fact that $\int_{\mathbb{R}^d} e^{i\langle z - w, x\rangle} g_{1/\gamma}(x) \, dx = \exp\left(-\|z - w\|^2 / (2\gamma^2)\right)$ and $\mathcal{F}[f](w) = \int_{\mathbb{R}^{d'}} f(x) e^{-i\langle w, x\rangle} dx$ denotes the Fourier transform of the bounded function $f$. Similarly, the same chain of equalities gives
$$\int_{\mathbb{R}^d} f_\gamma(z) \, d\mu(z) = (2\pi\gamma^2)^{-d/2} \int_{\mathbb{R}^d} \mathcal{F}[f](x) g_{1/\gamma}(x) \Phi_\mu(x) \, dx.$$
Since $f$ is assumed to have compact support, $\mathcal{F}[f]$ exists and is bounded by $\int_{\mathbb{R}^d} |f(w)| \, dw < +\infty$.
Hence, for any $k \in \mathbb{N}$ and $x \in \mathbb{R}^d$, we have
$$\left| \mathcal{F}[f](x) g_{1/\gamma}(x) \Phi_{\mu_{\varphi(k)}}(x) \right| \leq g_{1/\gamma}(x) \int_{\mathbb{R}^d} |f(w)| \, dw \quad \text{and} \quad \left| \mathcal{F}[f](x) g_{1/\gamma}(x) \Phi_\mu(x) \right| \leq g_{1/\gamma}(x) \int_{\mathbb{R}^d} |f(w)| \, dw.$$
Using the proved result that $\lim_{k\to\infty} \Phi_{\mu_{\varphi(k)}}(z) = \Phi_\mu(z)$ and Lebesgue's dominated convergence theorem, we obtain
$$\lim_{k\to\infty} \int_{\mathbb{R}^d} f_\gamma(z) \, d\mu_{\varphi(k)}(z) = \lim_{k\to\infty} (2\pi\gamma^2)^{-d/2} \int_{\mathbb{R}^d} \mathcal{F}[f](x) g_{1/\gamma}(x) \Phi_{\mu_{\varphi(k)}}(x) \, dx = (2\pi\gamma^2)^{-d/2} \int_{\mathbb{R}^d} \mathcal{F}[f](x) g_{1/\gamma}(x) \Phi_\mu(x) \, dx = \int_{\mathbb{R}^d} f_\gamma(z) \, d\mu(z).$$
Moreover, we have:
$$\lim_{\gamma\to 0} \limsup_{k\to+\infty} \left| \int_{\mathbb{R}^d} f(z) \, d\mu_{\varphi(k)}(z) - \int_{\mathbb{R}^d} f(z) \, d\mu(z) \right| \leq \lim_{\gamma\to 0} \limsup_{k\to+\infty} \left( 2 \sup_{z\in\mathbb{R}^d} |f(z) - f_\gamma(z)| + \left| \int_{\mathbb{R}^d} f_\gamma(z) \, d\mu_{\varphi(k)}(z) - \int_{\mathbb{R}^d} f_\gamma(z) \, d\mu(z) \right| \right) = \lim_{\gamma\to 0} 2 \sup_{z\in\mathbb{R}^d} |f(z) - f_\gamma(z)| = 0,$$
which implies that $(\mu_{\varphi(k)})_{k\in\mathbb{N}}$ converges weakly to $\mu$.

We now continue the proof of Theorem 2. We first show that if $\lim_{k\to\infty} \text{MSW}_{p,T}(\mu_k, \mu) = 0$, then $(\mu_k)_{k\in\mathbb{N}}$ converges weakly to $\mu$. We consider a sequence $(\mu_k)_{k\in\mathbb{N}}$ such that $\lim_{k\to\infty} \text{MSW}_{p,T}(\mu_k, \mu) = 0$ and we suppose that $(\mu_k)_{k\in\mathbb{N}}$ does not converge weakly to $\mu$. Then, letting $d_P$ be the Lévy–Prokhorov metric, $\lim_{k\to\infty} d_P(\mu_k, \mu) \neq 0$, which implies that there exist $\varepsilon > 0$ and a subsequence $(\mu_{\psi(k)})_{k\in\mathbb{N}}$, with an increasing function $\psi: \mathbb{N} \to \mathbb{N}$, such that $d_P(\mu_{\psi(k)}, \mu) \geq \varepsilon$ for any $k \in \mathbb{N}$. However, we have
$$\text{MSW}_{p,T}(\mu, \nu) = \left( \mathbb{E}_{\theta_{1:T} \sim \sigma(\theta_{1:T})}\left[ \frac{1}{T} \sum_{t=1}^{T} W_p^p(\theta_t\sharp\mu, \theta_t\sharp\nu) \right] \right)^{\frac{1}{p}} \geq \mathbb{E}_{\theta_{1:T} \sim \sigma(\theta_{1:T})}\left[ \frac{1}{T} \sum_{t=1}^{T} W_p(\theta_t\sharp\mu, \theta_t\sharp\nu) \right] \geq \mathbb{E}_{\theta_{1:T} \sim \sigma(\theta_{1:T})}\left[ \frac{1}{T} \sum_{t=1}^{T} W_1(\theta_t\sharp\mu, \theta_t\sharp\nu) \right] = \text{MSW}_{1,T}(\mu, \nu),$$
by Hölder's inequality, for $\mu, \nu \in \mathcal{P}_p(\mathbb{R}^d)$. Therefore, $\lim_{k\to\infty} \text{MSW}_{1,T}(\mu_{\psi(k)}, \mu) = 0$, which implies that there exists a subsequence $(\mu_{\varphi(\psi(k))})_{k\in\mathbb{N}}$, with an increasing function $\varphi: \mathbb{N} \to \mathbb{N}$, such that $(\mu_{\varphi(\psi(k))})_{k\in\mathbb{N}}$ converges weakly to $\mu$ by Lemma 1. Hence, $\lim_{k\to\infty} d_P(\mu_{\varphi(\psi(k))}, \mu) = 0$, which contradicts our assumption. We conclude that if $\lim_{k\to\infty} \text{MSW}_{p,T}(\mu_k, \mu) = 0$, then $(\mu_k)_{k\in\mathbb{N}}$ converges weakly to $\mu$.

Now, we show that if $(\mu_k)_{k\in\mathbb{N}}$ converges weakly to $\mu$, then $\lim_{k\to\infty} \text{MSW}_{p,T}(\mu_k, \mu) = 0$.
By the continuous mapping theorem, $(\theta\sharp\mu_k)_{k\in\mathbb{N}}$ converges weakly to $\theta\sharp\mu$ for any $\theta \in \mathbb{S}^{d-1}$. Since weak convergence implies convergence under the Wasserstein distance [61], we obtain $\lim_{k\to\infty} W_p(\theta\sharp\mu_k, \theta\sharp\mu) = 0$. Moreover, the Wasserstein distance is also bounded; hence, by the bounded convergence theorem:
$$\lim_{k\to\infty} \text{MSW}^p_{p,T}(\mu_k, \mu) = \mathbb{E}_{\theta_{1:T} \sim \sigma(\theta_{1:T})}\left[ \frac{1}{T} \sum_{t=1}^{T} \lim_{k\to\infty} W_p^p(\theta_t\sharp\mu_k, \theta_t\sharp\mu) \right] = \mathbb{E}_{\theta_{1:T} \sim \sigma(\theta_{1:T})}\left[ \frac{1}{T} \sum_{t=1}^{T} 0 \right] = 0.$$
By the continuous mapping theorem with the function $x \to x^{1/p}$, we obtain $\lim_{k\to\infty} \text{MSW}_{p,T}(\mu_k, \mu) = 0$, which completes the proof.

B.3 Proof of Proposition 1

(i) We recall the definition of Max-SW: $\text{Max-SW}_p(\mu, \nu) = \max_{\theta\in\mathbb{S}^{d-1}} W_p(\theta\sharp\mu, \theta\sharp\nu)$. Let $\theta^* = \text{argmax}_{\theta\in\mathbb{S}^{d-1}} W_p(\theta\sharp\mu, \theta\sharp\nu)$. From Definition 1, for any $p \geq 1$, $T \geq 1$, dimension $d \geq 1$, and $\mu, \nu \in \mathcal{P}_p(\mathbb{R}^d)$, we have:
$$\text{MSW}_{p,T}(\mu, \nu) = \left( \mathbb{E}_{\theta_{1:T} \sim \sigma(\theta_{1:T})}\left[ \frac{1}{T} \sum_{t=1}^{T} W_p^p(\theta_t\sharp\mu, \theta_t\sharp\nu) \right] \right)^{\frac{1}{p}} \leq \left( \frac{1}{T} \sum_{t=1}^{T} W_p^p(\theta^*\sharp\mu, \theta^*\sharp\nu) \right)^{\frac{1}{p}} = W_p(\theta^*\sharp\mu, \theta^*\sharp\nu) = \text{Max-SW}_p(\mu, \nu).$$
Furthermore, by applying the Cauchy–Schwarz inequality, we have:
$$\text{Max-SW}_p^p(\mu, \nu) = \max_{\theta\in\mathbb{S}^{d-1}} \left( \inf_{\pi\in\Pi(\mu,\nu)} \int_{\mathbb{R}^d\times\mathbb{R}^d} \left| \theta^\top x - \theta^\top y \right|^p d\pi(x, y) \right) \leq \max_{\theta\in\mathbb{S}^{d-1}} \left( \inf_{\pi\in\Pi(\mu,\nu)} \int_{\mathbb{R}^d\times\mathbb{R}^d} \|\theta\|^p \|x - y\|^p d\pi(x, y) \right) = \inf_{\pi\in\Pi(\mu,\nu)} \int_{\mathbb{R}^d\times\mathbb{R}^d} \|\theta\|^p \|x - y\|^p d\pi(x, y) = \inf_{\pi\in\Pi(\mu,\nu)} \int_{\mathbb{R}^d\times\mathbb{R}^d} \|x - y\|^p d\pi(x, y) = W_p^p(\mu, \nu),$$
which completes the proof.

(ii) This result can be obtained directly from the definitions of MSW and SW.
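As a quick numerical illustration of the inequality chain just proven (and of Proposition 6), the following sketch checks that a Monte Carlo estimate of the projected distance does not exceed the exact Wasserstein distance on toy point clouds. The inner Max-SW optimization is skipped for simplicity; the exact $W_2$ is obtained with POT's ot.dist and ot.emd2, which is an assumed dependency, and all names and data are ours.

    import numpy as np
    import ot   # POT (Python Optimal Transport), assumed available

    rng = np.random.default_rng(0)
    n, d = 200, 5
    X = rng.normal(size=(n, d))
    Y = rng.normal(size=(n, d)) + 1.0

    # Exact W_2 between the two uniform empirical measures
    a = np.full(n, 1.0 / n)
    W2 = np.sqrt(ot.emd2(a, a, ot.dist(X, Y)))   # ot.dist uses squared Euclidean costs by default

    # Monte Carlo estimate of the sliced distance with i.i.d. uniform directions
    thetas = rng.normal(size=(100, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    msw2 = np.sqrt(np.mean([(np.sort(X @ th) - np.sort(Y @ th)) ** 2 for th in thetas]))

    assert msw2 <= W2 + 1e-8   # each one-dimensional projection can only shrink the distance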
B.4 Proof of Proposition 2

In this proof, we denote by $\Theta \subset \mathbb{R}^d$ the compact support set of the probability measure $P$. From Proposition 1, we find that $\mathbb{E}[\text{MSW}_{p,T}(\mu_n, \mu)] \leq \mathbb{E}[\text{Max-SW}_p(\mu_n, \mu)]$. Therefore, the proposition follows as long as we can demonstrate that $\mathbb{E}[\text{Max-SW}_p(\mu_n, \mu)] \leq C \sqrt{(d+1) \log_2 n / n}$, where $C > 0$ is some universal constant and the outer expectation is taken with respect to the data. The proof of this result follows the proof of Proposition 3 in [43]; here, we provide it for completeness. Defining $F_{n,\theta}$ and $F_\theta$ as the cumulative distribution functions of $\theta\sharp\mu_n$ and $\theta\sharp\mu$, the closed-form expression of the Wasserstein distance in one dimension leads to the following equations and inequalities:
$$\text{Max-SW}_p^p(\mu_n, \mu) = \max_{\theta\in\mathbb{S}^{d-1}} \int_0^1 \left| F_{n,\theta}^{-1}(u) - F_\theta^{-1}(u) \right|^p du = \max_{\theta\in\mathbb{R}^d: \|\theta\|=1} \int_0^1 \left| F_{n,\theta}^{-1}(u) - F_\theta^{-1}(u) \right|^p du \leq \text{diam}(\Theta) \max_{\theta\in\mathbb{R}^d: \|\theta\|\leq 1} \sup_{x\in\mathbb{R}} \left| F_{n,\theta}(x) - F_\theta(x) \right|^p.$$
We can check that
$$\max_{\theta\in\mathbb{R}^d: \|\theta\|\leq 1} \sup_{x\in\mathbb{R}} \left| F_{n,\theta}(x) - F_\theta(x) \right| = \sup_{B\in\mathcal{B}} |P_n(B) - P(B)|,$$
where $\mathcal{B}$ is the set of half-spaces $\{z \in \mathbb{R}^d : \theta^\top z \leq x\}$ over all $\theta \in \mathbb{R}^d$ with $\|\theta\| \leq 1$ and all $x \in \mathbb{R}$. From [62], we can show that the Vapnik–Chervonenkis (VC) dimension of $\mathcal{B}$ is at most $d + 1$. Therefore, the following inequality holds with probability at least $1 - \delta$:
$$\sup_{B\in\mathcal{B}} |P_n(B) - P(B)| \leq \sqrt{\frac{32}{n} \left[ (d+1) \log_2(n+1) + \log_2(8/\delta) \right]}.$$
Putting the above results together leads to $\mathbb{E}[\text{Max-SW}_p(\mu_n, \mu)] \leq C \sqrt{(d+1) \log_2 n / n}$, where $C > 0$ is some universal constant. As a consequence, we obtain the conclusion of the proposition.

B.5 Proof of Proposition 3

For any $p \geq 1$, $T \geq 1$, dimension $d \geq 1$, and $\mu, \nu \in \mathcal{P}_p(\mathbb{R}^d)$, using Hölder's inequality, we have:
$$\mathbb{E}\left| \widehat{\text{MSW}}^p_{p,T}(\mu, \nu) - \text{MSW}^p_{p,T}(\mu, \nu) \right| \leq \left( \mathbb{E}\left| \widehat{\text{MSW}}^p_{p,T}(\mu, \nu) - \text{MSW}^p_{p,T}(\mu, \nu) \right|^2 \right)^{\frac{1}{2}} = \left( \mathbb{E}\left| \frac{1}{TL} \sum_{t=1}^{T} \sum_{l=1}^{L} W_p^p(\theta_{tl}\sharp\mu, \theta_{tl}\sharp\nu) - \mathbb{E}_{\theta_{1:T} \sim \sigma(\theta_{1:T})}\left[ \frac{1}{T} \sum_{t=1}^{T} W_p^p(\theta_t\sharp\mu, \theta_t\sharp\nu) \right] \right|^2 \right)^{\frac{1}{2}} = \left( \text{Var}\left[ \frac{1}{TL} \sum_{t=1}^{T} \sum_{l=1}^{L} W_p^p(\theta_{tl}\sharp\mu, \theta_{tl}\sharp\nu) \right] \right)^{\frac{1}{2}} = \frac{1}{\sqrt{TL}} \sum_{l=1}^{L} \text{Var}\left[ \sum_{t=1}^{T} W_p^p(\theta_t\sharp\mu, \theta_t\sharp\nu) \right]^{\frac{1}{2}},$$
which completes the proof.

Algorithm 7 Gradient flow with the Euler scheme
Input: the start distribution $\mu = \frac{1}{n} \sum_{i=1}^{n} \delta_{X_i}$, the target distribution $\nu = \frac{1}{n} \sum_{i=1}^{n} \delta_{Y_i}$, the number of Euler iterations T (abuse of notation), the Euler step size $\eta$ (abuse of notation), and a metric D.
  for t = 1 to T do
    $X = X - n \cdot \eta \nabla_X D(P_X, P_Y)$
  end for
Output: $\mu = \frac{1}{n} \sum_{i=1}^{n} \delta_{X_i}$
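As a concrete companion to Algorithm 7, here is a minimal PyTorch sketch of the Euler scheme, taking D to be a sliced Wasserstein loss (SW_2 raised to the power 2) as one illustrative choice of metric; the function names, the number of projections, the step size, the toy data, and the assumption that the source and target have the same number of particles (so the sorted one-dimensional computation applies) are ours.

    import torch

    def sliced_wasserstein_pp(X, Y, n_proj=50, p=2):
        # Monte Carlo estimate of SW_p^p between the empirical measures P_X and P_Y
        d = X.shape[1]
        theta = torch.randn(n_proj, d)
        theta = theta / theta.norm(dim=1, keepdim=True)   # random directions on S^{d-1}
        x_proj, _ = torch.sort(X @ theta.T, dim=0)        # 1D W_p via sorted projections
        y_proj, _ = torch.sort(Y @ theta.T, dim=0)        # (X and Y must have the same size n)
        return ((x_proj - y_proj).abs() ** p).mean()

    def euler_gradient_flow(X0, Y, steps=300, eta=1e-3):
        # Algorithm 7: X <- X - n * eta * grad_X D(P_X, P_Y), here with D = SW_2^2
        X = X0.clone().requires_grad_(True)
        n = X.shape[0]
        for _ in range(steps):
            loss = sliced_wasserstein_pp(X, Y)
            grad, = torch.autograd.grad(loss, X)
            with torch.no_grad():
                X -= n * eta * grad
        return X.detach()

    # Example: flow a standard Gaussian blob toward a shifted target
    Y = torch.randn(256, 2) + torch.tensor([4.0, 0.0])
    X_final = euler_gradient_flow(torch.randn(256, 2), Y)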
Table 3: Summary of Wasserstein-2 scores and computational time in seconds (s) of different distances in the gradient flow application.

Distance                       Wasserstein-2 (↓)   Time in s (↓)
SW (L=10)                      0.0113 × 10⁻²       0.85
SW (L=100)                     0.0096 × 10⁻²       4.32
Max-SW (T=5)                   0.0231 × 10⁻²       1.02
Max-SW (T=100)                 0.0083 × 10⁻²       10.46
K-SW (L=5, K=2)                0.0104 × 10⁻²       0.92
K-SW (L=20, K=2)               0.0096 × 10⁻²       1.97
Max-K-SW (K=2, T=5)            0.0152 × 10⁻²       1.41
Max-K-SW (K=2, T=100)          0.0083 × 10⁻²       10.46
rMSW (L=2, T=5, κ=10)          0.0109 × 10⁻²       2.11
rMSW (L=2, T=5, κ=100)         0.0141 × 10⁻²       17.98
iMSW (L=1, T=5)                0.0109 × 10⁻²       1.07
iMSW (L=5, T=5)                0.0055 × 10⁻²       2.44
iMSW (L=2, T=10)               0.0052 × 10⁻²       2.79
iMSW (L=5, T=2)                0.0071 × 10⁻²       1.14
iMSW (L=2, T=5, M=4)           0.0101 × 10⁻²       1.2
iMSW (L=2, T=5, M=2)           0.0055 × 10⁻²       1.25
iMSW (L=2, T=5, M=0, N=2)      0.0066 × 10⁻²       1.28
iMSW (L=2, T=5, M=2, N=2)      0.0072 × 10⁻²       1.19
viMSW (L=2, T=5, κ=10)         0.0052 × 10⁻²       3.12
viMSW (L=2, T=5, κ=100)        0.0053 × 10⁻²       2.76

C Additional Experiments

In this section, we present the details of the experimental frameworks and additional experiments on gradient flows, color transfer, and deep generative modeling which are not in the main paper.
C Additional Experiments
In this section, we present the details of the experimental frameworks and additional experiments on gradient flows, color transfer, and deep generative modeling which are not in the main paper.
C.1 Gradient Flows
Framework: We have discussed the framework of gradient flow in detail in Section 4.1 of the main paper. Here, we summarize the Euler scheme for solving the gradient flow in Algorithm 7.
Visualization of gradient flows: We show the visualization of gradient flows from all distances (Table 1) in Figure 5. Overall, we observe that the quality of the flows is consistent with the quantitative Wasserstein-2 score, which is computed using [18]. From the figures, we see that iMSW and viMSW help the flows converge very fast. Namely, the Wasserstein-2 scores of iMSW and viMSW at step 200 are much lower than those of the other distances. For oMSW, with L = 5, T = 2, it achieves a comparable result to SW, K-SW, and Max-SW while being faster. The random walk transition does not work well in rMSW with the concentration parameter κ = 50.
Figure 5: The figures show the gradient flows from the empirical distribution over the color points to the empirical distribution over the S-shape points produced by different distances (panels: SW L=30, Max-SW T=30, K-SW L=15 K=2, Max-K-SW K=2 T=15, rMSW L=2 T=5 κ=50, oMSW L=5 T=2, iMSW L=2 T=5, viMSW L=2 T=5 κ=50; each shown at steps 0, 200, and 300). The corresponding Wasserstein-2 distance between the empirical distribution at the current step and the S-shape distribution and the computational time (in seconds) to reach the step are reported at the top of the figure.
Studies on hyper-parameters: We run gradient flows with different values of the hyper-parameters and report the Wasserstein-2 scores and computational time in Table 3. From the table and Figure 5, we see that SW with L = 10 is worse than oMSW, iMSW, and viMSW with L = 2, T = 5 (10 total projections). Increasing the number of projections to 100, SW gets better; however, its Wasserstein-2 score is still higher than the scores of iMSW and viMSW while its computational time is larger. Similarly, Max-(K)-SW with T = 100 is better than Max-(K)-SW with T = 5 and T = 10; however, it is still worse than iMSW and viMSW in terms of computation and performance. For burning and thinning, we see that the technique can help improve the computation considerably. More importantly, the burning and thinning techniques do not reduce the performance too much. For iMSW, increasing L and T leads to a better flow. For the same number of total projections, e.g., 10, L = 2, T = 5 is better than L = 5, T = 2. For viMSW, it usually performs better than iMSW; however, its computation is worse due to the sampling complexity of the vMF distribution. We vary the concentration parameter κ ∈ {10, 50, 100} and find that κ = 50 is the best. Hence, it might suggest that a good balance between heading to the "max" projecting direction and exploring the space of projecting directions is the best strategy.
C.2 Color Transfer
Framework: In our experiments, we first compress the color palettes of the source image and the target image to 3000 colors by using K-Means clustering.
After that, the color transfer application is conducted by using Algorithm 8, which is a modified version of the gradient flow algorithm, since the color palette contains only positive integers in {0, ..., 255}. The flow can be seen as an incomplete transportation map that maps from the source color palette to a color palette that is close to the target color palette. This is quite similar to the iterative distribution transfer algorithm [8]; however, the construction of the iterative map is different.
Algorithm 8 Color Transfer
Input: source color palette X ∈ {0, ..., 255}^{n×3}, target color palette Y ∈ {0, ..., 255}^{n×3}, number of Euler iterations T (abuse of notation), Euler step size η (abuse of notation), a metric D.
for t = 1 to T do
  X = X − n · η ∇_X D(P_X, P_Y)
end for
X = round(X, {0, ..., 255})
Output: X
Figure 6: The figures show the source image, the target image, and the transferred images from different distances (panels: SW L=45, Max-SW T=45, K-SW L=15 K=3, Max-K-SW K=3 T=15, rMSW L=3 T=5 κ=50, oMSW L=3 T=5, iMSW L=3 T=5, viMSW L=3 T=5 κ=50; the corresponding Wasserstein-2 scores and times appear in Table 4). The corresponding Wasserstein-2 distance between the empirical distribution over the transferred color palette and the empirical distribution over the target color palette and the computational time (in seconds) are reported at the top of the figure. The color palettes are given below the corresponding images.
Visualization of transferred images: We show the source image, the target image, and the corresponding transferred images from the distances in Figure 6 and Figure 7. The color palettes are given below the corresponding images. The corresponding Wasserstein-2 distance between the empirical distribution over the transferred color palette and the empirical distribution over the target color palette and the computational time (in seconds) are reported at the top of each figure.
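As a companion to Algorithm 8 above, here is a small illustrative sketch (again ours, with assumed iteration count and step size) of the integer-constrained variant; it reuses the sliced_wasserstein helper and the flow step from the earlier sketch, and the only changes are that the flow runs on the compressed color palettes and the result is rounded back to valid 8-bit values.

def color_transfer(source_palette, target_palette, T=2000, eta=1e-4):
    # Algorithm 8 in sketch form: run the same Euler flow on the source
    # palette (n x 3 colors), then round and clip so the output is a valid
    # palette in {0, ..., 255}.
    X = source_palette.float().requires_grad_(True)
    Y = target_palette.float()
    n = X.shape[0]
    for _ in range(T):
        loss = sliced_wasserstein(X, Y)
        grad, = torch.autograd.grad(loss, X)
        with torch.no_grad():
            X -= n * eta * grad
    return X.detach().round().clamp(0, 255)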
Figure 7: The figures show the source image, the target image, and the transferred images from different distances (SW L=45: 38.0s, W2 = 68.09; Max-SW T=45: 58.17s, W2 = 207.12; K-SW L=15,K=3: 38.34s, W2 = 67.88; Max-K-SW K=3,T=15: 52.72s, W2 = 65.52; rMSW L=3,T=5,κ=50: 15.63s, W2 = 69.4; oMSW L=3,T=5: 13.48s, W2 = 68.51; iMSW L=3,T=5: 25.56s, W2 = 22.35; viMSW L=3,T=5,κ=50: 28.42s, W2 = 22.1). The color palettes are given below the corresponding images.
First, we observe that the qualitative comparison (transferred images and color palettes) is consistent with the Wasserstein scores. We observe that iMSW and viMSW produce transferred images that are closer to the target image in terms of color than the other distances. More importantly, iMSW and viMSW are faster than the other distances. Max-SW and Max-K-SW do not perform well in this application; namely, they are slow and give high Wasserstein distances. For oMSW, it is comparable to SW and K-SW while being faster.
Studies on hyper-parameters: In addition to the results in Figure 6, we run color transfer with other settings of the distances and report them in Table 4.
Table 4: Summary of Wasserstein-2 scores and computational time in seconds (s) of different distances in the color transfer application.
Distances | Wasserstein-2 (↓) | Time (↓)
SW (L=45) | 414.51 | 37.97
SW (L=15) | 421.5 | 12.96
Max-SW (T=45) | 449.42 | 57.48
Max-SW (T=15) | 450.37 | 19.03
K-SW (L=15,K=3) | 411.74 | 38.21
K-SW (L=5,K=3) | 413.16 | 14.2
Max-K-SW (K=3,T=15) | 479.43 | 52.6
Max-K-SW (K=3,T=5) | 510.43 | 17.46
rMSW (L=3,T=5,κ=50) | 444.35 | 15.65
rMSW (L=3,T=5,κ=100) | 446.35 | 16.14
oMSW (L=3,T=5) | 415.06 | 14.17
oMSW (L=3,T=15) | 414.29 | 38.51
iMSW (L=3,T=5) | 16.97 | 25.39
iMSW (L=3,T=15) | 15.23 | 79.47
iMSW (L=5,T=5) | 21.63 | 39.82
iMSW (L=5,T=3) | 24.02 | 22.27
iMSW (L=3,T=15,M=14) | 26.23 | 48.08
iMSW (L=3,T=15,M=10) | 18.67 | 55.55
iMSW (L=3,T=15,M=0,N=2) | 16.6 | 62.66
iMSW (L=3,T=15,M=10,N=2) | 19.2 | 50.1
viMSW (L=3,T=5,κ=50) | 16.48 | 29.27
viMSW (L=3,T=5,κ=100) | 16.49 | 28.52
From the table, increasing the number of projections L leads to better results for SW and K-SW. However, they are still worse than iMSW and viMSW with a smaller number of projections. Similarly, increasing T helps Max-SW, Max-K-SW, and iMSW. As discussed in the main paper, the burning and thinning technique improves the computation and sometimes enhances the performance.
C.3 Deep Generative Models
Framework: We follow the generative modeling framework from [20, 42]. Here, we state an adaptive formulation of the framework. We are given a data distribution µ ∈ P(X) through its random samples (data). Our goal is to estimate a parametric distribution ν_φ that belongs to a family of distributions indexed by parameters φ in a parameter space Φ. Deep generative modeling is interested in constructing ν_φ via a pushforward measure. In particular, ν_φ is implicitly represented by pushing forward a random noise ν_0 ∈ P(Z), e.g., a standard multivariate Gaussian, through a parametric function G_φ : Z → X (a neural network with weights φ). To estimate φ (ν_φ), the expected distance estimator [57, 41] is used:
argmin_{φ∈Φ} E_{(X,Z)∼µ^{⊗m}⊗ν_0^{⊗m}} [ D(P_X, P_{G_φ(Z)}) ],
where m ≥ 1, D can be any distance on the space of probability measures, µ^{⊗m} is the product measure, namely, X = (x_1, ..., x_m) ∼ µ^{⊗m} is equivalent to x_i ∼ µ for i = 1, ..., m, and P_X = (1/m) ∑_{i=1}^{m} δ_{x_i}. Similarly, Z = (z_1, ..., z_m) with z_i ∼ ν_0 for i = 1, ..., m, and G_φ(Z) is the output of the neural network given the input mini-batch Z. By using the Wasserstein distance, the sliced Wasserstein distance, and their variants as the distance D, we obtain the corresponding estimators. These estimators are sometimes known as mini-batch Wasserstein losses [16, 45, 46]. However, applying those estimators directly to natural image data cannot give perceptually good results [20, 15]. The reason is that the Wasserstein distance, sliced Wasserstein distances, and their variants require a ground metric as input, e.g., L2; however, those ground metrics are not meaningful on images. Therefore, previous works propose using a function that maps the original data space X to a feature space F where the L2 norm is meaningful [55]. We denote the feature function F_γ : X → F. Now the estimator becomes:
argmin_{φ∈Φ} E_{(X,Z)∼µ^{⊗m}⊗ν_0^{⊗m}} [ D(P_{F_γ(X)}, P_{F_γ(G_φ(Z))}) ].
The above optimization can be solved by the stochastic gradient descent algorithm with the following stochastic gradient estimator:
∇_φ E_{(X,Z)∼µ^{⊗m}⊗ν_0^{⊗m}} [ D(P_{F_γ(X)}, P_{F_γ(G_φ(Z))}) ] = E_{(X,Z)∼µ^{⊗m}⊗ν_0^{⊗m}} [ ∇_φ D(P_{F_γ(X)}, P_{F_γ(G_φ(Z))}) ] ≈ (1/K) ∑_{k=1}^{K} ∇_φ D(P_{F_γ(X_k)}, P_{F_γ(G_φ(Z_k))}),
where X_1, ..., X_K are drawn i.i.d. from µ^{⊗m} and Z_1, ..., Z_K are drawn i.i.d. from ν_0^{⊗m}. There are several ways to estimate the feature function F_γ in practice. In our experiments, we use the following objective [15]:
min_γ ( E_{X∼µ^{⊗m}} [ min(0, −1 + H(F_γ(X))) ] + E_{Z∼ν_0^{⊗m}} [ min(0, −1 − H(F_γ(G_φ(Z)))) ] ),
where H : F → R. The above optimization problem is also solved by the stochastic gradient descent algorithm with the following gradient estimator:
∇_γ ( E_{X∼µ^{⊗m}} [ min(0, −1 + H(F_γ(X))) ] + E_{Z∼ν_0^{⊗m}} [ min(0, −1 − H(F_γ(G_φ(Z)))) ] ) = E_{X∼µ^{⊗m}} [ ∇_γ min(0, −1 + H(F_γ(X))) ] + E_{Z∼ν_0^{⊗m}} [ ∇_γ min(0, −1 − H(F_γ(G_φ(Z)))) ] ≈ (1/K) ∑_{k=1}^{K} ∇_γ min(0, −1 + H(F_γ(X_k))) + (1/K) ∑_{k=1}^{K} ∇_γ min(0, −1 − H(F_γ(G_φ(Z_k)))),
where X_1, ..., X_K are drawn i.i.d. from µ^{⊗m} and Z_1, ..., Z_K are drawn i.i.d. from ν_0^{⊗m}.
Settings: We use the following neural networks for G_φ and F_γ:
CIFAR10:
– G_φ: z ∈ R^{128} (∼ ν_0 : N(0, 1)) → 4 × 4 × 256 (Dense, Linear) → ResBlock up 256 → ResBlock up 256 → ResBlock up 256 → BN, ReLU → 3 × 3 conv, 3, Tanh.
– F_γ1: x ∈ [−1, 1]^{32×32×3} → ResBlock down 128 → ResBlock down 128 → ResBlock down 128 → ResBlock 128 → ResBlock 128.
– F_γ2: x ∈ R^{128×8×8} → ReLU → Global sum pooling (128) → 1 (Spectral normalization).
– F_γ(x) = (F_γ1(x), F_γ2(F_γ1(x))) and H(F_γ(x)) = F_γ2(F_γ1(x)).
CelebA:
– G_φ: z ∈ R^{128} (∼ ν_0 : N(0, 1)) → 4 × 4 × 256 (Dense, Linear) → ResBlock up 256 → ResBlock up 256 → ResBlock up 256 → ResBlock up 256 → BN, ReLU → 3 × 3 conv, 3, Tanh.
– F_γ1: x ∈ [−1, 1]^{32×32×3} → ResBlock down 128 → ResBlock down 128 → ResBlock down 128 → ResBlock 128 → ResBlock 128.
– F_γ2: x ∈ R^{128×8×8} → ReLU → Global sum pooling (128) → 1 (Spectral normalization).
– F_γ(x) = (F_γ1(x), F_γ2(F_γ1(x))) and H(F_γ(x)) = F_γ2(F_γ1(x)).
Figure 8: Random generated images of the distances on CIFAR10 (panels: SW, Max-SW, K-SW, Max-K-SW, rMSW, oMSW, iMSW, viMSW).
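To illustrate how the two stochastic gradient estimators above are typically realized, the following is a hypothetical PyTorch-style training step, not the authors' released code: G stands for the generator G_φ, F1 and F2 together form the feature function, sliced_wasserstein is the helper sketched earlier, and the feature-function objective is written in its equivalent standard hinge form. All module and optimizer names are assumptions.

def training_step(G, F1, F2, g_opt, f_opt, real, m=128, z_dim=128):
    # Feature-function update (f_opt holds the parameters of F1 and F2):
    # hinge objective on H(F_gamma(x)) = F2(F1(x)).
    z = torch.randn(m, z_dim)
    fake = G(z).detach()                      # generator frozen for this step
    h_real = F2(F1(real))
    h_fake = F2(F1(fake))
    f_loss = torch.relu(1.0 - h_real).mean() + torch.relu(1.0 + h_fake).mean()
    f_opt.zero_grad(); f_loss.backward(); f_opt.step()

    # Generator update: mini-batch (sliced) Wasserstein loss between feature
    # embeddings of real and generated mini-batches, D(P_{F(X)}, P_{F(G(Z))}).
    z = torch.randn(m, z_dim)
    feat_real = F1(real).flatten(start_dim=1).detach()
    feat_fake = F1(G(z)).flatten(start_dim=1)
    g_loss = sliced_wasserstein(feat_fake, feat_real)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return f_loss.item(), g_loss.item()

In the setting described below, such a step would be repeated with the generator updated every 5 iterations and the feature function updated every iteration.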
Table 5: Summary of FID and IS scores of methods on CIFAR10 (32x32) and CelebA (64x64).
Method | CIFAR10 FID (↓) | CIFAR10 IS (↑) | CelebA FID (↓)
iMSW (L=100,T=10,M=0,N=1) | 14.61±0.72 | 8.15±0.15 | 9.73±0.33
iMSW (L=100,T=10,M=9,N=1) | 14.16±1.11 | 8.17±0.07 | 9.10±0.34
iMSW (L=100,T=10,M=5,N=1) | 13.93±0.21 | 8.15±0.05 | 9.49±0.52
iMSW (L=100,T=10,M=0,N=2) | 14.33±0.32 | 8.15±0.06 | 8.99±0.64
iMSW (L=10,T=100,M=0,N=1) | 14.26±0.74 | 8.15±0.07 | 8.89±0.23
iMSW (L=10,T=100,M=99,N=1) | 14.50±0.70 | 8.12±0.08 | 9.55±0.35
iMSW (L=10,T=100,M=50,N=1) | 14.41±0.58 | 8.12±0.06 | 9.46±0.73
iMSW (L=10,T=100,M=0,N=2) | 14.65±0.01 | 8.11±0.06 | 9.49±0.39
For all datasets, the number of training iterations is set to 50000. We update the generator G_φ every 5 iterations while we update the feature function F_γ every iteration. The mini-batch size m is set to 128 for all datasets. The learning rate for G_φ and F_γ is 0.0002 and the optimizer is Adam [25] with parameters (β1, β2) = (0, 0.9). We use the order p = 2 for all sliced Wasserstein variants. We use 50000 random samples from the estimated generative models G_φ for computing the FID scores and the Inception scores. In evaluating the FID scores, we use all training samples for computing the statistics of the datasets (we evaluate the scores based on the code from https://github.com/GongXinyuu/sngan.pytorch).
Generated images: We show generated images on CIFAR10 and CelebA from generative models trained with different distances in Figure 8 and Figure 9, respectively. Overall, the images are visually consistent with the quantitative FID scores in Table 2.
Figure 9: Random generated images of the distances on CelebA (panels: SW, Max-SW, K-SW, Max-K-SW, rMSW, oMSW, iMSW, viMSW).
Studies on hyperparameters: We run some additional settings of iMSW to investigate the performance of the burning and thinning technique and to compare the role of L and T in Table 5. First, we see that burning and thinning help to improve the FID score and IS score on CIFAR10 and CelebA in the settings of L = 100, T = 10.
It is worth noting that the original purpose of burning and thinning is to reduce computational complexity and memory complexity. The side benefit of improving performance requires more investigation, which is left for future work. In addition, we find that for the same number of total projections (1000) without burning and thinning, the setting of L = 10, T = 100 is better than the setting of L = 100, T = 10 on CIFAR10. However, the reverse direction happens on CelebA. Therefore, on different datasets, it might require hyperparameter tuning to find the best setting of the number of projections L and the number of timesteps T.
diff --git a/5tE5T4oBgHgl3EQfPQ4_/vector_store/index.faiss b/5tE5T4oBgHgl3EQfPQ4_/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..479329505cf5963df8fbeccf81d3f5fe4724a975
--- /dev/null
+++ b/5tE5T4oBgHgl3EQfPQ4_/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:78c5dfea9ac3ddc456312924dd22ee19d9202d3a6dbf72a6cc52c439cf4ed6a5
+size 3080237
diff --git a/6tFKT4oBgHgl3EQf_i4o/content/tmp_files/2301.11962v1.pdf.txt b/6tFKT4oBgHgl3EQf_i4o/content/tmp_files/2301.11962v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..23e13d68a2d62591d4ff2a1312818c83520fe09b
--- /dev/null
+++ b/6tFKT4oBgHgl3EQf_i4o/content/tmp_files/2301.11962v1.pdf.txt
@@ -0,0 +1,1774 @@
+On the Feasibility of Machine Learning Augmented Magnetic Resonance for Point-of-Care Identification of Disease
+Raghav Singhal1∗
+Mukund Sudarshan1,∗
+Anish Mahishi1
+Sri Kaushik1
+Luke Ginnochio2
+Angela Tong2
+Hersh Chandarana2
+Daniel Sodickson2
+Rajesh Ranganath1,3
+Sumit Chopra1,2
+Abstract
+Early detection of many life-threatening diseases (e.g., prostate and breast cancer) within at-risk populations can improve clinical outcomes and reduce the cost of care. While numerous disease-specific "screening" tests that are closer to Point-of-Care (POC) are in use for this task, their low specificity results in unnecessary biopsies, leading to avoidable patient trauma and wasteful healthcare spending. On the other hand, despite the high accuracy of Magnetic Resonance (MR) imaging in disease diagnosis, it is not used as a POC disease identification tool because of poor accessibility. The root cause of the poor accessibility of MR stems from the requirement to reconstruct high-fidelity images, as it necessitates a lengthy and complex process of acquiring large quantities of high-quality k-space measurements.
In this study +we explore the feasibility of an ML-augmented MR pipeline that directly infers +the disease sidestepping the image reconstruction process. We hypothesise that +the disease classification task can be solved using a very small tailored subset of +k-space data, compared to image reconstruction. Towards that end, we propose a +method that performs two tasks: 1) identifies a subset of the k-space that maximizes +disease identification accuracy, and 2) infers the disease directly using the identified +k-space subset, bypassing the image reconstruction step. We validate our hypothesis +by measuring the performance of the proposed system across multiple diseases +and anatomies. We show that comparable performance to image-based classifiers, +trained on images reconstructed with full k-space data, can be achieved using small +quantities of data: 8% of the data for detecting multiple abnormalities in prostate +and brain scans, and 5% of the data for detecting knee abnormalities. To better +understand the proposed approach and instigate future research, we provide an +extensive analysis and release code. +1 +Introduction +Early and accurate identification of several terminal diseases, such as breast cancer [42], prostate +cancer [27], and colon cancer [65], within the at-risk population followed by appropriate intervention +leads to favorable clinical outcomes for patients by reducing mortality rates [57] and reducing cost of +care. In the current standard-of-care this goal is accomplished by subjecting at-risk but otherwise +∗Equal Contribution. 1 Department of Computer Science, New York University, New York, NY. 2 Center +for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, New York University +Grossman School of Medicine, New York, NY, United States. 3 Center for Data Science, New York University, +New York, NY, United States. Correspondence to: Raghav Singhal . +arXiv:2301.11962v1 [cs.LG] 27 Jan 2023 + +asymptomatic individuals within the population to clinical tests (a.k.a., “screening tests”) that identify +the presence of the disease under consideration: a process formally referred to as “Population-Level +Screening (PLS).” Desiderata for an effective screening test are: 1) it should be safe for use, 2) +it should be accurate (have high sensitivity and specificity), and 3) it should be fast and easily +accessible to facilitate use at population-level. While numerous disease-specific screening tests that +are administered closer to point-of-care (POC), and hence are accessible at population-level, have +been proposed and are in use, most of them do not satisfy all the three requirements mentioned above. +For instance, prostate cancer [54] and breast cancer [16] have accessible tests, but these tests have +low specificity, as shown by multiple clinical trials [33, 17]. Low specificity of these tests results +in over-diagnosis and over-treatment of patients leading to many unnecessary, risky, and expensive +followup procedures, such as advanced imaging and/or invasive tissue biopsies. This in-turn causes +avoidable patient trauma and significant wasteful healthcare spending [33, 36, 5, 59]. +Magnetic Resonance Imaging (MRI) has been shown to be a highly effective tool for accurately +diagnosing multiple diseases, especially those involving soft-tissues [51, 18, 60, 68, 48, 6]. 
While +traditionally MRI is used to validate clinical hypotheses under a differential diagnosis regime and +is typically used as last in line tool, multiple recent studies have proposed new disease specific +data acquisition protocols that can potentially make MR useful for the purpose of early disease +identification [15, 62, 41, 4]. These studies have shown that MR can outperform the screening tests +being used as part of current standard-of-care. However, despite its proven clinical benefits, the +challenges associated with the accessibility of MRI, limits its widespread use at population-level. As +such, there is an unmet need for a POC tool that has the diagnostic accuracy of MR and yet is readily +accessible at population-level. Such a tool can have widespread positive impact on the standard-of- +care for multiple life threatening diseases. Specifically, patients will receive improved care via easy +access to MR technology outside of the high-friction specialized environments of imaging centers +for early and accurate identification of diseases; radiologists will see an increased diagnostic yield +of expensive followup scans since the tool will ensure that only patients with high-likelihood of the +disease undergo full diagnostic imaging; and health system will see a reduction in overall cost of +care with the decrease in the number of unnecessary expensive follow-up diagnostics and treatment +procedures. +To understand the reason behind poor accessibility of MR, we first shed light on the workings +of the pipeline. Figure 2 (c) depicts the full MR pipeline. +MR imaging is an indirect imaging +process in which the MR scanner subjects the human body with magnetic field and radio-frequency +signals and measures the subsequent electromagnetic response activity from within the body. These +measurements are collected in the Fourier space, also known as k-space (see section 5.4 in [7]) +(stage S1 in figure 2 (c)). The 3D volumetric image of the anatomy is reconstructed from these +k-space measurements using a multi-dimensional inverse Fourier transform (stage S2). The images +are then finally interpreted by sub-specialized radiologists who render the final diagnosis (stage S3). +The reason behind MR’s diagnostic success is its ability to generate these high-fidelity images with +excellent soft-tissue contrast properties, because such images enable the human radiologists to easily +discern the pathology accurately. The quality of the images is directly related to the quantity and the +quality of the k-space measurements acquired: large quantities of high-quality measurements results +in a high-quality image. This in-turn necessitates the need for 1) expensive specialized scanners +installed in special purpose imaging centers to collect large quantities of high-quality k-space data, +2) execution of long and complex data acquisition protocols to reconstruct the high-fidelity images +exhibiting multiple contrasts, and 3) sub-specialized radiologists to interpret the reconstructed images. +All these factors prevent MR scanning to be used as a tool closer to POC for early and accurate +disease identification. Instead its use is predominantly limited to validating a clinical hypothesis at +the end of the diagnostic chain. With the motivation of improving accessibility of MR, researchers +have proposed multiple solutions to simplify the pipeline. 
These include designing novel acquisition +protocols to acquire the k-space data [32, 14], learning the under-sampling pattern over k-space data +matrices so that the image quality is not compromised [2, 73, 64, 25], faster data acquisition and +image reconstruction from the under-sampled k-space data, and for simultaneous classification and +image-reconstruction using under-sampled k-space data [31, 39, 40, 70, 44, 20]. While these efforts +have expedited the data acquisition process, the requirement to generate high-fidelity images still +necessitates the use of expensive scanners and the need for a sub-specialized radiologist to interpret +them. Furthermore, image generation also imposes limits on how much one can under-sample the +k-space. For instance, [44] reports that reconstructed images started missing clinically relevant +pathologies if we sample less than 25% of the data. This phenomenon can be observed in Figure 1, +2 + +which shows images reconstructed by a state-of-the-art reconstruction model [56] using different +levels of sampling. A clearly visible lesion in the high resolution image is barely visible in the image +generated using 8% data. +Figure 1: Figure showing deterioration in the quality of reconstructed images with decreasing +sampling factors (from left to right). Lesion visible (red arrow) on image reconstructed from the +fully-sampled k-space data (left panel) is not visible in the image reconstructed from 12.5% (middle +panel) or 8% sampled data (right panel) when reconstructed with state-of-the-art reconstruction +methods. +This work is motivated by the goal of making the benefits of MR diagnostics available for population- +wide identification of disease. Towards that end, we ask the following questions: “If the clinical goal +is to merely identify the presence or absence of a specific disease (a binary task accomplished by +a typical screening test), is it necessary to generate a high-fidelity image of the entire underlying +anatomy? Instead, can we build an ML model that can accurately provide the final answer (whether +a disease is present or not) from a carefully selected subset of the k-space data?" Specifically, we +hypothesize that when the task is to infer the presence of a disease (a binary decision), we do not need +all the k-space measurements that are otherwise acquired to generate a high-fidelity image. Instead, +we can train an ML system that can accurately provide the binary answer directly from a carefully +tailored small fraction of degraded k-space data that can potentially be acquired using low-grade +inexpensive scanning devices. To validate the above hypothesis, one needs to answer the following +key questions: +Q1. Can we build a ML system that can accurately infer the presence of a disease using data +from standard MR sequences without generating images? +Q2. Can we build a ML system that uses only a small fraction of carefully tailored subset of the +k-space data to infer the presence of a disease without images? If so, how little data do we +need without compromising performance? +Q3. Can we build a ML system that can accurately infer the presence of a disease using degraded +k-space data without generating images? What are the limits on signal quality we can afford +to work with, without compromising performance? 
+Answers to these questions will shed light on the feasibility of making MR scanning accessible outside +of its current specialized environments to be potentially used for the purpose of early, efficient, and +accurate identification of disease at population-level. In this study we answer Q1, and Q2 and leave +the answers to Q3 as future work. Towards that end, we first propose a novel deep learning (DL) +model that takes as input the raw k-space data and generates the final (binary) answer, skipping the +image reconstruction step (Section 5). We show that it is indeed possible to train a ML model that +can directly generate an answer from the k-space data without generating an image. This result is +not surprising because mapping the k-space data to image space is accomplished by simply applying +an Inverse Fourier Transform (IFT) operation on the k-space data, which is a deterministic lossless +mapping. Next, to answer question Q2, we propose a novel ML methodology that can accurately infer +the presence of a disease directly from a small tailored subset of the k-space data, side-stepping the +image reconstruction step (Section 6). We call this methodology the End-to-end Magnetic Resonance +Triaging (EMRT). Figure 2(d) provides an outline of our methodology in comparison to the current +image reconstruction-based pipeline (Figure 2(c)). EMRT simultaneously accomplishes two tasks: +1. Identifies a small subset of the k-space that can provide sufficient signal for accurate +prediction of the disease by an ML model, ignoring the quality of the image it will generate. +2. It then infers the presence of the disease directly using data from only the identified subset +of the k-space, without generating an image. +3 + +We validate the efficacy of EMRT in identifying multiple diseases using scans from multiple anatomies, +namely, to detect presence of ACL sprains and meniscal tears in slice-level knee MR scans, to detect +enlarged ventricles and mass in slice-level brain MR scans, and detect the presence of clinically +significant prostate cancer (CS-PCA) in slice-level abdominal MR scans. The knee and brain scans are +made available in the FastMRI data set [70] with labels provided by the FastMRI+ data set [74]. We +use an internal data set for the prostate scans acquired as part of clinical exams of real patients at +NYU Langone Health system. We compare the performance of EMRT against two types of benchmark +methods. +Our first benchmark consists of a classifier trained with images reconstructed from fully-sampled +k-space data. Since the prediction accuracy of this benchmark is the best one can hope for from +any image-based classifier, we use this comparison to establish the limits of how much one can +under-sample the k-space and still accurately infer the disease, when not reconstructing images. Our +results show that EMRT can achieve the same level of accuracy as this benchmark using only 5% of +the data for knee scans and 8% of the data for brain and prostate scans. Our second benchmark is +another image-based classifier that uses as input the images reconstructed from an under-sampled +k-space data using the state-of-the-art image reconstruction models proposed in the literature [56, 44]. +The motivation behind this experiment was to show that for the same disease identification accuracy, +if we by-pass the image reconstruction step, we require a significantly smaller fraction of the k-space +data in comparison to when we reconstruct images. 
Our results also show that for all under-sampling +rates in our experiments, EMRT outperforms under-sampled image-reconstruction based benchmarks +even though the images are reconstructed using the state-of-the-art reconstruction models. Lastly, +we also provide an extensive analysis that shed light on understanding the workings of EMRT. Our +contributions include: +• EMRT: a first-of-its-kind machine learning methodology that identifies a subset of k-space +that maximizes disease classification accuracy, and then infers the presence of a disease +directly from the k-space data of the identified subset, without reconstructing images. +• Rigorous comparison to the state-of-the-art image reconstruction-based benchmark models +to prove the efficacy of the proposed methodology. +• Extensive analysis of EMRT to understand the reasons behind its superior performance. +• Release of the code and data used to build EMRT with the goal of facilitating further research +in end-to-end methods like EMRT, that have the potential to transform healthcare. +2 +Clinical Vision +This study is motivated by the overarching goal of making MR scanning accessible outside of its +current specialized environments so that its diagnostic benefits can be realized at population-level +for early, efficient, and accurate identification of life-threatening diseases. We argue that this poor +accessibility is rooted in the requirement to generate high-fidelity images, because image generation +necessitates the need to acquire large quantities of high-quality k-space data (forcing the use of +expensive scanners installed in specialized environments running complex data acquisition protocols) +and the need for sub-specialized radiologists for interpretation. As such we ask a sequence of +questions pertaining to k-space data requirements for accurate disease identification under the setting +when we are not generating intermediate high-fidelity images. Answers to questions posed in this +study will shed light on the feasibility of whether our end goals can be accomplished. +Assuming answers to all the questions are favorable, one can imagine an ultra-low-field inexpensive +scanning device that is only capable of acquiring small quantities of low quality k-space data, from +which it is difficult to reconstruct an image that has a clearly discernible pathology. However an ML +model (embedded within the device) could accurately infer the presence of the disease directly from +this data. Such an inexpensive system could be used clinically as a triaging tool in the following way: +The system is placed in a primary care clinic where it is used to test patients who are known to be +at risk of the disease. Patients for whom the system provides a “yes” answer (possibly with some +confidence score) are routed for the more thorough followup diagnostic procedures (full MR scan +and/or biopsy). Others are sent back into the surveillance pipeline for subsequent periodic screening. 
More specifically, in Figure 2(a) and (b), we depict the utility of such a device when screening for clinically significant prostate cancer (CSPCA), the second most common reason behind male mortality within the United States.
(Figure 2 appears here. Panel titles: (a) Patient Flow in Current Standard of Care; (b) Patient Flow in Future Standard of Care; (c) Standard MR Pipeline Involving Image Reconstruction and a Radiologist for Diagnosis; (d) Proposed End-to-end Magnetic Resonance Triaging (EMRT) Pipeline using Ultra Low-Field Scanner. The extracted text also includes a reproduced page from C M Hyun et al, Phys. Med. Biol. 63 (2018) 135007, describing U-net based reconstruction from subsampled k-space, embedded as an illustration within the figure; only this summary is retained here.)
Figure 2: Overview of current and proposed standards of care for Prostate Cancer: Panel (a) depicts the current practice of testing for clinically significant prostate cancer (CS-PCA), which involves testing at-risk patients using a PSA test followed by an expensive multi-parametric MRI (Panel (c)) and a biopsy. In Panel (b), with our proposed triaging tool, patients who have a high PSA score undergo a subsequent test with the EMRT embedded ultra-low field MR device (Panel (d)). With the use of the triaging device, only high risk patients get the expensive and inaccessible multi-parametric MRI and invasive biopsy, reducing waste in the healthcare system and preventing as many as 38% of the biopsies [55].
Figure 2(a) depicts the current standard of care for CSPCA screening, where at-risk people are ordered to take the PSA test to screen for disease, followed by either an invasive biopsy or a 40 minute long multi-parametric MRI exam (depending on the PSA value). Unfortunately, the high false positive rate of the PSA test causes unnecessary patient trauma and wasteful healthcare spending, as 70% of patients who have a positive PSA test can get a negative biopsy. In Figure 2(b), we highlight how the proposed triaging tool can be placed in the pipeline. The PSA test can be followed up by another test using the ultra-low-field EMRT embedded MRI device. Unlike a full MRI exam, the EMRT-embedded device will not have to produce an image, just a risk score.
Such a triaging device can filter high and low risk patients further, and only select the high risk patients for subsequent diagnostic tests such as a full MRI and/or biopsy. This in turn will reduce waste in the healthcare system and prevent patient trauma.
(A reproduced prostate MRI report template appears here, listing clinical indication, recent PSA level (ng/ml), prostate volume, lesion reporting with PI-RADS categories, EPE/SVI status, overall PI-RADS category, and other findings for staging; only this summary is retained.)
This scenario is not far from the realm of reality, as many organizations are manufacturing such ultra low-field specialized scanners, such as Promaxo [45] for the prostate and Hyperfine [19] for the brain, both of which are approved by the FDA. We note that while we are exploring the feasibility of ML enabled MR scanning that generates an answer without images, the use-case of such a device does not replace the current practice of radiology, which requires the generation of high-fidelity images interpreted by sub-specialized radiologists to render the final diagnosis. Such an imaging and interpretation exercise remains important for the final diagnosis, staging, and treatment planning [8]. Instead, such a device creates alternate use-cases for MR scanning technology.
3 Related Work
Applications of deep learning (DL) within MR can be grouped into two categories, namely image analysis and image reconstruction. Under the image analysis category, DL models take spatially resolved gray scale 2D or 3D MR images as input and perform tasks like tissue/organ segmentation [1, 10] or disease identification [53, 69, 67, 72, 46]. DL models have achieved radiologist level performance in identifying numerous diseases [23, 24, 22] and are increasingly being deployed as part of computer aided diagnostic systems [52, 66]. For instance, the authors in [35] examined the effect of DL assistance on both experienced and less-experienced radiologists; the DL assisted radiologists surpassed the performance of both the individual radiologists and the DL system alone. These approaches have improved diagnostic accuracy but have so far required high-resolution images that are expensive to produce.
Most methods within the image reconstruction category are motivated by the goal of improving the accessibility of MR scanning by reducing the scanning time. Towards that end, researchers have proposed a variety of solutions to simplify and expedite the data acquisition process.
Specifically, researchers have proposed machine learning models to enable rapid reconstruction of spatially resolved 2D images from under-sampled k-space data acquired by the scanner [40, 56, 10]. This task requires addressing two key questions, namely, 1) what sampling pattern to choose? and 2) given a sampling pattern, what reconstruction method to choose? For the first question, researchers have proposed ML methods that learn the sampling pattern over the k-space data matrices so that the image quality is not compromised [2, 71, 64, 3, 25]. In another line of work, researchers model the k-space acquisition process as a sequential decision making process, where each sample is collected to improve reconstruction performance, and use reinforcement learning models to solve the task [47, 30, 3]. To answer the second question, DL models have been proposed that use under-sampled k-space data to reconstruct images of provable diagnostic quality [39, 40, 44, 20, 56, 34, 75, 49, 38, 26, 9]. Researchers have also proposed non-ML-based solutions to expedite the scanning time for MR. These solutions involve the design and execution of novel data acquisition protocols and sequences that enable rapid acquisition of the k-space data [32, 14]. Lastly, to facilitate research in image reconstruction, several data sets and challenges have been released, such as the FASTMRI [70], FASTMRI+ [74] and Stanford knee MRI with multi-task evaluation (SKM-TEA) [13]. These data sets provide raw k-space measurements for MR scans along with labels of abnormalities associated with those scans.
While these efforts have simplified and expedited the data acquisition process, the requirement to generate high-fidelity images still necessitates the use of expensive scanners and the need for a sub-specialized radiologist to interpret them. Furthermore, image generation imposes limits on how much one can under-sample the k-space.
Our work instead studies a problem that has never been considered before: using DL models to infer the presence/absence of a disease directly from a small learned subset of the k-space data.
4 MR Background and Notation
MR imaging is an indirect process, whereby spatially resolved images of a human subject's anatomy are reconstructed from the frequency space (a.k.a., k-space) measurements of the electromagnetic activity inside the subject's body after it is subjected to magnetic fields and radio-frequency pulses. These measurements are captured by an instrument called a receiver coil which is kept in the vicinity of the part of the body whose image is sought. The k-space measurements from a single coil are represented as a 2-dimensional complex valued matrix x ∈ Cr×c, where r is the number of rows and c is the number of columns. The spatial image z is reconstructed from the k-space matrix by a multi-dimensional inverse Fourier transform, z = F−1(x). We denote by y ∈ {1, . . . , K} the clinically relevant response. In our case y will be a binary response variable (y ∈ {1, 0}) indicating the presence/absence of the disease being inferred.
Multi-Coil Data: In practice, to speed up the data acquisition process, most modern MR scanners acquire measurements in parallel using multiple receiver coils. In case of multi-coil acquisition, the k-space matrix xmc is 3-dimensional: xmc ∈ Cdc×r×c [70], where dc is the number of coils used.
The image produced by each coil has a slightly different view of the anatomy, since each coil has different sensitivity to signals arising from different spatial locations. Multiple methods have been proposed to combine/use these images in ways that are conducive for ingestion into any downstream ML model. For instance, a commonly used method that combines these images from different coils into a single aggregate image is called the root-sum-of-squares (RSS) method [37]. Given the multi-coil k-space matrix, the RSS method requires computing the inverse Fourier transform of each coil's k-space matrix ˜mj = F−1(xj), and then generating the RSS image by
$\tilde{m} = \sqrt{\sum_{j=1}^{N_c} |\tilde{m}_j|^2}$.
Instead of combining the data from multiple coils in the image space, one can also combine the data in the original k-space. A method called Emulated Single Coil (ESC) [61] directly aggregates the k-space data from multiple coils and emulates it to be coming from a single coil. This process reduces the dimension of the full matrix xmc ∈ Cdc×r×c to a matrix ˜xmc ∈ Cr×c. In the subsequent discussion pertaining to the direct k-space model, we will assume that we are working with the emulated single coil data matrix ˜xmc of dimensions r × c.
Figure 3: Examples of k-space sampling patterns: The left panel shows an unconstrained sampling pattern with a 30% sampling rate, the middle panel shows a random Cartesian sampling pattern with a 30% sampling rate, and the right panel displays an equispaced Cartesian sampling pattern with a 25% sampling rate.
Under-Sampled Data: The notion of "under-sampling" refers to measuring only a subset of entries in the k-space matrix x. We represent the sampling pattern using a binary mask matrix s ∈ {0, 1}r×c (sometimes also referred to as a sampling mask), where sij = 1 if and only if the measurement xij was acquired. The under-sampled k-space matrix is represented as xs = x ◦ s, where ◦ is element-wise multiplication between the two matrices. In this work, we constrain the sampling pattern to "Cartesian", which consists of sampling the lines of the k-space matrix. More specifically, for a Cartesian sampling pattern all the elements of some lines of the matrix s are 0 and all the elements of other lines are set to 1. See Figure 3 for the structure of various sampling patterns. The k-space matrix has its origin in the center of the matrix. The sampling rate α is defined as the total percentage of measurements acquired.
4.1 Image-Based Disease Identification using Deep Learning Models
The conventional way of using DL models to infer the presence of a disease within the MR pipeline involves two steps. In the first step, a high-fidelity image is reconstructed using the acquired k-space measurements from multiple coils using the RSS method, as described above. In the second step, the reconstructed image is provided as an input to a DL model that is trained to infer the presence/absence of the disease. We refer to this model as MODELRSS. This is the best one can hope to achieve when using images and we benchmark the accuracy of EMRT against it.
Since the high-fidelity images used by methods such as MODELRSS require acquisition of large quantities of high quality k-space data, researchers have also proposed to train image-based DL classifiers using images reconstructed from the under-sampled k-space data. This approach requires one to make decisions at two levels, namely 1) choosing the sampling pattern over the k-space, the data from which will be used to reconstruct the image, and 2) given the sampling pattern, choosing a method to reconstruct the image. Multiple methods have been proposed to learn the sampling pattern [2, 71, 64], and to reconstruct images using the under-sampled k-space data [34, 44, 20, 56, 75, 38, 26, 9]. We denote the class of these models by MODEL<sampler>:<reconstructor>, where <sampler> refers to the method used to choose the sampling pattern and <reconstructor> refers to the image reconstruction method. We compare the performance of EMRT against a variety of these models with different combinations of sampling and reconstruction regimes (Section 7).
Figure 4: (a) k-space layer: the k-space layer makes use of the convolution theorem to perform an initial convolution operation between the complex valued k-space input x and the kernel z. The resulting output is passed through an inverse Fourier transform operation to generate real valued feature maps hR of size k × r × c × 2. These feature maps are passed as input to the subsequent layers of KSPACE-NET. (b) KSPACE-NET: The KSPACE-NET takes the k-space as input followed by the k-space layer, then it applies a convolutional architecture on the feature maps h to make a classification.
5 Direct k-Space Classifier
We now describe the proposed DL model that takes as input the k-space data and directly generates the final answer without reconstructing an intermediate image. The foundational block of our architecture is the convolution theorem, which states that for any given functions f, g we have:
F(f ∗ g) = F(f) ◦ F(g), (1)
where F is the Fourier transform, ∗ denotes the convolution operation, and ◦ denotes element-wise multiplication. Multiple researchers in the past have used this operator duality to accelerate convolutions in Convolutional Neural Networks (CNN) [50, 43].
Since the k-space data is in the frequency domain, we can use Eq. 1 to adapt any convolutional neural network architecture to directly use the k-space data as input. Specifically, let x ∈ Cr×c denote the complex-valued k-space matrix of size r × c, and let z ∈ Ck×k be the kernel with which we want to convolve the input x. We accomplish this convolution by first zero-padding the kernel to the right and bottom to create a kernel z′ ∈ Cr×c which is of the same size as the input (see Figure 4). We then take the Fourier transform of the padded kernel z′, such that z′F = F(z′) is in the frequency space. Using Equation 1, we compute the convolution between the input x and the kernel z by taking the inverse Fourier transform of the element-wise multiplication of x and z′F:
h = F−1(x) ∗ z = F−1(x ◦ z′F). (2)
The matrix h ∈ Cr×c is a complex matrix in the spatial (image) domain and serves as input to the subsequent layers of the neural network. By design, subsequent layers of our proposed network take real-valued inputs. As a result we stack the real and imaginary components of h as two separate channels. The resulting tensor hR is of size Rr×c×2, which is supplied as input to the downstream layers of the neural network. In practice, much like in regular convolutional neural networks, we convolve the k-space input x with p independent kernels {z1, z2, . . .
, zp} to extract different features from +the input, resulting in feature maps of size hR ∈ Rp×r×c×2, which are supplied as input to the +subsequent layers of the neural network. +Following the k-space layer, we can adopt any standard architecture for the subsequent layers. The +real-valued feature map hR ∈ Rp×r×c×2 from the k-space layer is used as input to the subsequent +layers, where instead of a 3 channel input for RGB images, we have p × 2 input channels. In this +work, we use a Preact-ResNet [21] for the subsequent layers. The output of this layer is a feature +representation z ∈ Rhz. This feature representation is used as input to a feed-forward network to +output the probabilities of the positive and negative classes. Figure 4 depicts the full architecture +which we call the KSPACE-NET. We can easily extend KSPACE-NET to predict multiple pathologies +from the same input. For each pathology, we can use a different feed-forward network with the +feature representation z as input to each. +Lastly, extending the KSPACE-NET to work with under-sampled data is straightforward. We simply +replace the full k-space input x to the model with the under-sampled input xs, which is obtained by +taking an element-wise dot product with the sampling mask matrix s: xs = x ◦ s (see Section 4). +6 +End-to-End MR Triaging: EMRT +We now introduce End-to-End MR Triaging (EMRT): a novel method that infers the presence/absence +of a disease (a binary decision) directly from a drastically small amount of k-space data, skipping +the image reconstruction process. The underlying motivating hypothesis behind EMRT is that we +can accurately infer the presence of a disease (a binary decision) from a small amount of carefully +selected k-space measurements so long as we are not concerned with reconstructing high-fidelity +images. Towards that end, at a high-level, EMRT learns to identify the subsets of the k-space that have +the largest predictive signal pertaining to the disease being identified without considering the quality +of the image that would be generated using the data from identified subset. This is in contrast to the +image reconstruction approaches where the requirement to generate a high quality image of the entire +anatomy necessitates the sampling of a large portion of the k-space. Once the subset is identified, +only the data from the identified k-space subset is used by a DL to directly generate the final answer. +To the best of our knowledge, EMRT is the first method to propose classification of a disease directly +from a carefully chosen (learned) subset of the k-space. More formally EMRT is a two-step algorithm. +Step 1: In this step EMRT searches for a subset of the k-space that has the strongest signal that +can help accurately infer the presence of the disease. This is accomplished by learning a +sparse sampling pattern s∗, such that it maximizes the mutual information between under- +sampled k-space matrix xs∗ and the response variable y (a binary variable indicating the +presence/absence of the disease). +Step 2: Once the sampling pattern s∗ is learned, the second step involves using a KSPACE-NET +classifier (Section 5) that takes as input the under-sampled k-space data matrix xs∗ to infer +the presence of the disease y, without reconstructing an intermediate image. +To execute the above steps we need to answer the following questions, which we address in the +following sub-sections: Q1. 
How to learn a sparse sampling pattern s∗ of the k-space matrix that +maximizes the mutual information between the under-sampled k-space xs∗ and the response variable +y?; Q2. How to train the KSPACE-NET classifier that uses xs∗ as input to accurately infer the disease +y. +9 + +Algorithm 1 Estimating the conditional likelihood qval(y | xs) +Input: Training data set Dtr = {(xi, yi)} +Ntr +i=1, model qval(y | x; λ) with initial parameters λ, +mini-batch size M, acceleration factor α, and prior distribution π over sampling patterns +Return: Trained model qval(y | xs; λ∗) +while not converged do +Sample a mini-batch of training points of size M +Draw a sampling pattern s ∼ π, such that r×c +∥s∥0 = α +Update the model parameters +λt+1 = λt + γ +M +M +� +i=1 +∇λ log qval(yi|xi +s; λt) +end while +Return the trained model qval(y | xs; λ∗) +6.1 +Learning the Sparse k-Space Sampling Pattern +EMRT learns to identify a sampling pattern s∗ over the k-space matrix, such that the k-space data +xs∗ corresponding to this pattern has the maximum information required to accurately infer the +presence/absence of the disease. For any sampling pattern s, EMRT uses the mutual information +between the output variable y and the corresponding under-sampled k-space data xs, as a surrogate +for the information content in xs for disease inference. Then for a given sampling rate α, the process +of identifying s∗ (the optimal pattern) boils down to finding the sampling pattern that maximizes the +mutual information between y and xs∗. +Specifically, let I(y; xs) denote the mutual information between y and xs. For a given sampling rate +α, EMRT identifies a pattern s∗, such that: +s∗ = arg max +s∈{0,1}r×c I(y | xs), +(3) +where α = +r×c +∥s∥0 and s is the binary mask matrix of dimensions r × c. The mutual information +I(y | xs) [11] is defined as: +I(y; xs) = ExsKL (p(y | xs) || p(y)) +(4) += ExsEy|xs log p(y | xs) − log p(y) +(5) += ExsEy|xs log p(y | xs) − C, +(6) +where C is a constant independent of the sampling pattern s, and p(y | xs) and p(y) are the +conditional and the marginal distribution of the response variable respectively. According to equation +6, we can estimate the mutual information I(y | xs) if we are able to estimate the value of p(y | xs). +Since, we do not have access to the true conditional distribution p(y | xs), we can approximate the +expected conditional log-likelihood by learning a probabilistic model q(y | xs; λ) parameterized by +λ. However, learning a model for every sampling pattern s is infeasible even for moderately high +dimensions. To address this issue we draw upon the works of [12, 28], where the authors show that +at optimality, a single model qval(y | xs; λ) trained with independently generated sampling patterns +that are drawn independent of the data x, y, is equivalent to a conditional distribution of y for each +sampling pattern. This approach is in contrast to approaches that explicitly model x [58] and has +been used in other applications [29]. As such, we train a model qval by minimizing the following loss +function: +L(λ) = −Ex,yEs∼π log qval(y | xs; λ), +where π is a distribution over the sampling pattern that is independent of the data x, y. In EMRT, +distribution π is a one-dimensional distribution and the KSPACE-NET model (Section 5) is used as +qval. The under-sampled data xs is created by masking the fully-sampled matrix x with a mask +s ∈ {0, 1}r×c. This masking ensures that the same model can be used as the input’s dimensions are +fixed. 
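To make this procedure concrete, the following is a minimal PyTorch-style sketch of the training loop (our illustration, not the released implementation): the classifier is optimized on k-space inputs masked with Cartesian patterns drawn afresh from the prior π at every mini-batch, so a single model approximates qval(y | xs) for any pattern s. The model class, data loader, and hyperparameter values below are hypothetical stand-ins.

```python
import torch

def sample_cartesian_mask(rows: int, cols: int, rate: float) -> torch.Tensor:
    """Draw a random Cartesian mask s: whole k-space lines (columns) are kept or dropped."""
    n_keep = max(1, int(rate * cols))
    keep = torch.randperm(cols)[:n_keep]
    mask = torch.zeros(rows, cols)
    mask[:, keep] = 1.0
    return mask

def train_qval(model, loader, rate=0.08, lr=1e-4, epochs=10, device="cpu"):
    """Maximize E_{x,y} E_{s~pi} log q_val(y | x ∘ s) with a fresh mask per mini-batch."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    nll = torch.nn.CrossEntropyLoss()          # negative log-likelihood of q_val
    model.to(device).train()
    for _ in range(epochs):
        for kspace, y in loader:               # kspace: complex tensor (B, r, c), y: labels
            kspace, y = kspace.to(device), y.to(device)
            s = sample_cartesian_mask(kspace.shape[-2], kspace.shape[-1], rate).to(device)
            logits = model(kspace * s)         # element-wise masking: x_s = x ∘ s
            loss = nll(logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

At test time the same trained model can be reused to score candidate masks drawn from π by their validation log-likelihood and to keep the highest-scoring one, which is the Monte Carlo search summarized in Algorithm 2 below.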
This process is summarized in Algorithm 1. +10 + +Algorithm 2 Learning the sampling pattern s∗ +Input: Validation data set Dval = {(xi, yi)} +Nval +i=1, model qval(y | x; λ∗) with parameters λ, acceler- +ation factor α, number of candidate sampling patterns to generate N, and prior distribution π over +the sampling patterns +Return: Sampling pattern s∗ +for j ∈ {1, . . . , N} do +Sample sj ∼ π such that r×c +∥s∥0 = α +Estimate the mutual information score in eq. (7) as follows +�V (sj) = +1 +Nval +Nval +� +i=1 +log qval(yi | xi +sj; λ∗) +(9) +end for +Let s∗ = arg maxj∈{1,...,N} �V (sj) +After training qval, EMRT uses it to define a scoring function V : {0, 1}r×c → R, for each sampling +pattern s that estimates the mutual information between that subset of the k-space up to a constant +eq. (6). Specifically, +V (s) = ExEy|x log qval(y | xs; λ). +(7) +The higher the score achieved by an sampling pattern the higher its diagnostic signal. Therefore the +objective of Equation 3 can be rewritten as +s∗ = arg max +s∈{0,1}r×c V (s), with r × c +∥s∥0 += α. +(8) +In practice s∗ is approximated by a Monte Carlo search within the space of all sampling patterns. N +candidate sampling patterns are drawn from the prior distribution π. Each drawn pattern is scored by +the scoring function V and the pattern with the highest score is selected as s∗. The details of the full +algorithm are provided in Algorithm 2. +6.2 +Training the Direct k-Space Classifier +For inference during test time, we use the KSPACE-NET classifier qval(y | xs∗; λ∗), trained using +Algorithm 1, along with the optimized sampling pattern s∗. As specified in Algorithm 1, during the +training of this classifier, for every mini-batch we randomly sample a different sampling pattern from +the distribution π. Through our experiments, we found that this is in-fact the key to training a reliable +classifier. We also explored retraining a classifier using data xs∗, obtained from a fixed classification +optimized sampling pattern s∗. We compare these two approaches in section 7. To summarize, the +classifier qval(y | xs; λ∗) is trained with randomly sampled under-sampling patterns, however at test +time we make inferences with a fixed under-sampling pattern. +7 +Experiments +We evaluate the efficacy of EMRT by comparing its performance to several benchmark models across +multiple clinical tasks. Our experiments are structured to answer the following questions in order. +Q1. Can we infer the presence/absence of the disease directly from the k-space data as accurately +as the state-of-the-art image-based model trained on images reconstructed from the full k-space +data? Q2. Using EMRT, how much can we under-sample the k-space input before we start to lose +disease inference accuracy in comparison to the state-of-the-art image-based model trained on images +reconstructed from the full K-space data? Q3. For the same under-sampling factor, how much better +(or worse) is the disease inference accuracy of the EMRT model in comparison to the image-based +model trained on images reconstructed from the under-sampled k-space data using state-of-the-art +image reconstruction method? Q4. Is there any benefit of learning the sampling pattern using EMRT +that seeks to maximize the disease inference signal as compared to the sampling patterns proposed in +the literature that optimize accurate image reconstruction or any heuristic based sampling pattern? +11 + +Knee MR +Abdominal MR +Brain MR +Mensc. Tear +ACL Sprain +CS-PCA +Enlg. 
Ventricles +Mass +Train slices +29100 (11%) +29100 (3.6%) +6649 (5%) +11002 (1.61%) +11002 (1.98%) +Val slices +6298 (11%) +6298 (2.4%) +1431 (4.5%) +2362 (1.52%) +2362 (2.03%) +Test slices +6281 (11%) +6281 (3%) +1462 (6%) +2366 (2.58%) +2366 (2.70%) +Table 1: Dataset statistics: Number of slices in the training, validation and test splits for each task. +Numbers in bracket are the percentages of slices in which the disease is visible (positive examples). +7.1 +Datasets +Efficacy of EMRT is assessed by comparing its performance to a variety of benchmark models on +multiple clinical tasks across multiple anatomies. In particular we train and test our models to identify +pathologies for three anatomies, namely knee MR scans, brain MR scans, and abdominal MR scans. +See Table 1 for the description of data statistics for each of the three anatomies. +Knee MR Scans. We use k-space data of the MR scans of the knees provided as part of the FASTMRI +dataset [70] along with slice level annotations provided by the FASTMRI+ dataset [74]. The dataset +consists of multi-coil and single-coil coronal proton-density weighting scans, with and without +fat suppression, acquired at the NYU Langone Health hospital system. Further sequence details +are available in [70]. The training, validation, and test sets consist of 816, 176, and 175 volumes +respectively. The clinical task we solve is to predict whether a two-dimensional slice has a Meniscal +Tear and/or an ACL Sprain. +Brain MR Scans. We use the annotated slices of the MR scans of the brain also provided by the +FASTMRI dataset [70] and then obtain the k-space data for these annotated slices using the FASTMRI+ +dataset [74]. A total of 1001 volumes were annotated in the FASTMRI+ dataset out of a total of 5847 +volumes that were present in the FASTMRI dataset. Each brain examination included a multi-coil +single axial series (either T2-weighted FLAIR, T1-weighted without contrast, or T1-weighted with +contrast). The training, validation, and test sets consist of 700, 150, & 151 volumes respectively. We +predict whether a two-dimensional slice has Enlarged Ventricles and/or Mass (includes Mass and +Extra-axial Mass as in [74]). +Abdominal MR Scans. The clinical task for the abdominal MR scans is the identification of a +clinically significant prostate cancer (CS-PCA), which is defined as a lesion within the prostate for +which a radiologist assigns a Prostate Imaging Reporting And Data system (PI-RADS) score [63] +of 3 or more. We use the retrospectively collected bi-parametric abdominal MR scans performed +clinically at NYU Langone Health hospital system. It consists of scans from 313 subjects who were +referred due to suspected prostate cancer. The scans were performed on a 3 Tesla Siemens scanner +with a 30-element body coil array. Examinations included an axial T2-weighted TSE and an axial +diffusion-weighted EPI sequence using B values of 50 and 1000. For our experiments we only used +the data obtained using the T2-weighted sequence. For each scan volume, a board-certified abdominal +radiologist examined each slice to identify the presence of lesion and assigned a PI-RAD score to it. A +slice is said to have CS-PCA, if there exists at least one lesion in it with a PI-RADS score of 3 or more. +We split the data into 218, 48 and 47 volumes for the training, validation and test sets, respectively. +During the splits we make sure that scans from the same patient appear only in one of the three splits. 
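As a small illustration of the patient-level splitting just described (ours, not the authors' released pipeline; field names are hypothetical), slices can be grouped by subject before shuffling so that no patient contributes to more than one split:

```python
import random
from collections import defaultdict

def split_by_patient(slices, train_frac=0.7, val_frac=0.15, seed=0):
    """`slices` is a list of dicts, each carrying a 'patient_id' key."""
    by_patient = defaultdict(list)
    for s in slices:
        by_patient[s["patient_id"]].append(s)
    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)          # shuffle subjects, not slices
    n_train = int(train_frac * len(patients))
    n_val = int(val_frac * len(patients))
    groups = {
        "train": patients[:n_train],
        "val": patients[n_train:n_train + n_val],
        "test": patients[n_train + n_val:],
    }
    # Every slice of a given patient lands in exactly one of the three splits.
    return {name: [s for p in ids for s in by_patient[p]] for name, ids in groups.items()}
```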
Since the data for these scans is acquired using multiple coils, following [70], we emulate it to be coming from a single coil using the emulated single-coil (ESC) method [61]. This results in a single k-space matrix that is provided as an input to EMRT. The primary motivation behind doing this was simplicity on our way to prove our hypothesis. In future work we will propose models that work directly with the multi-coil data.
7.2 Exp 1: Disease Inference Directly from k-space
Our first set of experiments tests the feasibility of inferring a disease directly from the k-space data by comparing the performance of KSPACE-NET to a DL model that uses high-fidelity images as input. Towards that end, we train the KSPACE-NET model to solve the binary task of inferring the presence/absence of the disease using the full k-space matrix ˜xmc that is emulated to be coming from a single coil using the ESC algorithm [61] as input.
(Figure 5 appears here; its panels plot AUROC against sampling rates of 5%, 8%, 10%, and 12.5% for ACL, Meniscal Tear, CS-PCA, Enlarged Ventricles, and Mass.)
Figure 5: Performance of EMRT against MODELRSS: Top panel shows AUROC on the test set of the EMRT (red) at different sampling factors in comparison to the AUROC of MODELRSS (black) trained using the fully-sampled k-space data.
Performance of the KSPACE-NET model is compared against the image-based deep learning models trained to infer presence of the disease from images reconstructed using the RSS method from full k-space data acquired using multiple coils. We train a pre-activation ResNet-50 [21] model using these ˜mRSS images as its input. We call this model MODELRSS. Disease inference accuracy of these models is the best one can hope to achieve from an image-based model, because the images are reconstructed from the full k-space data and the models are trained using a rigorous hyper-parameter search to find the best performing model configuration.

            Knee AUROC                  CS-PCA AUROC   Brain AUROC
            Mensc. Tear   ACL Sprain    CS-PCA         Enlg. Ventricles   Mass
KSPACE-NET  93.4 ± 0.7    90.8 ± 1.5    84.1 ± 0.4     92.3 ± 2.0         91.5 ± 1.0
MODELRSS    92.1 ± 1.0    90.6 ± 1.01   83.1 ± 1.6     93.8 ± 1.3         88.4 ± 5

Table 2: Disease inference directly from k-space: The AUROC of the KSPACE-NET model in comparison to a DL model trained on high-fidelity images to infer the presence/absence of specific diseases. The results clearly show that it is indeed feasible to infer the disease directly from the k-space data as accurately as an image-based classifier.
Table 2 provides the AUROC of the KSPACE-NET model in comparison to MODELRSS. The results clearly show that it is indeed feasible to infer the presence of the disease directly from the k-space data as accurately as a finely tuned DL model trained on high-fidelity images. This result is not surprising, since transformation from k-space to image space is achieved using IFFT, which is a deterministic and lossless operation. What is surprising is that in some cases the KSPACE-NET model performs better than the image-based model.
While this question is left for future work, we conjecture that the reason behind this performance gap is that the KSPACE-NET model uses as input the entire complex data whereas the image-based model uses only the magnitude of the complex matrix in the image space (as is the widespread norm in medical image analysis). Lastly, these results are particularly impressive when one takes into account that the KSPACE-NET model takes as input the data emulated from a single coil (which has a lower SNR) whereas MODELRSS is using the full multi-coil data. As part of the future work we are working on extending the KSPACE-NET model to ingest multi-coil data directly.
7.3 Exp 2: Exploring the Limits on Under-Sampling the k-space Using EMRT
In our second set of experiments, we estimate the extent to which one can under-sample the k-space data and still infer the presence of the disease (using the KSPACE-NET model) as accurately as an image-based classifier using high-fidelity images as input. We sample the k-space at different sampling rates α (∈ {5%, 8%, 10%, 12.5%}) and train a KSPACE-NET for each α. For the given sampling rate α, the sampling pattern is learnt using the EMRT procedure, summarized in Algorithm 1 and Algorithm 2.
Figure 5 and Table 3 give the AUC, Sensitivity, and Specificity of the EMRT model at different sampling rates and compare its performance to the MODELRSS. We observe that at high sampling rates, the performance of EMRT, in terms of AUC and sensitivity-specificity, does not deteriorate significantly in comparison to the DL model trained on high-fidelity images reconstructed using the full k-space data. This experiment demonstrates that if the goal is to simply infer the presence/absence of the disease, without the concern to reconstruct a high-fidelity image, then we can afford to significantly under-sample the k-space data (as low as 5%) without any significant loss in performance. This is in contrast to [44], which reports that in the FastMRI challenge, all submissions had reconstructed images that started missing clinically relevant pathologies at sampling rates less than 25% of the data. Figure 1 shows the sequence of images reconstructed from the k-space data corresponding to the sampling patterns learnt by EMRT. One can clearly see that the pathology visible in the image reconstructed from the full k-space is hard to discern in images generated from under-sampled data. Furthermore, it becomes successively harder to identify the pathology as we decrease the amount of data used.

          Knee SENS/SPEC              CS-PCA SENS/SPEC   Brain SENS/SPEC
          Mensc. Tear   ACL Sprain    CS-PCA             Enlg. Ventricles   Mass
EMRT      81/83         80/81         88/65              86/82              89/70
MODELRSS  83/86         81/82         88/60              78/94              82/80

Table 3: Performance of EMRT against MODELRSS: Test Sensitivity/Specificity of EMRT and MODELRSS obtained using an operating point with 85% Sensitivity on the validation set. The Sensitivity/Specificity results are reported using a sampling factor α = 5% for knee MR and 8% for brain and prostate MR scans. See appendix A for confidence intervals.

                   Knee SENS/SPEC              CS-PCA SENS/SPEC   Brain SENS/SPEC
                   Mensc. Tear   ACL Sprain    CS-PCA             Enlg. Ventricles   Mass
EMRT               81/83         80/81         88/65              86/82              89/70
MODELLOUPE:VARNET  81/79         74/81         86/54              84/72              74/56

Table 4: Performance of EMRT against MODELLOUPE:VARNET: Test Sensitivity/Specificity of EMRT and MODELLOUPE:VARNET obtained using an operating point with 85% Sensitivity on the validation set.
7.4 Exp 3: Reconstructed Images vs Direct k-space When Under-Sampling

So far we have established that we can infer the presence/absence of a disease directly from k-space data. In addition, when we are not concerned with reconstructing intermediate images, we only need a fraction of the k-space data to infer the disease without compromising accuracy in comparison to a model trained on images reconstructed from the full k-space data. When using under-sampled k-space data, however, another way to infer the disease presence is by first reconstructing an intermediate image from the under-sampled data and then training a classifier on these images to infer the disease. Our third set of experiments is structured to answer the following question: “how is the disease inference accuracy impacted if we use a DL model trained on images reconstructed from the under-sampled k-space data in comparison to the EMRT, which infers the disease directly from the k-space data?”

[Figure 6 plot: AUROC against sampling rate (5%, 8%, 10%, 12.5%) in five panels: ACL, Meniscal Tear, CS-PCA, Enlarged Ventricles, Mass.]
Figure 6: Performance of EMRT against MODELLOUPE:VARNET: Top panel shows AUROC on the test set of the EMRT (red) at different sampling factors in comparison to the AUROC of MODELLOUPE:VARNET (blue). Note that for all pathologies, MR scans and all sampling rates α ∈ {5%, 8%, 10%, 12.5%}, EMRT outperforms MODELLOUPE:VARNET.

Towards that end, we compare the performance of EMRT against the image-based classifiers which are trained using images reconstructed from the under-sampled k-space data. For the image-based classifiers, the sampling pattern used is the one obtained by the LOUPE method [2]: a state-of-the-art method proposed in the literature which learns a sampling pattern over the k-space such that the data corresponding to it gives the best possible reconstructed image. Furthermore, we use the state-of-the-art image reconstruction model, namely the VARNET model [56], to reconstruct the images from the under-sampled k-space data. We denote this benchmark by MODELLOUPE:VARNET, identifying the method used for learning the sampling pattern and the method used to reconstruct the images from the learnt sampling pattern respectively.

Figure 6 and Table 4 compare the performance of the two sets of models. We observe that for all the abnormalities and for all sampling rates, EMRT outperforms MODELLOUPE:VARNET. The bottom panel of Figure 6 shows the sensitivity and specificity of the models obtained at 5% sampling rate for knees, and 8% sampling rate for abdomen and brain. For a given sensitivity, EMRT has a significantly better specificity compared to MODELLOUPE:VARNET, translating to a lower number of false positive cases. Furthermore, we observe that for some pathologies, such as CS-PCA and Enlarged Ventricles, there is a sharp decrease in the AUROC of MODELLOUPE:VARNET compared to EMRT, which for the most part remains stable across all sampling factors and for all the pathologies.
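For reference, the indirect pipeline just described goes from under-sampled k-space to an image and then to a label. The sketch below shows only the simplest version of the first step, a random Cartesian column mask followed by a zero-filled inverse FFT, together with the SSIM metric discussed next; the actual benchmark uses the learnt LOUPE mask and the VARNET reconstruction model rather than this naive baseline, and the helper names and default values here are assumptions.

    import numpy as np
    from skimage.metrics import structural_similarity

    def cartesian_mask(shape, rate, center_frac=0.04, seed=0):
        # Column (phase-encode) mask for a *centered* k-space: keep a small
        # block of low-frequency lines plus random outer lines so that
        # roughly `rate` of all columns is sampled.
        h, w = shape
        rng = np.random.default_rng(seed)
        n_keep, n_center = int(rate * w), max(1, int(center_frac * w))
        start = w // 2 - n_center // 2
        center = np.arange(start, start + n_center)
        outer = rng.choice(np.setdiff1d(np.arange(w), center),
                           size=max(0, n_keep - n_center), replace=False)
        cols = np.zeros(w, dtype=bool)
        cols[center] = True
        cols[outer] = True
        return np.broadcast_to(cols, (h, w))

    def zero_filled_recon(kspace_centered, rate):
        # Naive baseline: zero out the unsampled columns and inverse-FFT.
        masked = kspace_centered * cartesian_mask(kspace_centered.shape, rate)
        return np.abs(np.fft.ifft2(np.fft.ifftshift(masked)))

    # ssim = structural_similarity(full_recon, zero_filled_recon(kspace, 0.08),
    #                              data_range=full_recon.max() - full_recon.min())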
Lastly, to validate the correctness of our implementation of the image reconstruction method (VARNET [56]) we also report the structural similarity (SSIM) metric in Figure 7, a commonly used metric to measure reconstruction quality. Our SSIM numbers are within the ballpark of the state-of-the-art reported in the literature. Specifically, for the 12.5% sampling rate, the knee reconstruction SSIM is 0.82 compared to 0.88 reported in [56], and the brain reconstruction SSIM is 0.89 compared to 0.94 reported in [56].

7.5 Exp 4: Benefits of Learning Sampling Pattern Using EMRT

In our next set of experiments we show two things. First, we show that the sampling pattern learnt by EMRT (which optimizes the classification accuracy) is different from the ones learnt by any method that optimizes a reconstruction metric (such as LOUPE). Second, we show the benefits of learning a sampling pattern that explicitly optimizes the disease classification accuracy (as achieved by EMRT) in comparison to other sampling patterns.

[Figure 7 plot: Structural Similarity (0.80 to 0.90) against sampling rate (5%, 8%, 10%, 12.5%) for Brain, Prostate T2, Knee and Prostate b50.]
Figure 7: Performance of image reconstruction: Reconstruction methods are an essential component of the indirect classification benchmark. In this figure, we plot the reconstruction performance of the best performing reconstruction methods at increasing sampling rates α ∈ {5%, 8%, 10%, 12.5%}.

Figure 8 contrasts the classification-optimized sampling pattern learnt by EMRT versus the reconstruction-optimized sampling patterns learnt by LOUPE. We clearly see that the sampling pattern learnt by EMRT is composed of a mixture of a set of low frequencies (red lines clustered around the center) and a set of high frequencies (red lines spread away from the center). This is in contrast to the predominantly low frequencies selected by LOUPE, which are largely concentrated around the center.

Next, to show the benefits of learning a sampling pattern catered towards explicitly optimizing the disease identification accuracy, we compare the performance of EMRT against another KSPACE-NET model that is trained to identify the disease using a fixed sampling pattern consisting of only low frequencies (center-focused k-space lines). We denote this model by MODELCENTER. Figure 9 compares the performance of the two sets of classifiers. As evident from the figure, the performance of EMRT is better than the performance of MODELCENTER across all tasks, pointing towards the benefits of learning the sampling pattern that optimizes the classification accuracy.

[Figure 8 panels: a. Prostate T2, b. Brain, c. Knee; ground truth alongside sampling patterns at rates 0.125, 0.1, 0.08 and 0.05 for Prostate T2, Prostate DWI, Knee and Brain.]
Figure 8: Contrasting sampling patterns: Here we compare the sampling patterns learnt by EMRT that optimizes classification accuracy versus the patterns learnt by LOUPE that optimizes the reconstruction metric for different diseases. EMRT is learning a mix of low and high frequencies (red lines spread across the spectrum), whereas LOUPE is predominantly picking low frequencies (blue lines clustered around the center). The prostate and brain sampling patterns are sampled with an 8% sampling rate while knee MR patterns are sampled at a 5% sampling rate.
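A simple way to quantify the qualitative difference visible in Figure 8 is to count how many of the selected phase-encode lines are "low frequency" (near the center of a centered k-space) versus "high frequency". The sketch below assumes, as a simplification, that a learnt pattern can be summarized by per-column inclusion scores; the function names and the 10% definition of "low frequency" are placeholders, not part of the EMRT procedure itself.

    import numpy as np

    def pattern_from_scores(col_scores, rate):
        # Deterministic deployment pattern: keep the top `rate` fraction of
        # columns by learnt score.
        w = len(col_scores)
        keep = np.argsort(col_scores)[::-1][: int(rate * w)]
        mask = np.zeros(w, dtype=bool)
        mask[keep] = True
        return mask

    def low_high_split(mask, center_frac=0.10):
        # Number of selected lines near the center vs. away from it.
        w = len(mask)
        low = np.abs(np.arange(w) - w // 2) <= center_frac * w / 2
        return int((mask & low).sum()), int((mask & ~low).sum())

A center-focused pattern such as the one used by MODELCENTER would place essentially all of its budget in the first of these two counts, while the EMRT patterns in Figure 8 split it between the two.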
The performance gap is larger for tasks for which the frequencies learnt by EMRT are more spread away from the center of the frequency spectrum, such as Mass in the brain scans and CS-PCA in prostate scans.

[Figure 9 plot: AUROC against sampling rate (5%, 8%, 10%, 12.5%) in three panels comparing EMRT and MODELCENTER for Meniscal Tear and ACL, for CS-PCa, and for Enlarged Ventricles (EV) and Mass.]
Figure 9: Benefits of learning the sampling pattern: Figure shows AUROC of EMRT (which learns a sampling pattern that optimizes the disease classification accuracy) in comparison to the AUROC of MODELCENTER, which uses a fixed sampling pattern that is center-focused. Superior performance of EMRT across all tasks and across all the sampling rates is indicative of the benefits of learning a sampling pattern that explicitly optimizes the classification accuracy.

[Figure 10 plot: AUROC against sampling rate (5%, 8%, 10%, 12.5%) in three panels comparing EMRT and MODELFIXED for Meniscal Tear and ACL, for CS-PCa, and for Enlarged Ventricles (EV) and Mass.]
Figure 10: The Role of Random Subset Training in EMRT: Compares the classification performance of the KSPACE-NET trained using the EMRT under-sampling patterns (dashed lines), MODELFIXED, against EMRT (solid lines).

7.6 Exp 5: The Role of Random Subset Training in EMRT

One of the key characteristics of the training methodology of EMRT is the way the KSPACE-NET model qval is trained. Specifically, during the training of the classifier qval, every mini-batch is constructed by first randomly drawing a different sampling pattern from the distribution π, and then applying the chosen pattern to all the samples in the mini-batch (see Algorithm 1). To better understand the role of this specialized training procedure on the performance of EMRT, we examine whether training a KSPACE-NET classifier using different sampling patterns across different mini-batches has any benefit compared to training a classifier using the same fixed sampling pattern across mini-batches. To that end, we compare the performance of the EMRT classifier qval to a model trained with the fixed but learnt sampling pattern. We use the sampling pattern learnt by EMRT as the input to this classifier. The architectures of the two classifiers were identical. In Figure 10, we observe that for most sampling rates the classifier trained using different sampling patterns across mini-batches outperforms the classifier trained with a single fixed sampling pattern, even if the fixed pattern is learnt. Training using randomly chosen sampling patterns across mini-batches acts as a regularizer, which leads to better generalization performance.
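A schematic view of this random-subset training loop, rendered in PyTorch-style Python, is given below. The tensor shapes, the helper for drawing a pattern, and the name col_probs are all assumptions made for illustration; the authoritative description is Algorithm 1.

    import numpy as np
    import torch
    import torch.nn.functional as F

    def sample_pattern(col_probs, rate, rng):
        # Draw one Cartesian pattern by choosing phase-encode columns without
        # replacement, proportionally to the current per-column scores.
        w = len(col_probs)
        cols = rng.choice(w, size=int(rate * w), replace=False,
                          p=col_probs / col_probs.sum())
        mask = np.zeros(w, dtype=np.float32)
        mask[cols] = 1.0
        return torch.from_numpy(mask)  # shape (w,), broadcasts over rows

    def train_epoch(classifier, loader, optimizer, col_probs, rate, rng):
        for kspace, labels in loader:          # kspace: (B, 2, H, W) real/imag
            mask = sample_pattern(col_probs, rate, rng)
            masked = kspace * mask             # one fresh pattern per mini-batch
            logits = classifier(masked).squeeze(1)
            loss = F.binary_cross_entropy_with_logits(logits, labels.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()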
8 Conclusion and Limitations

MR imaging is the gold standard of diagnostic imaging, especially in a differential diagnosis setting, thanks to its excellent soft-tissue contrast properties. However, despite its proven diagnostic value, this imaging modality is not used as a first-in-line tool for early identification of life threatening diseases, primarily because of the lack of accessibility of this modality at the population level. This lack of accessibility can be attributed to the need to generate high-fidelity images that are examined by radiologists. This is so because high-fidelity image generation necessitates the use of expensive scanning hardware to acquire large quantities of high quality k-space data and the execution of complex and time consuming acquisition protocols to collect this data. Motivated by the goal of improving accessibility of MR for early and accurate disease identification at the population level, in this study we propose to skip the image reconstruction step and instead infer the final answer (presence/absence of the disease) directly from the k-space data. We hypothesize that when image reconstruction is not a requirement, one can infer the presence/absence of the disease using a very small tailored fraction of the k-space data. Towards that end we propose a novel deep neural network methodology, which we call EMRT, that first learns the subset of the k-space data which has the largest diagnostic signal to infer the disease and then uses this data to directly infer the disease without generating images. We validate our hypothesis by running a series of experiments showing that small sampling rates can be used without suffering a significant drop in performance compared to models using the fully-sampled k-space. Models such as EMRT that infer the presence of a disease directly from the k-space data have the potential to bring MR scanners closer to deployment for population-level screening of disease.

Limitations
Despite encouraging preliminary results, much work needs to be done to get us closer to a system that can be clinically deployed. The present work is just a first step towards assessing the feasibility of whether it is possible to accurately infer the presence of the disease from a small tailored fraction of k-space data without generating images. There are several limitations associated with the current work, which need to be addressed to bring us closer to developing actual scanning hardware that can operate outside of the specialized imaging environments and yet capture sufficient quantity and quality of the k-space data for the subsequent ML model to infer the disease accurately.

First, the current study works with the data generated from an expensive high-field 3T scanner (the current standard of care) which is housed in specialized imaging environments. As a result the underlying k-space data is of very high quality. In order for these results to generalize to the data acquired by more accessible low-field scanners, one needs to account for the noise ingrained in the data acquired by these low-field scanners. The current work does not propose any mechanism to account for such noise. It only focuses on establishing the limits on the quantity of data needed for accurate diagnosis.

Second, almost all modern-day scanners acquire data in parallel using multiple coils. This not only speeds up the data acquisition process but also increases the signal-to-noise ratio (SNR) of the acquired signal. However, in the current feasibility study, for the sake of simplicity, we resorted to working with the ESC data (the multi-coil data emulated to be coming from a single coil). Future work will focus on extending the EMRT methodology for the multi-coil k-space data. We anticipate that working with multi-coil data will only lead to an improvement in performance because of the larger effective SNR associated with the multi-coil data.
+Third, MR imaging is a 3D imaging modality, where the human clinician renders the disease diagnosis +after looking at all the slices in the volumetric image. The individual slices are seldom interpreted +in isolation. In other words the final diagnosis is at the volume-level. However, in the current +study, because of a dearth of positive cases at volume-level in our data set, we developed the EMRT +methodology to classify individual slices. Volume-level labels can be derived from labels of individual +slices within the volume using any aggregation scheme, such as majority voting or averaging the +probabilities of individual slices. However, naively aggregating slice-level labels can potentially lead +to an increase in the number of false positive volumes. As part of the future work, with the help of +additional data, we will explore extending the EMRT methodology to directly classify the volumes. +Another limitation of EMRT comes from its use of the type of k-space data. In a typical clinical +MR scan multiple volumetric images are reconstructed, each having different contrast properties, +with the goal of providing a radiologists with multiple visual facets of the same underlying anatomy. +These different contrast images are reconstructed from the k-space data corresponding to different +acquisition sequences. For instance, prostate scans are typically acquired using T2-weighted (T2) +and Diffusion-weighted (DW) sequences. However, again in the interest of simplicity, the EMRT +methodology proposed in this study uses the k-space data from a single sequence. In the future +we plan to extend this methodology to incorporate data from multiple sequences informed by what +is used in real clinical settings. Lastly, the EMRT methodology is restricted to learning only the +Cartesian sampling patterns. However, for a given disease identification accuracy, there might exist +other non-Cartesian sampling patterns which are even sparser than the corresponding Cartesian +pattern. While learning such “arbitrary” sampling patterns one needs to restrict to sample from the +subset of patterns that respect the physical constraints of the scanner. In our future work we will also +extend EMRT to learn such “arbitrary” sampling patterns. Furthermore, to facilitate further research +in this potentially high impact area, we are releasing a repository containing the data set and code for +reproducing the experiments. +19 + +References +[1] Zeynettin Akkus, Alfiia Galimzianova, Assaf Hoogi, Daniel L Rubin, and Bradley J Erickson. +Deep learning for brain mri segmentation: state of the art and future directions. Journal of +digital imaging, 30(4):449–459, 2017. +[2] Cagla Deniz Bahadir, Adrian V Dalca, and Mert R Sabuncu. Learning-based optimization of +the under-sampling pattern in mri. In International Conference on Information Processing in +Medical Imaging, pages 780–792. Springer, 2019. +[3] Tim Bakker, Herke van Hoof, and Max Welling. Experimental design for mri by greedy policy +search. Advances in Neural Information Processing Systems, 33, 2020. +[4] Juergen Biederer, Yoshiharu Ohno, Hiroto Hatabu, Mark L Schiebler, Edwin JR van Beek, Jens +Vogel-Claussen, and Hans-Ulrich Kauczor. Screening for lung cancer: Does mri have a role? +European journal of radiology, 86:353–360, 2017. +[5] John Brodersen and Volkert Dirk Siersma. Long-term psychosocial consequences of false- +positive screening mammography. The Annals of Family Medicine, 11(2):106–115, 2013. 
+[6] Louise Clare Brown, Hashim U Ahmed, Rita Faria, Ahmed El-Shater Bosaily, Rhian Gabe, +Richard S Kaplan, Mahesh Parmar, Yolanda Collaco-Moraes, Katie Ward, Richard Graham +Hindley, Alex Freeman, Alexander Kirkham, Robert Oldroyd, Chris Parker, Simon Bott, Nick +Burns-Cox, Tim Dudderidge, Maneesh Ghei, Alastair Henderson, Rajendra Persad, Derek J +Rosario, Iqbal Shergill, Mathias Winkler, Marta Soares, Eldon Spackman, Mark Sculpher, and +Mark Emberton. Multiparametric MRI to improve detection of prostate cancer compared with +transrectal ultrasound-guided prostate biopsy alone: the PROMIS study. Health technology +assessment (Winchester, England), 22(39):1–176, 7 2018. +[7] Mark A Brown and Richard C Semelka. MRI: basic principles and applications. John Wiley & +Sons, 2011. +[8] Iztok Caglic, Viljem Kovac, and Tristan Barrett. Multiparametric mri-local staging of prostate +cancer and beyond. Radiology and oncology, 53(2):159–170, 2019. +[9] Elizabeth K Cole, John M Pauly, Shreyas S Vasanawala, and Frank Ong. Unsupervised mri +reconstruction with generative adversarial networks. arXiv preprint arXiv:2008.13065, 2020. +[10] Albert Comelli, Navdeep Dahiya, Alessandro Stefano, Federica Vernuccio, Marzia Portoghese, +Giuseppe Cutaia, Alberto Bruno, Giuseppe Salvaggio, and Anthony Yezzi. Deep learning-based +methods for prostate segmentation in magnetic resonance imaging. Applied Sciences, 11(2):782, +2021. +[11] Thomas M Cover. Elements of information theory. John Wiley & Sons, 1999. +[12] Ian Covert, Scott Lundberg, and Su-In Lee. Explaining by removing: A unified framework for +model explanation. arXiv preprint arXiv:2011.14878, 2020. +[13] Arjun D Desai, Andrew M Schmidt, Elka B Rubin, Christopher Michael Sandino, Marianne Su- +san Black, Valentina Mazzoli, Kathryn J Stevens, Robert Boutin, Christopher Re, Garry E +Gold, et al. Skm-tea: A dataset for accelerated mri reconstruction with dense image labels for +quantitative clinical evaluation. In Thirty-fifth Conference on Neural Information Processing +Systems Datasets and Benchmarks Track (Round 2), 2021. +[14] D Eldred-Evans, P Burak, MJ Connor, E Day, M Evans, F Fiorentino, M Gammon, F Hosking- +Jervis, N Klimowska-Nassar, W McGuire, AR Padhani, AT Prevost, D Price, H Sokhi, H Tam, +M Winkler, and HU Ahmed. Population-Based Prostate Cancer Screening With Magnetic +Resonance Imaging or Ultrasonography: The IP1-PROSTAGRAM Study. Jama Oncology, +7(3):395 – 402, 2021. +[15] David Eldred-Evans, Paula Burak, Martin J Connor, Emily Day, Martin Evans, Francesca +Fiorentino, Martin Gammon, Feargus Hosking-Jervis, Natalia Klimowska-Nassar, William +McGuire, et al. Population-based prostate cancer screening with magnetic resonance imaging +or ultrasonography: the ip1-prostagram study. JAMA oncology, 7(3):395–402, 2021. +20 + +[16] Joann G Elmore, Mary B Barton, Victoria M Moceri, Sarah Polk, Philip J Arena, and Suzanne W +Fletcher. Ten-year risk of false positive screening mammograms and clinical breast examinations. +New England Journal of Medicine, 338(16):1089–1096, 1998. +[17] Joshua J Fenton, Meghan S Weyrich, Shauna Durbin, Yu Liu, Heejung Bang, and Joy Melnikow. +Prostate-specific antigen–based screening for prostate cancer: evidence report and systematic +review for the us preventive services task force. Jama, 319(18):1914–1931, 2018. +[18] Kirema Garcia-Reyes, Niccolò M Passoni, Mark L Palmeri, Christopher R Kauffman, King- +shuk Roy Choudhury, Thomas J Polascik, and Rajan T Gupta. 
Detection of prostate cancer with +multiparametric mri (mpmri): effect of dedicated reader education on accuracy and confidence +of index and anterior cancer diagnosis. Abdominal imaging, 40(1):134–142, 2015. +[19] Melanie Hamilton-Basich. Hyperfine receives fda clearance for portable mri technology. AXIS +Imaging News, 2020. +[20] Kerstin Hammernik, Teresa Klatzer, Erich Kobler, Michael P Recht, Daniel K Sodickson, +Thomas Pock, and Florian Knoll. Learning a variational network for reconstruction of acceler- +ated mri data. Magnetic resonance in medicine, 79(6):3055–3071, 2018. +[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual +networks. In European conference on computer vision, pages 630–645. Springer, 2016. +[22] Nils Hendrix, Ward Hendrix, Kees van Dijke, Bas Maresch, Mario Maas, Stijn Bollen, Alexander +Scholtens, Milko de Jonge, Lee-Ling Sharon Ong, Bram van Ginneken, et al. Musculoskeletal +radiologist-level performance by using deep learning for detection of scaphoid fractures on +conventional multi-view radiographs of hand and wrist. European Radiology, pages 1–14, 2022. +[23] Lukas Hirsch, Yu Huang, Shaojun Luo, Carolina Rossi Saccarelli, Roberto Lo Gullo, Isaac +Daimiel Naranjo, Almir GV Bitencourt, Natsuko Onishi, Eun Sook Ko, Doris Leithner, et al. +Radiologist-level performance by using deep learning for segmentation of breast cancers on mri +scans. Radiology: Artificial Intelligence, 4(1):e200231, 2021. +[24] Lukas Hirsch, Yu Huang, Shaojun Luo, Carolina Rossi Saccarelli, Roberto Lo Gullo, +Isaac Daimiel Naranjo, Almir GV Bitencourt, Natsuko Onishi, Eun Sook Ko, Dortis Lei- +thner, et al. Deep learning achieves radiologist-level performance of tumor segmentation in +breast mri. arXiv preprint arXiv:2009.09827, 2020. +[25] Iris AM Huijben, Bastiaan S Veeling, and Ruud JG van Sloun. Deep probabilistic subsampling +for task-adaptive compressed sensing. In International Conference on Learning Representations, +2019. +[26] Chang Min Hyun, Hwa Pyung Kim, Sung Min Lee, Sungchul Lee, and Jin Keun Seo. Deep +learning for undersampled mri reconstruction. Physics in Medicine & Biology, 63(13):135007, +2018. +[27] Dragan Ilic, Mia Djulbegovic, Jae Hung Jung, Eu Chang Hwang, Qi Zhou, Anne Cleves, +Thomas Agoritsas, and Philipp Dahm. Prostate cancer screening with prostate-specific antigen +(psa) test: a systematic review and meta-analysis. Bmj, 362, 2018. +[28] Neil Jethani, Mukund Sudarshan, Yindalon Aphinyanaphongs, and Rajesh Ranganath. Have +we learned to explain?: How interpretability methods can learn to encode predictions in their +interpretations. In International Conference on Artificial Intelligence and Statistics, pages +1459–1467. PMLR, 2021. +[29] Neil Jethani, Mukund Sudarshan, Ian Connick Covert, Su-In Lee, and Rajesh Ranganath. +Fastshap: Real-time shapley value estimation. In International Conference on Learning Repre- +sentations, 2022. +[30] Kyong Hwan Jin, Michael Unser, and Kwang Moo Yi. Self-supervised deep active accelerated +mri. arXiv preprint arXiv:1901.04547, 2019. +21 + +[31] Patricia M Johnson, Angela Tong, Awani Donthireddy, Kira Melamud, Robert Petrocelli, Paul +Smereka, Kun Qian, Mahesh B Keerthivasan, Hersh Chandarana, and Florian Knoll. Deep +learning reconstruction enables highly accelerated biparametric mr imaging of the prostate. +Journal of Magnetic Resonance Imaging, 56(1):184–195, 2022. 
+[32] Veeru Kasivisvanathan, Antti S Rannikko, Marcelo Borghi, Valeria Panebianco, Lance A +Mynderse, Markku H Vaarala, Alberto Briganti, Lars Budäus, Giles Hellawell, Richard G +Hindley, et al. Mri-targeted or standard biopsy for prostate-cancer diagnosis. New England +Journal of Medicine, 378(19):1767–1777, 2018. +[33] TP Kilpeläinen, TLJ Tammela, L Määttänen, P Kujala, Ulf-Håkan Stenman, M Ala-Opas, +TJ Murtola, and A Auvinen. False-positive screening results in the finnish prostate cancer +screening trial. British journal of cancer, 102(3):469–474, 2010. +[34] Florian Knoll, Tullie Murrell, Anuroop Sriram, Nafissa Yakubova, Jure Zbontar, Michael +Rabbat, Aaron Defazio, Matthew J Muckley, Daniel K Sodickson, C Lawrence Zitnick, et al. +Advancing machine learning for mr image reconstruction with an open competition: Overview +of the 2019 fastmri challenge. Magnetic resonance in medicine, 84(6):3054–3070, 2020. +[35] Sandra Labus, Martin M Altmann, Henkjan Huisman, Angela Tong, Tobias Penzkofer, +Moon Hyung Choi, Ivan Shabunin, David J Winkel, Pengyi Xing, Dieter H Szolar, et al. +A concurrent, deep learning–based computer-aided detection system for prostate multiparamet- +ric mri: a performance study involving experienced and less-experienced radiologists. European +Radiology, pages 1–13, 2022. +[36] Jennifer Elston Lafata, Janine Simpkins, Lois Lamerato, Laila Poisson, George Divine, and +Christine Cole Johnson. +The economic impact of false-positive cancer screens. +Cancer +Epidemiology and Prevention Biomarkers, 13(12):2126–2132, 2004. +[37] Erik G Larsson, Deniz Erdogmus, Rui Yan, Jose C Principe, and Jeffrey R Fitzsimmons. Snr- +optimality of sum-of-squares reconstruction for phased-array magnetic resonance imaging. +Journal of Magnetic Resonance, 163(1):121–123, 2003. +[38] Dongwook Lee, Jaejun Yoo, and Jong Chul Ye. Deep residual learning for compressed sensing +mri. In 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pages +15–18. IEEE, 2017. +[39] Michael Lustig, David Donoho, and John M Pauly. Sparse mri: The application of compressed +sensing for rapid mr imaging. Magnetic Resonance in Medicine: An Official Journal of the +International Society for Magnetic Resonance in Medicine, 58(6):1182–1195, 2007. +[40] Michael Lustig, David L Donoho, Juan M Santos, and John M Pauly. Compressed sensing mri. +IEEE signal processing magazine, 25(2):72–82, 2008. +[41] Maria Adele Marino, Thomas Helbich, Pascal Baltzer, and Katja Pinker-Domenig. Multipara- +metric mri of the breast: A review. Journal of Magnetic Resonance Imaging, 47(2):301–315, +2018. +[42] Michael G Marmot, DG Altman, DA Cameron, JA Dewar, SG Thompson, and Maggie Wilcox. +The benefits and harms of breast cancer screening: an independent review. British journal of +cancer, 108(11):2205–2240, 2013. +[43] Michael Mathieu, Mikael Henaff, and Yann LeCun. Fast training of convolutional networks +through ffts. arXiv preprint arXiv:1312.5851, 2013. +[44] Matthew J Muckley, Bruno Riemenschneider, Alireza Radmanesh, Sunwoo Kim, Geunu Jeong, +Jingyu Ko, Yohan Jun, Hyungseob Shin, Dosik Hwang, Mahmoud Mostapha, et al. Results of +the 2020 fastmri challenge for machine learning mr image reconstruction. IEEE Transactions +on Medical Imaging, 40(9):2306–2317, 2021. +[45] Jordan Nasri. Office-based, point-of-care, low-field mri system to guide prostate interventions: +Recent developments. UROLOGY, 2021. +22 + +[46] Anwar R Padhani and Baris Turkbey. 
Detecting prostate cancer with deep learning for mri: a +small step forward, 2019. +[47] Luis Pineda, Sumana Basu, Adriana Romero, Roberto Calandra, and Michal Drozdzal. Active +mr k-space sampling with reinforcement learning. In International Conference on Medical +Image Computing and Computer-Assisted Intervention, pages 23–33. Springer, 2020. +[48] Ardeshir R Rastinehad, Baris Turkbey, Simpa S Salami, Oksana Yaskiv, Arvin K George, +Mathew Fakhoury, Karin Beecher, Manish A Vira, Louis R Kavoussi, David N Siegel, +et al. Improving detection of clinically significant prostate cancer: magnetic resonance imag- +ing/transrectal ultrasound fusion guided prostate biopsy. The Journal of urology, 191(6):1749– +1754, 2014. +[49] Michael P Recht, Jure Zbontar, Daniel K Sodickson, Florian Knoll, Nafissa Yakubova, Anuroop +Sriram, Tullie Murrell, Aaron Defazio, Michael Rabbat, Leon Rybak, et al. Using deep learning +to accelerate knee mri at 3 t: results of an interchangeability study. American Journal of +Roentgenology, 215(6):1421–1429, 2020. +[50] Oren Rippel, Jasper Snoek, and Ryan P Adams. Spectral representations for convolutional +neural networks. arXiv preprint arXiv:1506.03767, 2015. +[51] Andrew B Rosenkrantz, Fang-Ming Deng, Sooah Kim, Ruth P Lim, Nicole Hindman, Thais C +Mussi, Bradley Spieler, Jason Oaks, James S Babb, Jonathan Melamed, et al. Prostate cancer: +multiparametric mri for index lesion localization—a multiple-reader study. American Journal +of Roentgenology, 199(4):830–837, 2012. +[52] V Sathiyamoorthi, AK Ilavarasi, K Murugeswari, Syed Thouheed Ahmed, B Aruna Devi, and +Murali Kalipindi. A deep convolutional neural network based computer aided diagnosis system +for the prediction of alzheimer’s disease in mri images. Measurement, 171:108838, 2021. +[53] Li Shen, Laurie R Margolies, Joseph H Rothstein, Eugene Fluder, Russell McBride, and Weiva +Sieh. Deep learning to improve breast cancer detection on screening mammography. Scientific +reports, 9(1):1–12, 2019. +[54] Susan Slatkoff, Stephen Gamboa, Adam J Zolotor, Anne L Mounsey, and Kohar Jones. Psa +testing: When it’s useful, when it’s not. The Journal of family practice, 60(6):357, 2011. +[55] Anita Slomski. Avoiding unnecessary prostate biopsies with mri. JAMA, 317(12):1206–1206, +2017. +[56] Anuroop Sriram, Jure Zbontar, Tullie Murrell, Aaron Defazio, C Lawrence Zitnick, Nafissa +Yakubova, Florian Knoll, and Patricia Johnson. End-to-end variational networks for accelerated +mri reconstruction. In International Conference on Medical Image Computing and Computer- +Assisted Intervention, pages 64–73. Springer, 2020. +[57] Andreas Stang and Karl-Heinz Jöckel. The impact of cancer screening on all-cause mortality: +what is the best we can expect? Deutsches Ärzteblatt International, 115(29-30):481, 2018. +[58] Mukund Sudarshan, Wesley Tansey, and Rajesh Ranganath. Deep direct likelihood knockoffs. +Advances in neural information processing systems, 33:5036–5046, 2020. +[59] Glen B Taksler, Nancy L Keating, and Michael B Rothberg. Implications of false-positive +results for future cancer screenings. Cancer, 124(11):2390–2398, 2018. +[60] JE Thompson, PJ Van Leeuwen, Daniel Moses, Ron Shnier, Phillip Brenner, Warick Delprado, +M Pulbrook, Maret Böhm, Anne M Haynes, Andrew Hayen, et al. The diagnostic performance +of multiparametric magnetic resonance imaging to detect significant prostate cancer. The +Journal of urology, 195(5):1428–1435, 2016. +[61] Mark Tygert and Jure Zbontar. 
Simulating single-coil mri from the responses of multiple coils. +Communications in Applied Mathematics and Computational Science, 15(2):115–127, 2020. +[62] Christopher JD Wallis, Masoom A Haider, and Robert K Nam. Role of mpmri of the prostate in +screening for prostate cancer. Translational andrology and urology, 6(3):464, 2017. +23 + +[63] Jeffrey C Weinreb, Jelle O Barentsz, Peter L Choyke, Francois Cornud, Masoom A Haider, +Katarzyna J Macura, Daniel Margolis, Mitchell D Schnall, Faina Shtern, Clare M Tempany, +et al. Pi-rads prostate imaging–reporting and data system: 2015, version 2. European urology, +69(1):16–40, 2016. +[64] Tomer Weiss, Sanketh Vedula, Ortal Senouf, Oleg Michailovich, Michael Zibulevsky, and Alex +Bronstein. Joint learning of cartesian under sampling andre construction for accelerated mri. In +ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing +(ICASSP), pages 8653–8657. IEEE, 2020. +[65] Sidney J Winawer, Robert H Fletcher, L Miller, Fiona Godlee, MH Stolar, CD Mulrow, +SH Woolf, SN Glick, TG Ganiats, JH Bond, et al. Colorectal cancer screening: clinical +guidelines and rationale. Gastroenterology, 112(2):594–642, 1997. +[66] David J Winkel, Angela Tong, Bin Lou, Ali Kamen, Dorin Comaniciu, Jonathan A Disselhorst, +Alejandro Rodríguez-Ruiz, Henkjan Huisman, Dieter Szolar, Ivan Shabunin, et al. A novel +deep learning based computer-aided diagnosis system improves the accuracy and efficiency of +radiologists in reading biparametric magnetic resonance images of the prostate: results of a +multireader, multicase study. Investigative radiology, 56(10):605–613, 2021. +[67] Tien Yin Wong and Neil M Bressler. Artificial intelligence with deep learning technology looks +into diabetic retinopathy screening. Jama, 316(22):2366–2367, 2016. +[68] JS Wysock, N Mendhiratta, F Zattoni, X Meng, M Bjurlin, WC Huang, H Lepor, +AB Rosenkrantz, and SS. Taneja. Predictive Value of Negative 3T Multiparametric Mag- +netic Resonance Imaging of the Prostate on 12-core Biopsy Results. BJU Int., 118(4):515–520, +2016. +[69] Sunghwan Yoo, Isha Gujrathi, Masoom A Haider, and Farzad Khalvati. Prostate cancer detection +using deep convolutional neural networks. Scientific reports, 9(1):1–10, 2019. +[70] Jure Zbontar, Florian Knoll, Anuroop Sriram, Tullie Murrell, Zhengnan Huang, Matthew J +Muckley, Aaron Defazio, Ruben Stern, Patricia Johnson, Mary Bruno, et al. fastmri: An open +dataset and benchmarks for accelerated mri. arXiv preprint arXiv:1811.08839, 2018. +[71] Jinwei Zhang, Hang Zhang, Alan Wang, Qihao Zhang, Mert Sabuncu, Pascal Spincemaille, +Thanh D Nguyen, and Yi Wang. Extending loupe for k-space under-sampling pattern optimiza- +tion in multi-coil mri. In International Workshop on Machine Learning for Medical Image +Reconstruction, pages 91–101. Springer, 2020. +[72] Min Zhang, Geoffrey S Young, Huai Chen, Jing Li, Lei Qin, J Ricardo McFaline-Figueroa, +David A Reardon, Xinhua Cao, Xian Wu, and Xiaoyin Xu. Deep-learning detection of cancer +metastases to the brain on mri. Journal of Magnetic Resonance Imaging, 52(4):1227–1236, +2020. +[73] Zizhao Zhang, Adriana Romero, Matthew J Muckley, Pascal Vincent, Lin Yang, and Michal +Drozdzal. Reducing uncertainty in undersampled mri reconstruction with active acquisition. In +Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages +2049–2058, 2019. 
+[74] Ruiyang Zhao, Burhaneddin Yaman, Yuxin Zhang, Russell Stewart, Austin Dixon, Florian +Knoll, Zhengnan Huang, Yvonne W Lui, Michael S Hansen, and Matthew P Lungren. fastmri+: +Clinical pathology annotations for knee and brain fully sampled multi-coil mri data. arXiv +preprint arXiv:2109.03812, 2021. +[75] Bo Zhu, Jeremiah Z Liu, Stephen F Cauley, Bruce R Rosen, and Matthew S Rosen. Image +reconstruction by domain-transform manifold learning. Nature, 555(7697):487–492, 2018. +24 + +A +Classification Metrics +A.1 +Knee Results +Sampling Rate +Pathologies +NPV / PPV (ARMS) +NPV / PPV (Recon) +NPV / PPV (RSS) +100% +ACL +99.2 ± 0.2 / 14.5 ± 0.9 +Meniscal Tear +97.4 ± 0.3 / 44.9 ± 4.7 +12.5% +ACL +99.1 ± 0.1 / 13.1 ± 1.5 +98.8 ± 0.3 / 10.9 ± 1.7 +Meniscal Tear +97 ± 0.3 / 42.3 ± 1.7 +96.9 ± 0.5 / 10.9 ± 1.7 +10% +ACL +99.3 ± 0.2 / 12.7 ± 1.3 +98.9 ± 0.2 / 11.5 ± 2 +Meniscal Tear +97.6 ± 0.5/ 41.1 ± 2 +97.1 ± 0.6 / 33.9 ± 2.5 +8% +ACL +99. ± 0.3 / 13 ± 1.2 +99 ± 0.2 / 11.1 ± 1.6 +Meniscal Tear +97.8 ± 0.4 / 41.1 ± 2 +97.1 ± 0.4 / 33.9 ± 2.4 +5% +ACL +99.1 ± 0.1 / 13.3 ± 0.7 +98.8 ± 0.3 / 11.5 ± 1.6 +Meniscal Tear +97. ± 0.3 / 39.8 ± 1.3 +96.8 ± 0.5 / 34.4 ± 2.9 +Table 5: Knee NPV/PPV Results +Sampling Rate +Pathologies +Sens / Spec (ARMS) +Sens / Spec (Recon) +Sens / Spec (RSS) +100% +ACL +81.1± 4.4 / 82.2± 2.2 +Meniscal Tear +82.8± 2.2 / 86± 2.7 +12.5% +ACL +80.9 ± 4.6 / 79.7 ± 4.2 +75.5 ± 8.2 / 76.2 ± 7.3 +Meniscal Tear +82.2 ± 2.5 / 84.8 ± 0.8 +81.4 ± 3.1 / 78 ± 2.8 +10% +ACL +80 ± 2.6 / 80.7 ± 1.1 +77 ± 4.1 / 77.2 ± 4.9 +Meniscal Tear +81 ± 1.9 / 83.4 ± 1.2 +82.8 ± 3.7 / 78 ± 2.3 +8% +ACL +78.2 ± 7.3 / 80.5 ± 2 +80.2 ± 4.2 / 75.6 ± 4.6 +Meniscal Tear +80.6 ± 2.2 / 84.4 ± 0.9 +82.8 ± 2.8 / 78.1 ± 2.4 +5% +ACL +84.8 ± 3.5 / 78.1 ± 2.5 +73.9 ± 8.2 / 78.1 ± 6.3 +Meniscal Tear +80.8 ± 3.4 / 84 ± 1.1 +81 ± 3.2 / 78.9 ± 2.8 +Table 6: Knee Sensitivity / Specificity Results +25 + +A.2 +Brain Results +Sampling Rate +Pathologies +NPV / PPV (ARMS) +NPV / PPV (Recon) +NPV / PPV (RSS) +100% +Enlarged Ventricles +99.6 ± 0.2 / 18.3 ± 9.7 +Mass +99.5± 0.3 / 8.1± 0.9 +12.5% +Enlarged Ventricles +99.5 ± 0.1 / 15.3 ± 7.1 +99.3 ± 0.3 / 5.9 ± 1.5 +Mass +99.5 ± 0.2 / 8.3 ± 1.4 +98.8 ± 0.2/ 3.8 +10% +Enlarged Ventricles +99.5 ± 0.1 / 11.3 ± 4 +99.4 ± 0.1/ 8.1 ± 2.5 +Mass +99.4 ± 0.2 / 6.7 ± 1.4 +99.4 ± 0.2/ 5.1 ± 1.1 +8% +Enlarged Ventricles +99.6 ± 0.1 / 9.3 ± 3.7 +99.4 ± 0.2/ 5.1 ± 1.1 +Mass +99.6 ± 0.1 / 6.8 ± 1.3 +98.7 ± 0.2/ 4.4 ± 0.8 +5% +Enlarged Ventricles +99.5 ± 0.1 / 9.1 ± 2.2 +99.4 ± 0.2/ 6.5 ± 2.2 +Mass +99.5 ± 0.3 / 7 ± 1.2 +98.7 ± 0.2 / 4.4 ± 0.6 +Table 7: Brain NPV/PPV Results +Sampling Rate +Pathologies +Sens / Spec (ARMS) +Sens / Spec (Recon) +Sens / Spec (RSS) +100% +Enlarged Ventricles +84.9 ± 7 / 85.8 ± 7.9 +Mass +86.2 ± 6.9 / 72.4 ± 3.8 +12.5% +Enlarged Ventricles +83.3 ± 2.2 / 84.3 ± 7.5 +83.4 ± 8.1 / 62.2 ± 11.7 +Mass +85.6 ± 4.7 / 73 ± 4.8 +83.3 ± 5 / 38.8 ± 12.2 +10% +Enlarged Ventricles +83.9 ± 4.1 / 79 ± 10.4 +84.1 ± 5 / 71.9 ± 10.3 +Mass +85.9 ± 4.9 / 65.3 ± 7.8 +73.9 ± 4.6 / 56.3 ± 7.7 +8% +Enlarged Ventricles +88.2 ± 3.7 / 74.1 ± 8.2 +88.5 ± 3.7 / 54.2 ± 11.1 +Mass +90.0 ± 2.1/64.7 ± 5.5 +74.2 ± 5.4 / 53.5 ± 11.1 +5% +Enlarged Ventricles +86.2 ± 4.5/75.4 ± 7.4 +84.8 ± 7.9 / 63.1 ± 14.8 +Mass +87.8 ± 7.7/66.3 ± 7.2 +73.4 ± 3.9 / 55.2 ± 7.5 +Table 8: Brain Sensitivity / Specificity Results +A.3 +Prostate Results +Sampling Rate +Pathologies +Sens / Spec (ARMS) +Sens / Spec (Recon) +Sens / Spec (RSS) +100% +CS-PCa +93.3 ± 0.5 / 59.3 ± 5.4 +12.5% +CS-PCa +91.1 ± 9.6 / 59.2 ± 1.9 +90 ± 
9.6 / 57.9 ± 1.9 +10% +CS-PCa +88 ± 8.1 / 64.7 ± 5.1 +86 ± 8.1 / 54.4 ± 2.3 +8% +CS-PCa +91.3 ± 5.3 / 60.8 ± 2.1 +89 ± 5.3 / 54.3 ± 2.1 +5% +CS-PCa +88.5 ± 4.4 / 62.9 ± 1.5 +88.6 ± 4.4 / 47 ± 1.5 +Table 9: Prostate Sensitivity/Specificity Results +26 + +Sampling Rate +Pathologies +NPV / PPV (ARMS) +NPV / PPV (Recon) +NPV / PPV (RSS) +100% +CS-PCa +99.2 ± 0.0 / 14.5 ± 1.6 +12.5% +CS-PCa +98.7 ± 0.2 /13.4 ± 1.8 +98.7 ± 0.6 / 12.2 ± 1.8 +10% +CS-PCa +99 ± 0.3 /12.8 ± 5 +98.8 ± 0.6 / 11.7 ± 5 +8% +CS-PCa +98.7 ± 0.6 / 13.8 ± 2.1 +97 ± 0.3 /11.8 ± 2.1 +5% +CS-PCa +98.9 ± 0.6 / 12.1 ± 1.5 +96.9 ± 0.1 /10 ± 1.5 +Table 10: Prostate NPV/PPV Results +27 + diff --git a/6tFKT4oBgHgl3EQf_i4o/content/tmp_files/load_file.txt b/6tFKT4oBgHgl3EQf_i4o/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..7d99e3f1e85a52b554ae2fdfb2136c19a2fca832 --- /dev/null +++ b/6tFKT4oBgHgl3EQf_i4o/content/tmp_files/load_file.txt @@ -0,0 +1,1211 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf,len=1210 +page_content='On the Feasibility of Machine Learning Augmented Magnetic Resonance for Point-of-Care Identification of Disease Raghav Singhal1∗ Mukund Sudarshan1,∗ Anish Mahishi1 Sri Kaushik1 Luke Ginnochio2 Angela Tong2 Hersh Chandarana2 Daniel Sodickson2 Rajesh Ranganath1,3 Sumit Chopra1,2 Abstract Early detection of many life-threatening diseases (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content=', prostate and breast cancer) within at-risk population can improve clinical outcomes and reduce cost of care.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content=' While numerous disease-specific “screening" tests that are closer to Point-of-Care (POC) are in use for this task, their low specificity results in unnecessary biopsies, leading to avoidable patient trauma and wasteful healthcare spending.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content=' On the other hand, despite the high accuracy of Magnetic Resonance (MR) imaging in disease diagnosis, it is not used as a POC disease identification tool because of poor accessibility.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content=' The root cause of poor accessibility of MR stems from the requirement to reconstruct high-fidelity images, as it necessitates a lengthy and complex process of acquiring large quantities of high-quality k-space measurements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content=' In this study we explore the feasibility of an ML-augmented MR pipeline that directly infers the disease sidestepping the image reconstruction process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content=' We hypothesise that the disease classification task can be solved using a very small tailored subset of k-space data, compared to image reconstruction.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content=' Towards that end, we propose a method that performs two tasks: 1) identifies a subset of the k-space that maximizes disease identification accuracy, and 2) infers the disease directly using the identified k-space subset, bypassing the image reconstruction step.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content=' We validate our hypothesis by measuring the performance of the proposed system across multiple diseases and anatomies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content=' We show that comparable performance to image-based classifiers, trained on images reconstructed with full k-space data, can be achieved using small quantities of data: 8% of the data for detecting multiple abnormalities in prostate and brain scans, and 5% of the data for detecting knee abnormalities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content=' To better understand the proposed approach and instigate future research, we provide an extensive analysis and release code.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content=' 1 Introduction Early and accurate identification of several terminal diseases, such as breast cancer [42], prostate cancer [27], and colon cancer [65], within the at-risk population followed by appropriate intervention leads to favorable clinical outcomes for patients by reducing mortality rates [57] and reducing cost of care.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content=' In the current standard-of-care this goal is accomplished by subjecting at-risk but otherwise ∗Equal Contribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content=' 1 Department of Computer Science, New York University, New York, NY.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content=' 2 Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, New York University Grossman School of Medicine, New York, NY, United States.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content=' 3 Center for Data Science, New York University, New York, NY, United States.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQf_i4o/content/2301.11962v1.pdf'} +page_content=' Correspondence to: Raghav Singhal , +where < xµQij +µνxν > is the expectation value of the quadratic form of the +operator Q. +Following the same argument as [2], we find that the length of the +string squared is proportional to the above expectation value +< xµQij +µνxν > ∝ l2 +s. +This implies that the string’s length depends on the geometry of the +non commutative space i.e. depends on the string theory in question, and +is determined by the R-matrix of the quantized group. +The procedure on a static spacetime is as follows: +1. Determine the symmetry group of the theory and find the corre- +sponding quantum group. +2. 
Use the product presented in section 3 instead of the usual product, and Jackson's derivative D_q f(x) = (f(qx) − f(x))/((q − 1)x) (which reduces to the ordinary derivative as q → 1) instead of the usual derivative.
3. Use the corresponding formulae to relate back to the original manifold as we did in section 4; this usually leads to an infinite series of higher derivatives in the Lagrangian.

6 Symmetries and theories on dynamical spacetimes

The first step of q-quantization is to replace the symmetry group with a quantum group, which is a deformation of its universal enveloping algebra. By construction this gives more symmetries than the commutative theory. In field and string theory, symmetries are classified into spacetime symmetries and internal symmetries: spacetime symmetries relate directly to the ambient manifold on which the field/string theory is defined, while internal symmetries are additional structure on the manifold. While on a static spacetime (disregarding gravity) only the internal symmetry group is to be q-deformed, the spacetime symmetry group must contribute to the R-matrix if dynamical spacetimes are to be studied; the deformations of the spacetime symmetry should lead to effects on the gravitational aspects of the theory such as changes in curvature, singularities, etc. Similar studies of non commutativity's effects on gravity are found in [] but use the canonical non commutativity; using q-deformations to study gravity is a subject of future research.

7 Conclusion and outlook

The results presented in this paper showed that a product of functions on a q-deformed space, at least for small deformations, exists and is well defined; we give an explicit formula in the paper. We also showed that field and string theory can be defined on q-deformed manifolds, but with an enlarged set of symmetries and extra features depending on the theory and the manifold in question.
A possible direction of future research is to study the enlarged set of symmetries due to q-deformations as well as their mathematical and phenomenological implications. Another direction is to study more complicated field/string theories and find ways to define higher spin fields on such spaces.

Acknowledgments
We would like to thank Dr. Ivan Kolar for the useful discussions on the topic.

References
[1] Seiberg, N. and Witten, E. (1999) “String theory and noncommutative geometry,” Journal of High Energy Physics, 1999(09), pp. 032–032.
[2] Szabo, R. (2003) “Quantum field theory on noncommutative spaces,” Physics Reports, 378(4), pp. 207–299.
[3] Doplicher, S., Fredenhagen, K. and Roberts, J.E. (1995) “The quantum structure of spacetime at the Planck scale and Quantum Fields,” Communications in Mathematical Physics, 172(1), pp. 187–220.
[4] Ahluwalia, D.V. (1994) “Quantum measurement, gravitation, and locality,” Physics Letters B, 339(4), pp. 301–303.
[5] C. S. Chu and P. M. Ho, Noncommutative open string and D-brane, Nucl. Phys. B 550, 151 (1999) [hep-th/9812219].
[6] B. Jurco, S. Schraml, P. Schupp and J. Wess, Enveloping algebra valued gauge transformations for non-Abelian gauge groups on noncommutative spaces, Eur. Phys. J. C17, 521 (2000) [hep-th/0006246].
[7] Chaichian, M. and Demichev, A.P. Introduction to quantum groups. Singapore: World Scientific (1996).
[8] A. Klimyk and K. Schmudgen, Quantum Groups and Their Representations, Springer (1997).
[9] Hu, N.H. and Pei, Y.F. (2008) “Notes on 2-parameter Quantum Groups I,” Science in China Series A: Mathematics, 51(6), pp. 1101–1110.
[10] Hu, N. and Pei, Y.
(2012) “Notes on two-parameter quantum groups, +(II),” Communications in Algebra, 40(9), pp. 3202–3220. +8 + +[11] Wulkenhaar, R. (2006) “Field theories on deformed spaces,” Journal +of Geometry and Physics, 56(1), pp. 108–141. +[12] Grosse, H., Madore, J. and Steinacker, H. (2001) “Field theory on +the Q-deformed fuzzy sphere I,” Journal of Geometry and Physics, +38(3-4), pp. 308–342. +[13] Grosse, H., Madore, J. and Steinacker, H. (2002) “Field theory on +the Q-deformed Fuzzy Sphere II: Quantization,” Journal of Geome- +try and Physics, 43(2-3), pp. 205–240. +[14] BARDEK, V., DOREˇSI´C, M. and MELJANAC, S. (1994) “An ex- +ample of Q-deformed field theory,” International Journal of Modern +Physics A, 09(23), pp. 4185–4194. +[15] Minahan, J., Naseer, U. and Thull, C. (2021) “Conformal field the- +ories on deformed spheres, anomalies, and supersymmetry,” SciPost +Physics, 10(3). +9 + diff --git a/CdE1T4oBgHgl3EQfWAQw/content/tmp_files/load_file.txt b/CdE1T4oBgHgl3EQfWAQw/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..632224f6afee42fc6621f7a8e270ee1d2764ce5a --- /dev/null +++ b/CdE1T4oBgHgl3EQfWAQw/content/tmp_files/load_file.txt @@ -0,0 +1,214 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf,len=213 +page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'} +page_content='03108v1 [hep-th] 8 Jan 2023 Fields and strings on non commutative q-deformed spaces Poula Tadros Department of Applied Physics, Aalto University School of Science, FI-00076 Aalto, Finland.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'} +page_content=' email:poulatadros9@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'} +page_content='com Abstract We study scalar field and string theory on non commutative q-deformed spaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'} +page_content=' We define a product of functions on a non commutative algebra of functions resulting from the q-deformation analog to the Moyal prod- uct for canonically non commutative spaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'} +page_content=' We then give the general procedure to define scalar field and classical string theories on the men- tioned spaces, we argue that the resulting theories will have enlarged sets of both spacetime and internal symmetries which can be used to study gravitational effects due to the q-deformation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'} +page_content=' 1 Introduction Non commutative geometry was introduced in string theory in [1] where it was shown that the coordinates of the endpoints of strings on D-branes in presence of Neveu-Schwartz field is non commutative.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'} +page_content=' In field theory it was even older where Yang-Mills theory on non commutative torus was introduced [2].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'} +page_content=' 9' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'} diff --git a/CtAzT4oBgHgl3EQfwf4q/content/2301.01722v1.pdf b/CtAzT4oBgHgl3EQfwf4q/content/2301.01722v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c72d93ab0a26407086a1043b77cdc0d7800abdd9 --- /dev/null +++ b/CtAzT4oBgHgl3EQfwf4q/content/2301.01722v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:341cdc3f07e2612e2ebaad4beb8a5d263273b0bf35559101ee465d3f870330ae +size 2082119 diff --git a/CtAzT4oBgHgl3EQfwf4q/vector_store/index.pkl b/CtAzT4oBgHgl3EQfwf4q/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..72b25d6215e63b6c3e6c0977d2f9848dfdb35160 --- /dev/null +++ b/CtAzT4oBgHgl3EQfwf4q/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bcebe0c2304c962070655ed02182c6dc3c801383cee428717f3cb245db863581 +size 857708 diff --git a/CtE4T4oBgHgl3EQf5w5z/vector_store/index.pkl b/CtE4T4oBgHgl3EQf5w5z/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..778242ad3d8d64a021fe698b805a5e85b0bfbd98 --- /dev/null +++ b/CtE4T4oBgHgl3EQf5w5z/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b7101816dd329ed779542a57fdbf1918a453199c77f836a9f439861a3c974b40 +size 88937 diff --git a/DNE4T4oBgHgl3EQfew0W/content/tmp_files/2301.05101v1.pdf.txt b/DNE4T4oBgHgl3EQfew0W/content/tmp_files/2301.05101v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..00c805a2318349407c2fac1b9f091989fb390113 --- /dev/null +++ b/DNE4T4oBgHgl3EQfew0W/content/tmp_files/2301.05101v1.pdf.txt @@ -0,0 +1,2687 @@ +Folding interpretations +Mikołaj Bojańczyk (University of Warsaw) +Abstract +We study the polyregular string-to-string functions, whi ch +are certain functions of polynomial output size that can be +described using automata and logic. We describe a system of +combinators that generates exactly these functions. Unlike +previous systems, the present system includes an iteration +mechanism, namely fold. Although unrestricted fold can +define all primitive recursive functions, we identify a type +system (inspired by linear logic) that restricts fold so that it +defines exactly the polyregular functions. We also present +related systems, for quantifier-free functions as well as for +linear regular functions on both strings and trees. +ACM Reference Format: +Mikołaj Bojańczyk (University of Warsaw). 2023. Folding interpreta- +tions. In Proceedings of ACM Conference (Conference’17). ACM, New +York, NY, USA, 24 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn +1 +Introduction +This paper is about transducers that compute string-to-string +functions. (We also have some results on trees, but trees will +be discussed only at the end of the paper. ) We are interested +in two classes of functions: the linear regular functions1, +which have linear output size, and the polyregular functions, +which have polynomial output size. Both classes can be de- +scribed by many equivalent models, and have robust closure +properties. +Let us begin with the more established class of linear +regular functions. 
Two typical example functions from this class are:

    [1, 2, 3] ↦ [1, 2, 3, 1, 2, 3]   (duplicate)          [1, 2, 3] ↦ [3, 2, 1]   (reverse).

The linear regular functions can be described by many equivalent models, including: deterministic two-way automata with output [23, Note 4], mso transductions [13, Section 4], streaming string transducers [1, Section 3], an extension of regular expressions [3, Section 2], and a calculus based on combinators [7, Theorem 6.1]. The many equivalent models, as well as the robustness and good decidability properties of the underlying class, are comparable to similar properties for the regular languages, which also have many equivalent descriptions, including automata, logic and regular expressions. For this reason, the linear regular functions have been intensively studied in the last decade.
The second class is the polyregular functions, which extend the linear regular functions by allowing polynomial growth, including functions such as the squaring operation

    [1, 2, 3] ↦ [1, 2, 3, 1, 2, 3, 1, 2, 3].

Similarly to the linear regular functions, the polyregular functions can also be described by multiple models, including: string-to-string pebble transducers, which are introduced in [14, Section 1] based on [15, Definition 1.5] and [21, Section 3.1], as well as an imperative programming language [5, Section 3], a functional programming language [5, Section 4], and a polynomial extension of mso transductions [9, Definition 2]. For a survey of the polyregular functions, see [6].
Combinators. This paper studies the linear regular and polyregular functions by using systems based on prime functions and combinators. This approach dates back to the Krohn-Rhodes Theorem [19, p. 454], and was first applied to linear regular functions in [7], by describing them in terms of certain prime functions, such as

    1 + Σ × Σ∗ → Σ∗   list constructor,

and combinators such as

    Σ → Γ     Γ → Δ
    ----------------   function composition.
         Σ → Δ

This system is further extended in [5, p.
64] to cover the +polyregular functions, by adding extra prime functions of +non-linear output size, such as the squaring operation. +The systems in [5, 7] have no constructions for iteration; +because of this design decision, the hard part is proving com- +pleteness: every function of interest can be derived in the +system. One reason for avoiding iteration is to have a mini- +mal system. Another reason is that iteration constructions +are powerful, and as we find out in this paper, it is hard to add +them while retaining soundness (only functions of interest +can be derived). +The fold combinator. In this paper, we take the opposite +approach, by studying an iteration construction, namely the +arXiv:2301.05101v1 [cs.LO] 12 Jan 2023 + +Conference’17, July 2017, Washington, DC, USA +Mikołaj Bojańczyk (University of Warsaw) +fold combinator. This combinator can be written as a rule +1 → Γ +Γ × Σ → Γ +Σ∗ → Γ +fold. +The assumption of this rule can be seen as a deterministic +automaton with input alphabet Σ and state space Γ, given by +its initial state and transition function. In the conclusion of +the rule, we have the function that maps an input string to +the last state of the run of the automaton. The input alphabet +and the state space need not be finite, e.g. the state space Γ +could be the set 1∗ which represents the natural numbers. +Folding is a fundamental construction in functional pro- +gramming languages. For example, the fold combinator arises +canonically from the inductive definition of the list type [18, +Section 3]. Unfortunately, there is a price to pay for the +power and elegance of the fold combinator: one can use it +to derive all primitive recursive functions [18, Section 4.1]. +Therefore, without any further restrictions, the fold com- +binator falls outside the scope of automata techniques, or +any other techniques that can be used to decide semantic +properties of programs, such as the halting problem. +This paper is devoted to identifying restrictions on the +fold combinator that tame its expressive power. These restric- +tions are presented as a typing system, which ensures that +applications of fold will stay in the class of polyregular func- +tions. In particular, the resulting class of functions shares the +decidability properties of the polyregular functions, e.g. one +can decide if a function produces a nonempty output for at +least one input. +There are two main contributions in the paper. +Quantifier-free interpretations. The first contribution +is to identify the quantifier-free interpretations as an im- +portant class of functions in the context of fold. These are +functions on structures in which the universe of the output +is a subset of the universe of the input (in particular, the +output size is linear), and all relations in the output structure +are defined using quantifier-free formulas. +In Theorem 3.2 we show that applying the fold combi- +nator to a quantifier-free interpretation yields a function +that, although not necessarily quantifier-free, is at least lin- +ear regular. This result subsumes several existing results, in +particular those about mso definability of streaming trans- +ducers [2, 3]. Although quantifier-free interpretations are +rather weak, they can describe most natural transformations +that are used as primes in the calculi from [5, 7]; the remain- +ing primes can then be derived using fold. 
Having identified the importance of quantifier-free functions, in Theorem 4.1 we present a system of prime functions and combinators that derives exactly the quantifier-free functions. The completeness proof of the system is the longest proof in the paper. The quantifier-free system does not allow fold; fold is used in the next part of the paper, about polyregular functions.
Safe fold. The second main contribution is a type system that tames the power of fold. This system uses a type constructor ! and bears certain similarities to the parsimonious calculus of Mazza [20, Section 2.2]. The latter is part of a field called implicit computational complexity, which seeks to describe complexity classes using type systems. An influential example of this kind is a system of Bellantoni and Cook [4], which characterizes polynomial time. The present paper can be seen as part of implicit computational complexity, which targets regular languages instead of Turing complete models, such as logarithmic space or polynomial time. For a more detailed discussion of the connections between regular languages and 𝜆-calculus, including a pioneering application of linear types, see [22].
The usual application of ! is to restrict duplication, and this paper is no exception, as in the following example:

    𝑥 ↦ (𝑥, 𝑥)   (not allowed)          !𝑥 ↦ (!𝑥, 𝑥)   (allowed).

However, apart from restricting duplication, ! is also used in this paper to restrict another, more mysterious, resource, namely quantifiers. The idea is that our system uses ! to describe functions that are not necessarily quantifier-free, but are similar enough to quantifier-free functions so that the fold combinator can be applied to them.
The second main contribution of this paper is Theorem 5.3, which characterizes the polyregular functions using certain prime functions and combinators, in which the types involve ! and one of the combinators is fold. In Theorem 6.1 we also show that if we further restrict duplication

    !𝑥 ↦ (!𝑥, 𝑥)   (not allowed)          !𝑥 ↦ (𝑥, 𝑥)   (allowed),

then the resulting system derives exactly the linear regular functions. Finally, we also show that the results about the linear case can be extended from strings to trees without much difficulty.
2 Interpretations
In this section, we describe the polyregular functions. Among several equivalent definitions of the polyregular functions, our point of departure in this paper will be a definition that uses mso interpretations [9, Section 2].
2.1 Definition of mso interpretations
We assume that the reader is familiar with basic notions of monadic second-order logic mso, see [17] for an introduction. We only describe the notation that we use. A vocabulary consists of a finite set of relation names, each one with an associated arity in {0, 1, . . .}. Note that we allow nullary relations, i.e. relations of arity zero; such a relation takes no arguments and is “true” or “false” in each structure.
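As a point of reference for the notions being introduced here, the following is one possible, and entirely unofficial, way of representing vocabularies and structures as data; the record and function names are ours, and nothing in the paper depends on this encoding. The representation of strings mentioned in the last function is the one defined next, in Definition 2.1.

```haskell
import qualified Data.Map as M

-- A vocabulary: every relation name has an arity; arity 0 (nullary) is allowed.
type Vocabulary = M.Map String Int

-- A structure over a vocabulary: a finite universe, and for every relation
-- name the set of tuples (of matching arity) that satisfy it.  A nullary
-- relation is interpreted by [] ("false") or [[]] ("true").
data Structure a = Structure
  { universe  :: [a]
  , relations :: M.Map String [[a]]
  }

-- The vocabulary used for strings over a finite alphabet: one binary order
-- relation, and one unary label relation per letter.
stringVocabulary :: [Char] -> Vocabulary
stringVocabulary alphabet =
  M.fromList (("<=", 2) : [([a], 1) | a <- alphabet])
```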
A structure over such a vocabulary consists of a finite nonempty set, called the universe of the structure, and an interpretation of the vocabulary, which associates to each relation name in the vocabulary a relation over the universe of matching arity. The syntax and semantics of first-order logic and mso are defined in the usual way. Whenever we speak of a class of structures, all structures in the class must be over the same vocabulary, and the class must be closed under isomorphism. The structures considered in this paper will be used to describe finite strings and similar objects, such as pairs of strings, or strings of pairs of strings.
Intuitive description. We begin with an intuitive description of string-to-string mso interpretations. Following the classical Büchi-Elgot-Trakhtenbrot correspondence of automata and mso logic, we view strings as structures.
Definition 2.1. A string in Σ∗ is viewed as a structure whose universe is the string positions, equipped with the relations

    𝑥 ≤ 𝑦    (order on positions)
    𝑎(𝑥)     (𝑥 has label 𝑎 ∈ Σ)

A string-to-string mso interpretation transforms strings using the above representation, such that the positions of the output string are represented by 𝑘-tuples of positions in the input string, for some 𝑘 ∈ {0, 1, . . .}. The order2 on output positions is defined by a formula

    𝜑(𝑥1, . . . , 𝑥𝑘, 𝑦1, . . . , 𝑦𝑘)

with 2𝑘 free variables, in which 𝑥1, . . . , 𝑥𝑘 represent the first output position and 𝑦1, . . . , 𝑦𝑘 the second output position, while the labels of the output positions are defined by formulas with 𝑘 free variables, one for each letter in the output alphabet. Finally, not all 𝑘-tuples of input positions need to participate in the output string; there is a formula with 𝑘 free variables, called the universe formula, which selects those that do. All of these formulas need to be consistent – every 𝑘-tuple of positions in the input string that satisfies the universe formula must satisfy exactly one of the label formulas, and these 𝑘-tuples need to be linearly ordered by the order formula. Consistency is decidable, since it boils down to checking if some mso formula is true in all strings, which in turn boils down to checking if an automaton is nonempty by the equivalence of mso and regular languages.
2 For reasons described in [9, Theorem 4], the string positions are equipped with a linear order 𝑥 ≥ 𝑦 instead of successor 𝑥 = 𝑦 + 1.
Formal definition. We now give a formal definition of mso interpretations. The formal definition generalizes the above intuitive description in two ways of minor importance. First, the definition is presented not just for strings, but for general classes of structures; we intend to apply it to mild generalizations of strings, such as pairs of strings or strings of strings. Second, instead of the universe being 𝑘-tuples of some fixed dimension, it is created using a polynomial functor, which is an operation on sets of the form

    𝐹(𝐴) = 𝐴𝑘1 + ⋯ + 𝐴𝑘𝑛.    (1)

Typical polynomial functors include the identity functor 𝐴, or the functor 𝐴2 + 𝐴2 that produces two copies of the square of the input set. We use the following terminology for polynomial functors: each 𝐴𝑘𝑖 is called a component of the polynomial functor, and 𝑘𝑖 ∈ {0, 1, . . .} is called the dimension of this component.
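The following small illustration is ours rather than the paper's: a polynomial functor in the sense of (1) can be described by the list of dimensions of its components, and applying it to a finite set produces all tuples of the corresponding lengths, tagged by component (these elements are called output candidates just below).

```haskell
import Control.Monad (replicateM)

-- A polynomial functor F(A) = A^k1 + ... + A^kn, described by the list
-- of dimensions [k1, ..., kn] of its components.
type PolyFunctor = [Int]

-- Applying F to a finite set A: for every component i of dimension k_i,
-- produce all k_i-tuples of elements of A, tagged with the component i.
apply :: PolyFunctor -> [a] -> [(Int, [a])]
apply dims as =
  [ (i, tuple) | (i, k) <- zip [0 ..] dims, tuple <- replicateM k as ]

-- Example: the functor A^2 + A^2 applied to a 3-element set yields
-- 9 + 9 = 18 elements.
candidates :: [(Int, [Char])]
candidates = apply [2, 2] "abc"
```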
This extra generality of polynomial functors makes the definition more robust; it will be useful in a more refined analysis of mso interpretations that will appear in Section 5.3. In case of linear functors (where all components have dimension at most one), the components correspond to the copies in an mso transduction [13, p. 230].
In an mso interpretation, the polynomial functor is used to define the universe of the output structure; if 𝐴 is an input structure then elements of 𝐹(𝐴) are called output candidates. A subset of the output candidates will be the universe of the output structure. This subset is defined using an mso query of type 𝐹, which is a family of mso formulas, with one formula for each component in the functor, such that the number of free variables in each formula is the dimension of the corresponding component. Here are some examples:

    𝐴0 = 1      a query of this type is a formula without free variables
    𝐴4          a query of this type is a formula with four free variables
    𝐴2 + 𝐴2     a query of this type is two formulas with two free variables each

The relations in the output structure are also defined using mso queries, with a relation of arity 𝑚 defined using a query of type

    𝐹𝑚(𝐴) def= 𝐹(𝐴) × ⋯ × 𝐹(𝐴)    (𝑚 times)

The above type is also a polynomial functor, since polynomial functors are closed under taking products, e.g. the product of 𝐴2 and 𝐴 + 1 is 𝐴3 + 𝐴2. The discussion above is summarized in the following definition.
Definition 2.2 (mso interpretation). A function 𝑓 ∶ Σ → Γ between two classes of structures is called an mso interpretation if:
1. Universe. There is a polynomial functor 𝐹 and an mso query of type 𝐹 such that for every input structure 𝐴 ∈ Σ, the universe of the output structure is the subset of the output candidates 𝐹(𝐴) defined by this query; and
2. Relations. For every relation name 𝑅 in the vocabulary of the output class, of arity 𝑚, there is an mso query of type 𝐹𝑚, which defines the interpretation of 𝑅 in every output structure.
3 One can reduce the polynomial functor in an mso interpretation to a single component 𝐴𝑘, at the cost of increasing the dimension 𝑘. This works for input structures with at least two elements. For this reason, [9] uses interpretations with just one component.
A string-to-string mso interpretation is the special case of the above definition where the input type is Σ∗ for some finite alphabet Σ, and the output type is Γ∗ for some finite alphabet Γ.
Example 1. Consider the squaring operation on strings

    [1, 2, 3] ↦ [1, 2, 3, 1, 2, 3, 1, 2, 3].

Suppose that the input alphabet is Σ. This function is defined by an mso interpretation as follows. The functor 𝐹 is 𝐴2, and the universe formula is “true”, which means that the positions of the output string are all pairs of positions in the input string. The order formula describes the lexicographic order on 𝐴2. Finally, the label of an output position is inherited from the input position on the second coordinate. ◻
2.2 String types
We are ultimately interested in functions that input and output strings over a finite alphabet.
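Before moving on to structured types, here is a quick, unofficial sanity check on Example 1 above, executed directly as a list function: output positions are pairs of input positions ordered lexicographically, and each output position copies the label of its second coordinate.

```haskell
import Data.List (sort)

-- Example 1, run directly: the universe formula accepts all pairs of
-- input positions, the order formula is the lexicographic order on pairs,
-- and the label of an output position (i, j) is the label of position j.
squaringViaInterpretation :: [a] -> [a]
squaringViaInterpretation w =
  [ w !! j | (_i, j) <- sort [ (i, j) | i <- positions, j <- positions ] ]
  where
    positions = [0 .. length w - 1]

-- For "abc" this yields "abcabcabc", i.e. the squaring operation.
```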
However, to create such +functions using primes and combinators, it will be conve- +nient to have more structured types for the simpler functions, +such as pairs of strings. The idea to use such structured types +comes from [7], in particular we use the same types, as de- +scribed in the following definition. +Definition 2.3 (List types). A list type is any type constructed +using the constructors +1⟩︀ +a type with +one element +Σ1 × Σ2 +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +pairs +Σ1 + Σ2 +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +co-pairs, i.e. +disjoint union +Σ∗ +⃒ +lists +. +An example of a list type is +(1 + 1 + 1)∗. +This type can be seen as the type of strings over a three letter +alphabet; in this way the list types generalize strings over +finite alphabets. The generalization is minor, since elements +of a list type can be seen as strings over a finite alphabet, +which uses brackets and commas as in the following example: +((︀left 1, right 1, left 1⌋︀, 1) +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +an element of the list type (1 + 1)∗ × 1 +. +Structures for list types. We will be interested in mso +interpretations that transform one list type into another. +We could simply represent list types as strings over a finite +alphabet in the way described above, and then use mso in- +terpretations on strings over a finite alphabet. The resulting +definition would be equivalent to the one that we will use in +the paper. However, we choose to use a direct representation +of list types as structures, without passing through a string +encoding. The reason is that quantifiers would be needed to +go between list types and their string encodings, and in this +paper, we will be particularly interested in quantifier-free +interpretations. +Definition 2.4. To each list type we associate a class of struc- +tures, which is defined by induction as follows. +(1) The class 1 contains only one structure; this structure +has one element in its universe and no relations. +(+) The vocabulary of the class Σ1 + Σ2 is the disjoint union +of the vocabularies of the classes Σ1 and Σ2, plus one +new nullary relation name (i.e. arity zero). A structure +in this class is obtained by taking a structure in either +of the classes Σ1 or Σ2, extending the vocabulary to the +vocabulary of the other class by using empty sets, and +interpreting the new nullary relation as “true” or “false” +depending on whether the structure is from Σ1 or Σ2. +(×) The vocabulary of the class Σ1 × Σ2 is the disjoint union +of the vocabularies of the class Σ1 and Σ2, plus one new +unary relation name (i.e. arity one). A structure in this +class is obtained by taking the disjoint union (defined +in the natural way) of two structures, one from Σ1 and +one from Σ2, and using the new unary relation name to +select the elements from the first structure. +(∗) The general idea is that a structure in the class Σ∗ is ob- +tained by taking a list (︀𝐴1, . . 
.,𝐴𝑛⌋︀ of nonempty4 struc- +tures in Σ, creating a new structure using disjoint union +(with a shared vocabulary), and adding a new binary +relation 𝑥 ≤ 𝑦 which holds whenever the structure con- +taining 𝑥 appears earlier in the list (or in the same place) +than the structure containing 𝑦. The problem with this +construction is that it would mix nullary relations that +come from different structures in the list. To fix this prob- +lem, each nullary relation name 𝑅() in the vocabulary +of Σ is changed into a unary relation name 𝑅(𝑥) that +selects elements 𝑥 such that the corresponding structure +satisfies 𝑅(). +If we apply the above representation to a list type +(1 + ⋯ + 1 +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +𝑛 times +)∗ +then we get the representation of strings as ordered struc- +tures from Definition 2.1, with the exception that the empty +string has a universe with one element. Therefore, it is not +important if we use Definition 2.1 or 2.4 for representing +strings. +Definition 2.5. A polyregular function is a function +𝑓 ∶ Σ → Γ +4A structure is nonempty if its universe is nonempty. This leads to the +following subtle point, which arises when considering lists of lists, and +related structures. Since a list can be empty, it follows that we do not allow +lists of empty lists such as (︀(︀⌋︀, (︀⌋︀, (︀⌋︀⌋︀. This means that the list constructor, as +it is used in this paper and formalized in Definition 2.4, should be interpreted +as possibly empty lists with nonempty list items. This distinction will not +play a role for types such as (1 + 1)∗ where list elements cannot be empty, +which is the case that we really care about. + +Folding interpretations +Conference’17, July 2017, Washington, DC, USA +between list types that can be defined by an mso interpretation, +assuming that list types are viewed as classes of structures +according to Definition 2.4. +The original definition of polyregular functions [5] did not +use mso interpretations, however mso interpretations were +shown equivalent to the original definition in [9, Theorem +7]. Since the original definition was closed under compo- +sition, it follows that mso interpretations are closed under +composition (as long as the input and output classes are list +types). +3 +The fold combinator +In this section, we discuss dangers of the fold combinator +1 → Γ +Γ × Σ → Γ +Σ∗ → Γ +fold +We also explain how some of the dangers can be avoided by +using quantifier-free interpretations. +We begin this section with several examples illustrating +the usefulness of fold. +Example 2. Consider a finite automaton with a 𝑛 states and +an input alphabet of 𝑚 letters. Assuming some order on the +states and alphabet, the transition function can be seen as a +function between finite string types +(1 + ⋯ + 1) +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +𝑛 times +× (1 + ⋯ + 1) +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +𝑚 times +→ 1 + ⋯ + 1 +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +𝑛 times +. +If we apply fold to this automaton, under some chosen ini- +tial state, then we get the function that inputs a string, and +returns the last state in the run. A special case of this con- +struction is when both the states and input letters of the +automaton are elements of some finite group 𝐺, the initial +state is the group identity, and the transition function is the +group operation. 
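Before this transition function is folded (immediately below), here is a minimal Haskell sketch of the construction in Example 2, under the simplifying assumption that states and letters are ordinary Haskell values rather than co-products of units; the function names are ours.

```haskell
-- Folding the transition function of an automaton, from a chosen initial
-- state, yields the function that maps an input string to the last state
-- of the run (the shape of the fold combinator in the text).
lastState :: q -> (q -> a -> q) -> [a] -> q
lastState initial delta = foldl delta initial

-- The special case from Example 2, with G the two-element group Z/2
-- (rendered as Bool with exclusive or, whose identity is False):
-- folding the group operation from the identity gives group
-- multiplication of type G* -> G.
multiplyZ2 :: [Bool] -> Bool
multiplyZ2 = lastState False (/=)
```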
By folding this transition function, we get the group multiplication function of type 𝐺∗ → 𝐺, which is one of the (less appealing) prime functions in the combinatory calculus from [5]. ◻
Example 3. There are two symmetric list constructors:
1 + Σ∗ × Σ → Σ∗ (lists are constructed by adding letters to the right of the list)
1 + Σ × Σ∗ → Σ∗ (lists are constructed by adding letters to the left of the list).
If we apply fold to the two corresponding automata, then we get the reverse and identity functions on lists, respectively. The fold combinator corresponds in a canonical way to the first list constructor, which is why it is sometimes called fold right. ◻
3.1 On the dangers of folding
We now present two examples which show how the fold combinator, without any further restrictions, can define functions that are not polyregular. More generally, one can use fold to derive any primitive recursive function [18, Section 4.1].
Example 4. [Iterating duplication] Consider an automaton where the input alphabet is 1, and the states are 1∗. We view the states as natural numbers, with the list 1^𝑛 of length 𝑛 representing the number 𝑛. The initial state in this automaton is 1, and the transition function is
(1^𝑛, 1) ∈ 1∗ × 1 ↦ 1^{2𝑛} ∈ 1∗.
This is an example of a polyregular function; in fact it is a linear regular function. However, if we apply fold to it, then we get the function
1^𝑛 ∈ 1∗ ↦ 1^{2^𝑛} ∈ 1∗,
which is not polyregular because of exponential growth. ◻
Example 5. [Subtraction] As illustrated in Example 4, we run into trouble if we iterate duplication. But we can also run into trouble when the transition function does not create any new elements. Consider an automaton where the input alphabet is 1 + 1, and the state space is the integers, represented as the list type
1∗ + 1∗, where the first component represents {−1, −2, . . .} and the second represents {0, 1, . . .}.
The initial state is zero, and the transition function increments or decrements the state depending on which of the two input letters from 1 + 1 it gets. This transition function is easily seen to be polyregular, and it has the property that the output size is at most the input size, assuming that the input letter contributes to the input size. However, by folding this automaton, we get a function that subsumes integer subtraction and is therefore not polyregular. Using similar ideas, one could simulate two-counter machines. ◻
3.2 Quantifier-free interpretations and their folding
As the two above examples show, we have to be careful when applying fold. Clearly we must avoid duplication (Example 4). This can be done by requiring the polynomial functor in the interpretation to be the identity, thus ensuring that the output is no larger than the input. It is less clear how to avoid the problem with Example 5. Our solution is to use quantifier-free interpretations, as defined below.
Definition 3.1. A quantifier-free interpretation is the special case of mso interpretations where the polynomial functor is the identity 𝐹(𝐴) = 𝐴 and all formulas are quantifier-free.
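To make the danger in Example 4 concrete, here is the same iteration written as an ordinary fold; this is only an informal Haskell sketch with names of our choosing, where the states 1^𝑛 are modelled as natural numbers.

```haskell
-- The one-step transition of Example 4: a harmless doubling 1^n -> 1^(2n).
double :: Integer -> () -> Integer
double n () = 2 * n

-- Folding it over the input 1^n, starting from the initial state 1,
-- produces 2^n: exponential growth, so the folded function is not polyregular.
-- The identity-functor requirement of Definition 3.1 rules this step out,
-- since its output is larger than its input.
iterated :: [()] -> Integer
iterated = foldl double 1   -- iterated (replicate n ()) == 2 ^ n
```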
+One could consider interpretations in which the formulas +are quantifier-free, but the functor is not necessarily the +identity; such interpretations will not be useful in this paper. +The transition function in Example 5 is not quanitifier-free, +since decrementing a number, which corresponds to remov- +ing a list element, is not a quanitifier-free operation. The + +Conference’17, July 2017, Washington, DC, USA +Mikołaj Bojańczyk (University of Warsaw) +following theorem is the first main contribution of this paper: +fold can be safely applied to quantifier-free interpretations. +Theorem 3.2. Let Σ and Γ be any classes of structures, not +necessarily list types. If the transition function +𝛿 ∶ Γ × Σ → Γ +in the assumption of the fold combinator is a quantifier-free +interpretation, then the function in the conclusion is a linear +mso interpretation. +Proof +Consider an automaton as in the assumption of the theo- +rem. For an input to this automaton (︀𝐴1, . . .,𝐴𝑛⌋︀, and 𝑖 ∈ +{0, . . .,𝑛} we write 𝐵𝑖 ∈ Γ for the state of the automaton af- +ter reading the first 𝑖 input letters. The state 𝐵0 is the initial +state, which is given by the assumption to the fold combina- +tor, and the state 𝐵𝑛 is the last state, which is the output of the +function in the conclusion of the fold combinator. Our goal +is to compute the last state using a linear mso interpretation. +Since the functor in 𝛿 is the identity, the output candidates +are simply the elements of the input structure. Therefore, +the universe of 𝐵𝑛 is contained in the disjoint union of the +universe of 𝐵𝑛−1 and the universe of 𝐴𝑛. By unfolding the +induction, the universe of 𝐵𝑛 is contained in the universe +of the first state 𝐵0 and the input structure 𝐴 = (︀𝐴1, . . .,𝐴𝑛⌋︀. +Therefore, to prove that the fold is an mso interpretation, it +will be enough to show that an mso formula can tell us: (a) +which elements of 𝐵0 +𝐴 belong to the output structure; and +(b) which relations of the output structure are satisfied by +which tuples from 𝐵0 + 𝐴. The answers to these questions +will be contained in the quantifier-free theory of the tuple, +as defined below. +Definition 3.3. Let 𝐴 be a structure and let ¯𝑎 be a list of +distinguished elements, which need not belong to the universe +of 𝐴. The quantifier-free theory of a ¯𝑎 in 𝐴 is the following +information: which distinguished elements are in the universe, +and which quantifier-free formulas are satisfied by those dis- +tinguished elements that are in the universe. +Using the above terminology, to prove that the fold is +definable in mso, we need to show that for each tuple in +𝐵0 + 𝐴, we can define in mso the corresponding quantifier- +free theory in the output structure 𝐵0. This will be done +in the following claim. The key property used by the claim +is the following continuity property of quantifier-free inter- +pretations: the quantifier-free theory of a tuple of output +candidates in the output structure is uniquely determined +by the quantifier-free theory of the same tuple in the input +structure. +In the following claim, we consider a function which in- +puts structures with tuples of 𝑘 distinguished elements, and +has finitely many possible output values (quanitifier-free +theories, in the case of the claim). Such a function is called +mso definable if for every chosen output value, there is an +mso formula with 𝑘 free variables that selects inputs which +give chosen output. +Claim 3.4. For every 𝑘 ∈ {1, 2, . . 
.} and every tuple ¯𝑏 of ele- +ments in 𝐵0, the following function is mso definable: +● Input. A structure 𝐴 ∈ Σ∗ with elements ¯𝑎 ∈ 𝐴𝑘. +● Output. The quantifier-free theory of ¯𝑎¯𝑏 in 𝐵𝑛. +Proof +By the continuity property mentioned earlier in this proof, +the quantifier-free theory of ¯𝑎¯𝑏 in 𝐵𝑛 is uniquely determined +by the quantifier-free theory of ¯𝑎¯𝑏 in the structure (𝐵𝑛−1,𝐴𝑛), +which in turn is uniquely determined (by compositionality) +by the quantifier-free theories of ¯𝑎¯𝑏 in the two individual +structures 𝐵𝑛−1 and 𝐴𝑛. Therefore, we can think of these +quantifier-free theories as being computed by a finite au- +tomaton, where the initial state is the quantifier-free theory +of ¯𝑏 in 𝐵0, and the input string is +(︀qf theory of ¯𝑎 in 𝐴1, . . ., qf theory of ¯𝑎 in 𝐴𝑛⌋︀. +By the continuity property, one can design a transition func- +tion for this automaton, which does not depend on the input +structure 𝐴 or the tuple ¯𝑎, such that its state after reading +the first 𝑖 letters is the quantifier-free theory of ¯𝑎¯𝑏 in 𝐵𝑖. The +state space of this automaton is finite, since there are finitely +may quantifier-free theories once the vocabulary and num- +ber of arguments have been fixed. Since finite automata can +be simulated in mso, it follows that the last state in the run +of this automaton, which is the theory in the conclusion of +the claim, can be defined in mso. ◻ +We now use the claim to complete the proof of the lemma. +The output candidates of the mso interpretation are defined +by the polynomial functor +𝐹(𝐴) = 𝐴 + 1 + ⋯ + 1 +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +size of initial state 𝐵0 +. +In other words, the output candidates are elements of the in- +put list and the initial state. By the above claim, the quantifier- +free theory of a single output candidate in the output struc- +ture can be defined in mso, and since this theory tells us if +the output candidate is present in the universe output struc- +ture, we can use it to define the universe. Similarly, if we +want to know if a tuple of output candidates satisfies some +relation from the output vocabulary, then we can find this +information using mso as in the above claim. ◻ +On its own, the theorem above does not solve all of the +problems with fold. One issue is that the theorem only sup- +ports one application of fold, since the folded function is no +longer quantifier-free and cannot be folded again. Another +issue is that applying the theorem stays within the class of +functions that do not increase the output size, while we will +also be interested in folding functions that increase the size. + +Folding interpretations +Conference’17, July 2017, Washington, DC, USA +These problems will be addressed later in the paper, by de- +veloping a suitable type system. Before continuing, we give +some applications of the theorem. +Example 6. Consider a transition function of a finite au- +tomaton as in Example 2. In a list type of the form 1 + ⋯ + 1, +the component of the disjoint union that is used can be ac- +cessed by a quantifier-free formula without free variables, +since it is represented using nullary relations. Therefore, the +transition function is a quantifier-free interpretation, and +so we can apply Theorem 3.2 to conclude that the fold is an +mso transduction. This corresponds to the inclusion +regular languages +⊆ +mso. +Applying Theorem 3.2 to prove this inclusion is not the right +way to prove it, since the inclusion itself is used in the proof +of the theorem. 
◻ +In Example 6, we applied the fold combinator to a finite +automaton. In the following example, we give a more inter- +esting application, where the state space is infinite. +Example 7. [Streaming string transducers] Define a simple +streaming string transducer, simple sst for short, as follows. It +has two finite alphabets Σ and Γ, called the input and output +alphabets. It has a configuration space, which is a list type of +the form +Δ = (Γ∗)𝑘1 + ⋯ + (Γ∗)𝑘𝑚. +In other words, the set of configurations is obtained by ap- +plying some polynomial functor to the set of strings over +the output alphabet. The idea is that a configuration consists +of a state, which is one of the 𝑚 components, and a register +valuation which is a tuple of strings over the output alphabet. +The configurations of the transducer are updated according +to the following three functions, which are required to be +quantifier-free, according the the representation of the input +and output alphabets that was used in Example 6: +1 → Δ +⧹︀ +initial +Δ × Σ → Δ +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +transition function +Δ → Γ∗ +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +final +. +The semantics of the transducer is the function of type Σ∗ → +Γ∗ that is obtained by folding the first two functions, and +post-composing with the final function. By Theorem 3.2, this +function is an mso transduction. +The model described above subsumes (and in fact, is equiv- +alent to) the classical model of sst [1, Section 3], with the only +difference (which is why we call our model simple) being +that our model allows the input letter to be used only once +(as opposed to a constant number of times) in the registers. +This is because string concatenation, which is the operation +used to update registers in an sst, is a quantifier-free opera- +tion. Therefore, Theorem 3.2 can be seen as subsuming the +implication +copyless sst ⊆ deterministic mso transductions +proved in [1, Theorem 3]. The same idea will work for trees, +as we will see in Section 6.1. ◻ +Example 8. [Graphs] As mentioned in Theorem 3.2, the +folded automaton need not operate on classes that are list +types. For instance, we could adapt Example 7 to transducers +in which the registers, instead of storing strings, store graphs +with 𝑘 distinguished vertices, as in Courcelle’s algebras for +treewidth [12, Section 1.4]. We could still apply Theorem 3.2, +since the corresponding operations on graphs are quantifier- +free. Similar ideas would also work for cliquewidth. ◻ +4 +Deriving quantifier-free functions +As we have shown in Theorem 3.2, the fold combinator can +be safely applied to quantifier-free interpretations. Before +discussing the fold combinator, we take a minor detour in +this section, and present a complete system for the quantifier- +free interpretations. +A few examples. We begin with examples and non-examples +of quantifier-free interpretations operating on list types. +Example 9. [Commutativity of product] Consider the func- +tion of type +Σ1 × Σ2 → Σ2 × Σ1, +which swaps the order in a pair. Like all examples in this +section, this is actually an infinite family of functions, one +for every choice of Σ1 and Σ2. The function is a quantifier- +free interpretation. The only change between the input and +output concerns the unary relation from the definition of the +product class Σ1 × Σ2 which tells us if an element is from the +first coordinate; this relation needs to be complemented. ◻ +Example 10. 
[List reverse and concatenation] Consider the +list reverse function of type Σ∗ → Σ∗. This is clearly a +quantifier-free interpretation – it is enough to replace the +order 𝑥 ≤ 𝑦 with its reverse 𝑦 ≤ 𝑥. A similar idea works +for the list concatenation function of type Σ∗∗ → Σ∗ which +concatenates a list of lists into a list. In the input structure, +there are two linear orders, corresponding to the inner and +outer lists. To get the output structure, we use the lexico- +graphic product of these two orders, which can be defined +in a quantifier-free way. ◻ +Example 11. [List constructor and destructor] Consider the +(left) list constructor +1 + Σ × Σ∗ → Σ∗, +that was discussed in Example 3. This is a quantifier-free +interpretation. If the input is from 1, which can be tested +in a quantifier-free way using the nullary relation from the +co-product, then the output list is created in the natural way. +Otherwise, if input is a pair from Σ × Σ∗, then the order on +the concatenated list can easily be defined by using the unary +predicate that identifies the first argument of a pair. + +Conference’17, July 2017, Washington, DC, USA +Mikołaj Bojańczyk (University of Warsaw) +Γ × Σ ↔ Σ × Γ +commutativity of × +Γ + Σ ↔ Σ + Γ +commutativity of + +Γ × (Σ × Δ) ↔ (Γ × Σ) × Δ +associativity of × +Γ + (Σ + Δ) ↔ (Γ + Σ) + Δ +associativity of + +Γ × (Σ + Δ) ↔ (Γ × Σ) + (Γ × Δ) +distributivity +Γ1 × Γ2 → Γ𝑖 +projections +Γ𝑖 → Γ1 + Γ2 +co-projections +Γ + Γ → Γ +co-diagonal +Σ∗ × Σ → Σ∗ +append +Σ∗ → Σ∗ +reverse +Σ∗∗ → Σ∗ +concat +Σ → Σ × Γ∗ +create empty +(Σ × Γ)∗ → Σ∗ × Γ∗ +list distribute +Figure 1. The prime quantifier-free functions. +Γ1 → Σ1 +Γ2 → Σ2 +Γ1 × Γ2 → Σ1 × Σ2 +functoriality of × +Γ1 → Σ1 +Γ2 → Σ2 +Γ1 + Γ2 → Σ1 + Σ2 +functoriality of + +Γ → Σ +Γ∗ → Σ∗ +functoriality of ∗ +Γ → Σ +Σ → Δ +Γ → Δ +function composition +Figure 2. The quantifier-free combinators. +The list constructor is bijective, and therefore it has a +corresponding inverse of type +Σ∗ → 1 + Σ × Σ∗, +which we call the list destructor. The list destructor is not +a quantifier-free interpretation. The reason is that if the +input is an nonempty list, then we would need to isolate in +a quantifier-free way the elements from the head, i.e. from +the first list element, which cannot be done. ◻ +Example 12. [Diagonal] Another non-example is𝑥 ↦ (𝑥,𝑥). +This is not a quantifier-free interpretation, since the output +size is bigger than the input size. ◻ +A complete system. We now present a complete charac- +terization of quantifier-free interpretations on list types. The +system will be used as a basis for the system in the next +section, which will describe general mso interpretations. +Σ* +Σ* +Σ** +Σ* +Σ** +Σ* +Σ** +create empty +append +append +concat +wires represent types, and parallel +wires represent products, so this +cross-section represents Σ**× Σ*× Σ* +boxes represent prime functions, or +previously derived functions +input is at the top +output is at the bottom +Figure 3. A string diagram that derives the binary operation +of type Σ∗ × Σ∗ → Σ∗ for list concatenation. +Theorem 4.1. The quantifier-free interpretations between list +types are exactly those that can be derived from the prime func- +tions in Figure 1 by applying the combinators from Figure 2. +The proof of the above theorem, with completeness being +the non-trivial part, is in the appendix. +4.1 +String diagrams +We conclude this section with several example derivations of +quantifier-free functions using the system from Theorem 3.2. 
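Before turning to string diagrams, it may help to read a few entries of Figures 1 and 2 through a functional-programming lens. The Haskell sketch below is only an informal analogy, with names of our choosing: the actual primes are quantifier-free interpretations on structures, not programs.

```haskell
-- A few primes from Figure 1, read as ordinary functions on lists and pairs.
swapProduct :: (g, s) -> (s, g)              -- commutativity of x
swapProduct (x, y) = (y, x)

codiagonal :: Either g g -> g                -- co-diagonal
codiagonal = either id id

appendLetter :: ([s], s) -> [s]              -- append
appendLetter (xs, x) = xs ++ [x]

concatLists :: [[s]] -> [s]                  -- concat
concatLists = concat

listDistribute :: [(s, g)] -> ([s], [g])     -- list distribute
listDistribute = unzip

-- One combinator from Figure 2: functoriality of * (the map combinator).
mapStar :: (g -> s) -> [g] -> [s]
mapStar = map
```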
+To present these derivations, we use string5 diagrams based +on [11, Chapter 3], as depicted in Figure 3. +We also use string diagrams with a yellow background, +where parallel wires represent co-products. For example, +the following diagram represents the prime function from +Figure 1 that describes commutativity of +: +Σ +Γ +Here are two other examples of string diagrams, which use +dead ends, and represent projections and co-projections: +Σ +Σ +Γ +Γ +projection +Σ×Γ → Γ +co-projection +Σ → Σ×Γ +Example 13. Recall the representation of finite sets as list +types 1+⋯+1 used in Examples 2 and 6. Under this represen- +tation, every function between finite sets is derivable using +the prime functions and combinators of Theorem 3.2. This +is easily seen using string diagrams, as illustrated below: +5This is a name clash: the word “string” relates to the shape of the diagrams, +and not to the fact that they manipulate types that represent strings. + +Folding interpretations +Conference’17, July 2017, Washington, DC, USA +1 +0 +0 +0 +0 +1 +1 +2 +2 +3 +3 +1 +1 +1 +1 +1 +1 +1 +the operation for +squaring modulo 4 +The representation of finite sets as co-products is important +here. For example, the diagonal function 1 → 1 × 1 is not +derivable, as explained in Example 12. ◻ +5 +Deriving polyregular functions +We now move beyond quantifier-free functions and present +the main contribution of this paper, which is a system that +derives exactly the polyregular functions. As explained in +Example 5, we cannot simply add the fold combinator to the +system from Theorem 3.2. Another idea would be to have +two kinds of functions: quantifier-free functions, and general +polyregular functions, with the fold combinator used to go +from one kind to the other. In such a system, the only con- +tribution of fold would be to define linear regular functions, +since such are the functions in the conclusion of Theorem 3.2. +We are more ambitious, and we want the fold combinator to +be useful also for non-linear functions. +To define a system with fold, we add a new unary type +constructor. This type constructor is denoted by ! and it is +written on the left. The general idea is that an element !𝑥 +is essentially the same element as 𝑥, except that it is harder +to obtain. The type constructor is not idempotent, and so +!!𝑥 is even harder to obtain than !𝑥. The goal of this type +constructor is to restrict the application of fold in a way that +avoids the problems discussed in Section 3.1. This is done by +using the following safe fold combinator: +!𝑘1 → Γ +Γ × Σ → Γ +!𝑘(Σ∗) → Γ +safe fold +In the combinator, !𝑘 refers to 𝑘-fold application of !. When +applying the combinator, the number 𝑘 ∈ {0, 1, . . .} must be +strictly bigger than the grade of Γ, which is defined to be the +maximal nesting of !, as in the following examples: +1∗ +⟩︀ +grade zero +1+!(1+!1) +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +grade two +. +For example, when Γ has grade zero, i.e. it does not use !, +then safe fold can be used in the form +!1 → Γ +Γ × Σ → Γ +!(Σ∗) → Γ +safe fold when Γ is without ! +The general idea is that the annotation with ! will disallow +certain kinds of repeated applications of fold that would lead +to functions that are not polyregular. Before giving a formal +description of the system, we begin with an example. +Example 14. [List destructor] In this example, we use safe +fold to derive a variant of the list destructor +Σ∗ → 1 + Σ∗ × Σ +that was discussed in Example 11. 
Consider an automaton +where the state space is the output type of the list destructor, +the initial state is 1, and the transition function is +(1+Σ*×Σ)×Σ +Σ*×Σ×Σ +Σ*×Σ +Σ×Σ* +1 +Σ*×Σ +Σ* +Σ +Σ +Σ* +1×Σ +distribute +1 +Σ +Σ* +empty +append +By applying the safe fold to this automaton, we get the list +deconstructor in a weaker type, namely +!(Σ∗) → 1 + Σ∗ × Σ. +The weaker type avoids the issues from Example 5, since +the input and output will have different numbers of !, and +therefore we will be unable to apply fold again. ◻ +5.1 +Graded types and their derivable functions +We now give a formal description of the system. The type +system is the same as previously, except that we have one +more type constructor for !. +Definition 5.1. A graded list type is any type that is con- +structed using the following type constructors +1⟩︀ +a type with +one element +Σ1 × Σ2 +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +pair +Σ1 + Σ2 +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +co-pair, i.e. +disjoint union +Σ∗ +⃒ +lists +!Σ. +The general idea is that ! does not change the underlying +set, but only introduces some type annotation that controls +the way fold and duplication can be applied. Apart from +safe fold, the main way of dealing with ! is the duplicating +operation +!Σ → !Σ × Σ +absorption, +which is named after the same rule in the parsimonious cal- +culus of Mazza [20, p.1]. There are also prime functions for +commuting ! with the remaining type constructors, for exam- +ple !(︀𝑥,𝑦,𝑧⌋︀ and (︀!𝑥, !𝑦, !𝑧⌋︀ are going to be equivalent in our +system; for this reason we can write !Σ∗ without specifying +the order in which the two constructors are applied. +Definition 5.2. There are two kinds of derivability for func- +tions between graded list types. +1. Strongly derivable. A function is called strongly deriv- +able if it can be derived using quantifier-free prime func- +tions and combinators from Figures 1 and 2, extended + +Conference’17, July 2017, Washington, DC, USA +Mikołaj Bojańczyk (University of Warsaw) +to graded list types that can use !, along with four new +prime functions +!(Γ + Σ) ↔ !Γ+ !Σ +! commutes with + +!(Γ × Σ) ↔ !Γ× !Σ +! commutes with × +( !Γ)+ ↔ !(Γ+) +! commutes with ∗ +!Γ → !Γ × Γ +absorption +and two new combinators +Σ → Γ +!Σ →!Γ +functoriality of ! +!𝑘1 → Γ +Γ × Σ → Γ +!𝑘(Σ∗) → Γ +safe fold +The safe fold combinator can only be applied when Γ +has grade < 𝑘. +2. Weakly derivable. A function is called weakly deriv- +able if it is of the form 𝑥 ↦!𝑘 𝑓 (𝑥) for some 𝑘 and some +strongly derivable function 𝑓 . +In other words, a function is weakly derivable if it can be +strongly derived for a sufficiently upgraded input type. For +example, the list destructor of type +Σ∗ → 1 + Σ∗ × Σ +function is not strongly derivable (Example 11), but it is +weakly derivable (Example 14). +In the following theorem, which is the main result of this +paper, we are only interested in weak derivability for func- +tions between (ungraded) string types, i.e. between types that +do not use !. The purpose of ! is to get the strong derivations. +Theorem 5.3. A function between (ungraded) list types is +polyregular if and only if it is weakly derivable. +The proof has two parts: soundness and completeness. +5.2 +Completeness +The completeness part of Theorem 5.3 is that every polyregu- +lar function can be weakly derived. Unlike the quantifier-free +system in Theorem 4.1, completeness is relatively easy. 
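Before the completeness argument, it may help to see Example 14 rendered as an ordinary functional-programming fold. This is only an illustrative Haskell sketch with names of our choosing; the grading discipline with ! is left implicit.

```haskell
-- The transition function of Example 14. The state type mirrors
-- 1 + Sigma* x Sigma: Nothing for the empty list, Just (rest, lastLetter)
-- otherwise.
destructStep :: Maybe ([a], a) -> a -> Maybe ([a], a)
destructStep Nothing        x = Just ([], x)
destructStep (Just (ys, y)) x = Just (ys ++ [y], x)

-- Folding from the initial state Nothing yields the list destructor;
-- in the graded system the folded function lives at the weaker type
-- !(Sigma*) -> 1 + Sigma* x Sigma, which blocks a second application of fold.
destruct :: [a] -> Maybe ([a], a)
destruct = foldl destructStep Nothing
```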
The key point is that fold is a powerful combinator, and we can draw on a prior complete system for the polyregular functions [5, p. 64]. In the completeness proof, the polynomial growth of the output size will come from a single quadratic function.
Claim 5.4. One can weakly derive the following function
[𝑎1, . . ., 𝑎𝑛] ↦ [[𝑎𝑛, . . ., 𝑎1], [𝑎𝑛−1, . . ., 𝑎1], . . ., [𝑎1]] (call this the prefixes function).
Proof
Consider an automaton, where the input alphabet is !Σ, the state space is Σ∗∗×!Σ∗, the initial state is the pair of empty lists, and the transition function is
(string diagram: the transition function is assembled from append, absorption, and functoriality of !).
By applying fold to this automaton, we get a function of type
!!Σ∗ → Σ∗∗×!Σ∗
which returns the output of the prefixes function on the first output coordinate. Observe that in this proof, we applied the fold to a transition function that already uses !. ◻
Using the above function, in the appendix we show that the weakly derivable functions contain an already existing complete system for the polyregular functions [5, p. 64].
Before discussing the soundness proof in the theorem, let us comment on the minimality of its system. The system inherits all of the primes and combinators from the quantifier-free system in Theorem 4.1. In the presence of fold, some of these primes and combinators can be derived, thus leading to a smaller system.
Theorem 5.5. The system from Theorem 5.3 remains complete after removing the map combinator, as well as all prime functions and combinators that involve the list type, and adding
1 + Σ → Σ∗ (lists of length at most one)
Σ∗ × Σ∗ → Σ∗ (binary list concatenation).
5.3 Soundness
The rest of this section is devoted to the proof of soundness for Theorem 5.3, which is that all weakly derivable functions are polyregular. We will define an invariant on strongly derivable functions, which is satisfied by the prime functions, is preserved by the combinators, and which implies that a function is polyregular. This invariant can be seen as giving a semantic explanation of the ! constructor and the strongly derivable functions.
The invariant uses a more refined notion of mso interpretations, called graded mso interpretations. These interpretations operate on graded structures, as described in the following definition.
Definition 5.6 (Graded structure). A graded structure is a structure, together with a grading function that assigns to each element in the universe a grade in {0, 1, . . .}.
The idea is that the grade of an element is the number of times that !
has been applied, as in the following example +( 1⟩︀ +grade +zero +, !(︀1, 1, 1⌋︀ +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +grade +one +). +A graded list type can be seen as describing a class of graded +structures, with the constructor ! incrementing the grade of +all elements, and the remaining constructors treated in the +same way as in Definition 2.4. +If 𝐴 is a graded structure, we write 𝐴⋃︀ℓ for the structure +that is obtained from 𝐴 by restricting its universe to elements +that have grade at least ℓ. In the definition of a graded mso +interpretation, we use the grades to control how an mso +interpretation 𝑓 uses quantifiers. The general idea is that +𝑓 (𝐴)⋃︀ℓ depends on 𝐴⋃︀ℓ in a quantifier-free way, and on 𝐴⋃︀ℓ+1 +in an mso definable way. +Before presenting the formal definition, we introduce +some notation, in which a polynomial functor 𝐹 is applied to +a tuple of elements ¯𝑎, yielding a new (typically longer) tuple +of elements 𝐹(¯𝑎). If an input set 𝐴 for a polynomial functor 𝐹 +is equipped with some linear order, then this linear order can +be extened to a linear order on the output set 𝐹(𝐴), by using +some fixed order on the components, and ordering tuples +lexicographically. This way we can think of a polynomial +functor as transforming linearly ordered sets, i.e. lists. We +will care about lists of fixed length, which we call tuples. For +example if the polynomial functor is 𝐴 + 𝐴2, then applying +it to the tuple (1, 2) gives the tuple +(1, 2, 1, 2, (1, 1), (1, 2), (2, 1), (2, 2)) ∈ 𝐹({1, 2})6. +In the definition below, we will care about the theories of +tuples of the form 𝐹(¯𝑎), with the theories defined as in Defi- +nition 3.3, but extended to mso formulas of given quantifier +rank (the quantifier rank of an mso formula is the nesting +depth of the quantifiers, with first-order and second-order +quantifiers counted in the same way). Recall that these theo- +ries allow for distinguished elements that are not part of the +universe in a structure. Equipped with this notation, we are +ready define the graded version of mso interpretations. +Definition 5.7. A function 𝑓 ∶ Σ → Γ is called a graded mso +interpretation if there is some polynomial functor +𝐹(𝐴) += +𝐴 +⟩︀ +this is called the +quantifier-free +component ++ +𝐹0(𝐴) + ⋯ + 𝐹𝑚(𝐴) +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +components from this part +of the functor are called the +downgrading components +such that the following conditions hold: +1. Universe and grades. The universe of the output struc- +ture is contained in +𝐴 + 𝐹0(𝐴⋃︀1) + 𝐹1(𝐴⋃︀2) + ⋯ + 𝐹𝑚(𝐴⋃︀𝑚 + 1). +The grades in the output structure are defined as follows: +elements from 𝐹ℓ have grade ℓ, and elements from the +quantifier-free component inherit their grade from 𝐴. +2. Continuity. For every 𝑘, ℓ ∈ {0, 1, . . .} there is some +quantifier rank 𝑟 ∈ {0, 1, . . .} such that for every in- +put structure 𝐴 and distinguished elements ¯𝑎 ∈ 𝐴𝑘, the +quantifier-free theory of the tuple 𝐹(¯𝑎) in 𝑓 (𝐴)⋃︀ℓ is +uniquely determined by the following two theories: +a. the quantifier-free theory of ¯𝑎 in 𝐴⋃︀ℓ; +b. the rank 𝑟 mso theory of ¯𝑎 in 𝐴⋃︀ℓ + 1. +If we ignore the grades, then a graded mso interpretation +is a special case of an mso interpretation. 
This is because the +quantifier-free type mentioned in the continuity condition +will tell us which output candidates from 𝐹(𝐴) are in the +universe of the output structure, and how the relations of +the output structure are defined on them. Therefore, the +continuity condition tells us that the output not only can be +defined in mso, but it can be defined in a way that respects +the grades. In particular, in the special case when all input +elements have nonzero grade, and all output elements have +zero grade, the continuity condition collapses to the usual +condition in an mso interpretation. In this way, graded mso +interpretations generalize ungraded mso interpretations. +Graded mso interpretations also generalize quantifier-free +interpretations – this happens in the case when all elements +in the input and output structures have grade zero. In this +case, only the quantifier-free component is useful, and all +formulas are quantifier-free. +In the appendix, we show that all strongly derivable prime +functions are graded mso interpretations. This will imply +that all weakly derivable functions are ungraded mso inter- +pretations, since the continuity condition becomes vacuous +when the input type is sufficiently upgraded. The proof is an +induction on the size of a strong derivation, with the most in- +teresting cases being composition and safe fold. Composition +is a corollary of composition closure for mso interpretations +on string types [9, Corollary 8], while safe fold is treated in +the same way as in Theorem 3.2. +6 +Linear regular functions +The last group of results from this paper concerns the linear +regular functions, i.e. polyregular functions of linear growth. +We show that a small change to the system from Theorem 5.3 +will give exactly the linear regular functions. As we will see, +superlinear growth in the system from Theorem 5.3 is not +created by the fold combinator, with the culprit instead being +!Γ → !Γ × Γ +absorption. +This function allows us to create an unbounded number +of copies of an element of Γ, as witnessed in the proof of +Claim 5.4. If we simply remove this function, then the system +will become too weak, since all other prime functions and +combinators preserve the property that the universe of the + +Conference’17, July 2017, Washington, DC, USA +Mikołaj Bojańczyk (University of Warsaw) +output structure is contained in the universe of the input +structure. The solution is to add a weaker form of absorption +!Γ → Γ × Γ +linear absorption. +In other words, removing all occurrences of ! is the price paid +for copying. The corresponding system describes exactly the +linear regular functions, as stated in the following theorem. +Theorem 6.1. A function 𝑓 ∶ Σ → Γ between string types +is linear regular if and only if it can be weakly derived in +a system that is obtained from the one6 in Theorem 5.3 by +replacing absorption with linear absorption. +The proof for the above theorem, which is in the appen- +dix, is based on Example 7 about streaming string transduc- +ers. The idea is that linear absorption together with fold is +enough to simulate streaming string transducers, which are +expressively complete the linear regular functions. +6.1 +Tree types +It turns out that the system for linear regular functions from +Theorem 6.1 can be generalized without much further diffi- +culty to trees. This is in contrast to a prior combinator system +for trees [8, Theorem 7.1], which had an involved proof using +approximately fifty prime functions. 
We believe that this is evidence for the usefulness of the fold combinator.
Consider a type for trees, defined inductively by
TΣ = 1 + TΣ × Σ × TΣ (a tree is either a leaf, or has two subtrees and a root label).
A tree type is a type that is constructed using the types from Definition 2.3, together with the tree type. Tree types can be seen as structures, using the same construction as for lists in Definition 2.4, except that instead of one linear order, we have two orders: the descendant order (which is not a linear order) and the document order given by
left subtree < root < right subtree.
Define a linear regular tree function to be a function between tree types that is defined using linear mso transductions.
Following Wilke [24], we view trees as an algebra. In this algebra, there is an additional type constructor CΣ, which describes contexts. A context is a tree with a distinguished leaf (called the hole) where other trees can be inserted. This is not a primitive type constructor, only syntactic sugar for a certain combination of the list and tree type constructors:
CΣ def= ((TΣ × Σ) + (Σ × TΣ))∗,
where the first component (TΣ × Σ) is used when the hole is in the right subtree, and the second component (Σ × TΣ) when the hole is in the left subtree.
6One can also start with the smaller system from Theorem 5.5.
To operate on trees and contexts, we use the following operations, called Wilke's operations, see [24, Figure 1]:
1 + TΣ × Σ × TΣ → TΣ (tree constructor)
CΣ × TΣ → TΣ (replace hole by a tree)
CΣ × CΣ → CΣ (context composition)
1 + (TΣ × Σ) + (Σ × TΣ) → CΣ (context creation)
All of these operations are quantifier-free interpretations, and we will use them as primes. The last two operations need not be explicitly added, since they can be derived using the system from Theorem 3.2.
Theorem 6.2. A function 𝑓 ∶ Σ → Γ between tree types is linear regular if and only if it can be derived in a system that is obtained from the system in Theorem 6.1 by adding the tree type, Wilke's operations, the prime function
!TΣ ↔ T!Σ (! commutes with T)
and the following combinator
!𝑘1 → Γ
Γ × Σ × Γ → Γ
!𝑘TΣ → Γ
safe tree fold,
which can be applied whenever Γ has grade < 𝑘.
Proof (Sketch)
As in Theorem 6.1. We use the same soundness proof, except that tree automata are used instead of string automata. For completeness, we use a result of Alur and D'Antoni, which says that every linear mso interpretation is computed by a streaming tree transducer [3, Theorem 4.6]. Adjusting for notation, a streaming tree transducer is defined in the same way as in Example 7, except that instead of lists, registers store trees and contexts. The registers in the transducer are manipulated using Wilke's operations; and thus, for the same reason as in Example 7, the corresponding tree function is weakly derivable. This completeness proof takes into account only functions of type TΣ → TΓ where Σ and Γ are finite alphabets, but the extension to other tree types is easily accomplished by encoding tree types into such trees. ◻
Tree polyregular functions. It is natural to ask about a polyregular system for trees.
We conjecture that if we add +absorption to the system from Theorem 6.2, and possibly a +few extra prime functions, then the system will define exactly +the mso interpretations on tree types. This conjecture would +imply that tree-to-tree mso inprepretations are closed under +composition, which is an open problem. +7 +Perspectives +We finish the paper with some directions for future work. +In our proofs, we are careless about the number of times +that ! is applied. Maybe a more refined approach can give +a better understanding of the correspondence between the +nesting of ! and the resources involved, such as quantifiers + +Folding interpretations +Conference’17, July 2017, Washington, DC, USA +or copying. Alternatively, one could try to do away with ! +entirely, and use some proof system where the safety of fold +is captured by a structural property of the proof. One idea +in this direction is to look at cyclic proofs [10]. Another idea +would be to capture the structural property using the visual +language of string diagrams. +Another question that concerns string diagrams is about +the equivalence problem. Decidability of the equivalence +problem for polyrergular functions is an open problem, but +in the case of linear functions the problem is known to be +decidable [16, Theorem 1]. Maybe one can express the de- +cision procedure in terms of string diagrams, by designing +equivalences on string diagrams which identify exactly those +diagrams that describe the same function. +The system in this paper is based on combinators. A more +powerful system would also allow for variables, 𝜆, and higher- +order types. Such a system exists without fold [6, Section +4], and it is tempting to see if it can be extended with fold. +The result would be an expressive functional programming +language that can only define regular functions. +References +[1] Rajeev Alur and Pavol Černý. Expressiveness of streaming string +transducers. In Foundations of Software Technology and Theoretical +Computer Science, FSTTCS 2010, Chennai, India, volume 8 of LIPIcs, +pages 1–12. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2010. +[2] Rajeev Alur and Loris D’Antoni. Streaming Tree Transducers. J. ACM, +64(5):31:1–31:55, August 2017. +[3] Rajeev Alur, Adam Freilich, and Mukund Raghothaman. Regular +combinators for string transformations. In Computer Science Logic and +Logic in Computer Science, CSL-LICS 2014, Vienna, Austria,, pages 1–10. +ACM, 2014. +[4] Stephen Bellantoni and Stephen Cook. A new recursion-theoretic +characterization of the polytime functions. In Proceedings of the twenty- +fourth annual ACM symposium on Theory of computing, pages 283–293, +1992. +[5] Mikołaj Bojańczyk. Polyregular Functions. CoRR, abs/1810.08760, +2018. +[6] Mikołaj Bojańczyk. Transducers of polynomial growth. In Proceedings +of the 37th Annual ACM/IEEE Symposium on Logic in Computer Sci- +ence, LICS ’22, New York, NY, USA, 2022. Association for Computing +Machinery. +[7] Mikołaj Bojańczyk, Laure Daviaud, and Shankara Narayanan Krishna. +Regular and First-Order List Functions. In Logic in Computer Science, +LICS, Oxford, UK, pages 125–134. ACM, 2018. +[8] Mikołaj Bojańczyk and Amina Doumane. First-order tree-to-tree +functions. In Holger Hermanns, Lijun Zhang, Naoki Kobayashi, and +Dale Miller, editors, LICS ’20: 35th Annual ACM/IEEE Symposium on +Logic in Computer Science, Saarbrücken, Germany, July 8-11, 2020, pages +252–265. ACM, 2020. +[9] Mikolaj Bojanczyk, Sandra Kiefer, and Nathan Lhote. 
String-to-string +interpretations with polynomial-size output. In 46th International +Colloquium on Automata, Languages, and Programming, ICALP 2019, +July 9-12, 2019, Patras, Greece, pages 106:1–106:14, 2019. +[10] James Brotherston and Alex Simpson. Sequent calculi for induction +and infinite descent. Journal of Logic and Computation, 21(6):1177– +1216, 2011. +[11] Bob Coecke and Alex Kissinger. Picturing quantum processes. Cam- +bridge University Press, 2017. +[12] Bruno Courcelle and Joost Engelfriet. Graph Structure and Monadic +Second-Order Logic - A Language-Theoretic Approach, volume 138 of En- +cyclopedia of Mathematics and Its Applications. Cambridge University +Press, 2012. +[13] Joost Engelfriet and Hendrik Jan Hoogeboom. MSO Definable String +Transductions and Two-way Finite-state Transducers. ACM Trans. +Comput. Logic, 2(2):216–254, 2001. +[14] Joost Engelfriet and Sebastian Maneth. Two-way finite state transduc- +ers with nested pebbles. In International Symposium on Mathematical +Foundations of Computer Science, pages 234–244. Springer, 2002. +[15] Noa Globerman and David Harel. Complexity results for two-way and +multi-pebble automata and their logics. Theor. Comput. Sci., 169(2):161– +184, 1996. +[16] Eitan M. Gurari. The Equivalence Problem for Deterministic Two-Way +Sequential Transducers is Decidable. SIAM J. Comput., 11(3):448–452, +1982. +[17] Jörg Flum Heinz-Dieter Ebbinghaus. Finite Model Theory. Springer +Monographs in Mathematics. Springer, 2nd edition, 2006. +[18] Graham Hutton. A tutorial on the universality and expressiveness of +fold. Journal of Functional Programming, 9(4):355–372, 1999. +[19] Kenneth Krohn and John Rhodes. Algebraic theory of machines. i. +prime decomposition theorem for finite semigroups and machines. +Transactions of the American Mathematical Society, 116:450–450, 1965. +[20] Damiano Mazza. Simple parsimonious types and logarithmic space. In +24th EACSL Annual Conference on Computer Science Logic (CSL 2015). +Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2015. +[21] Tova Milo, Dan Suciu, and Victor Vianu. Typechecking for XML +transformers. J. Comput. Syst. Sci., 66(1):66–97, 2003. +[22] Lê Thành Dung Nguyên, Camille Noûs, and Pierre Pradic. Comparison- +free polyregular functions. In 48th International Colloquium on Au- +tomata, Languages, and Programming, ICALP 2021, July 12-16, 2021, +Glasgow, Scotland (Virtual Conference), pages 139:1–139:20, 2021. +[23] J. C. Shepherdson. The reduction of two-way automata to one-way +automata. IBM Journal of Research and Development, 3(2):198–200, +April 1959. +[24] Thomas Wilke. An algebraic characterization of frontier testable tree +languages. Theoretical Computer Science, 154(1):85–106, 1996. +A +The quantifier-free system +In this part of the appendix, we prove Theorem 4.1. In the +proof, a derivable function is a function that can be derived +using the system from Theorem 4.1. In other parts of the +paper, derivable functions will refer to other systems. +The proof of Theorem 4.1 has two parts: soundness (i.e. all +derivable functions are quantifier-free interpretations) and +completeness (i.e. all quantifier-free interpretations are deriv- +able). +A.1 +Soundness +To prove soundness of the system, we show that all prime +functions from Figure 1 are quantifier-free interpretations, +and that the class of quantifier-free interpretations is closed +under applying all combinators from Figure 2. 
+We only discuss one case, namely the combinator +Σ → Γ +Σ∗ → Γ∗ +functoriality of ∗, +which is also known as the map combinator. The difficulty +with this combinator is that in the structure that represents +a list of elements (︀𝐴1, . . .,𝐴𝑛⌋︀ ∈ Σ, as per Definition 2.4, the + +Conference’17, July 2017, Washington, DC, USA +Mikołaj Bojańczyk (University of Warsaw) +nullary predicates from the structures𝐴1, . . .,𝐴𝑛 are replaced +by unary predicates. However, since the same replacement is +done for the output list, it follows that a straightforward syn- +tactic construction can be applied to transform the quantifier- +free interpretation from the assumption of the combinator +into a quantifier-free interpretation from the conclusion. +The rest of the soundness proof is left to the reader. +A.2 +Completeness +The rest of this section is devoted to the completeness proof. +We begin with some notation and preparatory lemmas that +will be used in the proof. +Zero type. We will use an extended system, which has an +additional type called 0. This type represents a class that +contains one structure, and that structure has an empty uni- +verse. (This class is terminal, in the sense that every class of +structures admits a unique quantifier-free interpretation to +0.) The corresponding prime functions are +Σ → Σ × 0 +add 0 +0 → Σ∗ +create an empty list +One should not confuse 0 with the empty class ∅ (which +anyway is not part of our type system). For example, +0 + Σ ≠ Σ = ∅ + Σ. +The extended system with 0 is equivalent to the original +system, since we can view 0 as 1∗, but with only the empty +list used. In particular, the extended system is conservative +in the following sense: if a function between types that do +not use 0 is derivable in the extended system, then it is also +derivable in the non-extended system. For this reason, we +can do the completeness proof in the extended system, which +will be slightly more convenient. From now on, list types +can use 0. +Disjunctive normal form. It will be useful to consider +list types in a certain normal form, which is achieved using +distributivity. We say that a list type is in disjunctive normal +form if it is of the form +∐ +𝑖∈𝐼 +∏ +𝑗∈𝐼𝑗 +Σ𝑖,𝑗 +where each Σ𝑖,𝑗 is one of the types 0 or 1, or a list Σ∗ where +Σ is in disjunctive normal form. In other words, the list type +does not contain any product of co-products. +In our proof, the main advantage of this normal form +concerns nullary relations. Recall that the nullary relations +in Definition 2.4, appear only in the co-product, and they +are removed when applying the list constructor. Therefore, +if a type in disjunctive normal form is not a co-product type, +then its vocabulary contains no nullary relations. +The following lemma shows that every list type admits +a derivable isomorphism with some list type in disjunctive +normal form. Here, a derivable isomorphism is a derivable +function that has a derivable inverse. +Lemma A.1. Every list type admits a derivable isomorphism +with some list type in disjunctive normal form. +Proof +Using distributivity and functoriality. ◻ +Thanks to the already proved soundness part of the theo- +rem, the derivable isomorphism is also quantifier-free. There- +fore, to prove completeness of the system, it is enough to +prove completeness only for functions where both the input +and output types are in disjunctive normal form. From now +on, we only consider list types in disjunctive normal form. +Safe pairing. 
The last issue to be discussed before the +completeness proof concerns pairing functions. Suppose that +𝑓 ∶ Σ → Γ1 × Γ2 +is a quantifier-free interpretation. In the completeness proof, +we will want to show that it is derivable. A natural idea +would be to use an inductive argument to derive the two +quantifier-free interpretations +𝑓𝑖 ∶ Σ → Γ𝑖 +that arise from 𝑓 by projecting it onto the two output coordi- +nates, and to then pair these two derivations into a derivation +of 𝑓 . Unfortunately, combining these two derviations would +require some kind of pairing combinator, or a duplicating +function of type Σ → Σ × Σ, none of which are available in +our system (because they would be unsound). +For these reasons, we need to be a bit careful with pairing. +The crucial observation is that pairing is not always unsound, +because some functions can be paired. For example, the two +functions 𝑓1 and 𝑓2 described above can be paired, because +they use disjoint parts of the input structure. More formally, +the universe formulas are disjoint, i.e. no element can be +selected by both universe formulas. This view will be used in +the completeness proof. To formalize it, we use the following +lemma. +Lemma A.2. Let Σ be a list type in disjunctive normal form, +and let 𝜑(𝑥) be a quantifier-free formula over its vocabulary. +There is a list type, denoted by Σ⋃︀𝜑, and a quantifier-free in- +terpretation +Σ +Σ⋃︀𝜑 +projection of 𝜑 +such that the following conditions are satisfied. +1. For every quantifier-free interpretation 𝑓 ∶ Σ → Γ, such +that the universe formula of 𝑓 is contained in 𝜑 (which +means that the universe formula of 𝑓 implies the formula +𝜑), there is a a decomposition +Σ +Γ +Σ⋃︀𝜑 +𝑓 +projection of 𝜑 +𝑓 ⋃︀𝜑 + +Folding interpretations +Conference’17, July 2017, Washington, DC, USA +where 𝑓 ⋃︀𝜑 is a quantifier-free interpetation. +2. Safe pairing. Suppose that 𝜑1, . . .,𝜑𝑛 are formulas as +in the assumption of the lemma, which are pairwise +disjoint. Then one can derive the function +Σ +(Σ⋃︀𝜑1) × ⋯ × (Σ⋃︀𝜑𝑛) +that produces all projections in parallel. +Proof +The purpose of the type 0 is this lemma; with the type used +Σ⋃︀𝜑 when 𝜑 selects no elements. The lemma is proved by +induction on the structure of the type Σ. +● Suppose that Σ is the zero type 0. In this case, the +formula 𝜑 must be equivalent to “false”. We define 0⋃︀𝜑 +to be the same type 0, and the projection is the identity. +The safe pairing condition holds because of the prime +function Σ → Σ × 0. +● Suppose that Σ is the unit type 1. In this case, the +formula 𝜑 is equivalent to either “false” or “true”, since +the unique structure in 1 has a universe that has only +one element. We define 1⋃︀𝜑 to be the 0 or 1, depending +on which of the two cases holds, with the projection +being the unique function 1 → 1⋃︀𝜑. The safe pairing +condition is proved using the prime function Σ → +Σ × 0, since the list of quantifier-free formulas in the +condition can have at most one formula that is not +“false”. +● Consider a list type of the form Σ∗. The main observa- +tion in the proof is that there is a bijective correspon- +dence between quantifier-free formulas over the vo- +cabularies of Σ and Σ∗. This correspondence is defined +as follows: for every formula 𝜑 over the vocabulary +of Σ, there is a formula 𝜑∗ over the vocabulary of Σ∗ +such that for every list +𝐴 = (︀𝐴1, . . .,𝐴𝑛⌋︀ ∈ Σ∗, +an element 𝑎 ∈ 𝐴𝑖 is selected by 𝜑∗ in the entire list +𝐴 if and only if 𝑎 is selected by 𝜑 in the list element +𝐴𝑖. 
It is not hard to see that such a formula exists, and +furthermore, every formula over the vocabulary of Σ∗ +is of equivalent to a formula of the form 𝜑∗. +Therefore, in the case when the type is a list Σ∗, we can +assume that the formula over the vocabulary of Σ∗ is +of the form 𝜑∗ for some formula 𝜑 over the vocabulary +of Σ. Define +Σ∗⋃︀𝜑∗ +def= +(Σ⋃︀𝜑)∗, +with the projection function for 𝜑∗ being the result of +applying the map combinator to the projection func- +tion for 𝜑. The safe pairing property is proved by using +the induction assumption, and using the function +(Σ1 × ⋯ × Σ𝑛)∗ → Σ∗ +1 × ⋯ × Σ∗ +𝑛, +which can easily be seen to be derivable. +● The case when Σ is a co-product Σ1 + Σ2 is proved +similarly to the list case. Here, we use a bijective cor- +respondence between quantifier-free formulas 𝜑 over +the vocabulary of Σ with pairs (𝜑1,𝜑2), where 𝜑𝑖 is a +quantifier-free formula over the vocabulary of Σ𝑖. +● The case when Σ is a product Σ1 × Σ2 is proved simi- +larly to the co-product case. Again, there is a bijective +correspondence between quantifier-free formulas 𝜑 +over the vocabulary of Σ with pairs (𝜑1,𝜑2), where 𝜑𝑖 +is a quantifier-free formula over the vocabulary of Σ𝑖. +For the existence of such a bijective correspondence, +we use the assumption that the type is in disjunctive +normal form. Thanks to the assumption, the vocabu- +lary has no nullary relations; if there would be nullary +relations then there could be some communication +between the two coordinates in the product. +◻ +Completeness. Consider a quantifier-free interpretation +𝑓 ∶ Σ → Γ. +Let 𝜑 be the universe formula of 𝑓 , and let Σ⋃︀𝜑 be the type +obtained by applying Lemma A.2. We write dom𝑓 for this +type. The corresponding function in the decomposition as +in item 1 is then +𝑓 ⋃︀dom𝑓 ∶ dom𝑓 → Γ. +. We will use the following terminology for this decomposi- +tion: the type Σ⋃︀𝜑 will be called the reduced domain of 𝑓 , the +projection will be called the domain reduction of 𝑓 , and the +function 𝑔 will be called reduced 𝑓 . Here is a diagram that +displays this terminology +Σ +Γ +reduced domain of 𝑓 +𝑓 +domain reduction of 𝑓 +reduced 𝑓 +Because the domain reduction is derivable, and derivable +functions are closed under composition, it is enough to show +that for every quantifier-free interpretation, its reduced ver- +sion is derivable. This will be shown in the following lemma. +Lemma A.3. For every quantifier-free interpretation +𝑓 ∶ Σ → Γ +with universe formula 𝜑, one can derive the function +𝑓 ⋃︀𝜑 ∶ Σ⋃︀𝜑 → Γ +from item 1 in Lemma A.2. +Proof +The lemma is proved by structural induction on the input +and output types. In the induction step, we will replace either +the input or output type by a simpler one. The induction +step is shown in Sections A.2.2–A.2.5 below, which consider +the following cases: + +Conference’17, July 2017, Washington, DC, USA +Mikołaj Bojańczyk (University of Warsaw) +A.2.1 the input type is a co-product; +A.2.2 the output type is a co-product; +A.2.3 the output type is a product; +A.2.4 the input type is 0 or 1; +A.2.5 the input type is a list; +A.2.6 the input type is a product. +These cases are exhaustive, i.e. at least one of them always +applied, but they are not disjoint. When applying some case, +we assume that none of the previous cases can be applied. +The induction basis corresponds to case A.2.4. +A.2.1 +The input type is a co-product. 
In the represen- +tation of the co-product type from Definition 2.4, the infor- +mation about whether the structure comes from the first or +second case is stored in a nullary predicate. Therefore, by +a straightforward syntactic manipulation of quantifier-free +interpretations, from a quantifier-free interpetation +𝑓 ∶ Σ1 + Σ2 → Γ, +we can obtain two quantifier-free interpretations +𝑓1 ∶ Σ1 → Γ +𝑓2 ∶ Σ2 → Γ +which describe the behaviour of 𝑓 on inputs from Σ1 and Σ2, +respectively. Let 𝜑 be the universe formula of 𝑓 , and let 𝜑1 +and 𝜑2 be the universe formulas of 𝑓1 and 𝑓2. By induction +assumption, we can derive +𝑓𝑖⋃︀𝜑𝑖 ∶ Σ𝑖⋃︀𝜑𝑖 → Γ +and derive their reduced versions. Since by definition we +have +(Σ1 + Σ2)⋃︀𝜑 = Σ1⋃︀𝜑1 + Σ2⋃︀𝜑2, +we can combine these two derivations into a derivation 𝑓 ⋃︀𝜑, +by using the combinator +Δ1 → Γ +Δ2 → Γ +Δ1 + Δ2 → Γ +cases, +which itself can be derived using functoriality of + and the +co-diagonal. +A.2.2 +The output type is a co-product. Consider a func- +tion +𝑓 ∶ Σ → Γ1 + Γ2 +whose output type is a co-product. In this case, we assume +that the previous case cannot be applied, i.e. the input type +is not a co-product. +To produce the output structure, we need to define the +nullary predicate that says which of the two cases in the +output type is used. In a quantifier-free interpretation, this +nullary predicate is defined by a quantifier-free formula, with +no free variables, which is evaluated in the input structure. +Since there are no nullary predicates in the input structure +(because otherwise, the input type would be a co-product, +and we could apply the case from the previous section), it +follows that this quantifier-free formula is either “true” or +“false”. This means that the function 𝑓 must always use the +same variant Γ1 or Γ2 in the co-product from the output +type, regardless of the choice of input structure. Therefore, +we can replace 𝑓 by a corresponding function of type Σ → +Γ𝑖, apply the induction assumption, and conclude by using +composition and the co-projection. +A.2.3 +The output type is a product. Consider a function +𝑓 ∶ Σ → Γ1 × Γ2 +whose output type is a product. We split this function into +two quantifier-free interpretations +𝑓1 ∶ Σ → Γ1 +𝑓2 ∶ Σ → Γ2, +which produce the two coordinates in the output of 𝑓 . These +two functions must have disjoint universe formulas, since +otherwise the same element in the output structure would +belong to both coordinates of a pair. We can apply the induc- +tion assumption, and then combine these derivations into a +derivation of 𝑓 by using safe pairing from Lemma A.2. +A.2.4 +The input type is 0 or 1. By cases A.2.2 and A.2.3, +we can assume that the output type of the unique function +in the family is either 0, 1, or a list type Γ∗. +When the output type is 0 or 1, then we are dealing with +a quantifier-free interpretation which has one of the types +0 → 0 +0 → 1 +1 → 0 +1 → 1. +There is no quantifier-free interpretation of the type 1 → 0, +and for the remaining types there is exactly one quantifier- +free interpretation, which is easily seen to be derivable. +We are left with the case when the output type is Γ∗. If +the input type is 0, then the quantifier-free interpretation +necessarily produces the empty list, and it is therefore deriv- +able. If the input type is 1, then the function always produces +the same output, which is either the empty list, in which +case it can be derived using the list constructor, or a single- +ton list (︀𝐴⌋︀ for some fixed structure 𝐴 ∈ Γ. 
In the singleton +case, we can use the induction assumption to derive the func- +tion 1 ↦ 𝐴, and pack the result as a list using the list unit +operation. +A.2.5 +The input type is a list. We now arrive at the most +interesting case in the proof, which is when the input type is +a list Σ∗. Because the previously studied cases A.2.2 and A.2.3 +cannot be applied, the output type is one of 0, 1, or Γ∗. When +the output type is 0, there is only one possible function, +which is easily derivable. The output type 1 is impossible, +since the function could not handle an empty list on the +input. We are left with a list-to-list function. To prove the +inductive step for such functions, we use the analysis from +the following claim. +Claim A.4. For every quantifier-free interpretation +𝑓 ∶ Σ∗ → Γ∗ + +Folding interpretations +Conference’17, July 2017, Washington, DC, USA +one can find quantifier-free interpretations +𝑓1, . . ., 𝑓𝑘 ∶ Σ∗ → Γ∗ +with disjoint universe formulas such that 𝑓 is equal to +𝐴 ∈ Σ∗ +↦ +𝑓1(𝐴)⋯𝑓𝑘(𝐴) +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +list concatenation +and each 𝑓𝑖 has one of the following properties: +1. all output lists of 𝑓𝑖 have length at most one. +2. there is some quantifier-free interpretation +𝑔 ∶ Σ → Γ∗ +such that 𝑓𝑖 is equal to +(︀𝐴1, . . .,𝐴𝑛⌋︀ ↦ 𝑔(𝐴1)⋯𝑔(𝐴𝑛) +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +list concatenation +3. as in item 2, but with reverse list order 𝑔(𝐴𝑛)⋯𝑔(𝐴1). +Before proving the claim, we use it to complete the in- +duction step of the lemma in the present list-to-list case. +Apply Claim A.4 to the function 𝑓 , yielding a decomposition +into functions 𝑓1, . . ., 𝑓𝑘. The induction assumption can be +applied to these functions, since item 1 in the claim gives +a smaller output type (namely Γ instead of Γ∗ for the only +list element), while the remaining two items give smaller +input types. Finally, these derivations can be combined into a +derivation of 𝑓 , using the pairing operation from Lemma A.2, +the function for list concatenation from Example ??, and the +prime function +(Σ × Γ)∗ → Σ∗ × Γ∗ +list distribute +which is used to separate the domains of the functions 𝑓1, . . ., 𝑓𝑘 +from the input list. It remains to prove the claim. +Proof (of Claim A.4) +Consider the universe formula 𝜑(𝑥) of 𝑓 . Decompose this +formula as a finite union +𝜑(𝑥) = ⋁ +𝜎∈Φ +𝜎(𝑥) +of quantifier-free theories as in Definition 3.3, i.e. quantifier- +free formulas that specify all relations satisfied by 𝑥. Take +some input structure in Σ∗. For elements of this structure +that satisfy the universe formula, there are two orders: the +input order that describes the order in the input list +𝐴 = (︀𝐴1, . . .,𝐴𝑛⌋︀ ∈ Σ∗ +and the output order that describes the order in the output +list +𝑓 (𝐴) = (︀𝐵1, . . .,𝐵𝑚⌋︀ ∈ Γ∗. +In the proof of the claim, we will analyze the relationship +between these two orders. Both of these orders are reflex- +ive, total, and transitive, but not necessarily anti-symmetric, +since two elements may belong to the same list element. +For an element 𝑎 in an input structure 𝐴 ∈ Σ∗ that satisfies +the universe formula 𝜑(𝑥), the unary theory of 𝑎 is defined +to be the unique quantifier-free theory 𝜎 ∈ Φ that is satis- +fied by 𝑎. 
If 𝑎 is strictly smaller than 𝑏 in the input order, +then by compositionality, the output order on 𝑎 and 𝑏 will +be uniquely determined by the unary theories of the two +individual elements 𝑎 and 𝑏. This means that exactly of the +following three implications must hold +𝑎 is strictly before 𝑏 +in the output order +𝑎 is strictly before 𝑏 +in the input order +and the unary theories +of 𝑎 and 𝑏 are 𝜎 and 𝜏 +𝑎 is equivalent to 𝑏 +in the output order +𝑎 is strictly after 𝑏 +in the output order +𝜎<𝜏 +𝜎∼𝜏 +𝜎>𝜏 +Depending on which implication holds, we write one of +𝜎 < 𝜏 +𝜎 ∼ 𝜏 +𝜎 > 𝜏. +Before continuing, we make two cautionary remarks about +the notation involving the relations < and > described above. +The first cautionary remark is that < and > describe relations +that are not necessarily converses of each other, since 𝜎 < 𝜏 +and 𝜏 > 𝜎 do not mean the same thing; one of these condi- +tions could be true without the other one being true. The +second cautionary remark is that 𝜎 < 𝜏 is not necessarily ob- +tained from some partial order by looking at strictly growing +pairs. For example, we could have both 𝜎 < 𝜏 and 𝜏 < 𝜎. +To prove the claim, we make five observations about the +relations <, > and ∼. In these observations, we use partial +equivalence relations; a partial equivalence relation is defined +to be a binary relation that is symmetric and transitive but +not necessarily reflexive. Equivalence classes of partial equiv- +alence relations are defined in the expected way; the only +difference is that some elements of the domain might not +belong to any equivalence class. +1. The first observation is that 𝜎 ∼ 𝜏 is a partial equiva- +lence relation. It is easy to see that the relation 𝜎 ∼ 𝜏 +is transitive. We now argue that it is symmetric. (This +is not immediately obvious.) Suppose that 𝜎 ∼ 𝜏. Con- +sider a list in 𝐴 ∈ Σ∗ with four distinguished elements +𝑎1 +⟩︀ +unary +type 𝜎 +< +𝑎2 +⟩︀ +unary +type 𝜏 +< +𝑎3 +⟩︀ +unary +type 𝜎 +< +𝑎4 +⟩︀ +unary +type 𝜏 +with the order relationship describing the input order. +From the assumption on 𝜎 ∼ 𝜏 we can conclude that +three pairs (depicted by lines in the following diagram) + +Conference’17, July 2017, Washington, DC, USA +Mikołaj Bojańczyk (University of Warsaw) +belong to the same elements in the output list: +𝑎1 +𝑎2 +𝑎3 +𝑎4. +𝜎∼𝜏 +𝜎∼𝜏 +𝜎∼𝜏 +Since belonging to the the same element in the output +list is a transitive relation, we can deduce that 𝑎2 and +𝑎3 belong to the same element in the output list, thus +establishing 𝜏 ∼ 𝜎. +2. The next observation is that (𝜎 < 𝜏 ∧𝜏 < 𝜎) is a partial +equivalence relation. It is symmetric by definition, and +it is transitive because each of the two conjuncts is +transitive. +3. By the same proof as in the previous item, (𝜎 > 𝜏 ∧𝜏 > +𝜎) is a partial equivalence relation. +4. We now show that the equivalence classes of the par- +tial equivalence relations described in the first three +observations are disjoint, and give a partition of +Φ = Φ1 ∪ ⋯ ∪ Φ +of all unary types in Φ. For every𝜎 ∈ Φ, we have exactly +one of the cases 𝜎 ∼ 𝜎, 𝜎 < 𝜎, or 𝜎 > 𝜎. This proves +that every 𝜎 belongs to exactly one of the equivalence +classes in the previous three items. +5. The last observation is that the order on equivalence +classes in the previous item can be chosen so that for +all 𝑖 < 𝑗 we have +𝜎 ∈ Φ𝑖 and 𝜏 ∈ Φ𝑗 +⇒ +𝜎 < 𝜏. +Let Φ𝑖 and Φ𝑗 be different equivalence classes from the +previous item. For every 𝜎 ∈ Φ𝑖 and 𝜏 ∈ Φ𝑗 we have +exactly one of the three cases +𝜎 < 𝜏 +or +𝜎 > 𝜏 +or +𝜎 ∼ 𝜏. 
+The third case cannot hold, since otherwise Φ𝑖 and +Φ𝑗 would be in the same equivalence class from the +first observation. Therefore, one of the two first cases +must hold. A short analysis, which is left to the reader, +also shows that which of the two cases holds (first +or second) does not depend on the choice of the 𝜎 +and 𝜏. This means that there is an unambiguous order +relationship between Φ𝑖 and Φ𝑗, and this relationship +can be used to prove item 5 of the claim. +Let Φ1, . . ., Φ𝑚 be as in the last of the above observations. +We know that for every input structure 𝐴 ∈ Σ∗, the output +list can be decomposed as +𝑓 (𝐴) = 𝑓1(𝐴)⋯𝑓𝑛(𝐴) +where 𝑓𝑖 is the function obtained from 𝑓 by restricting the +output elements to those that have type from Φ𝑖 in the in- +put structure. To complete the proof of the claim, we will +show that each function 𝑓𝑖 has one of the three kinds in the +statement of the claim. +Suppose first that Φ𝑖 is an equivalence class defined by +𝜎 ∼ 𝜏 as in the first observation. This means that all outputs +produced by 𝑓𝑖 are equivalent in the output order. Hence this +𝑓𝑖 is of kind 1 as in the statement of the claim. +Suppose now that Φ𝑖 is an equivalence class defined by +(𝜎 < 𝜏 ∧ 𝜏 < 𝜎) as in the second observation. This means +that for every input list 𝐴 ∈ Σ∗, if we take two elements 𝑎 +and 𝑏 that have unary theory in Φ𝑖, then +𝑎 is strictly before 𝑏 in the input order +𝑎 is strictly before 𝑏 in the output order +Hence this 𝑓𝑖 is of kind 2 as in the statement of the claim. +A symmetric argument works for an equivalence class +defined by (𝜎 > 𝜏 ∧ 𝜏 > 𝜎), except that this time the output +order is reversed, giving a function as in item 3 of the lemma. +◻ +A.2.6 +The input type is a product. The final case in the +proof of Lemma A.3 is when the input type is a product. +Since all types are in disjunctive normal form, the input type +is a product +Σ = Σ1 × ⋯ × Σ𝑚 +where each Σ𝑖 is either 1 or a list. (The type 0 can be removed +from a product.) Because the previously studied cases A.2.2 +and A.2.3 about output types that are products or co-products +cannot be applied, the output type is either 0, 1, or a list type +Γ∗. +If the output type is 0, then the function is easily derivable. +Consider now the case when the output type is 1. It cannot +be the case that each of the input types Σ1, . . ., Σ𝑚 is a list, +since the quantifier-free interpretation would be unable to +handle the case when all lists are empty. Therefore, one of +the input types is the unit type 1, and the conclusion of the +lemma can be proved by using 1 → 1. +We are left with the case when the ouput type is of the form +Γ∗. Here, we proceed in the same way as in Section A.2.5, +with the corresponding version of Claim A.4 being the fol- +lowing claim. The proof of the claim, which uses a similar +analysis of unary quantifier-free theories as in Claim A.4, is +left to the reader. +Claim A.5. For every quantifier-free interpretation +𝑓 ∶ Σ1 × ⋯ × Σ𝑚 +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +Σ +→ Γ∗ +one can find quantifier-free interpretations +𝑓1, . . ., 𝑓𝑘 ∶ Σ1 × Σ2 → Γ∗ +with disjoint universe formulas such that 𝑓 is equal to +𝐴 ∈ Σ +↦ +𝑓1(𝐴)⋯𝑓𝑘(𝐴) +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +list concatenation +and each 𝑓𝑖 has one of the following properties: +1. 
all output lists of 𝑓𝑖 have length at most one; or + +Folding interpretations +Conference’17, July 2017, Washington, DC, USA +𝐺∗ → 𝐺 +group multiplication +Σ → Σ × Σ +diagonal +Σ∗ → 1 + Σ × Σ∗ +list destructor +(Σ + Γ)∗ → (Σ∗ + Γ∗)∗ +block +Σ∗ → (Σ∗ × Σ∗)∗ +split +Figure 4. Additional polyregular prime functions from [5]. +2. 𝑓𝑖 factors through the projection +Σ1 × ⋯ × Σ𝑚 → Σ𝑗 +for some 𝑗 ∈ {1, . . .,𝑚}. +This completes the last of the cases in the induction step, +and thus also the proof of the lemma, which also completes +the proof of Theorem 4.1. ◻ +B +Completeness for polyregular functions +In this section, we prove the completeness of the system in +Theorem 5.3, i.e. we show that every polyregular function +can be weakly derived. This implication is the less interesting +one, since our system is designed to be powerful, i.e. it should +be easy to derive functions in it. We will deduce the com- +pleteness of our system with fold from another completeness +result that uses a system without fold. +We begin by describing the system that we reduce to. +It has all of the combinators from Figure 2, and its prime +functions are contained in those from Figure 1 plus certain +additional functions that are described in Figure 4. The first +three primes from Figure 4 have already been discussed in +the paper, so we only explain the block and split functions. +The split function of type +Σ∗ → Σ∗ × Σ∗ +outputs all possible ways of splitting the input list into (prefix, +suffix) pairs, as explained in the following example: +(︀1, 2, 3⌋︀ +↧ +(︀((︀⌋︀, (︀1, 2, 3⌋︀), ((︀1⌋︀, (︀2, 3⌋︀), ((︀1, 2⌋︀, (︀3⌋︀), ((︀1, 2, 3⌋︀, (︀⌋︀)⌋︀. +The other additional function is the block function of type +(Σ + Γ)∗ → (Σ∗ + Γ∗)∗, +which blocks the elements of the input list into maximal +blocks of same type, as illustrated in the following example +that uses numbers for elements of Σ and letters for elements +of Γ: +(︀1, 2,𝑎, 3, 4, 5,𝑏,𝑐⌋︀ +↧ +(︀(︀1, 2⌋︀, (︀𝑎⌋︀, (︀3, 4, 5⌋︀, (︀𝑏,𝑐⌋︀⌋︀. +Theorem B.1. [5, p. 64] A function between list types is +polyregular if and only if it can be derived using the prime +functions and combinators from the quantifier-free system +Theorem 4.1, plus the prime functions from Figure 4. +In contrast to the system with fold from this paper, the +system from the above theorem was designed to be minimal, +and therefore, the completeness proof for the system with +fold will be a simple corollary of completeness of the system +from the above theorem. Thanks to Theorem B.1, to prove the +completeness result for our system with fold, it is enough to +show that (a) all prime functions in Theorem B.1 are weakly +derivable; and (b) the combinators in Theorem B.1 preserve +the weakly derivable functions. +Combinators. Consider first (b), about the combinators. +The combinators are those from Figure 2. There is one com- +binator for function composition, and three combinators +for functoriality. The combinators for functoriality are dealt +with using the prime functions about ! commuting with the +remaining constructors. The combinator for function com- +position is explained in the following diagram: +Σ +Γ +Δ +!𝑘Σ +!ℓΓ +!𝑘+ℓΣ +derivable +weakly derivable +upgrading +Prime functions. Consider now (a), about the prime func- +tions. Clearly all prime functions in the quantifier-free sys- +tem are weakly derivable, since they are even strongly deriv- +able. Weak derivability of the additional functions for group +multiplication and the list destructor was already discussed +in Examples 2 and 14. 
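As an aside, the intended behaviour of the two additional prime functions split and block illustrated above is easy to pin down in executable form. The following Haskell sketch is only an illustration of their input-output behaviour, with Either playing the role of the co-product Σ + Γ and ordinary lists playing the role of list types; it is not a derivation in any of the systems discussed here.

import Data.Either (isLeft, lefts, rights)
import Data.Function (on)
import Data.List (groupBy, inits, tails)

-- split: all ways of cutting a list into a (prefix, suffix) pair, e.g.
-- split [1,2,3] = [([],[1,2,3]), ([1],[2,3]), ([1,2],[3]), ([1,2,3],[])]
split :: [a] -> [([a], [a])]
split xs = zip (inits xs) (tails xs)

-- block: group a list over a co-product alphabet into maximal blocks of
-- elements from the same component, e.g.
-- block [Left 1, Left 2, Right 'a', Left 3] = [Left [1,2], Right "a", Left [3]]
block :: [Either a b] -> [Either [a] [b]]
block = map merge . groupBy ((==) `on` isLeft)
  where
    merge grp@(Left _ : _) = Left  (lefts grp)
    merge grp              = Right (rights grp)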
The diagonal function can easily be +weakly derived using absorption. We are left with the split +and block function. +Lemma B.2. Split and block are weakly derivable. +Proof +To weakly derive the split function, we use the prefixes func- +tion from Claim 5.4. If we take a list +(︀𝑎1, . . .,𝑎𝑛⌋︀ ∈ Σ∗, +and then apply prefixes, reverse, followed prefixes again, +then the output is a list in Σ∗∗∗ of length 𝑛 whose 𝑖-th ele- +ment is +(︀(︀𝑎1, . . .,𝑎𝑛⌋︀, (︀𝑎1, . . .,𝑎𝑛−1⌋︀, . . ., (︀𝑎1, . . .,𝑎𝑖⌋︀⌋︀. +(2) +Since weakly derivable functions are closed under compo- +sition, this output can be produced by a weakly derivable +function. Since weakly derivable functions are also closed + +Conference’17, July 2017, Washington, DC, USA +Mikołaj Bojańczyk (University of Warsaw) +under map, to complete the proof that split is weakly deriv- +able, it remains to show that a weakly derivable function +can transform the 𝑖-th element in (2) into the corresponding +element in the output of split, namely +((︀𝑎1, . . .,𝑎𝑖⌋︀, (︀𝑎𝑖+1, . . .,𝑎𝑛⌋︀). +(3) +This is done as follows: using the list deconstructor, we split +the list in (2) into its head and tail. The head is reversed, while +the tail is transformed so that each list element is replaced +by its own head. +We now turn to the block function. One approach is to +derive the block function from split – thus showing that +it is not needed in the system. This is shown in [5, p.90]. +However, since we will later use a system that uses block but +not split, we show how to derive block directly. To compute +the block function, we use an automaton where the input +alphabet is Σ + Γ, the state space is +Δ = (Σ∗ + Γ∗)∗ × (Σ∗ + Γ∗) +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +most recent block +and the transition function is illustrated in the following +diagram (by symmetry, we only draw the left half): +(Σ* + Γ*)* × (Σ* + Γ*) × (Σ + Γ) +(Σ* + Γ*)* × (Σ* + Γ*) +(Σ* + Γ*)* × Σ* × Σ +(Σ* + Γ*)* × Γ* × Σ +(Σ* + Γ*)* × Σ* +(Σ* + Γ*)* × Σ* +(Σ* + Γ*)* +(Σ* + Γ*)* +(Σ* + Γ*)* +Σ* +Γ* +Σ*+Γ* +Σ* +Σ* +Σ +Σ +append +co-projection +append +unit +distribute +distribute-1 +In the diagram, the unit function is the function 𝑥 ↦ (︀𝑥⌋︀ +which can be derived as in Figure 3. If we set the initial state +of the above automaton to be a pair of empty lists (the second +one having type, say, Σ∗), then after reading a list in !(Σ+Γ)∗, +its state will store the output of the block operation, except +that the last list element will be held separately and will need +to be added using append. ◻ +B.1 +A smaller system +A corollary of the completeness proof is Theorem 5.5, which +shows that certain primes and combinators can be removed +from the system in Theorem 5.3, while keeping it complete. +We remove the map combinator, as well as all quantifier-free +functions from Figure 1 that involve the list type, namely +the functions +Σ∗ × Σ → Σ∗ +append +Σ∗ → Σ∗ +reverse +Σ∗∗ → Σ∗ +concat +Σ → Σ × Γ∗ +create empty +(Σ × Γ)∗ → Σ∗ × Γ∗ +list distribute +In their place, we have only two functions +1 + Σ → Σ∗ +lists of length at most one +Σ∗ × Σ∗ → Σ∗ +binary list concatenation. +We will show that the smaller system remains complete, +because it can weakly derive the removed functions, and +furthermore, the weakly derivable functions in the smaller +system are closed under the map combinator. +Proof (of Theorem 5.5) +Consider first the prime functions that are removed from the +smaller system. The append function can be (strongly) de- +rived in the smaller system. 
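Before going through the removed primes one by one, it may help to note that these derivations have a familiar functional-programming reading: each of the removed list primes (except create empty, which is immediate) is a single left fold over binary list concatenation and lists of length at most one. The Haskell sketch below is only meant to convey this intuition; it ignores the ! modality and the distinction between strong and weak derivability that the actual derivations have to respect.

-- Informal illustration: the removed list primes as left folds, using only
-- empty/singleton lists and binary concatenation (++). The grading
-- discipline of the paper's type system is not tracked here.

appendOne :: [a] -> a -> [a]                      -- append
appendOne xs x = xs ++ [x]

reverse' :: [a] -> [a]                            -- reverse
reverse' = foldl (\acc x -> [x] ++ acc) []

concat' :: [[a]] -> [a]                           -- concat
concat' = foldl (++) []

listDistribute :: [(a, b)] -> ([a], [b])          -- list distribute
listDistribute = foldl (\(as, bs) (a, b) -> (as ++ [a], bs ++ [b])) ([], [])

mapViaFold :: (a -> b) -> [a] -> [b]              -- weak map
mapViaFold f = foldl (\acc x -> acc ++ [f x]) []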
Using append, we can (strongly) +derive the left list constructor, whose safe folding gives the +list reversal in type +!Σ∗ → Σ∗. +is obtained by composing a co-projection with the right +list constructor. Applying the safe fold combinator to the left +list constructor (after swapping the order of its arguments) +shows that the reverse function can be derived in type +!Σ∗ → Σ, +and hence it is weakly derivable. The concat function is +derived in type +!Σ∗∗ → Σ∗ +by folding binary list concatenation. To weakly derive the +create empty function, we observe that for every type Σ we +can derive the unique function +!Σ → 1, +and this derivation can be used together with absorption to +derive the create empty function in type +!Σ → Σ × Γ∗. +Finally, the list distribute function can be derived in type +!(Σ × Γ)∗ → Σ∗ × Γ∗ +by a straightforward application of safe fold. +Finally, we can also eliminate the map combinator (func- +toriality of ∗), since using safe fold we obtain a version of +the map combinator in type +Γ → Σ +!Γ∗ → Σ∗ +weak map, + +Folding interpretations +Conference’17, July 2017, Washington, DC, USA +1 + Σ → Σ∗ +lists of length at most one +Σ∗ × Σ∗ → Σ∗ +binary list concatenation +!(Γ + Σ) ↔ !Γ+ !Σ +! commutes with + +!(Γ × Σ) ↔ !Γ× !Σ +! commutes with × +( !Γ)∗ ↔ !(Γ∗) +! commutes with ∗ +!Γ → !Γ × Γ +absorption +Γ × Σ ↔ Σ × Γ +commutativity of × +Γ + Σ ↔ Σ + Γ +commutativity of + +Γ × (Σ × Δ) ↔ (Γ × Σ) × Δ +associativity of × +Γ + (Σ + Δ) ↔ (Γ + Σ) + Δ +associativity of + +Γ × (Σ + Δ) ↔ (Γ × Σ) + (Γ × Δ) +distributivity +Γ1 × Γ2 → Γ𝑖 +projections +Γ𝑖 → Γ1 + Γ2 +co-projections +Γ + Γ → Γ +co-diagonal +!𝑘1 → Γ +Γ × Σ → Γ +!𝑘Σ∗ → Γ +safe fold +Γ → Σ +Σ → Δ +Γ → Δ +function composition +Γ1 → Σ1 +Γ2 → Σ2 +Γ1 × Γ2 → Σ1 × Σ2 +functoriality of × +Γ1 → Σ1 +Γ2 → Σ2 +Γ1 + Γ2 → Σ1 + Σ2 +functoriality of + +Γ → Σ +!Γ →!Σ +functoriality of ! +Figure 5. +A complete system for weakly deriving the +polyregular functions. The safe fold combinator can only +be applied when the type Γ has grade < 𝑘. +which is strong enough to replace the usual map combinator +in the completeness proof of the system in Theorem 5.3. +Summing, up we can reduce the system as stated in the +following theorem. ◻ +For easier reference, the system in the above theorem is +described in Figure 5. +C +Soundness for polyregular functions +In this section, we prove the soundness implication in The- +orem 5.3. We prove that every strongly derivable function +is a graded mso interpretations. The prime functions from +Figure 1 are quantifier-free, and therefore they are a special +case of graded mso interpretations. The extra prime func- +tions from Theorem 5.3, namely absorption and those about ! +commuting with the remaining type constructors, are easily +seen to be the graded mso interpretations. The combinators +for functoriality are also easily seen to preserve graded mso +interpretations. There are two interesting cases, namely the +combinators for function composition and safe fold. +C.1 +Function composition +We first show that the graded mso interpretations are closed +under composition, as long as the input and output types are +graded list types. Consider two graded mso interpretations +Σ +Γ +Δ. +𝑓1 +𝑓2 +We want to show that their composition +𝑓2 ○ 𝑓1 ∶ Σ → Δ +is a graded mso interpretation. Let the corresponding poly- +nomial functors be 𝐹1 and 𝐹2. The key tool is the following +lemma. +Lemma C.1. For every 𝑘,𝑟 ∈ {0, 1, . . .}, the following func- +tion is mso definable. 
+Input A structure 𝐴 ∈ Σ with distinguished elements ¯𝑎 ∈ 𝐴𝑘. +Output The rank 𝑟 mso theory of the tuple 𝐹(¯𝑎) in 𝑓1(𝐴). +Proof +This lemma reduces to closure under composition of mso +interpretations for list types [9, Corollary 8]. The result that +we reduce to is non-trivial, and it depends on the fact that +the input and output types are list types. ◻ +Thanks to the above lemma, we can use a standard com- +position construction, with the polynomial functor for the +composition being the composition 𝐹2○𝐹1 of the correspond- +ing polynomial functors. +C.2 +Safe fold +We are left with showing that graded mso interpretations are +closed under the safe fold combinator. All of the conceptual +pieces are already in place, and we will simply show that the +proof of Theorem 3.2 works, with minor adjustments to take +into account the added generality of graded structures. +Suppose Γ is a type where all grades are < 𝑘, and we apply +the safe fold combinator to graded mso interpretations of +types +!𝑘1 → Γ +and +Γ → Σ → Γ, +yielding a function of type +!𝑘Σ → Γ. +By choice of 𝑘, in the resulting function every element in the +input structure has strictly bigger grade than every element +in the ouput structure. For such functions, the continuity +condition in Definition 5.7 becomes trivial, and there is no + +Conference’17, July 2017, Washington, DC, USA +Mikołaj Bojańczyk (University of Warsaw) +difference between graded and un-graded mso interpreta- +tions. Therefore, in order to prove the soundess of fold, it +is enough to show that following lemma, that applying fold +to a graded mso interpretation yields an (ungraded) mso +interpretation. +Lemma C.2. For every graded mso interpretation +𝛿 ∶ Γ × Σ → Γ, +between graded list types, and every 𝐵0 ∈ Γ, the following +function is an (ungraded) mso interpretation +𝐴 = (︀𝐴1, . . .,𝐴𝑛⌋︀ +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +list of structures in Σ, +with the grades forgotten +↦ +𝐵𝑛 +⃒ +defined based on 𝐴 +as in the proof of Calim 3.4 +. +Proof +Let 𝑚 be the maximal grade that appears in Γ, and let the +polynomial functor in the transition function 𝛿 be +𝐹(𝐴) = 𝐹0(𝐴) + ⋯ + 𝐹𝑚(𝐴) + 𝐴. +By the continuity condition for the graded mso interpretation +𝛿, the elements of grade ℓ in 𝐵𝑛 are the disjoint union of two +sets: +1. grade ℓ elements in 𝐵𝑛−1 or 𝐴𝑛; or +2. 𝐹ℓ applied to grade > ℓ elements in 𝐵𝑛−1 or 𝐴𝑛. +By unfolding the inductive definition of 𝐵𝑛−1 in the first item +of the above description, we see that the elements of grade ℓ +in 𝐵𝑛 are the disjoint union of two sets: +1*. grade ℓ elements in 𝐵0 or 𝐴1, . . .,𝐴𝑛; or +2*. 𝐹ℓ applied to grade > ℓ elements in 𝐵𝑖−1 or 𝐴𝑖 for some +𝑖 ∈ {1, . . .,𝑛}. +We will represent the elements that satisfy 1* or 2* as a subset +of 𝐺ℓ(𝐴) for some polynomial functor 𝐺ℓ. This functor is +defined as follows by induction on ℓ, in reverse order𝑚, . . ., 0. +Suppose that we want to define 𝐺ℓ and assume that we have +already defined 𝐺ℓ′ for ℓ′ > ℓ. (In the induction basis of ℓ = 𝑚 +the assumption is empty.) To represent the elements in item +1*, we use the functor +𝐴 + 1 + ⋯ + 1 +)︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ +number of elements +in 𝐵0 that have grade ℓ +. +A tempting idea for item 2* is to use the functor +𝐻ℓ(𝐴) = 𝐹ℓ(𝐺ℓ+1(𝐴) + ⋯ +𝐺𝑚(𝐴) + 𝐴 +⟩︀ +represents elements of grade > 𝑚 +in the input structure +). +Unfortunately, this idea is not correct. 
The reason is that in item 2*, there is a disjoint union ranging over 𝑖 ∈ {1, . . .,𝑛}, and the disjointness of this union is not taken into account by 𝐻ℓ. The problem is that the universes of the structures 𝐵0, . . .,𝐵𝑛 are not disjoint, and the functor 𝐻ℓ can incorrectly identify elements that are obtained by applying 𝐹ℓ to the same elements that appear in both 𝐵𝑖 and 𝐵𝑗 for 𝑖 ≠ 𝑗. To eliminate this problem, we will add an explicit identifier for the index 𝑖 to the functor. To view the index 𝑖 as an element of the input structure 𝐴𝑖, we use the first element in the universe of the corresponding list element 𝐴𝑖. Here, when we refer to the first element in the universe, we mean the natural linear order on the universe in a structure from a graded list type, which arises from the ordered nature of lists and pairs. Therefore, instead of 𝐻ℓ(𝐴), to represent item 2* we use the product 𝐴 × 𝐻ℓ(𝐴), with the 𝐴 part representing the index 𝑖. Summing up, the functor 𝐺ℓ that describes elements in each 𝐵𝑖 is

𝐺ℓ(𝐴) = 𝐴 + 1 + ⋯ + 1 + 𝐴 × 𝐻ℓ(𝐴),

with one summand 1 for each element of 𝐵0 that has grade ℓ.

In the rest of this proof, we will view the universe of 𝐵𝑛 as being a subset of

𝐺(𝐴) = 𝐺0(𝐴) + ⋯ + 𝐺𝑚(𝐴),

with 𝐺ℓ(𝐴) representing the elements of grade ℓ. The polynomial functor 𝐺(𝐴) will be the polynomial functor for the mso interpretation in the conclusion of the lemma. To conclude the proof of the lemma, we need to show that in mso we can define which elements of 𝐺(𝐴) belong to the universe of 𝐵𝑛, and what relations from the output vocabulary are satisfied by tuples of such elements. In other words, we need to define in mso the quantifier-free theory of tuples from 𝐺(𝐴) in the output structure. This is done in the following claim, which completes the proof of the lemma.

Claim C.3. For every ℓ,𝑘 ∈ {0, 1, . . .} the following function is mso definable:

● Input. A structure 𝐴 ∈ Σ∗ with elements ¯𝑎 ∈ 𝐴𝑘.
● Output. The quantifier-free theory of 𝐺(¯𝑎) in 𝐵𝑛⋃︀ℓ.

Furthermore, the output depends only on 𝐴 and ¯𝑎 restricted to elements of grade at least ℓ.

Proof
Fix some ℓ and 𝑘 as in the statement of the claim. The claim is proved by induction on ℓ, in reverse order 𝑚, . . ., 0. Suppose that we want to prove the claim for some grade ℓ, and assume that it has already been proved for strictly bigger grades.

We use the same idea as in the proof of Claim 3.4. Consider a finite automaton, in which the states are all possible theories that arise by taking some 𝑘-tuple ¯𝑎, and returning the quantifier-free theory of 𝐺(¯𝑎) in some structure from Γ. This set of states is finite, since the length of the tuple and the vocabulary are fixed.

We will design an automaton with this set of states, together with an input string (which will be called the advice string), so that it satisfies the following invariant: after reading the first 𝑖 letters of the advice string, the state of the automaton is the quantifier-free theory of 𝐺(¯𝑎) in 𝐵𝑖⋃︀ℓ.

The initial state of the automaton is determined by the invariant: it must be the quantifier-free theory of 𝐺(¯𝑎) in 𝐵0. Since the universe of 𝐵0 is equal to 𝐺(∅), it follows that the initial state does not depend on the tuple ¯𝑎 or the input structure 𝐴.

We now describe the transition function of the automaton, as well as the advice string.
By unfolding the definition of the graded mso interpretation 𝛿, there is some quantifier rank 𝑠 such that the state of the automaton after reading 𝑖 letters is uniquely determined by the following four pieces of information:

1. the quantifier-free theory of 𝐺(¯𝑎) in 𝐵𝑖−1,
2. the quantifier-free theory of 𝐺(¯𝑎) in 𝐴𝑖,
3. the rank 𝑠 mso theory of 𝐺(¯𝑎) in 𝐵𝑖−1⋃︀ℓ + 1,
4. the rank 𝑠 mso theory of 𝐺(¯𝑎) in 𝐴𝑖⋃︀ℓ + 1.

The first piece of information is the previous state of the automaton. The remaining information will be stored in the advice string; i.e. the 𝑖-th letter of the advice string will contain the information described in the last three items above. Note that the advice string can be computed in mso, by the induction assumption. Therefore, since the automaton can be simulated in mso, it follows that the last state of this automaton can be defined in mso, thus proving the claim. ◻
◻

D Proof of Theorem 6.1

In this section, we prove that the system in Theorem 5.3 is sound and complete with respect to linear regular functions.

Soundness. The soundness proof follows the same lines as the soundness proof in Theorem 5.3. The general idea is that we use graded mso interpretations where all components have dimension at most one. This, however, on its own is not going to be enough. To see why, let us compare the two absorption functions

!Σ → Σ × !Σ (not allowed)        !Σ → Σ × Σ (allowed).

Both of them have linear size increase – each element of the input structure contributes two copies to the output structure. What is wrong with the function that is not allowed? The problem is that one of the copies has the same grade, and the other has lower grade. In the presence of folding, we can get an unbounded number of copies, by spawning a new lower grade copy in each iteration. This phenomenon will not occur in the allowed function, since both copies have lower grade. The phenomenon discussed above is formalised in the following definition:

Definition D.1. A linear graded mso interpretation is a graded mso interpretation in which the underlying functor is linear, i.e. all components have dimension one, and which furthermore satisfies the following downgrading condition: if an element of the input structure has at least two copies in the output structure, then all of the copies have strictly lower grade.

In the definition above, the copies of an element in the output structure are defined in the natural way; this definition makes sense when the functor is linear. For example, if the functor is

𝐴 + 𝐴 + 𝐴 + 1 + 1

then each input element spawns at most three copies. The components of dimension zero, of which there are two in the above example, are not counted as copies of any input element.

To prove soundness of the system from Theorem 6.1, we show that all functions that are strongly derived in it are linear graded mso interpretations. The proof is a simple induction on the derivation. The most interesting cases are composition and folding. For composition, we simply observe that the condition on lower grades from Definition D.1 is preserved under composition.

We are left with folding, where we use the following lemma, which is the same as Lemma C.2 except that the functions in the assumption and conclusion are required to be linear.
In the assumption, we use linearity as defined in Definition D.1; in particular, the downgrading condition is assumed. In the conclusion we have an ungraded function, and therefore only linearity of the functor, and not the downgrading condition, is assumed.

Lemma D.2. For every linear graded mso interpretation

𝛿 ∶ Γ × Σ → Γ,

between graded list types, and every 𝐵0 ∈ Γ, the following function is an (ungraded) linear mso interpretation:

𝐴 = (︀𝐴1, . . .,𝐴𝑛⌋︀ (a list of structures in Σ, with the grades forgotten) ↦ 𝐵𝑛 (defined based on 𝐴 as in the proof of Claim 3.4).

Proof
We use the same proof as in Lemma C.2. However, there is one difficulty, which is that the functor 𝐺 defined in that proof is not linear, even if 𝛿 is linear. This is because of the product 𝐴 × 𝐻ℓ(𝐴) which is used to code indexes. In fact, the functor 𝐺 can have arbitrarily high dimension. However, thanks to the downgrading condition on 𝛿, one can show by induction that for every grade ℓ there is some constant 𝑐ℓ ∈ {0, 1, . . .} such that for every grade ℓ element 𝑎 in the input structure, there are at most 𝑐ℓ elements in the output structure which use 𝑎. Here, we say that an element uses 𝑎 if it belongs to 𝐺(𝐴) but not to 𝐺(𝐴 ∖ {𝑎}). Using this property, we can turn 𝐺 into a linear functor. ◻

This finishes the soundness proof. Below, we give two completeness proofs.

First completeness proof. This proof uses the sst model from Example 7, which is complete for linear regular functions, in the case where the input and output types are strings over finite alphabets [1, Theorem 3]. In Example 7, we show how to weakly derive every sst that uses each input letter at most once. To get the general form of sst, where an input letter can be used a constant number of times, it is enough to generalize the model from Example 7 so that the initial function is weakly derivable, and the transition function can be derived in type

Δ × !𝑘Σ → Δ

for some 𝑘. With these relaxations, we get all copyless sst, and retain weak derivability. This proof works only for functions of string-to-string type (admittedly, this is the case that we really care about), and for this reason we also present a second proof, which can also handle types such as strings of strings or pairs of strings.

Second completeness proof. In this proof, similarly to the completeness proof from Theorem 5.3, we reduce to a known complete system. In the case of linear mso interpretations, the corresponding known system is from [7]. It is the same as in Theorem B.1, except that the split function is removed. In the completeness proof of Theorem 5.3, only the proof for split used general absorption (as opposed to linear absorption). Therefore, the system with linear absorption is complete for the linear regular functions.

This completes the second completeness proof, and thus also the proof of the theorem.
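To close this appendix, here is a small Haskell illustration of the sst-as-fold view used in the first completeness proof above: running an sst is a left fold of its transition function over the input string, and the output is read off the final register contents. The sketch is informal; it does not track the ! modality or grades, and the two transducers shown (reversal and duplication) are only examples of the model, not part of the proof.

-- Two streaming string transducers written as folds of their transition
-- functions (an informal sketch; grades and the ! modality are ignored).

-- Reversal: one register; each input letter is used exactly once.
reverseSst :: [c] -> [c]
reverseSst = foldl (\reg a -> a : reg) []

-- Duplication w ↦ ww: two registers; each input letter is used twice,
-- i.e. a constant number of times, as in the generalized model above.
duplicateSst :: [c] -> [c]
duplicateSst w = let (u, v) = foldl step ([], []) w in u ++ v
  where
    step (x, y) a = (x ++ [a], y ++ [a])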
+ diff --git a/DNE4T4oBgHgl3EQfew0W/content/tmp_files/load_file.txt b/DNE4T4oBgHgl3EQfew0W/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..fb6dbac9ace94a15a460fb78d1ead84233a70f68 --- /dev/null +++ b/DNE4T4oBgHgl3EQfew0W/content/tmp_files/load_file.txt @@ -0,0 +1,1568 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf,len=1567 +page_content='Folding interpretations Mikołaj Bojańczyk (University of Warsaw) Abstract We study the polyregular string-to-string functions, whi ch are certain functions of polynomial output size that can be described using automata and logic.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' We describe a system of combinators that generates exactly these functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Unlike previous systems, the present system includes an iteration mechanism, namely fold.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Although unrestricted fold can define all primitive recursive functions, we identify a type system (inspired by linear logic) that restricts fold so that it defines exactly the polyregular functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' We also present related systems, for quantifier-free functions as well as for linear regular functions on both strings and trees.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' ACM Reference Format: Mikołaj Bojańczyk (University of Warsaw).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' 2023.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Folding interpreta- tions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' In Proceedings of ACM Conference (Conference’17).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' ACM, New York, NY, USA, 24 pages.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='1145/nnnnnnn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='nnnnnnn 1 Introduction This paper is about transducers that compute string-to-string functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' (We also have some results on trees, but trees will be discussed only at the end of the paper.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' ) We are interested in two classes of functions: the linear regular functions1, which have linear output size, and the polyregular functions, which have polynomial output size.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Both classes can be de- scribed by many equivalent models, and have robust closure properties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Let us begin with the more established class of linear regular functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Two typical example functions from this class are: (︀1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' 2,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' 3⌋︀ ↦ (︀1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' 2,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' 3,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' 1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' 2,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' 3⌋︀ )︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ duplicate (︀1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' 2,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' 3⌋︀ ↦ (︀3,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' 2,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' 1⌋︀ )︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ reverse .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The linear regular functions can be described by many equiv- alent models, including: deterministic two-way automata with output [23, Note 4], mso transductions [13, Section 4], 1These are usually called the regular functions in the literature, but we add the word “linear” to distinguish them from the polyregular functions.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Copyrights for components of this work owned by others than ACM must be honored.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Abstracting with credit is permitted.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Request permissions from permissions@acm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='org.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Conference’17, July 2017, Washington, DC, USA © 2023 Association for Computing Machinery.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' ACM ISBN 978-x-xxxx-xxxx-x/YY/MM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='$15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='00 https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='1145/nnnnnnn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='nnnnnnn streaming string transducers [1, Section 3], an extension of regular expressions [3, Section 2], and a calculus based on combinators [7, Theorem 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The many equivalent models, as well as the robustness and good decidability properties of the underlying class, are comparable to similar properties for the regular languages, which also have many equivalent descriptions, including automata, logic and regular expres- sions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' For this reason, the linear regular functions have been intensively studied in the last decade.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The second class is the polyregular functions, which ex- tended the linear regular functions by allowing polynomial growth, including functions such as the squaring operation (︀1, 2, 3⌋︀ ↦ (︀1, 2, 3, 1, 2, 3, 1, 2, 3⌋︀.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Similarly to the linear regular functions, the polyregular func- tions can also be described by multiple models, including: string-to-string pebble transducers, which are introduced in [14, Section 1] based on [15, Definition 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='5] and [21, Sec- tion 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='1], as well as an imperative programming language [5, Section 3], a functional programming language [5, Section 4], and a polynomial extension of mso transductions [9, Def- inition 2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' For a survey of the polyregular functions, see [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Combinators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' This paper studies the linear regular and polyregular functions by using systems based on prime func- tions and combinators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' This approach dates back to the Krohn-Rhodes Theorem [19, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' 454], and was first applied to linear regular functions in [7], by describing them in terms of certain prime functions, such as 1 + Σ × Σ∗ → Σ∗ list constructor, and combinators such as Σ → Γ Γ → Δ Σ → Δ function composition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' This system is further extended in [5, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' 64] to cover the polyregular functions, by adding extra prime functions of non-linear output size, such as the squaring operation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The systems in [5, 7] have no constructions for iteration;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' because of this design decision, the hard part is proving com- pleteness: every function of interest can be derived in the system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' One reason for avoiding iteration is to have a mini- mal system.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Another reason is that iteration constructions are powerful, and as we find out in this paper, it is hard to add them while retaining soundness (only functions of interest can be derived).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The fold combinator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' In this paper, we take the opposite approach, by studying an iteration construction, namely the arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='05101v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='LO] 12 Jan 2023 Conference’17, July 2017, Washington, DC, USA Mikołaj Bojańczyk (University of Warsaw) fold combinator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' This combinator can be written as a rule 1 → Γ Γ × Σ → Γ Σ∗ → Γ fold.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The assumption of this rule can be seen as a deterministic automaton with input alphabet Σ and state space Γ, given by its initial state and transition function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' In the conclusion of the rule, we have the function that maps an input string to the last state of the run of the automaton.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The input alphabet and the state space need not be finite, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' the state space Γ could be the set 1∗ which represents the natural numbers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Folding is a fundamental construction in functional pro- gramming languages.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' For example, the fold combinator arises canonically from the inductive definition of the list type [18, Section 3].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Unfortunately, there is a price to pay for the power and elegance of the fold combinator: one can use it to derive all primitive recursive functions [18, Section 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='1].' 
Therefore, without any further restrictions, the fold combinator falls outside the scope of automata techniques, or any other techniques that can be used to decide semantic properties of programs, such as the halting problem. This paper is devoted to identifying restrictions on the fold combinator that tame its expressive power. These restrictions are presented as a typing system, which ensures that applications of fold will stay in the class of polyregular functions. In particular, the resulting class of functions shares the decidability properties of the polyregular functions, e.g. one can decide if a function produces a nonempty output for at least one input. There are two main contributions in the paper.

Quantifier-free interpretations. The first contribution is to identify the quantifier-free interpretations as an important class of functions in the context of fold. These are functions on structures in which the universe of the output is a subset of the universe of the input (in particular, the output size is linear), and all relations in the output structure are defined using quantifier-free formulas. In Theorem 3.2 we show that applying the fold combinator to a quantifier-free interpretation yields a function that, although not necessarily quantifier-free, is at least linear regular. This result subsumes several existing results, in particular those about mso definability of streaming transducers [2, 3].
Although quantifier-free interpretations are rather weak, they can describe most natural transformations that are used as primes in the calculi from [5, 7]; the remaining primes can then be derived using fold. Having identified the importance of quantifier-free functions, in Theorem 4.1 we present a system of prime functions and combinators that derives exactly the quantifier-free functions. The completeness proof of this system is the longest proof in the paper. The quantifier-free system does not allow fold; fold is used in the next part of the paper, about polyregular functions.

Safe fold. The second main contribution is a type system that tames the power of fold. This system uses a type constructor ! and bears certain similarities to the parsimonious calculus of Mazza [20, Section 2.2]. The latter is part of a field called implicit computational complexity, which seeks to describe complexity classes using type systems. An influential example of this kind is a system of Bellantoni and Cook [4], which characterizes polynomial time. The present paper can be seen as part of implicit computational complexity, which targets regular languages instead of Turing complete models, such as logarithmic space or polynomial time. For a more detailed discussion of the connections between regular languages and λ-calculus, including a pioneering application of linear types, see [22].
The usual application of ! is to restrict duplication, and this paper is no exception, as in the following example:

    x ↦ (x, x)      (not allowed)        !x ↦ (!x, x)      (allowed)

However, apart from restricting duplication, ! is also used in this paper to restrict another, more mysterious, resource, namely quantifiers. The idea is that our system uses ! to describe functions that are not necessarily quantifier-free, but are similar enough to quantifier-free functions so that the fold combinator can be applied to them. The second main contribution of this paper is Theorem 5.3, which characterizes the polyregular functions using certain prime functions and combinators, in which the types involve ! and one of the combinators is fold. In Theorem 6.1 we also show that if we further restrict duplication, so that

    !x ↦ (!x, x)      (not allowed)        !x ↦ (x, x)      (allowed)

then the resulting system derives exactly the linear regular functions. A rough functional-programming analogy of this duplication discipline is sketched below.
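As a very loose analogy only (this is not the paper's calculus), GHC's LinearTypes extension can express a similar duplication discipline: a hypothetical wrapper Bang, defined by us, plays the role of !, and only wrapped values may be duplicated.

```haskell
{-# LANGUAGE LinearTypes, GADTs #-}

-- A loose analogy to the duplication discipline above (our own sketch, not
-- the paper's type system).  The wrapper Bang plays the role of !: its field
-- is declared with an unrestricted arrow, so a Bang-ed value may be reused.
data Bang a where
  Bang :: a -> Bang a

-- Rejected by the linear type checker, mirroring  x ↦ (x, x)  "not allowed":
-- dupPlain :: a %1 -> (a, a)
-- dupPlain x = (x, x)

-- Accepted, mirroring  !x ↦ (!x, x)  "allowed":
dupBang :: Bang a %1 -> (Bang a, a)
dupBang (Bang x) = (Bang x, x)
```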
Finally, we also show that the results about the linear case can be extended from strings to trees without much difficulty.

2 Interpretations

In this section, we describe the polyregular functions. Among several equivalent definitions of the polyregular functions, our point of departure in this paper will be a definition that uses mso interpretations [9, Section 2].

2.1 Definition of mso interpretations

We assume that the reader is familiar with basic notions of monadic second-order logic mso, see [17] for an introduction. We only describe the notation that we use. A vocabulary consists of a finite set of relation names, each one with an associated arity in {0, 1, ...}. Note that we allow nullary relations, i.e. relations of arity zero; such a relation takes no arguments and is "true" or "false" in each structure. A structure over such a vocabulary consists of a finite nonempty set, called the universe of the structure, and an interpretation of the vocabulary, which associates to each relation name in the vocabulary a relation over the universe of matching arity. The syntax and semantics of first-order logic and mso are defined in the usual way.
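To fix intuitions, here is one concrete and entirely non-canonical way such finite structures could be represented in code; the type name Structure, the field names, and the convention that a nullary relation holds when it contains the empty tuple are our own choices.

```haskell
import qualified Data.Map as Map

-- A finite relational structure: a universe of numbered elements together
-- with, for each relation name, the set of tuples that satisfy it.  A nullary
-- relation is "true" iff its only possible tuple, the empty one, is present.
data Structure = Structure
  { universe  :: [Int]
  , relations :: Map.Map String [[Int]]   -- relation name ↦ satisfied tuples
  } deriving Show

-- A structure with a nullary relation "flag" set to true and a unary
-- relation "marked" holding of element 2.
exampleStructure :: Structure
exampleStructure = Structure
  { universe  = [1, 2, 3]
  , relations = Map.fromList [("flag", [[]]), ("marked", [[2]])]
  }
```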
Whenever we speak of a class of structures, all structures in the class must be over the same vocabulary, and the class must be closed under isomorphism. The structures considered in this paper will be used to describe finite strings and similar objects, such as pairs of strings, or strings of pairs of strings.

Intuitive description. We begin with an intuitive description of string-to-string mso interpretations. Following the classical Büchi-Elgot-Trakhtenbrot correspondence of automata and mso logic, we view strings as structures.

Definition 2.1. A string in Σ∗ is viewed as a structure whose universe is the string positions, equipped with the relations

    x ≤ y    (order on positions)
    a(x)     (x has label a ∈ Σ).
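Definition 2.1 is easy to trace in code. The sketch below is our own encoding: positions are numbered from 0, the order relation is called "leq", and each letter a gets a unary relation named "label_a".

```haskell
import Data.List (nub)
import qualified Data.Map as Map

-- Definition 2.1, concretely: a string becomes a structure whose universe is
-- its set of positions, with the order x ≤ y and one unary label relation per
-- letter that occurs in the string.  (Our own naming conventions.)
stringAsStructure :: String -> ([Int], Map.Map String [[Int]])
stringAsStructure w = (positions, Map.fromList (orderRel : labelRels))
  where
    positions = [0 .. length w - 1]
    orderRel  = ("leq", [ [x, y] | x <- positions, y <- positions, x <= y ])
    labelRels = [ ("label_" ++ [a], [ [x] | (x, b) <- zip positions w, b == a ])
                | a <- nub w ]
```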
A string-to-string mso interpretation transforms strings using the above representation, such that the positions of the output string are represented by k-tuples of positions in the input string, for some k ∈ {0, 1, ...}. The order on output positions is defined by a formula φ(x_1, ..., x_k, y_1, ..., y_k) with 2k free variables, where x_1, ..., x_k describe the first output position and y_1, ..., y_k the second output position, while the labels of the output positions are defined by formulas with k free variables, one for each letter in the output alphabet. (Footnote 2: For reasons described in [9, Theorem 4], the string positions are equipped with a linear order x ≤ y instead of the successor x = y + 1.) Finally, not all k-tuples of input positions need to participate in the output string; there is a formula with k free variables, called the universe formula, which selects those that do. All of these formulas need to be consistent – every k-tuple of positions in the input string that satisfies the universe formula must satisfy exactly one of the label formulas, and these k-tuples need to be linearly ordered by the order formula. Consistency is decidable, since it boils down to checking if some mso formula is true in all strings, which in turn boils down to checking if an automaton is nonempty, by the equivalence of mso and regular languages.

Formal definition. We now give a formal definition of mso interpretations. The formal definition generalizes the above intuitive description in two ways of minor importance. First, the definition is presented not just for strings, but for general classes of structures; we intend to apply it to mild generalizations of strings, such as pairs of strings or strings of strings. Second, instead of the universe being k-tuples of some fixed dimension, it is created using a polynomial functor, which is an operation on sets of the form

    F(A) = A^{k_1} + ⋯ + A^{k_n}.    (1)

Typical polynomial functors include the identity functor A, or the functor A^2 + A^2 that produces two copies of the square of the input set.
We use the following terminology for polynomial functors: each A^{k_i} is called a component of the polynomial functor, and k_i ∈ {0, 1, ...} is called the dimension of this component. This extra generality of polynomial functors makes the definition more robust; it will be useful in a more refined analysis of mso interpretations that will appear in Section 5.3. (Footnote 3: One can reduce the polynomial functor in an mso interpretation to a single component A^k, at the cost of increasing the dimension k. This works for input structures with at least two elements. For this reason, [9] uses interpretations with just one component.) In the case of linear functors (where all components have dimension at most one), the components correspond to the copies in an mso transduction [13, p. 230]. In an mso interpretation, the polynomial functor is used to define the universe of the output structure; if A is an input structure then the elements of F(A) are called output candidates. A subset of the output candidates will be the universe of the output structure. This subset is defined using an mso query of type F, which is a family of mso formulas, with one formula for each component in the functor, such that the number of free variables in each formula is the dimension of the corresponding component.
Here are some examples:

    A^0 = 1      – a query of this type is a formula without free variables
    A^4          – a query of this type is a formula with four free variables
    A^2 + A^2    – a query of this type is two formulas with two free variables each.

The relations in the output structure are also defined using mso queries, with a relation of arity m defined using a query of type

    F^m(A)  :=  F(A) × ⋯ × F(A)    (m times).

The above type is also a polynomial functor, since polynomial functors are closed under taking products, e.g. the product of A^2 and A + 1 is A^3 + A^2. The discussion above is summarized in the following definition.

Definition 2.2 (mso interpretation). A function f : Σ → Γ between two classes of structures is called an mso interpretation if:
1. Universe. There is a polynomial functor F and an mso query of type F such that for every input structure A ∈ Σ, the universe of the output structure is the subset of the output candidates F(A) defined by this query; and
2. Relations. For every relation name R in the vocabulary of the output class, of arity m, there is an mso query of type F^m, which defines the interpretation of R in every output structure.
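The output candidates of Definition 2.2 are easy to enumerate when the input universe is finite. In the sketch below (our own rendering; the names PolyFunctor, Candidate, outputCandidates and outputUniverse are ours), a polynomial functor is given by the list of its component dimensions, and an ordinary Boolean predicate stands in for the mso universe query, which is a simplification.

```haskell
import Control.Monad (replicateM)

-- A polynomial functor F(A) = A^{k_1} + ... + A^{k_n}, described by the list
-- of its component dimensions [k_1, ..., k_n].
type PolyFunctor = [Int]

-- An output candidate: which component it comes from, and the tuple itself.
type Candidate = (Int, [Int])

-- All output candidates F(A) over a finite universe A.
outputCandidates :: PolyFunctor -> [Int] -> [Candidate]
outputCandidates dims univ =
  [ (i, tup) | (i, k) <- zip [0 ..] dims, tup <- replicateM k univ ]

-- The universe of the output structure: the candidates selected by a
-- "universe query" (here a predicate standing in for an mso formula).
outputUniverse :: PolyFunctor -> [Int] -> (Int -> [Int] -> Bool) -> [Candidate]
outputUniverse dims univ holds =
  [ c | c@(i, tup) <- outputCandidates dims univ, holds i tup ]
```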
A string-to-string mso interpretation is the special case of the above definition where the input type is Σ∗ for some finite alphabet Σ, and the output type is Γ∗ for some finite alphabet Γ.

Example 1. Consider the squaring operation on strings [1, 2, 3] ↦ [1, 2, 3, 1, 2, 3, 1, 2, 3]. Suppose that the input alphabet is Σ. This function is defined by an mso interpretation as follows. The functor F is A^2, and the universe formula is "true", which means that the positions of the output string are all pairs of positions in the input string. The order formula describes the lexicographic order on A^2. Finally, the label of an output position is inherited from the input position on the second coordinate. ◻
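Example 1 can be replayed directly in code: take all pairs of input positions in lexicographic order and read the label off the second coordinate. The sketch below (the name squaring is ours) just computes the resulting output word.

```haskell
-- Example 1 replayed concretely (our own sketch): the output positions are
-- all pairs (i, j) of input positions, listed in lexicographic order, and the
-- label of (i, j) is the input letter at the second coordinate j.
squaring :: [a] -> [a]
squaring w = [ w !! j | _i <- positions, j <- positions ]
  where positions = [0 .. length w - 1]

-- squaring [1, 2, 3] == [1, 2, 3, 1, 2, 3, 1, 2, 3]
```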
2.2 String types

We are ultimately interested in functions that input and output strings over a finite alphabet. However, to create such functions using primes and combinators, it will be convenient to have more structured types for the simpler functions, such as pairs of strings. The idea to use such structured types comes from [7]; in particular, we use the same types, as described in the following definition.

Definition 2.3 (List types). A list type is any type constructed using the constructors

    1          (a type with one element)
    Σ1 × Σ2    (pairs)
    Σ1 + Σ2    (co-pairs, i.e. disjoint union)
    Σ∗         (lists).

An example of a list type is (1 + 1 + 1)∗. This type can be seen as the type of strings over a three letter alphabet; in this way the list types generalize strings over finite alphabets. The generalization is minor, since elements of a list type can be seen as strings over a finite alphabet, which uses brackets and commas as in the following example: ([left 1, right 1, left 1], 1) is an element of the list type (1 + 1)∗ × 1.
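List types have an immediate reading in a typed functional language. Below is that reading (our own, with () for 1, pairs for ×, Either for +, and lists for ∗), together with the example element above; the names are ours.

```haskell
-- The list types of Definition 2.3, read as ordinary Haskell types:
--   1        ~ ()
--   Σ1 × Σ2  ~ (s1, s2)
--   Σ1 + Σ2  ~ Either s1 s2
--   Σ∗       ~ [s]

-- The type (1 + 1 + 1)∗, i.e. strings over a three-letter alphabet:
type ThreeLetterString = [Either () (Either () ())]

-- The example element ([left 1, right 1, left 1], 1) of the type (1 + 1)∗ × 1:
exampleElement :: ([Either () ()], ())
exampleElement = ([Left (), Right (), Left ()], ())
```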
Structures for list types. We will be interested in mso interpretations that transform one list type into another. We could simply represent list types as strings over a finite alphabet in the way described above, and then use mso interpretations on strings over a finite alphabet. The resulting definition would be equivalent to the one that we will use in the paper. However, we choose to use a direct representation of list types as structures, without passing through a string encoding. The reason is that quantifiers would be needed to go between list types and their string encodings, and in this paper we will be particularly interested in quantifier-free interpretations.

Definition 2.4. To each list type we associate a class of structures, which is defined by induction as follows.
(1) The class 1 contains only one structure; this structure has one element in its universe and no relations.
(+) The vocabulary of the class Σ1 + Σ2 is the disjoint union of the vocabularies of the classes Σ1 and Σ2, plus one new nullary relation name (i.e. arity zero). A structure in this class is obtained by taking a structure in either of the classes Σ1 or Σ2, extending the vocabulary to the vocabulary of the other class by using empty sets, and interpreting the new nullary relation as "true" or "false" depending on whether the structure is from Σ1 or Σ2.
(×) The vocabulary of the class Σ1 × Σ2 is the disjoint union of the vocabularies of the classes Σ1 and Σ2, plus one new unary relation name (i.e. arity one).
A structure in this class is obtained by taking the disjoint union (defined in the natural way) of two structures, one from Σ1 and one from Σ2, and using the new unary relation name to select the elements from the first structure.
(∗) The general idea is that a structure in the class Σ∗ is obtained by taking a list [A_1, ..., A_n] of nonempty structures in Σ (see footnote 4 below), creating a new structure using disjoint union (with a shared vocabulary), and adding a new binary relation x ≤ y which holds whenever the structure containing x appears earlier in the list (or in the same place) than the structure containing y. The problem with this construction is that it would mix nullary relations that come from different structures in the list. To fix this problem, each nullary relation name R() in the vocabulary of Σ is changed into a unary relation name R(x) that selects the elements x such that the corresponding structure satisfies R().

(Footnote 4: A structure is nonempty if its universe is nonempty. This leads to the following subtle point, which arises when considering lists of lists and related structures. Since a list can be empty, it follows that we do not allow lists of empty lists such as [[], [], []]. This means that the list constructor, as it is used in this paper and formalized in Definition 2.4, should be interpreted as possibly empty lists with nonempty list items. This distinction will not play a role for types such as (1 + 1)∗ where list elements cannot be empty, which is the case that we really care about.)

If we apply the above representation to a list type (1 + ⋯ + 1)∗, where the sum has n copies of 1, then we get the representation of strings as ordered structures from Definition 2.1, with the exception that the empty string has a universe with one element. Therefore, it is not important if we use Definition 2.1 or 2.4 for representing strings.
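For this special case, the construction can be traced in code: an element of (1 + 1)∗ contributes one universe element per list item, the order ≤ compares positions of the items, and the nullary relation distinguishing the two summands of 1 + 1 turns into a unary relation on positions. The sketch below is our own rendering of this single instance, not the general inductive definition; the names listAsStructure, "leq" and "right" are our choices.

```haskell
import qualified Data.Map as Map

-- Definition 2.4 traced on the list type (1 + 1)∗ (our own sketch of this one
-- instance).  Each list item contributes one universe element; the binary
-- relation "leq" orders the items; the nullary relation of 1 + 1 (which
-- summand we are in) becomes the unary relation "right" on positions.
-- The empty-string exception discussed above is ignored here.
listAsStructure :: [Either () ()] -> ([Int], Map.Map String [[Int]])
listAsStructure xs = (positions, Map.fromList [("leq", leq), ("right", right)])
  where
    positions = [0 .. length xs - 1]
    leq       = [ [x, y] | x <- positions, y <- positions, x <= y ]
    right     = [ [x] | (x, Right ()) <- zip positions xs ]
```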
Definition 2.5. A polyregular function is a function f : Σ → Γ between list types that can be defined by an mso interpretation, assuming that list types are viewed as classes of structures according to Definition 2.4.

The original definition of the polyregular functions [5] did not use mso interpretations; however, mso interpretations were shown equivalent to the original definition in [9, Theorem 7]. Since the original definition was closed under composition, it follows that mso interpretations are closed under composition (as long as the input and output classes are list types).

3 The fold combinator

In this section, we discuss the dangers of the fold combinator

    1 → Γ    Γ × Σ → Γ
    -------------------  (fold)
          Σ∗ → Γ

We also explain how some of the dangers can be avoided by using quantifier-free interpretations. We begin this section with several examples illustrating the usefulness of fold.
Example 2. Consider a finite automaton with n states and an input alphabet of m letters. Assuming some order on the states and the alphabet, the transition function can be seen as a function between finite string types

    (1 + ⋯ + 1) × (1 + ⋯ + 1)  →  1 + ⋯ + 1,

where the two factors have n and m summands respectively, and the result has n summands. If we apply fold to this automaton, under some chosen initial state, then we get the function that inputs a string and returns the last state in the run. A special case of this construction is when both the states and the input letters of the automaton are elements of some finite group G, the initial state is the group identity, and the transition function is the group operation. By folding this transition function, we get the group multiplication function of type G∗ → G, which is one of the (less appealing) prime functions in the combinatory calculus from [5]. ◻
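Example 2 is easy to make concrete. In the sketch below (our own; the choice of group and the names are ours), the group is the integers modulo 3, the initial state is the group identity 0, and folding the group operation gives the group multiplication function of type G∗ → G.

```haskell
-- Example 2 made concrete (our own sketch): states and letters are the
-- integers modulo 3, the initial state is the identity 0, and the transition
-- function is the group operation.  Folding it gives group multiplication.
type G = Int  -- elements of Z/3Z, represented as 0, 1, 2

groupMultiplication :: [G] -> G
groupMultiplication = foldl (\state letter -> (state + letter) `mod` 3) 0

-- groupMultiplication [1, 2, 2, 1] == 0
```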
Example 3. There are two symmetric list constructors:

    1 + Σ∗ × Σ → Σ∗    (lists are constructed by adding letters to the right of the list)
    1 + Σ × Σ∗ → Σ∗    (lists are constructed by adding letters to the left of the list).

If we apply fold to the two corresponding automata, then we get the reverse and identity functions on lists, respectively. The fold combinator corresponds in a canonical way to the first list constructor, which is why it is sometimes called fold right. ◻

3.1 On the dangers of folding

We now present two examples which show how the fold combinator, without any further restrictions, can define functions that are not polyregular. More generally, one can use fold to derive any primitive recursive function [18, Section 4.1].

Example 4 (Iterating duplication). Consider an automaton where the input alphabet is 1, and the states are 1∗. We view the states as natural numbers, with the list 1^n of length n representing the number n. The initial state in this automaton is 1, and the transition function is

    (1^n, 1) ∈ 1∗ × 1  ↦  1^{2n} ∈ 1∗.

This is an example of a polyregular function; in fact, it is a linear regular function. However, if we apply fold to it, then we get the function

    1^n ∈ 1∗  ↦  1^{2^n} ∈ 1∗,

which is not polyregular because of exponential growth. ◻
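Example 4 can be run directly: folding the doubling transition over an input of length n produces a state of length 2^n, which is exactly the exponential growth that rules the function out of the polyregular class. A sketch (our own names), with unary lists of unit values playing the role of 1∗:

```haskell
-- Example 4 run concretely (our own sketch).  States are unary lists 1^n; the
-- transition doubles the state and ignores the input letter.  Folding it over
-- an input of length n yields a state of length 2^n.
doubleState :: ([()], ()) -> [()]
doubleState (state, _) = state ++ state

iteratedDuplication :: [()] -> [()]
iteratedDuplication = foldl (curry doubleState) [()]

-- length (iteratedDuplication (replicate 10 ())) == 1024
```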
Example 5 (Subtraction). As illustrated in Example 4, we run into trouble if we iterate duplication. But we can also run into trouble when the transition function does not create any new elements. Consider an automaton where the input alphabet is 1 + 1, and the state space is the integers, represented as the list type

    1∗ + 1∗,

where the first summand represents {−1, −2, ...} and the second represents {0, 1, ...}. The initial state is zero, and the transition function increments or decrements the state depending on which of the two input letters from 1 + 1 it gets. This transition function is easily seen to be polyregular, and it has the property that the output size is at most the input size, assuming that the input letter contributes to the input size. However, by folding this automaton, we get a function that subsumes integer subtraction and is therefore not polyregular. Using similar ideas, one could simulate two-counter machines. ◻

3.2 Quantifier-free interpretations and their folding

As the two examples above show, we have to be careful when applying fold. Clearly we must avoid duplication (Example 4). This can be done by requiring the polynomial functor in the interpretation to be the identity, thus ensuring that the output is no larger than the input. It is less clear how to avoid the problem with Example 5. Our solution is to use quantifier-free interpretations, as defined below.
Definition 3.1. A quantifier-free interpretation is the special case of mso interpretations where the polynomial functor is the identity F(A) = A and all formulas are quantifier-free.

One could consider interpretations in which the formulas are quantifier-free, but the functor is not necessarily the identity; such interpretations will not be useful in this paper. The transition function in Example 5 is not quantifier-free, since decrementing a number, which corresponds to removing a list element, is not a quantifier-free operation. The following theorem is the first main contribution of this paper: fold can be safely applied to quantifier-free interpretations.

Theorem 3.2. Let Σ and Γ be any classes of structures, not necessarily list types. If the transition function δ : Γ × Σ → Γ in the assumption of the fold combinator is a quantifier-free interpretation, then the function in the conclusion is a linear mso interpretation.

Proof. Consider an automaton as in the assumption of the theorem. For an input [A_1, ..., A_n] to this automaton and for i ∈ {0, ..., n},
The state 𝐵0 is the initial state, which is given by the assumption to the fold combinator, and the state 𝐵𝑛 is the last state, which is the output of the function in the conclusion of the fold combinator. Our goal is to compute the last state using a linear mso interpretation. Since the functor in 𝛿 is the identity, the output candidates are simply the elements of the input structure. Therefore, the universe of 𝐵𝑛 is contained in the disjoint union of the universe of 𝐵𝑛−1 and the universe of 𝐴𝑛. By unfolding the induction, the universe of 𝐵𝑛 is contained in the universe of the first state 𝐵0 and the input structure 𝐴 = [𝐴1, . . . ,𝐴𝑛]. Therefore, to prove that the fold is an mso interpretation, it will be enough to show that an mso formula can tell us: (a) which elements of 𝐵0 + 𝐴 belong to the output structure; and (b) which relations of the output structure are satisfied by which tuples from 𝐵0 + 𝐴. The answers to these questions will be contained in the quantifier-free theory of the tuple, as defined below.
Definition 3.3. Let 𝐴 be a structure and let ¯𝑎 be a list of distinguished elements, which need not belong to the universe of 𝐴.
The quantifier-free theory of ¯𝑎 in 𝐴 is the following information: which distinguished elements are in the universe, and which quantifier-free formulas are satisfied by those distinguished elements that are in the universe.
Using the above terminology, to prove that the fold is definable in mso, we need to show that for each tuple in 𝐵0 + 𝐴, we can define in mso the corresponding quantifier-free theory in the output structure 𝐵𝑛. This will be done in the following claim. The key property used by the claim is the following continuity property of quantifier-free interpretations: the quantifier-free theory of a tuple of output candidates in the output structure is uniquely determined by the quantifier-free theory of the same tuple in the input structure.
In the following claim, we consider a function which inputs structures with tuples of 𝑘 distinguished elements, and has finitely many possible output values (quantifier-free theories, in the case of the claim). Such a function is called mso definable if for every chosen output value, there is an mso formula with 𝑘 free variables that selects the inputs which give the chosen output.
Claim 3.4. For every 𝑘 ∈ {1, 2, . . .} and every tuple ¯𝑏 of elements in 𝐵0, the following function is mso definable:
Input. A structure 𝐴 ∈ Σ∗ with elements ¯𝑎 ∈ 𝐴𝑘.
Output. The quantifier-free theory of ¯𝑎¯𝑏 in 𝐵𝑛.
Proof. By the continuity property mentioned earlier in this proof, the quantifier-free theory of ¯𝑎¯𝑏 in 𝐵𝑛 is uniquely determined by the quantifier-free theory of ¯𝑎¯𝑏 in the structure (𝐵𝑛−1,𝐴𝑛), which in turn is uniquely determined (by compositionality) by the quantifier-free theories of ¯𝑎¯𝑏 in the two individual structures 𝐵𝑛−1 and 𝐴𝑛. Therefore, we can think of these quantifier-free theories as being computed by a finite automaton, where the initial state is the quantifier-free theory of ¯𝑏 in 𝐵0, and the input string is [qf theory of ¯𝑎 in 𝐴1, . . . , qf theory of ¯𝑎 in 𝐴𝑛]. By the continuity property, one can design a transition function for this automaton, which does not depend on the input structure 𝐴 or the tuple ¯𝑎, such that its state after reading the first 𝑖 letters is the quantifier-free theory of ¯𝑎¯𝑏 in 𝐵𝑖. The state space of this automaton is finite, since there are finitely many quantifier-free theories once the vocabulary and number of arguments have been fixed. Since finite automata can be simulated in mso, it follows that the last state in the run of this automaton, which is the theory in the conclusion of the claim, can be defined in mso. ◻
We now use the claim to complete the proof of the lemma. The output candidates of the mso interpretation are defined by the polynomial functor 𝐹(𝐴) = 𝐴 + 1 + ⋯ + 1, where the number of copies of 1 is the size of the initial state 𝐵0. In other words, the output candidates are elements of the input list and the initial state. By the above claim, the quantifier-free theory of a single output candidate in the output structure can be defined in mso, and since this theory tells us if the output candidate is present in the universe of the output structure, we can use it to define the universe.
Similarly, if we want to know if a tuple of output candidates satisfies some relation from the output vocabulary, then we can find this information using mso as in the above claim. ◻
On its own, the theorem above does not solve all of the problems with fold. One issue is that the theorem only supports one application of fold, since the folded function is no longer quantifier-free and cannot be folded again. Another issue is that applying the theorem stays within the class of functions that do not increase the output size, while we will also be interested in folding functions that increase the size. These problems will be addressed later in the paper, by developing a suitable type system. Before continuing, we give some applications of the theorem.
Example 6. Consider a transition function of a finite automaton as in Example 2. In a list type of the form 1 + ⋯ + 1, the component of the disjoint union that is used can be accessed by a quantifier-free formula without free variables, since it is represented using nullary relations. Therefore, the transition function is a quantifier-free interpretation, and so we can apply Theorem 3.2 to conclude that the fold is an mso transduction. This corresponds to the inclusion regular languages ⊆ mso. Applying Theorem 3.2 to prove this inclusion is not the right way to prove it, since the inclusion itself is used in the proof of the theorem. ◻
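As a concrete reading of Example 6, the following minimal Haskell sketch (our own, not from the paper) folds the transition function of a two-state automaton; the enumerated type stands in for the finite state space 1 + 1.

data Q = Even | Odd deriving (Eq, Show)   -- finite state space, a co-product 1 + 1

-- Transition function: track the parity of the number of 'a' letters.
delta :: Q -> Char -> Q
delta Even 'a' = Odd
delta Odd  'a' = Even
delta q    _   = q

-- The fold of the transition function is the usual run of the automaton.
run :: String -> Q
run = foldl delta Even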
In Example 6, we applied the fold combinator to a finite automaton. In the following example, we give a more interesting application, where the state space is infinite.
Example 7. [Streaming string transducers] Define a simple streaming string transducer, simple sst for short, as follows. It has two finite alphabets Σ and Γ, called the input and output alphabets. It has a configuration space, which is a list type of the form
Δ = (Γ∗)𝑘1 + ⋯ + (Γ∗)𝑘𝑚.
In other words, the set of configurations is obtained by applying some polynomial functor to the set of strings over the output alphabet. The idea is that a configuration consists of a state, which is one of the 𝑚 components, and a register valuation, which is a tuple of strings over the output alphabet. The configurations of the transducer are updated according to the following three functions, which are required to be quantifier-free, according to the representation of the input and output alphabets that was used in Example 6:
1 → Δ    (initial)
Δ × Σ → Δ    (transition function)
Δ → Γ∗    (final).
The semantics of the transducer is the function of type Σ∗ → Γ∗ that is obtained by folding the first two functions, and post-composing with the final function. By Theorem 3.2, this function is an mso transduction.
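The following is a minimal Haskell sketch (our own illustration, not from the paper) of a simple sst with a single state and a single register, computing string reversal; prepending the input letter to the register is a quantifier-free register update, and the semantics is literally a fold of the transition function post-composed with the final function.

type Config = String               -- one state, one register over the output alphabet

initialC :: Config
initialC = ""                      -- 1 -> Delta

transition :: Config -> Char -> Config
transition r a = a : r             -- the register update r := a . r uses the letter once

final :: Config -> String
final = id                         -- Delta -> Gamma*

runSST :: String -> String
runSST = final . foldl transition initialC

-- runSST "abc" == "cba"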
The model described above subsumes (and in fact, is equivalent to) the classical model of sst [1, Section 3], with the only difference (which is why we call our model simple) being that our model allows the input letter to be used only once (as opposed to a constant number of times) in the registers. This is because string concatenation, which is the operation used to update registers in an sst, is a quantifier-free operation. Therefore, Theorem 3.2 can be seen as subsuming the implication
copyless sst ⊆ deterministic mso transductions
proved in [1, Theorem 3]. The same idea will work for trees, as we will see in Section 6.1. ◻
Example 8. [Graphs] As mentioned in Theorem 3.2, the folded automaton need not operate on classes that are list types. For instance, we could adapt Example 7 to transducers in which the registers, instead of storing strings, store graphs with 𝑘 distinguished vertices, as in Courcelle’s algebras for treewidth [12, Section 1.4]. We could still apply Theorem 3.2, since the corresponding operations on graphs are quantifier-free. Similar ideas would also work for cliquewidth. ◻
4 Deriving quantifier-free functions
As we have shown in Theorem 3.2, the fold combinator can be safely applied to quantifier-free interpretations. Before discussing the fold combinator, we take a minor detour in this section, and present a complete system for the quantifier-free interpretations.
A few examples. We begin with examples and non-examples of quantifier-free interpretations operating on list types.
Example 9. [Commutativity of product] Consider the function of type Σ1 × Σ2 → Σ2 × Σ1, which swaps the order in a pair. Like all examples in this section, this is actually an infinite family of functions, one for every choice of Σ1 and Σ2. The function is a quantifier-free interpretation. The only change between the input and output concerns the unary relation from the definition of the product class Σ1 × Σ2 which tells us if an element is from the first coordinate; this relation needs to be complemented. ◻
Example 10. [List reverse and concatenation] Consider the list reverse function of type Σ∗ → Σ∗. This is clearly a quantifier-free interpretation – it is enough to replace the order 𝑥 ≤ 𝑦 with its reverse 𝑦 ≤ 𝑥. A similar idea works for the list concatenation function of type Σ∗∗ → Σ∗ which concatenates a list of lists into a list. In the input structure, there are two linear orders, corresponding to the inner and outer lists. To get the output structure, we use the lexicographic product of these two orders, which can be defined in a quantifier-free way. ◻
Example 11. [List constructor and destructor] Consider the (left) list constructor 1 + Σ × Σ∗ → Σ∗ that was discussed in Example 3. This is a quantifier-free interpretation.
If the input is from 1, which can be tested in a quantifier-free way using the nullary relation from the co-product, then the output list is created in the natural way. Otherwise, if the input is a pair from Σ × Σ∗, then the order on the concatenated list can easily be defined by using the unary predicate that identifies the first argument of a pair.
Γ × Σ ↔ Σ × Γ    commutativity of ×
Γ + Σ ↔ Σ + Γ    commutativity of +
Γ × (Σ × Δ) ↔ (Γ × Σ) × Δ    associativity of ×
Γ + (Σ + Δ) ↔ (Γ + Σ) + Δ    associativity of +
Γ × (Σ + Δ) ↔ (Γ × Σ) + (Γ × Δ)    distributivity
Γ1 × Γ2 → Γ𝑖    projections
Γ𝑖 → Γ1 + Γ2    co-projections
Γ + Γ → Γ    co-diagonal
Σ∗ × Σ → Σ∗    append
Σ∗ → Σ∗    reverse
Σ∗∗ → Σ∗    concat
Σ → Σ × Γ∗    create empty
(Σ × Γ)∗ → Σ∗ × Γ∗    list distribute
Figure 1. The prime quantifier-free functions.
from Γ1 → Σ1 and Γ2 → Σ2, derive Γ1 × Γ2 → Σ1 × Σ2    (functoriality of ×)
from Γ1 → Σ1 and Γ2 → Σ2, derive Γ1 + Γ2 → Σ1 + Σ2    (functoriality of +)
from Γ → Σ, derive Γ∗ → Σ∗    (functoriality of ∗)
from Γ → Σ and Σ → Δ, derive Γ → Δ    (function composition)
Figure 2. The quantifier-free combinators.
The list constructor is bijective, and therefore it has a corresponding inverse of type Σ∗ → 1 + Σ × Σ∗, which we call the list destructor. The list destructor is not a quantifier-free interpretation. The reason is that if the input is a nonempty list, then we would need to isolate in a quantifier-free way the elements from the head, i.e. from the first list element, which cannot be done. ◻
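As a rough illustration (our own, not the paper's formalism), the sketch below writes a few of the prime functions of Figure 1 as ordinary Haskell functions on lists and pairs, and composes them to derive binary list concatenation in the spirit of the string diagram shown later in Figure 3.

swapPair :: (a, b) -> (b, a)                 -- commutativity of ×
swapPair (x, y) = (y, x)

appendP :: ([a], a) -> [a]                   -- append
appendP (xs, x) = xs ++ [x]

reverseP :: [a] -> [a]                       -- reverse
reverseP = reverse

concatP :: [[a]] -> [a]                      -- concat
concatP = concat

createEmpty :: a -> (a, [b])                 -- create empty
createEmpty x = (x, [])

-- Binary concatenation Σ* × Σ* -> Σ*: create an empty outer list next to the
-- first argument, append both lists into it, and flatten with concat.
binaryConcat :: ([a], [a]) -> [a]
binaryConcat (xs, ys) =
  let (xs', outer) = createEmpty xs          -- outer :: [[a]] starts empty
  in concatP (appendP (appendP (outer, xs'), ys))

-- binaryConcat ("ab", "cd") == "abcd"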
Example 12. [Diagonal] Another non-example is 𝑥 ↦ (𝑥,𝑥). This is not a quantifier-free interpretation, since the output size is bigger than the input size. ◻
A complete system. We now present a complete characterization of quantifier-free interpretations on list types. The system will be used as a basis for the system in the next section, which will describe general mso interpretations.
Figure 3. A string diagram that derives the binary operation of type Σ∗ × Σ∗ → Σ∗ for list concatenation. In such diagrams, wires represent types and parallel wires represent products (so one cross-section of this diagram represents Σ∗∗ × Σ∗ × Σ∗); boxes represent prime functions, or previously derived functions; the input is at the top and the output is at the bottom. This particular diagram is built from create empty, two appends, and concat.
Theorem 4.1. The quantifier-free interpretations between list types are exactly those that can be derived from the prime functions in Figure 1 by applying the combinators from Figure 2.
The proof of the above theorem, with completeness being the non-trivial part, is in the appendix.
4.1 String diagrams
We conclude this section with several example derivations of quantifier-free functions using the system from Theorem 3.2. To present these derivations, we use string diagrams based on [11, Chapter 3], as depicted in Figure 3. (This is a name clash: the word “string” relates to the shape of the diagrams, and not to the fact that they manipulate types that represent strings.)
We also use string diagrams with a yellow background, where parallel wires represent co-products; one such diagram represents the prime function from Figure 1 that describes commutativity of +. Two further diagrams, which use dead ends, represent a projection Σ × Γ → Γ and a co-projection.
Example 13. Recall the representation of finite sets as list types 1 + ⋯ + 1 used in Examples 2 and 6. Under this representation, every function between finite sets is derivable using the prime functions and combinators of Theorem 3.2. This is easily seen using string diagrams, as illustrated by the diagram for the operation of squaring modulo 4. The representation of finite sets as co-products is important here. For example, the diagonal function 1 → 1 × 1 is not derivable, as explained in Example 12. ◻
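For instance, squaring modulo 4 can be written directly as a function between finite sets; the minimal Haskell sketch below (our own, not from the paper) uses an enumerated type in place of the co-product 1 + 1 + 1 + 1.

data Z4 = Z0 | Z1 | Z2 | Z3 deriving (Show)   -- the finite set {0, 1, 2, 3} as 1 + 1 + 1 + 1

squareMod4 :: Z4 -> Z4
squareMod4 Z0 = Z0
squareMod4 Z1 = Z1
squareMod4 Z2 = Z0      -- 2 * 2 = 4 ≡ 0 (mod 4)
squareMod4 Z3 = Z1      -- 3 * 3 = 9 ≡ 1 (mod 4)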
5 Deriving polyregular functions
We now move beyond quantifier-free functions and present the main contribution of this paper, which is a system that derives exactly the polyregular functions. As explained in Example 5, we cannot simply add the fold combinator to the system from Theorem 3.2. Another idea would be to have two kinds of functions: quantifier-free functions, and general polyregular functions, with the fold combinator used to go from one kind to the other. In such a system, the only contribution of fold would be to define linear regular functions, since such are the functions in the conclusion of Theorem 3.2. We are more ambitious, and we want the fold combinator to be useful also for non-linear functions.
To define a system with fold, we add a new unary type constructor. This type constructor is denoted by ! and it is written on the left. The general idea is that an element !𝑥 is essentially the same element as 𝑥, except that it is harder to obtain. The type constructor is not idempotent, and so !!𝑥 is even harder to obtain than !𝑥. The goal of this type constructor is to restrict the application of fold in a way that avoids the problems discussed in Section 3.1. This is done by using the following safe fold combinator:
from !𝑘1 → Γ and Γ × Σ → Γ, derive !𝑘(Σ∗) → Γ    (safe fold)
In the combinator, !𝑘 refers to the 𝑘-fold application of !.
When applying the combinator, the number 𝑘 ∈ {0, 1, . . .} must be strictly bigger than the grade of Γ, which is defined to be the maximal nesting of !, as in the following examples: 1∗ has grade zero, while 1 + !(1 + !1) has grade two. For example, when Γ has grade zero, i.e. it does not use !, then safe fold can be used in the form
from !1 → Γ and Γ × Σ → Γ, derive !(Σ∗) → Γ    (safe fold when Γ is without !)
The general idea is that the annotation with ! will disallow certain kinds of repeated applications of fold that would lead to functions that are not polyregular. Before giving a formal description of the system, we begin with an example.
Example 14. [List destructor] In this example, we use safe fold to derive a variant of the list destructor Σ∗ → 1 + Σ∗ × Σ that was discussed in Example 11. Consider an automaton where the state space is the output type of the list destructor, the initial state is 1, and the transition function is the string diagram of type (1 + Σ∗ × Σ) × Σ → 1 + Σ∗ × Σ built from distribute, empty and append. By applying the safe fold to this automaton, we get the list destructor in a weaker type, namely !(Σ∗) → 1 + Σ∗ × Σ. The weaker type avoids the issues from Example 5, since the input and output will have different numbers of !, and therefore we will be unable to apply fold again. ◻
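Ignoring the grading discipline, which Haskell's types do not enforce, the computation in Example 14 can be sketched as follows (our own illustration, not the paper's system): Maybe stands in for 1 + −, and an ordinary left fold plays the role of safe fold.

-- State space: Nothing for the empty list, Just (prefix, last letter) otherwise.
destructStep :: Maybe ([a], a) -> a -> Maybe ([a], a)
destructStep Nothing        a = Just ([], a)          -- first letter becomes the current last letter
destructStep (Just (xs, x)) a = Just (xs ++ [x], a)   -- previous last letter moves into the prefix

listDestruct :: [a] -> Maybe ([a], a)
listDestruct = foldl destructStep Nothing

-- listDestruct "abc" == Just ("ab", 'c');  listDestruct "" == Nothing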
5.1 Graded types and their derivable functions
We now give a formal description of the system. The type system is the same as previously, except that we have one more type constructor for !.
Definition 5.1. A graded list type is any type that is constructed using the following type constructors:
1    a type with one element
Σ1 × Σ2    pair
Σ1 + Σ2    co-pair, i.e. disjoint union
Σ∗    lists
!Σ
The general idea is that ! does not change the underlying set, but only introduces some type annotation that controls the way fold and duplication can be applied. Apart from safe fold, the main way of dealing with ! is the duplicating operation
!Σ → !Σ × Σ    (absorption),
which is named after the same rule in the parsimonious calculus of Mazza [20, p. 1].
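The following minimal Haskell sketch (our own) illustrates the intended reading of ! and of absorption; Haskell's type system does not enforce the grading, so the wrapper is only an annotation marking where ! would appear.

newtype Bang a = Bang a                   -- the ! annotation as a wrapper

absorption :: Bang a -> (Bang a, a)       -- !Σ -> !Σ × Σ
absorption (Bang x) = (Bang x, x)

bangMap :: (a -> b) -> Bang a -> Bang b   -- functoriality of !
bangMap f (Bang x) = Bang (f x)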
There are also prime functions for commuting ! with the remaining type constructors; for example, ![𝑥,𝑦,𝑧] and [!𝑥, !𝑦, !𝑧] are going to be equivalent in our system, and for this reason we can write !Σ∗ without specifying the order in which the two constructors are applied.
Definition 5.2. There are two kinds of derivability for functions between graded list types.
1. Strongly derivable. A function is called strongly derivable if it can be derived using the quantifier-free prime functions and combinators from Figures 1 and 2, extended to graded list types that can use !, along with four new prime functions
!(Γ + Σ) ↔ !Γ + !Σ    ! commutes with +
!(Γ × Σ) ↔ !Γ × !Σ    ! commutes with ×
(!Γ)∗ ↔ !(Γ∗)    ! commutes with ∗
!Γ → !Γ × Γ    absorption
and two new combinators:
from Σ → Γ, derive !Σ → !Γ    (functoriality of !)
from !𝑘1 → Γ and Γ × Σ → Γ, derive !𝑘(Σ∗) → Γ    (safe fold)
The safe fold combinator can only be applied when Γ has grade < 𝑘.
2. Weakly derivable. A function is called weakly derivable if it is of the form 𝑥 ↦ 𝑓(!𝑘𝑥) for some 𝑘 and some strongly derivable function 𝑓. In other words, a function is weakly derivable if it can be strongly derived for a sufficiently upgraded input type. For example, the list destructor of type Σ∗ → 1 + Σ∗ × Σ is not strongly derivable (Example 11), but it is weakly derivable (Example 14).
In the following theorem, which is the main result of this paper, we are only interested in weak derivability for functions between (ungraded) string types, i.e. between types that do not use !. The purpose of ! is to get the strong derivations.
Theorem 5.3. A function between (ungraded) list types is polyregular if and only if it is weakly derivable.
The proof has two parts: soundness and completeness.
5.2 Completeness
The completeness part of Theorem 5.3 is that every polyregular function can be weakly derived. Unlike the quantifier-free system in Theorem 4.1, completeness is relatively easy. This is because fold is a powerful combinator, and we can draw on a prior complete system for the polyregular functions [5, p. 64]. In the completeness proof, the polynomial growth of the output size will come from a single quadratic function.
Claim 5.4. One can weakly derive the following function, which we call the prefixes function:
[𝑎1, . . . ,𝑎𝑛] ↦ [[𝑎𝑛, . . . ,𝑎1], [𝑎𝑛−1, . . . ,𝑎1], . . . , [𝑎1]]
Proof. Consider an automaton where the input alphabet is !Σ, the state space is Σ∗∗ × !Σ∗, and the initial state is the pair of empty lists.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='Σ* Σ** Σ** Σ Σ* !' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='Σ !' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='Σ* Σ* Σ append Σ* absorption append dotted box represents functoriality of !' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' By applying fold to this automaton, we get a function of type !' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' !' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='Σ∗ → Σ∗∗×!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='Σ∗ which returns the output of the prefixes function on the first output coordinate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Observe that in this proof, we applied the fold to a transition function that already uses !' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='. ◻ Using the above function, in the appendix we show that the weakly derivable functions contain an already existing complete system for the polyregular functions [5, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' 64].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Before discussing the soundness proof in the theorem, let us comment on the minimality of its system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The system inherits all of the primes and combinators from the quantifier- free system in Theorem 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' In the presence of fold, some of these primes and combinators can be derived thus leading to a smaller system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Theorem 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The system from Theorem 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='3 remains com- plete after removing the map combinator, as well as all prime functions and combinators that involve the list type, and adding 1 + Σ → Σ∗ lists of length at most one Σ∗ × Σ∗ → Σ∗ binary list concatenation.' 
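Returning to Claim 5.4, the automaton in its proof can be pictured with ordinary functional code. The following is a minimal Haskell sketch of ours (not part of the paper's formal system): the state is a pair (prefixes produced so far, current prefix stored in reverse), the step function plays the role of the transition function, and the names step and prefixes are not from the paper.

  -- State of the automaton from the proof of Claim 5.4:
  -- (reversed prefixes produced so far, current reversed prefix).
  step :: ([[a]], [a]) -> a -> ([[a]], [a])
  step (outs, cur) a =
    let cur' = a : cur          -- extend the current (reversed) prefix
    in  (cur' : outs, cur')     -- copy it into the output list; this copying
                                -- is what absorption provides in the paper's automaton

  -- Running the fold and keeping the first coordinate gives the prefixes function.
  prefixes :: [a] -> [[a]]
  prefixes = fst . foldl step ([], [])

  -- prefixes [1,2,3] == [[3,2,1],[2,1],[1]]

Note that the total output size is quadratic in the input length, which is the single quadratic function responsible for polynomial growth in the completeness proof.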
5.3 Soundness
The rest of this section is devoted to the proof of soundness for Theorem 5.3, which is that all weakly derivable functions are polyregular. We will define an invariant on strongly derivable functions, which is satisfied by the prime functions, is preserved by the combinators, and which implies that a function is polyregular. This invariant can be seen as giving a semantic explanation of the ! constructor and the strongly derivable functions.

The invariant uses a more refined notion of mso interpretations, called graded mso interpretations. These interpretations operate on graded structures, as described in the following definition.

Definition 5.6 (Graded structure). A graded structure is a structure, together with a grading function that assigns to each element in the universe a grade in {0, 1, ...}.

The idea is that the grade of an element is the number of times that ! has been applied, as in the following example: in the pair (1, ![1, 1, 1]), the first component has grade zero, while the elements of the !-applied list have grade one.
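As a side illustration (ours, not from the paper), the grading can be pictured by tagging every element of the universe with a number; the names Graded and example below are ours.

  -- An element together with its grade, i.e. the number of times
  -- ! has been applied to it.
  data Graded a = Graded { grade :: Int, payload :: a }
    deriving Show

  -- The example from the text: the pair (1, ![1,1,1]), whose first
  -- component has grade zero and whose !-applied list has elements of grade one.
  example :: (Graded Int, [Graded Int])
  example = (Graded 0 1, map (Graded 1) [1, 1, 1])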
A graded list type can be seen as describing a class of graded structures, with the constructor ! incrementing the grade of all elements, and the remaining constructors treated in the same way as in Definition 2.4. If A is a graded structure, we write A|ℓ for the structure that is obtained from A by restricting its universe to elements that have grade at least ℓ.

In the definition of a graded mso interpretation, we use the grades to control how an mso interpretation f uses quantifiers. The general idea is that f(A)|ℓ depends on A|ℓ in a quantifier-free way, and on A|(ℓ+1) in an mso definable way. Before presenting the formal definition, we introduce some notation, in which a polynomial functor F is applied to a tuple of elements ā, yielding a new (typically longer) tuple of elements F(ā). If an input set A for a polynomial functor F is equipped with some linear order, then this linear order can be extended to a linear order on the output set F(A), by using some fixed order on the components, and ordering tuples lexicographically. This way we can think of a polynomial functor as transforming linearly ordered sets, i.e. lists. We will care about lists of fixed length, which we call tuples. For example, if the polynomial functor is F(A) = A + A², then applying it to the tuple (1, 2) gives the tuple

  (1, 2, (1, 1), (1, 2), (2, 1), (2, 2)) ∈ F({1, 2})⁶.
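Here is a Haskell sketch of ours for this worked example (the name applyF is not from the paper): the functor F(A) = A + A² applied to the tuple of distinguished elements (1, 2), with the A-component listed first and the A²-component in lexicographic order.

  -- Apply the polynomial functor F(A) = A + A*A to a tuple of
  -- distinguished elements, given as a list. The result lists the
  -- A-component first, then the A^2-component in lexicographic order.
  applyF :: [a] -> [Either a (a, a)]
  applyF as = map Left as ++ [Right (x, y) | x <- as, y <- as]

  -- applyF [1, 2] ==
  --   [Left 1, Left 2, Right (1,1), Right (1,2), Right (2,1), Right (2,2)]
  -- six elements, matching F({1,2})^6 from the text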
In the definition below, we will care about the theories of tuples of the form F(ā), with the theories defined as in Definition 3.3, but extended to mso formulas of a given quantifier rank (the quantifier rank of an mso formula is the nesting depth of the quantifiers, with first-order and second-order quantifiers counted in the same way). Recall that these theories allow for distinguished elements that are not part of the universe in a structure. Equipped with this notation, we are ready to define the graded version of mso interpretations.

Definition 5.7. A function f : Σ → Γ is called a graded mso interpretation if there is some polynomial functor

  F(A) = A + F_0(A) + ... + F_m(A),

where the first component A is called the quantifier-free component and the components F_0, ..., F_m are called the downgrading components, such that the following conditions hold:

1. Universe and grades. The universe of the output structure is contained in A + F_0(A|1) + F_1(A|2) + ... + F_m(A|(m+1)). The grades in the output structure are defined as follows: elements from F_ℓ have grade ℓ, and elements from the quantifier-free component inherit their grade from A.

2. Continuity. For every k, ℓ ∈ {0, 1, ...} there is some quantifier rank r ∈ {0, 1, ...} such that for every input structure A and distinguished elements ā ∈ A^k, the quantifier-free theory of the tuple F(ā) in f(A)|ℓ is uniquely determined by the following two theories:
   a. the quantifier-free theory of ā in A|ℓ;
   b. the rank r mso theory of ā in A|(ℓ+1).

If we ignore the grades, then a graded mso interpretation is a special case of an mso interpretation. This is because the quantifier-free type mentioned in the continuity condition will tell us which output candidates from F(A) are in the universe of the output structure, and how the relations of the output structure are defined on them. Therefore, the continuity condition tells us that the output not only can be defined in mso, but it can be defined in a way that respects the grades. In particular, in the special case when all input elements have nonzero grade, and all output elements have zero grade, the continuity condition collapses to the usual condition in an mso interpretation. In this way, graded mso interpretations generalize ungraded mso interpretations. Graded mso interpretations also generalize quantifier-free interpretations – this happens in the case when all elements in the input and output structures have grade zero. In this case, only the quantifier-free component is useful, and all formulas are quantifier-free.

In the appendix, we show that all strongly derivable prime functions are graded mso interpretations. This will imply that all weakly derivable functions are ungraded mso interpretations, since the continuity condition becomes vacuous when the input type is sufficiently upgraded. The proof is an induction on the size of a strong derivation, with the most interesting cases being composition and safe fold. Composition is a corollary of composition closure for mso interpretations on string types [9, Corollary 8], while safe fold is treated in the same way as in Theorem 3.2.

6 Linear regular functions
The last group of results from this paper concerns the linear regular functions, i.e. polyregular functions of linear growth.
We show that a small change to the system from Theorem 5.3 will give exactly the linear regular functions. As we will see, superlinear growth in the system from Theorem 5.3 is not created by the fold combinator; the culprit is instead absorption

  !Γ → !Γ × Γ    absorption.

This function allows us to create an unbounded number of copies of an element of Γ, as witnessed in the proof of Claim 5.4. If we simply remove this function, then the system will become too weak, since all other prime functions and combinators preserve the property that the universe of the output structure is contained in the universe of the input structure. The solution is to add a weaker form of absorption

  !Γ → Γ × Γ    linear absorption.

In other words, removing all occurrences of ! is the price paid for copying.
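The difference between the two primes can be pictured with Haskell type signatures; a sketch of ours, with ! modelled by a newtype Bang. Haskell does not enforce any linearity here, so the types only record the shapes of the two functions.

  newtype Bang a = Bang a   -- stands in for !Γ

  -- absorption  !Γ → !Γ × Γ : the !-wrapped value survives alongside the copy,
  -- so it can be absorbed again and again, yielding unboundedly many copies.
  absorption :: Bang a -> (Bang a, a)
  absorption (Bang x) = (Bang x, x)

  -- linear absorption  !Γ → Γ × Γ : the ! is consumed, so each use
  -- produces exactly two copies and no further duplication under !.
  linearAbsorption :: Bang a -> (a, a)
  linearAbsorption (Bang x) = (x, x)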
The corresponding system describes exactly the linear regular functions, as stated in the following theorem.

Theorem 6.1. A function f : Σ → Γ between string types is linear regular if and only if it can be weakly derived in a system that is obtained from the one in Theorem 5.3 by replacing absorption with linear absorption. (One can also start with the smaller system from Theorem 5.5.)

The proof for the above theorem, which is in the appendix, is based on Example 7 about streaming string transducers. The idea is that linear absorption together with fold is enough to simulate streaming string transducers, which are expressively complete for the linear regular functions.

6.1 Tree types
It turns out that the system for linear regular functions from Theorem 6.1 can be generalized without much further difficulty to trees. This is in contrast to a prior combinator system for trees [8, Theorem 7.1], which had an involved proof using approximately fifty prime functions. We believe that this is evidence for the usefulness of the fold combinator.

Consider a type for trees, defined inductively by

  TΣ = 1 + TΣ × Σ × TΣ,

that is, a tree is either a leaf, or has two subtrees and a root label. A tree type is a type that is constructed using the types from Definition 2.3, together with the tree type. Tree types can be seen as structures, using the same construction as for lists in Definition 2.4, except that instead of one linear order, we have two orders: the descendant order (which is not a linear order) and the document order given by left subtree < root < right subtree. Define a linear regular tree function to be a function between tree types that is defined using linear mso transductions.
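As a concrete reading of this type, here is a Haskell sketch of ours: the datatype below is the tree type TΣ, and foldTree is the usual fold on this shape, which is also the shape of the safe tree fold combinator appearing later in this section.

  -- TΣ: a tree is either a leaf, or has two subtrees and a root label.
  data Tree a = Leaf | Node (Tree a) a (Tree a)

  -- The standard fold on this shape: one value for leaves and one
  -- combining function for inner nodes.
  foldTree :: b -> (b -> a -> b -> b) -> Tree a -> b
  foldTree leaf _    Leaf         = leaf
  foldTree leaf node (Node l x r) =
    node (foldTree leaf node l) x (foldTree leaf node r)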
Following Wilke [24], we view trees as an algebra. In this algebra, there is an additional type constructor CΣ, which describes contexts. A context is a tree with a distinguished leaf (called the hole) where other trees can be inserted. This is not a primitive type constructor, only syntactic sugar for a certain combination of the list and tree type constructors:

  CΣ  :=  ( (TΣ × Σ) + (Σ × TΣ) )∗,

where a step of the form TΣ × Σ means that the hole is in the right subtree, and a step of the form Σ × TΣ means that the hole is in the left subtree.

To operate on trees and contexts, we use the following operations, called Wilke's operations, see [24, Figure 1]:

  1 + TΣ × Σ × TΣ → TΣ             tree constructor
  CΣ × TΣ → TΣ                     replace hole by a tree
  CΣ × CΣ → CΣ                     context composition
  1 + (TΣ × Σ) + (Σ × TΣ) → CΣ     context creation

All of these operations are quantifier-free interpretations, and we will use them as primes. The last two operations need not be explicitly added, since they can be derived using the system from Theorem 3.2.
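A Haskell sketch of ours, reusing the Tree datatype from the sketch above: a context is stored as the list of steps leading to the hole, replacing the hole is a walk along that list, and context composition is list concatenation. The names Context, plugHole, and composeCtx are ours.

  -- A context: each step records the sibling subtree, the node label,
  -- and on which side the hole continues.
  type Context a = [Either (Tree a, a) (a, Tree a)]
    -- Left  (l, x): the hole is in the right subtree (left sibling l, label x)
    -- Right (x, r): the hole is in the left subtree  (label x, right sibling r)

  -- Wilke's operation "replace hole by a tree".
  plugHole :: Context a -> Tree a -> Tree a
  plugHole []                 t = t
  plugHole (Left  (l, x) : c) t = Node l x (plugHole c t)
  plugHole (Right (x, r) : c) t = Node (plugHole c t) x r

  -- Context composition is simply concatenation of the step lists.
  composeCtx :: Context a -> Context a -> Context a
  composeCtx = (++)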
Theorem 6.2. A function f : Σ → Γ between tree types is linear regular if and only if it can be derived in a system that is obtained from the system in Theorem 6.1 by adding the tree type, Wilke's operations, the prime function

  !TΣ ↔ T!Σ    (! commutes with T),

and the following combinator, called safe tree fold: from functions of types !^k 1 → Γ and Γ × Σ × Γ → Γ it produces a function of type !^k TΣ → Γ, and it can be applied whenever Γ has grade < k.

Proof (Sketch). As in Theorem 6.1. We use the same soundness proof, except that tree automata are used instead of string automata. For completeness, we use a result of Alur and D'Antoni, which says that every linear mso interpretation is computed by a streaming tree transducer [3, Theorem 4.6]. Adjusting for notation, a streaming tree transducer is defined in the same way as in Example 7, except that instead of lists, registers store trees and contexts. The registers in the transducer are manipulated using Wilke's operations; and thus, for the same reason as in Example 7, the corresponding tree function is weakly derivable. This completeness proof takes into account only functions of type TΣ → TΓ, where Σ and Γ are finite alphabets, but the extension to other tree types is easily accomplished by encoding tree types into such trees. ◻

Tree polyregular functions. It is natural to ask about a polyregular system for trees. We conjecture that if we add absorption to the system from Theorem 6.2, and possibly a few extra prime functions, then the system will define exactly the mso interpretations on tree types.
This conjecture would imply that tree-to-tree mso interpretations are closed under composition, which is an open problem.

7 Perspectives
We finish the paper with some directions for future work.

In our proofs, we are careless about the number of times that ! is applied. Maybe a more refined approach can give a better understanding of the correspondence between the nesting of ! and the resources involved, such as quantifiers or copying. Alternatively, one could try to do away with ! entirely, and use some proof system where the safety of fold is captured by a structural property of the proof. One idea in this direction is to look at cyclic proofs [10]. Another idea would be to capture the structural property using the visual language of string diagrams.

Another question that concerns string diagrams is about the equivalence problem. Decidability of the equivalence problem for polyregular functions is an open problem, but in the case of linear functions the problem is known to be decidable [16, Theorem 1]. Maybe one can express the decision procedure in terms of string diagrams, by designing equivalences on string diagrams which identify exactly those diagrams that describe the same function.

The system in this paper is based on combinators. A more powerful system would also allow for variables, λ, and higher-order types.
Such a system exists without fold [6, Section 4], and it is tempting to see if it can be extended with fold. The result would be an expressive functional programming language that can only define regular functions.

References
[1] Rajeev Alur and Pavol Černý. Expressiveness of streaming string transducers. In Foundations of Software Technology and Theoretical Computer Science, FSTTCS 2010, Chennai, India, volume 8 of LIPIcs, pages 1–12. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2010.
[2] Rajeev Alur and Loris D'Antoni. Streaming tree transducers. J. ACM, 64(5):31:1–31:55, August 2017.
[3] Rajeev Alur, Adam Freilich, and Mukund Raghothaman. Regular combinators for string transformations. In Computer Science Logic and Logic in Computer Science, CSL-LICS 2014, Vienna, Austria, pages 1–10. ACM, 2014.
[4] Stephen Bellantoni and Stephen Cook. A new recursion-theoretic characterization of the polytime functions. In Proceedings of the Twenty-Fourth Annual ACM Symposium on Theory of Computing, pages 283–293, 1992.
[5] Mikołaj Bojańczyk. Polyregular functions. CoRR, abs/1810.08760, 2018.
[6] Mikołaj Bojańczyk. Transducers of polynomial growth. In Proceedings of the 37th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS '22, New York, NY, USA, 2022. Association for Computing Machinery.
[7] Mikołaj Bojańczyk, Laure Daviaud, and Shankara Narayanan Krishna. Regular and first-order list functions. In Logic in Computer Science, LICS, Oxford, UK, pages 125–134. ACM, 2018.
[8] Mikołaj Bojańczyk and Amina Doumane. First-order tree-to-tree functions. In Holger Hermanns, Lijun Zhang, Naoki Kobayashi, and Dale Miller, editors, LICS '20: 35th Annual ACM/IEEE Symposium on Logic in Computer Science, Saarbrücken, Germany, July 8-11, 2020, pages 252–265. ACM, 2020.
[9] Mikołaj Bojańczyk, Sandra Kiefer, and Nathan Lhote. String-to-string interpretations with polynomial-size output. In 46th International Colloquium on Automata, Languages, and Programming, ICALP 2019, July 9-12, 2019, Patras, Greece, pages 106:1–106:14, 2019.
[10] James Brotherston and Alex Simpson. Sequent calculi for induction and infinite descent. Journal of Logic and Computation, 21(6):1177–1216, 2011.
[11] Bob Coecke and Alex Kissinger. Picturing Quantum Processes. Cambridge University Press, 2017.
[12] Bruno Courcelle and Joost Engelfriet. Graph Structure and Monadic Second-Order Logic - A Language-Theoretic Approach, volume 138 of Encyclopedia of Mathematics and Its Applications. Cambridge University Press, 2012.
[13] Joost Engelfriet and Hendrik Jan Hoogeboom. MSO definable string transductions and two-way finite-state transducers. ACM Trans. Comput. Logic, 2(2):216–254, 2001.
[14] Joost Engelfriet and Sebastian Maneth. Two-way finite state transducers with nested pebbles. In International Symposium on Mathematical Foundations of Computer Science, pages 234–244. Springer, 2002.
[15] Noa Globerman and David Harel. Complexity results for two-way and multi-pebble automata and their logics. Theor. Comput. Sci., 169(2):161–184, 1996.
[16] Eitan M. Gurari. The equivalence problem for deterministic two-way sequential transducers is decidable. SIAM J. Comput., 11(3):448–452, 1982.
[17] Heinz-Dieter Ebbinghaus and Jörg Flum. Finite Model Theory. Springer Monographs in Mathematics. Springer, 2nd edition, 2006.
[18] Graham Hutton. A tutorial on the universality and expressiveness of fold. Journal of Functional Programming, 9(4):355–372, 1999.
[19] Kenneth Krohn and John Rhodes. Algebraic theory of machines. I. Prime decomposition theorem for finite semigroups and machines. Transactions of the American Mathematical Society, 116:450–450, 1965.
[20] Damiano Mazza. Simple parsimonious types and logarithmic space. In 24th EACSL Annual Conference on Computer Science Logic (CSL 2015). Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2015.
[21] Tova Milo, Dan Suciu, and Victor Vianu. Typechecking for XML transformers. J. Comput. Syst. Sci., 66(1):66–97, 2003.
[22] Lê Thành Dung Nguyên, Camille Noûs, and Pierre Pradic. Comparison-free polyregular functions. In 48th International Colloquium on Automata, Languages, and Programming, ICALP 2021, July 12-16, 2021, Glasgow, Scotland (Virtual Conference), pages 139:1–139:20, 2021.
[23] J. C. Shepherdson. The reduction of two-way automata to one-way automata. IBM Journal of Research and Development, 3(2):198–200, April 1959.
[24] Thomas Wilke. An algebraic characterization of frontier testable tree languages. Theoretical Computer Science, 154(1):85–106, 1996.

A The quantifier-free system
In this part of the appendix, we prove Theorem 4.1. In the proof, a derivable function is a function that can be derived using the system from Theorem 4.1. In other parts of the paper, derivable functions will refer to other systems. The proof of Theorem 4.1 has two parts: soundness (i.e. all derivable functions are quantifier-free interpretations) and completeness (i.e. all quantifier-free interpretations are derivable).

A.1 Soundness
To prove soundness of the system, we show that all prime functions from Figure 1 are quantifier-free interpretations, and that the class of quantifier-free interpretations is closed under applying all combinators from Figure 2. We only discuss one case, namely the combinator that maps Σ → Γ to Σ∗ → Γ∗, i.e. functoriality of ∗, which is also known as the map combinator.
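For readers who prefer code, this combinator is the familiar map on lists; a trivial Haskell sketch of ours, where mapStar is a name we introduce:

  -- Functoriality of ∗: lift a function on letters to a function on lists.
  mapStar :: (s -> g) -> [s] -> [g]
  mapStar = map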
The difficulty with this combinator is that in the structure that represents a list [A1, . . . , An] of elements of Σ, as per Definition 2.4, the nullary predicates from the structures A1, . . . , An are replaced by unary predicates. However, since the same replacement is done for the output list, it follows that a straightforward syntactic construction can be applied to transform the quantifier-free interpretation from the assumption of the combinator into a quantifier-free interpretation from the conclusion. The rest of the soundness proof is left to the reader.
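To make the role of the map combinator concrete, here is a small illustrative sketch in Haskell; it is only a programming analogy, not the paper's formal system, and the type variables sigma and gamma merely stand in for the classes Σ and Γ.

-- Illustrative analogy: the functoriality-of-* combinator lifts a
-- function on single structures to a function on lists of structures,
-- just like map lifts a function on elements to a function on lists.
liftToLists :: (sigma -> gamma) -> [sigma] -> [gamma]
liftToLists f = map f

-- Example: lifting a per-element function to whole lists.
example :: [Int]
example = liftToLists (+ 1) [1, 2, 3]   -- yields [2,3,4]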
A.2 Completeness

The rest of this section is devoted to the completeness proof. We begin with some notation and preparatory lemmas that will be used in the proof.

Zero type. We will use an extended system, which has an additional type called 0. This type represents a class that contains one structure, and that structure has an empty universe. (This class is terminal, in the sense that every class of structures admits a unique quantifier-free interpretation to 0.) The corresponding prime functions are Σ → Σ × 0 (add 0) and 0 → Σ∗ (create an empty list). One should not confuse 0 with the empty class ∅ (which anyway is not part of our type system). For example, 0 + Σ ≠ Σ = ∅ + Σ. The extended system with 0 is equivalent to the original system, since we can view 0 as 1∗, but with only the empty list used. In particular, the extended system is conservative in the following sense: if a function between types that do not use 0 is derivable in the extended system, then it is also derivable in the non-extended system. For this reason, we can do the completeness proof in the extended system, which will be slightly more convenient. From now on, list types can use 0.

Disjunctive normal form. It will be useful to consider list types in a certain normal form, which is achieved using distributivity. We say that a list type is in disjunctive normal form if it is of the form

∐_{i ∈ I} ∏_{j ∈ J_i} Σ_{i,j}

where each Σ_{i,j} is one of the types 0 or 1, or a list Σ∗ where Σ is in disjunctive normal form. In other words, the list type does not contain any product of co-products. In our proof, the main advantage of this normal form concerns nullary relations. Recall that the nullary relations in Definition 2.4 appear only in the co-product, and they are removed when applying the list constructor. Therefore, if a type in disjunctive normal form is not a co-product type, then its vocabulary contains no nullary relations.
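The distributivity used to reach disjunctive normal form can be illustrated with a small Haskell sketch, under the rough and purely informal analogy product ≈ pair and co-product ≈ Either; the two functions below witness that a product of a co-product is isomorphic to a co-product of products.

-- Distributing a product over a co-product, and the inverse direction.
distribute :: (Either a b, c) -> Either (a, c) (b, c)
distribute (Left a, c)  = Left (a, c)
distribute (Right b, c) = Right (b, c)

factor :: Either (a, c) (b, c) -> (Either a b, c)
factor (Left (a, c))  = (Left a, c)
factor (Right (b, c)) = (Right b, c)
-- distribute and factor are mutually inverse, so the two types are isomorphic.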
The following lemma shows that every list type admits a derivable isomorphism with some list type in disjunctive normal form. Here, a derivable isomorphism is a derivable function that has a derivable inverse.

Lemma A.1. Every list type admits a derivable isomorphism with some list type in disjunctive normal form.

Proof. Using distributivity and functoriality. ◻

Thanks to the already proved soundness part of the theorem, the derivable isomorphism is also quantifier-free. Therefore, to prove completeness of the system, it is enough to prove completeness only for functions where both the input and output types are in disjunctive normal form. From now on, we only consider list types in disjunctive normal form.

Safe pairing. The last issue to be discussed before the completeness proof concerns pairing functions. Suppose that f : Σ → Γ1 × Γ2 is a quantifier-free interpretation. In the completeness proof, we will want to show that it is derivable. A natural idea would be to use an inductive argument to derive the two quantifier-free interpretations fi : Σ → Γi that arise from f by projecting it onto the two output coordinates, and to then pair these two derivations into a derivation of f. Unfortunately, combining these two derivations would require some kind of pairing combinator, or a duplicating function of type Σ → Σ × Σ, none of which are available in our system (because they would be unsound).
For these reasons, we need to be a bit careful with pairing. The crucial observation is that pairing is not always unsound, because some functions can be paired. For example, the two functions f1 and f2 described above can be paired, because they use disjoint parts of the input structure. More formally, the universe formulas are disjoint, i.e. no element can be selected by both universe formulas. This view will be used in the completeness proof. To formalize it, we use the following lemma.

Lemma A.2. Let Σ be a list type in disjunctive normal form, and let 𝜑(x) be a quantifier-free formula over its vocabulary. There is a list type, denoted by Σ|𝜑, and a quantifier-free interpretation Σ → Σ|𝜑, called the projection of 𝜑, such that the following conditions are satisfied.

1. For every quantifier-free interpretation f : Σ → Γ such that the universe formula of f is contained in 𝜑 (which means that the universe formula of f implies the formula 𝜑), there is a decomposition of f as the projection of 𝜑 followed by a function f|𝜑, i.e. f factors as Σ → Σ|𝜑 → Γ, where f|𝜑 is a quantifier-free interpretation.

2. Safe pairing. Suppose that 𝜑1, . . . , 𝜑n are formulas as in the assumption of the lemma, which are pairwise disjoint. Then one can derive the function

Σ → (Σ|𝜑1) × ⋯ × (Σ|𝜑n)

that produces all projections in parallel.
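As a rough intuition for the safe pairing condition (again only a programming analogy, with the quantifier-free formulas replaced by Boolean predicates on the elements of a list), projecting onto pairwise disjoint predicates never has to duplicate an element, so all projections can be produced at once:

-- If the predicates are pairwise disjoint, every element ends up in at
-- most one of the projections, so no duplication of elements is needed.
projectAll :: [a -> Bool] -> [a] -> [[a]]
projectAll predicates xs = [ filter p xs | p <- predicates ]

-- Example with the disjoint predicates even and odd.
demo :: [[Int]]
demo = projectAll [even, odd] [1 .. 6]   -- yields [[2,4,6],[1,3,5]]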
Proof. The purpose of the type 0 is this lemma; the type 0 is used for Σ|𝜑 when 𝜑 selects no elements. The lemma is proved by induction on the structure of the type Σ.

Suppose that Σ is the zero type 0. In this case, the formula 𝜑 must be equivalent to “false”. We define 0|𝜑 to be the same type 0, and the projection is the identity. The safe pairing condition holds because of the prime function Σ → Σ × 0.

Suppose that Σ is the unit type 1. In this case, the formula 𝜑 is equivalent to either “false” or “true”, since the unique structure in 1 has a universe that has only one element. We define 1|𝜑 to be 0 or 1, depending on which of the two cases holds, with the projection being the unique function 1 → 1|𝜑. The safe pairing condition is proved using the prime function Σ → Σ × 0, since the list of quantifier-free formulas in the condition can have at most one formula that is not “false”.

Consider a list type of the form Σ∗. The main observation in the proof is that there is a bijective correspondence between quantifier-free formulas over the vocabularies of Σ and Σ∗.
This correspondence is defined as follows: for every formula 𝜑 over the vocabulary of Σ, there is a formula 𝜑∗ over the vocabulary of Σ∗ such that for every list A = [A1, . . . , An] ∈ Σ∗, an element a ∈ Ai is selected by 𝜑∗ in the entire list A if and only if a is selected by 𝜑 in the list element Ai. It is not hard to see that such a formula exists, and furthermore, every formula over the vocabulary of Σ∗ is equivalent to a formula of the form 𝜑∗. Therefore, in the case when the type is a list Σ∗, we can assume that the formula over the vocabulary of Σ∗ is of the form 𝜑∗ for some formula 𝜑 over the vocabulary of Σ. Define Σ∗|𝜑∗ = (Σ|𝜑)∗, with the projection function for 𝜑∗ being the result of applying the map combinator to the projection function for 𝜑. The safe pairing property is proved by using the induction assumption, and using the function

(Σ1 × ⋯ × Σn)∗ → Σ1∗ × ⋯ × Σn∗,

which can easily be seen to be derivable.
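In the same informal Haskell analogy, the auxiliary function (Σ1 × ⋯ × Σn)∗ → Σ1∗ × ⋯ × Σn∗ is, for n = 2, just the standard unzip, and the projection for 𝜑∗ is obtained by mapping a hypothetical per-element projection over the list:

-- For n = 2 the auxiliary function used for safe pairing is unzip.
splitPairs :: [(a, b)] -> ([a], [b])
splitPairs = unzip

-- The projection for phi* maps the per-element projection over the list;
-- projectPhi is a placeholder for the projection given by the induction.
projectStar :: (sigma -> sigmaPhi) -> [sigma] -> [sigmaPhi]
projectStar projectPhi = map projectPhi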
The case when Σ is a co-product Σ1 + Σ2 is proved similarly to the list case. Here, we use a bijective correspondence between quantifier-free formulas 𝜑 over the vocabulary of Σ and pairs (𝜑1, 𝜑2), where 𝜑i is a quantifier-free formula over the vocabulary of Σi.

The case when Σ is a product Σ1 × Σ2 is proved similarly to the co-product case. Again, there is a bijective correspondence between quantifier-free formulas 𝜑 over the vocabulary of Σ and pairs (𝜑1, 𝜑2), where 𝜑i is a quantifier-free formula over the vocabulary of Σi. For the existence of such a bijective correspondence, we use the assumption that the type is in disjunctive normal form. Thanks to the assumption, the vocabulary has no nullary relations; if there were nullary relations then there could be some communication between the two coordinates in the product. ◻

Completeness. Consider a quantifier-free interpretation f : Σ → Γ. Let 𝜑 be the universe formula of f, and let Σ|𝜑 be the type obtained by applying Lemma A.2. We write dom f for this type. The corresponding function in the decomposition as in item 1 is then f|dom f : dom f → Γ. We will use the following terminology for this decomposition: the type Σ|𝜑 will be called the reduced domain of f, the projection will be called the domain reduction of f, and the function f|dom f will be called reduced f. In this terminology, f factors as the domain reduction of f, from Σ to the reduced domain of f, followed by reduced f, which maps the reduced domain of f to Γ. Because the domain reduction is derivable, and derivable functions are closed under composition, it is enough to show that for every quantifier-free interpretation, its reduced version is derivable. This will be shown in the following lemma.

Lemma A.3. For every quantifier-free interpretation f : Σ → Γ with universe formula 𝜑, one can derive the function f|𝜑 : Σ|𝜑 → Γ from item 1 in Lemma A.2.
Proof. The lemma is proved by structural induction on the input and output types. In the induction step, we will replace either the input or output type by a simpler one. The induction step is shown in Sections A.2.1–A.2.6 below, which consider the following cases:

A.2.1 the input type is a co-product;
A.2.2 the output type is a co-product;
A.2.3 the output type is a product;
A.2.4 the input type is 0 or 1;
A.2.5 the input type is a list;
A.2.6 the input type is a product.

These cases are exhaustive, i.e. at least one of them always applies, but they are not disjoint. When applying some case, we assume that none of the previous cases can be applied. The induction basis corresponds to case A.2.4.

A.2.1 The input type is a co-product. In the representation of the co-product type from Definition 2.4, the information about whether the structure comes from the first or second case is stored in a nullary predicate. Therefore, by a straightforward syntactic manipulation of quantifier-free interpretations, from a quantifier-free interpretation f : Σ1 + Σ2 → Γ we can obtain two quantifier-free interpretations

f1 : Σ1 → Γ    and    f2 : Σ2 → Γ,

which describe the behaviour of f on inputs from Σ1 and Σ2, respectively. Let 𝜑 be the universe formula of f, and let 𝜑1 and 𝜑2 be the universe formulas of f1 and f2. By the induction assumption, we can derive the reduced versions fi|𝜑i : Σi|𝜑i → Γ. Since by definition we have (Σ1 + Σ2)|𝜑 = Σ1|𝜑1 + Σ2|𝜑2, we can combine these two derivations into a derivation of f|𝜑, by using the cases combinator, which takes functions Δ1 → Γ and Δ2 → Γ to a function Δ1 + Δ2 → Γ, and which itself can be derived using functoriality of + and the co-diagonal.
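The cases combinator used above can be pictured in the same informal Haskell analogy (with the co-product Δ1 + Δ2 read as Either): it is the composition of functoriality of + with the co-diagonal, and behaves like the Prelude function either.

-- The co-diagonal merges the two identical summands of a co-product.
codiagonal :: Either g g -> g
codiagonal (Left x)  = x
codiagonal (Right x) = x

-- Functoriality of +: apply a function on each summand separately.
mapSum :: (d1 -> g) -> (d2 -> g) -> Either d1 d2 -> Either g g
mapSum f1 _  (Left x)  = Left (f1 x)
mapSum _  f2 (Right x) = Right (f2 x)

-- The cases combinator: map on each summand, then merge the summands.
cases :: (d1 -> g) -> (d2 -> g) -> Either d1 d2 -> g
cases f1 f2 = codiagonal . mapSum f1 f2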
A.2.2 The output type is a co-product. Consider a function f : Σ → Γ1 + Γ2 whose output type is a co-product. In this case, we assume that the previous case cannot be applied, i.e. the input type is not a co-product. To produce the output structure, we need to define the nullary predicate that says which of the two cases in the output type is used. In a quantifier-free interpretation, this nullary predicate is defined by a quantifier-free formula, with no free variables, which is evaluated in the input structure. Since there are no nullary predicates in the input structure (because otherwise the input type would be a co-product, and we could apply the case from the previous section), it follows that this quantifier-free formula is either “true” or “false”. This means that the function f must always use the same variant Γ1 or Γ2 in the co-product from the output type, regardless of the choice of input structure. Therefore, we can replace f by a corresponding function of type Σ → Γi, apply the induction assumption, and conclude by using composition and the co-projection.
A.2.3 The output type is a product. Consider a function f : Σ → Γ1 × Γ2 whose output type is a product. We split this function into two quantifier-free interpretations

f1 : Σ → Γ1    and    f2 : Σ → Γ2,

which produce the two coordinates in the output of f. These two functions must have disjoint universe formulas, since otherwise the same element in the output structure would belong to both coordinates of a pair. We can apply the induction assumption, and then combine these derivations into a derivation of f by using safe pairing from Lemma A.2.

A.2.4 The input type is 0 or 1. By cases A.2.2 and A.2.3, we can assume that the output type of the unique function in the family is either 0, 1, or a list type Γ∗. When the output type is 0 or 1, we are dealing with a quantifier-free interpretation which has one of the types

0 → 0    0 → 1    1 → 0    1 → 1.

There is no quantifier-free interpretation of the type 1 → 0, and for the remaining types there is exactly one quantifier-free interpretation, which is easily seen to be derivable. We are left with the case when the output type is Γ∗. If the input type is 0, then the quantifier-free interpretation necessarily produces the empty list, and it is therefore derivable.
If the input type is 1, then the function always produces the same output, which is either the empty list, in which case it can be derived using the list constructor, or a singleton list [A] for some fixed structure A ∈ Γ. In the singleton case, we can use the induction assumption to derive the function 1 ↦ A, and pack the result as a list using the list unit operation.

A.2.5 The input type is a list. We now arrive at the most interesting case in the proof, which is when the input type is a list Σ∗. Because the previously studied cases A.2.2 and A.2.3 cannot be applied, the output type is one of 0, 1, or Γ∗. When the output type is 0, there is only one possible function, which is easily derivable. The output type 1 is impossible, since the function could not handle an empty list on the input. We are left with a list-to-list function. To prove the inductive step for such functions, we use the analysis from the following claim.

Claim A.4. For every quantifier-free interpretation f : Σ∗ → Γ∗ one can find quantifier-free interpretations
f1, . . . , fk : Σ∗ → Γ∗ with disjoint universe formulas such that f is equal to the function

A ∈ Σ∗ ↦ f1(A)⋯fk(A),

where ⋯ denotes list concatenation, and each fi has one of the following properties:

1. all output lists of fi have length at most one;

2. there is some quantifier-free interpretation g : Σ → Γ∗ such that fi is equal to

[A1, . . . , An] ↦ g(A1)⋯g(An),

again with ⋯ denoting list concatenation;

3. as in item 2, but with reverse list order g(An)⋯g(A1).
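To fix intuitions about the shape of this decomposition, here is a small Haskell sketch with hypothetical per-element functions (it is not the construction from the proof): the first piece outputs at most one element, the second applies a per-element function in list order and concatenates, the third does the same in reverse order, and the whole function is the concatenation of the pieces.

-- Hypothetical per-element functions, used only for illustration.
g2, g3 :: Int -> [String]
g2 x = [show x]
g3 x = [show (negate x)]

f1, f2, f3 :: [Int] -> [String]
f1 xs = if null xs then [] else ["summary"]   -- kind 1: output length at most one
f2 = concatMap g2                             -- kind 2: g applied in list order
f3 = concatMap g3 . reverse                   -- kind 3: g applied in reverse order

-- The decomposed function is the concatenation of the three pieces.
f :: [Int] -> [String]
f xs = f1 xs ++ f2 xs ++ f3 xs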
Before proving the claim, we use it to complete the induction step of the lemma in the present list-to-list case. Apply Claim A.4 to the function f, yielding a decomposition into functions f1, . . . , fk. The induction assumption can be applied to these functions, since item 1 in the claim gives a smaller output type (namely Γ instead of Γ∗ for the only list element), while the remaining two items give smaller input types. Finally, these derivations can be combined into a derivation of f, using the pairing operation from Lemma A.2, the function for list concatenation from Example ??, and the prime function

(Σ × Γ)∗ → Σ∗ × Γ∗ (list distribute),

which is used to separate the domains of the functions f1, . . . , fk from the input list. It remains to prove the claim.

Proof (of Claim A.4). Consider the universe formula 𝜑(x) of f. Decompose this formula as a finite union

𝜑(x) = ⋁_{𝜎 ∈ Φ} 𝜎(x)

of quantifier-free theories as in Definition 3.3, i.e. quantifier-free formulas that specify all relations satisfied by x. Take some input structure in Σ∗. For elements of this structure that satisfy the universe formula, there are two orders: the input order, which describes the order in the input list A = [A1, . . . , An] ∈ Σ∗, and the output order, which describes the order in the output list f(A) = [B1, . . . , Bm] ∈ Γ∗.
In the proof of the claim, we will analyze the relationship between these two orders. Both of these orders are reflexive, total, and transitive, but not necessarily anti-symmetric, since two elements may belong to the same list element. For an element a in an input structure A ∈ Σ∗ that satisfies the universe formula 𝜑(x), the unary theory of a is defined to be the unique quantifier-free theory 𝜎 ∈ Φ that is satisfied by a. If a is strictly smaller than b in the input order, then by compositionality, the output order on a and b will be uniquely determined by the unary theories of the two individual elements a and b. This means that exactly one of the following three implications must hold: if a is strictly before b in the input order and the unary theories of a and b are 𝜎 and 𝜏, then (i) a is strictly before b in the output order, (ii) a is equivalent to b in the output order, or (iii) a is strictly after b in the output order. Depending on which implication holds, we write 𝜎 < 𝜏, 𝜎 ∼ 𝜏, or 𝜎 > 𝜏, respectively.

Before continuing, we make two cautionary remarks about the notation involving the relations < and > described above. The first cautionary remark is that < and > describe relations that are not necessarily converses of each other, since 𝜎 < 𝜏 and 𝜏 > 𝜎 do not mean the same thing; one of these conditions could be true without the other one being true. The second cautionary remark is that 𝜎 < 𝜏 is not necessarily obtained from some partial order by looking at strictly growing pairs. For example, we could have both 𝜎 < 𝜏 and 𝜏 < 𝜎.
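For intuition only (this is not part of the proof), three toy list-to-list functions in the same Haskell analogy illustrate the three relations: a map-like function preserves the relative order of any two input elements, so 𝜎 < 𝜏 (and also 𝜏 < 𝜎, matching the second cautionary remark); reverse inverts it, so 𝜎 > 𝜏; and a function that puts the whole input into a single output list element makes any two elements equivalent, so 𝜎 ∼ 𝜏.

-- Order preserved: for any two input elements, output order follows input order.
orderPreserving :: [Int] -> [Int]
orderPreserving = map (* 2)

-- Order inverted: earlier input elements come later in the output.
orderReversing :: [Int] -> [Int]
orderReversing = reverse

-- Order collapsed: all input elements land in one output list element.
orderCollapsing :: [Int] -> [[Int]]
orderCollapsing xs = [xs]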
To prove the claim, we make five observations about the relations <, > and ∼. In these observations, we use partial equivalence relations; a partial equivalence relation is defined to be a binary relation that is symmetric and transitive but not necessarily reflexive. Equivalence classes of partial equivalence relations are defined in the expected way; the only difference is that some elements of the domain might not belong to any equivalence class.

1. The first observation is that 𝜎 ∼ 𝜏 is a partial equivalence relation. It is easy to see that the relation 𝜎 ∼ 𝜏 is transitive. We now argue that it is symmetric. (This is not immediately obvious.) Suppose that 𝜎 ∼ 𝜏. Consider a list A ∈ Σ∗ with four distinguished elements a1 < a2 < a3 < a4 in the input order, whose unary types are 𝜎, 𝜏, 𝜎, 𝜏, respectively. From the assumption 𝜎 ∼ 𝜏 we can conclude that the three pairs (a1, a2), (a1, a4) and (a3, a4) belong to the same elements in the output list. Since belonging to the same element in the output list is a transitive relation, we can deduce that a2 and a3 belong to the same element in the output list, thus establishing 𝜏 ∼ 𝜎.

2. The next observation is that (𝜎 < 𝜏 ∧ 𝜏 < 𝜎) is a partial equivalence relation.
It is symmetric by definition, and it is transitive because each of the two conjuncts is transitive.

3. By the same proof as in the previous item, (𝜎 > 𝜏 ∧ 𝜏 > 𝜎) is a partial equivalence relation.

4. We now show that the equivalence classes of the partial equivalence relations described in the first three observations are disjoint, and give a partition Φ = Φ1 ∪ ⋯ ∪ Φm of all unary types in Φ. For every 𝜎 ∈ Φ, we have exactly one of the cases 𝜎 ∼ 𝜎, 𝜎 < 𝜎, or 𝜎 > 𝜎. This proves that every 𝜎 belongs to exactly one of the equivalence classes in the previous three items.

5. The last observation is that the order on equivalence classes in the previous item can be chosen so that for all i < j we have

𝜎 ∈ Φi and 𝜏 ∈ Φj  ⇒  𝜎 < 𝜏.

Let Φi and Φj be different equivalence classes from the previous item. For every 𝜎 ∈ Φi and 𝜏 ∈ Φj we have exactly one of the three cases 𝜎 < 𝜏 or 𝜎 > 𝜏 or 𝜎 ∼ 𝜏. The third case cannot hold, since otherwise Φi and Φj would be in the same equivalence class from the first observation. Therefore, one of the first two cases must hold. A short analysis, which is left to the reader, also shows that which of the two cases holds (first or second) does not depend on the choice of 𝜎 and 𝜏. This means that there is an unambiguous order relationship between Φi and Φj, and this relationship can be used to prove this fifth observation.
Let Φ1, . . . , Φm be as in the last of the above observations. We know that for every input structure A ∈ Σ∗, the output list can be decomposed as f(A) = f1(A)⋯fm(A), where fi is the function obtained from f by restricting the output elements to those that have type from Φi in the input structure. To complete the proof of the claim, we will show that each function fi is of one of the three kinds in the statement of the claim.

Suppose first that Φi is an equivalence class defined by σ ∼ τ, as in the first observation. This means that all outputs produced by fi are equivalent in the output order. Hence this fi is of kind 1 as in the statement of the claim. Suppose now that Φi is an equivalence class defined by (σ < τ ∧ τ < σ), as in the second observation. This means that for every input list A ∈ Σ∗, if we take two elements a and b that have unary theory in Φi, then a is strictly before b in the input order if and only if a is strictly before b in the output order. Hence this fi is of kind 2 as in the statement of the claim. A symmetric argument works for an equivalence class defined by (σ > τ ∧ τ > σ), except that this time the output order is reversed, giving a function as in item 3 of the lemma. ◻
A.2.6 The input type is a product.
The final case in the proof of Lemma A.3 is when the input type is a product. Since all types are in disjunctive normal form, the input type is a product Σ = Σ1 × ⋯ × Σm, where each Σi is either 1 or a list. (The type 0 can be removed from a product.) Because the previously studied cases A.2.2 and A.2.3, about output types that are products or co-products, cannot be applied, the output type is either 0, 1, or a list type Γ∗. If the output type is 0, then the function is easily derivable. Consider now the case when the output type is 1. It cannot be the case that each of the input types Σ1, . . . , Σm is a list, since the quantifier-free interpretation would be unable to handle the case when all lists are empty. Therefore, one of the input types is the unit type 1, and the conclusion of the lemma can be proved by using 1 → 1. We are left with the case when the output type is of the form Γ∗. Here, we proceed in the same way as in Section A.2.5, with the corresponding version of Claim A.4 being the following claim.
The proof of the claim, which uses a similar analysis of unary quantifier-free theories as in Claim A.4, is left to the reader.

Claim A.5. For every quantifier-free interpretation f : Σ1 × ⋯ × Σm → Γ∗ (write Σ for the product Σ1 × ⋯ × Σm), one can find quantifier-free interpretations f1, . . . , fk : Σ1 × ⋯ × Σm → Γ∗ with disjoint universe formulas such that f is equal to the list concatenation A ∈ Σ ↦ f1(A)⋯fk(A), and each fi has one of the following properties:
1. all output lists of fi have length at most one; or
2. fi factors through the projection Σ1 × ⋯ × Σm → Σj for some j ∈ {1, . . . , m}.

This completes the last of the cases in the induction step, and thus also the proof of the lemma, which also completes the proof of Theorem 4.1. ◻

Figure 4. Additional polyregular prime functions from [5]:
G∗ → G   (group multiplication)
Σ → Σ × Σ   (diagonal)
Σ∗ → 1 + Σ × Σ∗   (list destructor)
(Σ + Γ)∗ → (Σ∗ + Γ∗)∗   (block)
Σ∗ → (Σ∗ × Σ∗)∗   (split)
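As an aside, the first three primes in Figure 4 also have one-line functional readings. The Haskell sketch below is purely illustrative and is not part of the formal system; in particular, a Monoid constraint stands in for the group G.

groupMult :: Monoid g => [g] -> g              -- G∗ → G (group multiplication, Monoid used here)
groupMult = mconcat

diagonal :: a -> (a, a)                        -- Σ → Σ × Σ
diagonal x = (x, x)

listDestructor :: [a] -> Either () (a, [a])    -- Σ∗ → 1 + Σ × Σ∗
listDestructor []       = Left ()
listDestructor (x : xs) = Right (x, xs)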
B Completeness for polyregular functions
In this section, we prove the completeness of the system in Theorem 5.3, i.e. we show that every polyregular function can be weakly derived. This implication is the less interesting one, since our system is designed to be powerful, i.e. it should be easy to derive functions in it. We will deduce the completeness of our system with fold from another completeness result that uses a system without fold.

We begin by describing the system that we reduce to. It has all of the combinators from Figure 2, and its prime functions are contained in those from Figure 1 plus certain additional functions that are described in Figure 4. The first three primes from Figure 4 have already been discussed in the paper, so we only explain the block and split functions. The split function, of type Σ∗ → (Σ∗ × Σ∗)∗, outputs all possible ways of splitting the input list into (prefix, suffix) pairs, as explained in the following example:

[1, 2, 3] ↦ [([], [1, 2, 3]), ([1], [2, 3]), ([1, 2], [3]), ([1, 2, 3], [])].

The other additional function is the block function of type (Σ + Γ)∗ → (Σ∗ + Γ∗)∗, which groups the elements of the input list into maximal blocks of the same type, as illustrated in the following example that uses numbers for elements of Σ and letters for elements of Γ:

[1, 2, a, 3, 4, 5, b, c] ↦ [[1, 2], [a], [3, 4, 5], [b, c]].
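To make these two primes concrete, here is a small Haskell sketch; it is only an illustration of split and block as ordinary list functions, not part of the formal system, with Either standing in for the sum type Σ + Γ.

import Data.Either (isLeft, lefts, rights)

splitPrime :: [a] -> [([a], [a])]                 -- split : Σ∗ → (Σ∗ × Σ∗)∗
splitPrime xs = [ splitAt i xs | i <- [0 .. length xs] ]

blockPrime :: [Either a b] -> [Either [a] [b]]    -- block : (Σ + Γ)∗ → (Σ∗ + Γ∗)∗
blockPrime [] = []
blockPrime xs@(x : _) = case x of
  Left _  -> let (run, rest) = span  isLeft xs in Left  (lefts run)  : blockPrime rest
  Right _ -> let (run, rest) = break isLeft xs in Right (rights run) : blockPrime rest

-- ghci> splitPrime [1, 2, 3]
-- [([],[1,2,3]),([1],[2,3]),([1,2],[3]),([1,2,3],[])]
-- ghci> blockPrime [Left 1, Left 2, Right 'a', Left 3, Left 4, Left 5, Right 'b', Right 'c']
-- [Left [1,2],Right "a",Left [3,4,5],Right "bc"]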
Theorem B.1. [5, p. 64] A function between list types is polyregular if and only if it can be derived using the prime functions and combinators from the quantifier-free system of Theorem 4.1, plus the prime functions from Figure 4.

In contrast to the system with fold from this paper, the system from the above theorem was designed to be minimal, and therefore the completeness proof for the system with fold will be a simple corollary of the completeness of the system from the above theorem. Thanks to Theorem B.1, to prove the completeness result for our system with fold, it is enough to show that (a) all prime functions in Theorem B.1 are weakly derivable; and (b) the combinators in Theorem B.1 preserve the weakly derivable functions.

Combinators. Consider first (b), about the combinators. The combinators are those from Figure 2. There is one combinator for function composition, and three combinators for functoriality. The combinators for functoriality are dealt with using the prime functions about ! commuting with the remaining constructors. The combinator for function composition is explained in a diagram over the types Σ, Γ, Δ and !^k Σ, !^ℓ Γ, !^{k+ℓ} Σ, whose arrows are labelled derivable, weakly derivable, and upgrading.
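Spelled out under the reading suggested by those labels (a function Σ → Γ counts as weakly derivable when !^k Σ → Γ is derivable for some k, which matches how weak derivability is used in the proofs below), the composition case runs as follows: given derivable functions !^k Σ → Γ and !^ℓ Γ → Δ, applying functoriality of ! to the first one ℓ times yields a derivable function !^{k+ℓ} Σ → !^ℓ Γ (the upgrading step), and composing it with !^ℓ Γ → Δ gives a derivable function !^{k+ℓ} Σ → Δ, witnessing that the composition Σ → Δ is weakly derivable.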
Prime functions. Consider now (a), about the prime functions. Clearly all prime functions in the quantifier-free system are weakly derivable, since they are even strongly derivable. Weak derivability of the additional functions for group multiplication and the list destructor was already discussed in Examples 2 and 14. The diagonal function can easily be weakly derived using absorption. We are left with the split and block functions.

Lemma B.2. Split and block are weakly derivable.

Proof. To weakly derive the split function, we use the prefixes function from Claim 5.4. If we take a list [a1, . . . , an] ∈ Σ∗ and apply prefixes, then reverse, followed by prefixes again, then the output is a list in Σ∗∗∗ of length n whose i-th element is
[[a1, . . . , an], [a1, . . . , an−1], . . . , [a1, . . . , ai]].   (2)

Since weakly derivable functions are closed under composition, this output can be produced by a weakly derivable function. Since weakly derivable functions are also closed under map, to complete the proof that split is weakly derivable, it remains to show that a weakly derivable function can transform the i-th element in (2) into the corresponding element in the output of split, namely

([a1, . . . , ai], [ai+1, . . . , an]).   (3)

This is done as follows: using the list destructor, we split the list in (2) into its head and tail. The head is reversed, while the tail is transformed so that each list element is replaced by its own head.
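The recipe can be replayed in Haskell. The sketch below is only meant to check the shape of the construction; it fixes conventions that the paper leaves to its own formalism (here prefixes includes the empty prefix, and the list destructor is read as exposing the element at the appended end, hence last and init), so the exact bookkeeping of heads and reversals differs slightly from the prose above.

prefixes :: [a] -> [[a]]
prefixes xs = [ take i xs | i <- [0 .. length xs] ]

-- the pipeline producing (2): prefixes, then reverse, then (nonempty) prefixes again
stage2 :: [a] -> [[[a]]]
stage2 = drop 1 . prefixes . reverse . prefixes

-- turn one element of (2) into the corresponding pair of (3): its last inner list is the
-- prefix, and every other inner list is replaced by its own last element to recover the suffix
toPair :: [[a]] -> ([a], [a])
toPair m = (last m, reverse (map last (init m)))

splitViaPrefixes :: [a] -> [([a], [a])]
splitViaPrefixes = map toPair . stage2

-- ghci> splitViaPrefixes [1, 2, 3]
-- [([1,2,3],[]),([1,2],[3]),([1],[2,3]),([],[1,2,3])]
-- the same pairs as in the example above, listed in the opposite order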
We now turn to the block function. One approach is to derive the block function from split, thus showing that it is not needed in the system. This is shown in [5, p. 90]. However, since we will later use a system that uses block but not split, we show how to derive block directly. To compute the block function, we use an automaton where the input alphabet is Σ + Γ, and the state space is

Δ = (Σ∗ + Γ∗)∗ × (Σ∗ + Γ∗),

with the second component storing the most recent block. The transition function is built from the prime functions append, co-projection, unit, and distribute together with its inverse, over the types (Σ∗ + Γ∗)∗, Σ∗, Γ∗ and Σ + Γ; the paper illustrates it with a diagram of which, by symmetry, only the left half is drawn. In the diagram, the unit function is the function x ↦ [x], which can be derived as in Figure 3. If we set the initial state of the above automaton to be a pair of empty lists (the second one having type, say, Σ∗), then after reading a list in !(Σ + Γ)∗, its state will store the output of the block operation, except that the last list element will be held separately and will need to be added using append. ◻
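The automaton just described can be sketched as a left fold in Haskell. This is an illustration only: an ordinary foldl and an end-of-input cleanup stand in for the safe fold combinator and the paper's extra transition cases, and grades are ignored.

type State a b = ([Either [a] [b]], Either [a] [b])   -- (finished blocks, most recent block)

step :: State a b -> Either a b -> State a b
step (done, cur) x = case (cur, x) of
  (Left ys,  Left y)  -> (done, Left  (ys ++ [y]))    -- same type: extend the current block
  (Right ys, Right y) -> (done, Right (ys ++ [y]))
  (_,        Left y)  -> (done ++ [cur], Left  [y])   -- type changes: close the block
  (_,        Right y) -> (done ++ [cur], Right [y])

blockViaFold :: [Either a b] -> [Either [a] [b]]
blockViaFold xs =
  let (done, cur) = foldl step ([], Left []) xs       -- initial state: a pair of empty lists
  in  filter nonEmpty (done ++ [cur])                 -- finally, append the held block
  where nonEmpty (Left  ys) = not (null ys)
        nonEmpty (Right ys) = not (null ys)
-- the filter only discards the initial empty block (e.g. when the input starts with a
-- Γ-element); the paper's transition diagram handles this corner case with extra branches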
B.1 A smaller system
A corollary of the completeness proof is Theorem 5.5, which shows that certain primes and combinators can be removed from the system in Theorem 5.3 while keeping it complete. We remove the map combinator, as well as all quantifier-free functions from Figure 1 that involve the list type, namely the functions

Σ∗ × Σ → Σ∗   (append)
Σ∗ → Σ∗   (reverse)
Σ∗∗ → Σ∗   (concat)
Σ → Σ × Γ∗   (create empty)
(Σ × Γ)∗ → Σ∗ × Γ∗   (list distribute)

In their place, we have only two functions:

1 + Σ → Σ∗   (lists of length at most one)
Σ∗ × Σ∗ → Σ∗   (binary list concatenation)

We will show that the smaller system remains complete, because it can weakly derive the removed functions, and furthermore, the weakly derivable functions in the smaller system are closed under the map combinator.

Proof (of Theorem 5.5). Consider first the prime functions that are removed from the smaller system. The append function can be (strongly) derived in the smaller system. Using append, we can (strongly) derive the left list constructor, whose safe folding gives the list reversal in type !Σ∗ → Σ∗. It is obtained by composing a co-projection with the right list constructor. Applying the safe fold combinator to the left list constructor (after swapping the order of its arguments) shows that the reverse function can be derived in type !Σ∗ → Σ∗, and hence it is weakly derivable. The concat function is derived in type !Σ∗∗ → Σ∗ by folding binary list concatenation. To weakly derive the create empty function, we observe that for every type Σ we can derive the unique function !Σ → 1, and this derivation can be used together with absorption to derive the create empty function in type !Σ → Σ × Γ∗.
The list distribute function can be derived in type !(Σ × Γ)∗ → Σ∗ × Γ∗ by a straightforward application of safe fold. Finally, we can also eliminate the map combinator (functoriality of ∗), since using safe fold we obtain a version of the map combinator, called weak map, which from a derivation of Γ → Σ produces a derivation of !Γ∗ → Σ∗; this weak map is strong enough to replace the usual map combinator in the completeness proof of the system in Theorem 5.3.
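These fold-based derivations have direct functional analogues. The Haskell sketch below is only illustrative: an ordinary foldl stands in for the safe fold combinator, and the !-grading that makes the fold safe is ignored.

singleton :: a -> [a]                      -- from "lists of length at most one"
singleton x = [x]

appendRight :: [a] -> a -> [a]             -- append, via binary concatenation
appendRight xs x = xs ++ singleton x

consLeft :: a -> [a] -> [a]                -- left list constructor
consLeft x xs = singleton x ++ xs

reverseViaFold :: [a] -> [a]               -- fold the left constructor with swapped arguments
reverseViaFold = foldl (flip consLeft) []

concatViaFold :: [[a]] -> [a]              -- concat, by folding binary list concatenation
concatViaFold = foldl (++) []

weakMap :: (a -> b) -> [a] -> [b]          -- the weak map rule, realized as a fold
weakMap f = foldl (\acc x -> appendRight acc (f x)) []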
Summing up, we can reduce the system as stated in the following theorem. ◻

For easier reference, the system in the above theorem is described in Figure 5.

Figure 5. A complete system for weakly deriving the polyregular functions. The safe fold combinator can only be applied when the type Γ has grade < k. The prime functions are:
1 + Σ → Σ∗   (lists of length at most one)
Σ∗ × Σ∗ → Σ∗   (binary list concatenation)
!(Γ + Σ) ↔ !Γ + !Σ   (! commutes with +)
!(Γ × Σ) ↔ !Γ × !Σ   (! commutes with ×)
(!Γ)∗ ↔ !(Γ∗)   (! commutes with ∗)
!Γ → !Γ × Γ   (absorption)
Γ × Σ ↔ Σ × Γ   (commutativity of ×)
Γ + Σ ↔ Σ + Γ   (commutativity of +)
Γ × (Σ × Δ) ↔ (Γ × Σ) × Δ   (associativity of ×)
Γ + (Σ + Δ) ↔ (Γ + Σ) + Δ   (associativity of +)
Γ × (Σ + Δ) ↔ (Γ × Σ) + (Γ × Δ)   (distributivity)
Γ1 × Γ2 → Γi   (projections)
Γi → Γ1 + Γ2   (co-projections)
Γ + Γ → Γ   (co-diagonal)
The combinators are:
from !^k 1 → Γ and Γ × Σ → Γ derive !^k Σ∗ → Γ   (safe fold)
from Γ → Σ and Σ → Δ derive Γ → Δ   (function composition)
from Γ1 → Σ1 and Γ2 → Σ2 derive Γ1 × Γ2 → Σ1 × Σ2   (functoriality of ×)
from Γ1 → Σ1 and Γ2 → Σ2 derive Γ1 + Γ2 → Σ1 + Σ2   (functoriality of +)
from Γ → Σ derive !Γ → !Σ   (functoriality of !)

C Soundness for polyregular functions
In this section, we prove the soundness implication in Theorem 5.3. We prove that every strongly derivable function is a graded mso interpretation. The prime functions from Figure 1 are quantifier-free, and therefore they are a special case of graded mso interpretations. The extra prime functions from Theorem 5.3, namely absorption and those about ! commuting with the remaining type constructors, are easily seen to be graded mso interpretations.
The combinators for functoriality are also easily seen to preserve graded mso interpretations. There are two interesting cases, namely the combinators for function composition and safe fold.

C.1 Function composition
We first show that the graded mso interpretations are closed under composition, as long as the input and output types are graded list types. Consider two graded mso interpretations f1 : Σ → Γ and f2 : Γ → Δ. We want to show that their composition f2 ○ f1 : Σ → Δ is a graded mso interpretation. Let the corresponding polynomial functors be F1 and F2. The key tool is the following lemma.

Lemma C.1. For every k, r ∈ {0, 1, . . .}, the following function is mso definable.
Input: A structure A ∈ Σ with distinguished elements ā ∈ A^k.
Output: The rank r mso theory of the tuple F(ā) in f1(A).

Proof. This lemma reduces to closure under composition of mso interpretations for list types [9, Corollary 8]. The result that we reduce to is non-trivial, and it depends on the fact that the input and output types are list types.
◻

Thanks to the above lemma, we can use a standard composition construction, with the polynomial functor for the composition being the composition F2 ○ F1 of the corresponding polynomial functors.

C.2 Safe fold
We are left with showing that graded mso interpretations are closed under the safe fold combinator. All of the conceptual pieces are already in place, and we will simply show that the proof of Theorem 3.2 works, with minor adjustments to take into account the added generality of graded structures. Suppose Γ is a type where all grades are < k, and we apply the safe fold combinator to graded mso interpretations of types !^k 1 → Γ and Γ × Σ → Γ, yielding a function of type !^k Σ∗ → Γ. By choice of k, in the resulting function every element in the input structure has strictly bigger grade than every element in the output structure. For such functions, the continuity condition in Definition 5.7 becomes trivial, and there is no difference between graded and un-graded mso interpretations. Therefore, in order to prove the soundness of fold, it is enough to show the following lemma, which states that applying fold to a graded mso interpretation yields an (ungraded) mso interpretation.

Lemma C.2. For every graded mso interpretation δ : Γ × Σ → Γ between graded list types, and every B0 ∈ Γ, the following function is an (ungraded) mso interpretation:
A = [A1, . . . , An] ↦ Bn,

where A is a list of structures in Σ with the grades forgotten, and Bn is defined based on A as in the proof of Claim 3.4.

Proof. Let m be the maximal grade that appears in Γ, and let the polynomial functor in the transition function δ be

F(A) = F0(A) + ⋯ + Fm(A) + A.

By the continuity condition for the graded mso interpretation δ, the elements of grade ℓ in Bn are the disjoint union of two sets:
1. grade ℓ elements in Bn−1 or An; or
2. Fℓ applied to grade > ℓ elements in Bn−1 or An.

By unfolding the inductive definition of Bn−1 in the first item of the above description, we see that the elements of grade ℓ in Bn are the disjoint union of two sets:
1*. grade ℓ elements in B0 or A1, . . . , An; or
2*. Fℓ applied to grade > ℓ elements in Bi−1 or Ai, for some i ∈ {1, . . . , n}.
We will represent the elements that satisfy 1* or 2* as a subset of Gℓ(A), for some polynomial functor Gℓ. This functor is defined by induction on ℓ, in reverse order m, . . . , 0. Suppose that we want to define Gℓ, and assume that we have already defined Gℓ′ for ℓ′ > ℓ. (In the induction basis ℓ = m, the assumption is empty.) To represent the elements in item 1*, we use the functor

A + 1 + ⋯ + 1,

with one summand 1 for each element of B0 that has grade ℓ. A tempting idea for item 2* is to use the functor

Hℓ(A) = Fℓ(Gℓ+1(A) + ⋯ + Gm(A) + A),

where the final summand A represents the elements of grade > m in the input structure. Unfortunately, this idea is not correct. The reason is that in item 2*, there is a disjoint union ranging over i ∈ {1, . . . , n}, and the disjointness of this union is not taken into account by Hℓ. The problem is that the universes of the structures B0, . . . , Bn are not disjoint, and the functor Hℓ can incorrectly identify elements that are obtained by applying Fℓ to the same elements that appear in both Bi and Bj for i ≠ j.
To eliminate this problem, we will add an explicit identifier for the index i to the functor. To view the index i as an element of the input structure Ai, we use the first element in the universe of the corresponding list element Ai. Here, when we refer to the first element in the universe, we mean the first element with respect to the natural linear order on the universe of a structure from a graded list type, which arises from the ordered nature of lists and pairs. Therefore, instead of Hℓ(A), to represent item 2* we use the product A × Hℓ(A), with the A part representing the index i. Summing up, the functor Gℓ that describes the elements in each Bi is

Gℓ(A) = A + 1 + ⋯ + 1 + A × Hℓ(A),

again with one summand 1 for each element of B0 that has grade ℓ. In the rest of this proof, we will view the universe of Bn as being a subset of G(A) = G0(A) + ⋯ + Gm(A), with Gℓ(A) representing the elements of grade ℓ. The polynomial functor G(A) will be the polynomial functor for the mso interpretation in the conclusion of the lemma. To conclude the proof of the lemma, we need to show that in mso we can define which elements of G(A) belong to the universe of Bn, and what relations from the output vocabulary are satisfied by tuples of such elements. In other words, we need to define in mso the quantifier-free theory of tuples from G(A) in the output structure. This is done in the following claim, which completes the proof of the lemma.
Claim C.3. For every ℓ, k ∈ {0, 1, . . .}, the following function is mso definable:
Input: A structure A ∈ Σ∗ with elements ā ∈ A^k.
Output: The quantifier-free theory of G(ā) in Bn|ℓ.
Furthermore, the output depends only on A and ā restricted to elements of grade at least ℓ.

Proof. Fix some ℓ and k as in the statement of the claim. The claim is proved by induction on ℓ, in reverse order m, . . . , 0. Suppose that we want to prove the claim for some grade ℓ, and assume that it has already been proved for strictly bigger grades. We use the same idea as in the proof of Claim 3.4. Consider a finite automaton in which the states are all possible theories that arise by taking some k-tuple ā and returning the quantifier-free theory of G(ā) in some structure from Γ. This set of states is finite, since the length of the tuple and the vocabulary are fixed. We will design an automaton with this set of states, together with an input string (which will be called the advice string), so that it satisfies the following invariant: after reading the first i letters of the advice string, the state of the automaton is the quantifier-free theory of G(ā) in Bi|ℓ. The initial state of the automaton is determined by the invariant: it must be the quantifier-free theory of G(ā) in B0.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Since the universe of 𝐵0 is equal to 𝐺(∅), it follows that Folding interpretations Conference’17, July 2017, Washington, DC, USA the initial state does not depend on the tuple ¯𝑎 or the input structure 𝐴.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' We now describe the transition function of the automaton, as well as the advice string.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' By unfolding the definition of the graded mso interpretation 𝛿, there is some quantifier rank 𝑠 such that the state of the automaton after reading 𝑖 letters is uniquely determined by the following four pieces of information: 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' the quantifier-free theory of 𝐺(¯𝑎) in 𝐵𝑖−1, 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' the quantifier-free theory of 𝐺(¯𝑎) in 𝐴𝑖, 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' the rank 𝑠 mso theory of 𝐺(¯𝑎) in 𝐵𝑖−1⋃︀ℓ + 1, 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' the rank 𝑠 mso theory of 𝐺(¯𝑎) in 𝐴𝑖⋃︀ℓ + 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The first piece of information is the previous state of the automaton.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The remaining infomration will be the stored in the advice string;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' the 𝑖-th letter of the advice string will contain the information described the last three items above.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Note that the advice string can be computed in mso, by the induction assumption.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Therefore, since the automaton can be simulated in mso, it follows that the last state of this automaton can be defined in mso, thus proving the claim.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' ◻ ◻ D Proof of Theorem 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='1 In this section, we prove that the system in Theorem 5.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='3 is sound and complete with respect to linear regular functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Soundness.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The soundness proof follows the same lines as the soundness proof in Theorem 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The general idea is that we use graded mso interpretations where all components have dimension at most one.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' This, however, on its own is not going to be enough.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' To see why, let us compare the two absorption functions !' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='Σ → Σ×!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='Σ )︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ not allowed !' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='Σ → Σ × Σ )︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ allowed .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Both of them have linear size increase – each element of the input structure contributes two copies to the output structure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' What is wrong with the function that is not allowed?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The problem is that one of the copies has the same grade, and the other has lower grade.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' In the presence of folding, we can get an unbounded number of copies, by spawning a new lower grade copy in each iteration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' This phenomenon will not occur in the allowed function, since both copies have lower grade.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The phenomenon discussed above is formalised in the following definition: Definition D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' A linear graded mso interpretation is a graded mso interpretation in which the underlying functor is linear, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' all components have dimension one, and which furthermore satisfies the following downgrading condition: if an element of the input structure has at least two copies in the output structure, then all of the copies have strictly lower grade.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' In the definition above, the copies of an element in the output structure are defined in the natural way;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' this defini- tion makes sense when the functor is linear.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' For example, if the functor is 𝐴 + 𝐴 + 𝐴 + 1 + 1 then each input element spawns at most three copies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The components of dimension zero, of which there are two in the above example, are not counted as copies of any input elment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' To prove completeness of the system from Theorem 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='1, we show that all functions that are strongly derived in it are linear graded mso interpretations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The proof is a simple inducton on the derivation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' The most interesting cases are composition and folding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' For composition, we simply observe that the condition on lower grades from Definition D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='1 is preserved under composition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' We are left with folding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' where we use the following lemma, which is the same as Lemma C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='2 except that the functions in the assumption and conclusion are required to be linear.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' In the assumption, we use linearity as defined in Definition D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='1, in particular the downgrading condition is assumed;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' in the conclusion we have an ungraded function, and therefore only the linearity of the functor and not the downgrading condition are assumed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Lemma D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' For every linear graded mso interpretation 𝛿 ∶ Γ × Σ → Γ, between graded list types, and every 𝐵0 ∈ Γ, the following function is an (ungraded) linear mso interpretation 𝐴 = (︀𝐴1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=',𝐴𝑛⌋︀ )︁⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂]︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂⌊︂)︂ list of structures in Σ, with the grades forgotten ↦ 𝐵𝑛 ⃒ defined based on 𝐴 as in the proof of Calim 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='4 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Proof We use the same proof as in Lemma C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' However, there is one difficulty, which is that the functor 𝐺 defined in that proof is not linear, even if 𝛿 is linear.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' This is because of the product 𝐴 × 𝐻ℓ(𝐴) which is used to code indexes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' In fact, the functor 𝐺 can have arbitrarily high dimension.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' How- ever, thanks to the downgrading condition on 𝛿, one show by induction that for every grade ℓ there is some constant 𝑐ℓ ∈ {0, 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='} such that for every grade ℓ element 𝑎 in the input structure, there are at most 𝑐ℓ elements in the output structure which use 𝑎.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Here, we say that an element uses 𝑎 if it belongs to 𝐺(𝐴) but not to 𝐺(𝐴 ∖ {𝑎}).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Using this property, we can turn 𝐺 into a linear functor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' ◻ This finishes the soundness proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Below, we give two completeness proofs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' First completeness proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' This proof uses the sst model from Example 7, which is complete for linear regular func- tions, in the case where the input and output types are strings over finite alphabets [1, Theorem 3].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' In Example 7, we show Conference’17, July 2017, Washington, DC, USA Mikołaj Bojańczyk (University of Warsaw) how to weakly derive every sst that uses each input letter at most once.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' To get the general form of sst, where an input letter can be used a constant number of times, it is enough to generalize the model from Example 7 so that the initial function is weakly derivable, and the transition function can be derived in type Δ×!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='𝑘Σ → Δ for some 𝑘.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' With these relaxations, we get all copyless sst, and retain weak derivability.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' This proof works only for func- tions of string-to-string type (admittedly, this is the case that we really care about), and for this reason we also present a second proof, which can also handle types such as strings of strings or pairs of strings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' Second completeness proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content=' In this proof, similarly to the completeness proof from Theorem 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DNE4T4oBgHgl3EQfew0W/content/2301.05101v1.pdf'} +page_content='3, we reduce to a known complete system.' 
In the case of linear mso interpretations, the corresponding known system is from [7]. It is the same as in Theorem B.1, except that the split function is removed. In the completeness proof of Theorem 5.3, only the proof for split used general absorption (as opposed to linear absorption). Therefore, the system with linear absorption is complete for the linear regular functions. This completes the second completeness proof, and thus also the proof of the theorem.
diff --git a/E9FLT4oBgHgl3EQfFi_K/content/tmp_files/2301.11988v1.pdf.txt b/E9FLT4oBgHgl3EQfFi_K/content/tmp_files/2301.11988v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f5a286e2fd1f29c2c97f565712a895f9e2edc6fb
--- /dev/null
+++ b/E9FLT4oBgHgl3EQfFi_K/content/tmp_files/2301.11988v1.pdf.txt
@@ -0,0 +1,970 @@
arXiv:2301.11988v1 [cs.DC] 27 Jan 2023
Energy-Efficient Distributed Algorithms for Synchronous Networks⋆
Pierre Fraigniaud1⋆⋆, Pedro Montealegre2, Ivan Rapaport3⋆ ⋆ ⋆, and Ioan Todinca4
1 Institut de Recherche en Informatique Fondamentale (IRIF), CNRS and Université Paris Cité, Paris, France. pierre.fraigniaud@irif.fr
2 Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Santiago, Chile p.montealegre@uai.cl
3 Departamento de Ingeniería Matemática - Centro de Modelamiento Matemático (UMI 2807 CNRS), Universidad de Chile, Santiago, Chile rapaport@dim.uchile.cl
4 Laboratoire d’informatique fondamentale d’Orléans (LIFO), Université d’Orléans, Orléans, France Ioan.Todinca@univ-orleans.fr
Abstract. We study the design of energy-efficient algorithms for the LOCAL and CONGEST models. Specifically, as a measure of complexity, we consider the maximum, taken over all the edges, or over all the nodes, of the number of rounds at which an edge, or a node, is active in the algorithm. We first show that every Turing-computable problem has a CONGEST algorithm with constant node-activation complexity, and therefore constant edge-activation complexity as well. That is, every node (resp., edge) is active in sending (resp., transmitting) messages for only O(1) rounds during the whole execution of the algorithm. In other words, every Turing-computable problem can be solved by an algorithm consuming the least possible energy. In the LOCAL model, the same holds obviously, but with the additional feature that the algorithm runs in O(poly(n)) rounds in n-node networks.
However, we show that +insisting on algorithms running in O(poly(n)) rounds in the CONGEST +model comes with a severe cost in terms of energy. Namely, there are +problems requiring Ω(poly(n)) edge-activations (and thus Ω(poly(n)) +node-activations as well) in the CONGEST model whenever solved by +algorithms bounded to run in O(poly(n)) rounds. Finally, we demon- +strate the existence of a sharp separation between the edge-activation +complexity and the node-activation complexity in the CONGEST model, +for algorithms bounded to run in O(poly(n)) rounds. Specifically, under +this constraint, there is a problem with O(1) edge-activation complexity +but ˜Ω(n1/4) node-activation complexity. +Keywords: Synchronous distributed algorithms · LOCAL and CON- +GEST models · Energy efficiency. +⋆ This work was performed during the visit of the first and last authors to Universidad +de Chile, and to Universidad Adolfo Ibañez, Chile. +⋆⋆ Additional support from ANR project DUCAT (ref. ANR-20-CE48-0006). +⋆ ⋆ ⋆ Additional support from ANID via PIA/Apoyo a Centros Cientificos y Tecnológicos +de Excelencia AFB 170001 and Fondecyt 1220142. + +2 +P. Fraigniaud, P. Montealegre, I. Rapaport, I. Todinca +1 +Introduction +1.1 +Objective +Designing computing environments consuming a limited amount of energy while +achieving computationally complex tasks is an objective of utmost importance, +especially in distributed systems involving a large number of computing entities. +In this paper, we aim at designing energy-efficient algorithms for the standard +LOCAL and CONGEST models of distributed computing in networks [11]. Both +models assume a network modeled as an n-node graph G = (V, E), where each +node is provided with an identifier, i.e., an integer that is unique in the network, +which can be stored on O(log n) bits. All nodes are assumed to run the same +algorithm, and computation proceeds as a series of synchronous rounds (all nodes +start simultaneously at round 1). During a round, every node sends a message to +each of its neighbors, receives the messages sent by its neighbors, and performs +some individual computation. The two models LOCAL and CONGEST differ +only in the amount of information that can be exchanged between nodes at each +round. +The LOCAL model does not bound the size of the messages, whereas the +CONGEST model allows only messages of size O(log n) bits. Initially, every +node v ∈ V knows solely its identifier id(v), an upper bound of the number n of +nodes, which is assumed to be polynomial in n and to be the same for all nodes, +plus possibly some input bit-string x(v) depending on the task to be solved by +the nodes. In this paper, we denote by N the maximum between the largest +identifier and the upper bound on n given to all nodes. Hence N = O(poly(n)), +and is supposed to be known by all nodes. After a certain number of rounds, +every node outputs a bit-string y(v), where the correctness of the collection of +outputs y = {y(v) : v ∈ V } is defined with respect to the specification of the +task to be solved, and may depend on the collection of inputs x = {x(v) : v ∈ V } +given to the nodes, as well as on the graph G (but not on the identifiers assigned +to the nodes, nor on the upper bound N). +Activation complexity. We measure the energy consumption of an algorithm A +by counting how many times each node and each edge is activated during the +execution of the algorithm. 
More specifically, a node v (resp., an edge e) is +said to be active at a given round r if v is sending a message to at least one +of its neighbors at round r (resp., if a message traverses e at round r). The +node-activation and the edge-activation of an algorithm A running in a graph +G = (V, E) are respectively defined as +nact(A) := max +v∈V #activation(v), and eact(A) := max +e∈E #activation(e), +where #activation(v) (resp., #activation(e)) denotes the number of rounds dur- +ing which node v (resp., edge e) is active along the execution of the algorithm A. +By definition, we have that, in any graph of maximum degree ∆, +eact(A) ≤ 2 · nact(A), +and nact(A) ≤ ∆ · eact(A). +(1) + +Energy-Efficient Distributed Algorithms +3 +Objective. Our goal is to design frugal algorithms, that is, algorithms with con- +stant node-activation, or to the least constant edge-activation, independent of +the number n of nodes and of the number m of edges. Indeed, such algorithms +can be viewed as consuming the least possible energy for solving a given task. +Moreover, even if the energy requirement for solving the task naturally grows +with the number of components (nodes or edges) of the network, it grows linearly +with this number whenever using frugal algorithms. We refer to node-frugality +or edge-frugality depending on whether we focus on node-activation or edge- +activation, respectively. +1.2 +Our Results +We first show that every Turing-computable problem5 can thus be solved by a +node-frugal algorithm in the LOCAL model as well as in the CONGEST model. +It follows from Eq. 1 that every Turing-computable problem can be solved by +an edge-frugal algorithm in both models. In other words, every problem can +be solved by an energy-efficient distributed algorithm. One important question +remains: what is the round complexity of frugal algorithms? +In the LOCAL model, our node-frugal algorithms run in O(poly(n)) rounds. +However, they may run in exponentially many rounds in the CONGEST model. +We show that this cannot be avoided. Indeed, even if many symmetry-breaking +problems such as computing a maximal-independent set (mis) and comput- +ing a (∆ + 1)-coloring can be solved by a node-frugal algorithm performing in +O(poly(n)) rounds, we show that there exist problems (e.g., deciding C4-freeness +or deciding the presence of symmetries in the graph) that cannot be solved in +O(poly(n)) rounds in the CONGEST model by any edge-frugal algorithm. +Finally, we discuss the relation between node-activation complexity and edge- +activation complexity. We show that the bounds given by Eq. 1 are essentially +the best that can be achieved in general. Precisely, we identify a problem, namely +Depth First Pointer Chasing (dfpc), which has edge-activation complexity +O(1) for all graphs with an algorithm running in O(poly(n)) rounds in the CON- +GEST model, but satisfying that, for every ∆ = O +Ä +n1/4 +√log n +ä +, its node-activation +complexity in graphs with maximum degree ∆ is Ω(∆) whenever solved by an +algorithm bounded to run in O(poly(n)) rounds in the CONGEST model. In +particular, Depth First Pointer Chasing has constant edge-activation com- +plexity but node-activation complexity ˜Ω(n1/4) in the CONGEST model (for +O(poly(n))-round algorithms). +Our main results are summarized in Table 1. +Our Techniques. Our upper bounds are mostly based on similar types of up- +per bounds techniques used in the sleeping model [2,4] (cf. 
Section 1.3), based on constructing spanning trees along with gathered and broadcasted information. However, the models considered in this paper do not suffer from the same limitations as the sleeping model (cf. Section 2), and thus one can achieve activation complexity O(1) in scenarios where the sleeping model limits the awake complexity to Ω(log n).
5 A problem is Turing-computable if there exists a Turing machine that, given any graph with identifiers and inputs assigned to the nodes, computes the output of each node in the graph.
LOCAL / Awakeness: ∀Π, Π ∈ O(log n) with O(poly(n)) rounds [2]; st ∈ Ω(log n) [2].
LOCAL / Node-Activation: ∀Π, Π ∈ O(1) with O(poly(n)) rounds.
LOCAL / Edge-Activation: ∀Π, Π ∈ O(1) with O(poly(n)) rounds.
CONGEST / Awakeness: mis ∈ O(polyloglog(n)) with O(polylog(n)) rounds [6] (randomized); mst ∈ O(log n) with O(poly(n)) rounds [1].
CONGEST / Node-Activation: ∀Π, Π ∈ O(1); poly(n) rounds ⇒ ∃Π ∈ Ω(poly(n)); poly(n) rounds ⇒ dfpc ∈ ˜Ω(n^{1/4}).
CONGEST / Edge-Activation: ∀Π, Π ∈ O(1); poly(n) rounds ⇒ ∃Π ∈ Ω(poly(n)); dfpc ∈ O(1) with O(poly(n)) rounds; Π ∈ FO and ∆ = O(1) ⇒ Π ∈ O(1) with O(poly(n)) rounds [8].
Table 1. Summary of our results where, for a problem Π, Π ∈ O(f(n)) means that the corresponding complexity of Π is O(f(n)) (same shortcut for Ω).
Our lower bounds for CONGEST are based on reductions from 2-party communication complexity. However, as opposed to the standard CONGEST model in which the simulation of a distributed algorithm by two players is straightforward (each player performs the rounds sequentially, one by one, and exchanges the messages sent across the cut between the two subsets of nodes handled by the players at each round), the simulation of distributed algorithms in which only subsets of nodes are active at various rounds requires more care. This is especially the case when the simulation must not only control the amount of information exchanged between these players, but also the number of communication steps performed by the two players. Indeed, there are 2-party communication complexity problems that are hard for k steps, but trivial for k + 1 steps [10], and some of our lower bounds rely on this fact.
1.3 Related Work
The study of frugal algorithms has been initiated in [8], which focuses on the edge-frugality in the CONGEST model. It is shown that for bounded-degree graphs, any problem expressible in first-order logic (e.g., C4-freeness) can be solved by an edge-frugal algorithm running in O(poly(n)) rounds in the CONGEST model. This also holds for planar graphs with no bounds on the maximum degree, whenever the nodes are provided with their local combinatorial embedding. Our results show that these statements cannot be extended to arbitrary graphs as we prove that any algorithm solving C4-freeness in O(poly(n)) rounds in the CONGEST model has edge-activation ˜Ω(√n).
More generally, the study of energy-efficient algorithms in the context of distributed computing in networks has been previously considered in the framework of the sleeping model, introduced in [4]. This model assumes that nodes can be in two states: awake and asleep. A node in the awake state performs as in the LOCAL and CONGEST models, but may also decide to fall asleep, for a prescribed amount of rounds, controlled by each node, and depending on the algorithm executed at the nodes.
A sleeping node is totally inactive in the sense +that it does not send messages, it cannot receive messages (i.e., if a message is +sent to a sleeping node by an awake neighbor, then the message is lost), and +it is computationally idle (apart from counting rounds). The main measure of +interest in the sleeping model is the awake complexity, defined as the maximum, +taken over all nodes, of the number of rounds at which each node is awake during +the execution of the algorithm. +In the LOCAL model, it is known [2] that all problems have awake complexity +O(log n), using algorithms running in O(poly(n)) rounds. This bound is tight in +the sense that there are problems (e.g., spanning tree construction) with awake +complexity Ω(log n) [2,3]. +In the CONGEST model, It was first shown [4] that mis has constant average +awake complexity, thanks to a randomized algorithm running in O(polylog(n)) +rounds. The round complexity was improved in [7] with a randomized algo- +rithm running in O(log n) rounds. The (worst-case) awake complexity of mis +was proved to be O(log log n) using a randomized Monte-Carlo algorithm run- +ning in O(poly(n)) rounds [6]. This (randomized) round complexity can even +be reduced to O(log3 n · log log n · log⋆ n), to the cost of slightly increasing the +awake complexity to O(log log n · log⋆ n). mst has also been considered, and it +was proved [1] that its (worst-case) awake complexity is O(log n) thanks to a +(deterministic) algorithm running in O(poly(n)) rounds. The upper bound on +the awake complexity of mst is tight, thank to the lower bound for spanning +tree (st) in [2]. +2 +Preliminaries +In this section, we illustrate the difference between the standard LOCAL and +CONGEST models, their sleeping variants, and our node- and edge-activation +variants. Fig. 1(a) displays the automaton corresponding to the behavior of a +node in the standard models. A node is either active (A) or terminated (T). At +each clock tick (i.e., round) a node is subject to message events corresponding to +sending and receiving messages to/from neighbors. A node remains active until +it terminates. +Fig. 1(b) displays the automaton corresponding to the behavior of a node in +the sleeping variant. In this variant, a node can also be in a passive (P) state. In +this state, the clock event can either leave the node passive, or awake the node, +which then moves back to the active state. +Finally, Fig. 1(c) displays the automaton corresponding to the behavior of +a node in our activation variants. It differs from the sleeping variant in that a + +6 +P. Fraigniaud, P. Montealegre, I. Rapaport, I. Todinca +A +P +T +clock +msg +msg +msg +clock +clock +clock +A +P +T +clock +msg +clock +clock +clock +A +T +clock +msg +(a) +(b) +(c) +Fig. 1. (a) Classical model (b) Sleeping model, (c) Activation model. +passive node is also subject to message events, which can leave the node passive, +but may also move the node to the active state. In particular, a node does not +need to be active for receiving messages, and incoming messages may not trigger +an immediate response from the node (e.g., forwarding information). Instead, a +node can remain passive while collecting information from each of its neighbors, +and eventually react by becoming active. +Example 1: Broadcast. Assume that one node of the n-node cycle Cn has a token +to be broadcast to all the nodes. Initially, all nodes are active. 
However, all nodes +but the one with the token become immediately passive when the clock ticks for +entering the second round. The node with the token sends the token to one of +its neighbors, and becomes passive at the next clock tick. Upon reception of the +token, a passive node becomes active, forwards the token, and terminates. When +the source node receives the token back, it becomes active, and terminates. The +node-activation complexity of broadcast is therefore O(1), whereas it is known +that broadcasting has awake complexity Ω(log n) in the sleeping model [2]. +Example 2: At-least-one-leader. Assume that each node of the cycle Cn has an +input-bit specifying whether the node is leader or not, and the nodes must col- +lectively check that there is at least one leader. Every leader broadcasts a token, +outputs accept, and terminates. Non-leader nodes become passive immediately +after the beginning of the algorithm, and start waiting for N rounds (recall that +N is an upper bound on the number n of nodes). Whenever the “sleep” of a (pas- +sive) non-leader is interrupted by the reception of a token, it becomes active, +forwards the token, outputs accept, and terminates. After N rounds, a passive +node that has not been “awaken” by a token becomes active, outputs reject, and +terminates. This guarantees that there is at least one leader if and only if all +nodes accept. The node-activation complexity of this algorithm is O(1), while +the awake complexity of at-least-one-leader is Ω(log n) in the sleeping model, by +reduction to broadcast. +The following observation holds for LOCAL and CONGEST, by noticing that +every algorithm for the sleeping model can be implemented with no overheads +in terms of node-activation. + +Energy-Efficient Distributed Algorithms +7 +Observation 1 In n-node graphs, every algorithm with awake complexity a(n) +and round complexity r(n) has node-activation complexity a(n) and round com- +plexity r(n). +It follows from Observation 1 that all upper bound results for the awake +complexity directly transfer to the node-activation complexity. However, as we +shall show in this paper, in contrast to the sleeping model in which some problems +(e.g., spanning tree) have awake complexity Ω(log n), even in the LOCAL model, +all problems admit a frugal algorithm in the CONGEST model, i.e., an algorithm +with node-activation O(1). +Definition 1. A LOCAL or CONGEST algorithm is node-frugal (resp., edge- +frugal) if the activation of every node (resp., edge) is upper-bounded by a constant +independent of the graph, and of the identifiers and inputs given to the nodes. +3 +Universality of Frugal Algorithms +In this section we show that every Turing-computable problem can be solved +by frugal algorithms, both in the LOCAL and CONGEST models. Thanks to +Eq. 1, it is sufficient to prove that this holds for node-frugality. +Lemma 1. There exists a CONGEST algorithm electing a leader, and con- +structing a BFS tree rooted at the leader, with node-activation complexity O(1), +and performing in O(N 2) = O(poly(n)) rounds. +Proof. The algorithm elects as leader the node with smallest identifier, and initi- +ates a breadth-first search from that node. At every node v, the protocol performs +as follows. +– If v has received no messages until round id(v) · N, then v elects itself as +leader, and starts a BFS by sending message (id(v), 0) to all its neighbors. +Locally, v sets its parent in the BFS tree to ⊥, and the distance to the root +to 0. 
+– Otherwise, let r be the first round at which vertex v receives a message. Such +a message is of type (id(u), d) where u is the neighbor of v which sent the +message to v, and d is the distance from u to the leader in the graph. Node +v sets its parent in the BFS tree to id(u), its distance to the root to d + 1, +and, at round r + 1, it sends the message (id(v), d + 1) to all its neighbors. +(If v receives several messages at round r, from different neighbors, then v +selects the messages coming from the neighobors with smallest identifier). +The node v with smallest identifier is indeed the node initiating the BFS, as +the whole BFS is constructed between rounds id(v) · N and id(v) · N + N − 1, +and N ≥ n. The algorithm terminates at round at most O(N 2). +⊓⊔ + +8 +P. Fraigniaud, P. Montealegre, I. Rapaport, I. Todinca +An instance of a problem is a triple (G, id, x) where G = (V, E) is an n-node +graph, id : V → [1, N] with N = O(poly(n)), and x : V → [1, ν] is the input +assignment to the nodes. Note that the input range ν may depend on n, and even +be exponential in n, even for classical problems, e.g., whenever weights assigned +to the edges are part of the input. A solution to a graph problem is an output +assignment y : V → [1, µ], and the correctness of y depends on G and x only, +with respect to the specification of the problem. We assume that µ and ν are +initially known to the nodes, as it is the case for, e.g., mst, in which the weights +of the edges can be encoded on O(log n) bits. +Theorem 1. Every Turing-computable problem has a LOCAL algorithm with +O(1) node-activation complexity, and running in O(N 2) = O(poly(n)) rounds. +Proof. Once the BFS tree T of Lemma 1 is constructed, the root can (1) gather +the whole instance (G, id, x), (2) compute a solution y, and (3) broadcast y to +all nodes. Specifically, every leaf v of T sends the set +E(v) = +� +{(id(v), x(v)), (id(w), x(w))} : w ∈ N(v) +� +to its parent in T . An internal node v waits for receiving a set of edges S(u) +from each of its children u in T , and then forwards the set +S(v) = E(v) ∪ (∪u∈child(v)S(u)) +to its parent. Each node of T is activated once during this phase, and thus the +node-activation complexity of gathering is 1. Broadcasting the solution y from +the leader to all the nodes is achieved along the edges of T , again with node- +activation 1. +⊓⊔ +The algorithm used in the proof of Theorem 1 cannot be implemented in +CONGEST due to the size of the messages, which may require each node to be +activated more than a constant number of times. To keep the node-activation +constant, we increased the round complexity of the algorithm. +Lemma 2. Every node-frugal algorithm A performing in R rounds in the LO- +CAL model with messages of size at most M bits can be implemented by a node- +frugal algorithm B performing in R 2M rounds in the CONGEST model. +Proof. Let v be a node sending a message m through an incident edge e at +round r of A. Then, in B, v sends one “beep” through edge e at round r 2M + t +where t is lexicographic rank of m among the at most 2M messages generated +by A. +⊓⊔ +Theorem 2. Every Turing-computable problem has a CONGEST algorithm with +O(1) node-activation complexity, and running in 2poly(n)+O((ν+µ) log n) rounds for +inputs in the range [1, ν] and outputs in the range [1, µ]. +Proof. The algorithm used in the proof of Theorem 1 used messages of size at +most 2N 2 + ν log N bits during the gathering phase, and of size at most µ log N +bits during the broadcast phase. 
The result follows from Lemma 2. +⊓⊔ + +Energy-Efficient Distributed Algorithms +9 +Of course, there are many problems that can be solved in the CONGEST +model by a frugal algorithm much faster than the bound from Theorem 2. This +is typically the case of all problems that can be solved by a sequential greedy +algorithm visiting the nodes in arbitrary order, and producing a solution at the +currently visited node based only on the partial solution in the neighborhood of +the node. Examples of such problems are mis, ∆ + 1-coloring, etc. We call such +problem sequential-greedy. +Theorem 3. Every sequential-greedy problem whose solution at every node can +be encoded on O(log n) bits has a node-frugal CONGEST algorithm running in +O(N) = O(poly(n)) rounds. +Proof. Every node v ∈ V generates its output at round id(v) according to its +current knowledge about its neighborhood, and sends this output to all its neigh- +bors. +⊓⊔ +4 +Limits of CONGEST Algorithms with Polynomially +Many Rounds +Given a graph G = (V, E) such that V is partitioned in two sets VA, VB, the set +of edges with one endpoint in VA and the other in VB is called the cut. We denote +by e(VA, VB) the number of edges in the cut, and by n(VA, VB) the number of +nodes incident to an edge of the cut. Consider the situation where there are +two players, namely Alice and Bob. We say that a player controls a node v if +it knows all its incident edges and its input. For a CONGEST algorithm A, we +denote A(I) the output of A on input I = (G, id, x). We denote RA(n) the +round complexity of A on inputs of size n. +Lemma 3 (Simulation lemma). Let A be an algorithm in the CONGEST +model, let I = (G, id, x) be an input for A, and let VA, VB be a partition of V (G). +Suppose that Alice controls all the nodes in VA, and Bob controls all the nodes in +VB. Then, there exists a communication protocol P between Alice and Bob with +at most 2 · min(n(VA, VB) · nact(A), e(VA, VB) · eact(A)) rounds and using total +communication O(min(n(VA, VB)·nact(A), e(VA, VB)·eact(A))·log n·log(RA(n)), +such that each player computes the value of A(I) at all nodes he or she controls. +Proof. In protocol P, Alice and Bob simulate the rounds of algorithm A in +all the nodes they control. The simulation run in phases. Each phase is used to +simulate up to a certain number of rounds t of algorithm A, and takes two rounds +of protocol P (one round for Alice, and one round for Bob). By simulating A +up to t rounds, we mean that Alice and Bob know all the states of all the nodes +they control, on every round up to round t. +In the first phase, players start simulating A from the initial state. Let us +suppose that both Alice and Bob have already executed p ≥ 0 phases, meaning +that they had correctly simulated A up to round t = t(p) ≥ 0. Let us explain +phase p + 1 (see also Figure 2). + +10 +P. Fraigniaud, P. Montealegre, I. Rapaport, I. Todinca +rounds +VA +VB +ra +rb +VA +VB +VA +VB +Oblivious simulation +of Alice +t +Oblivious simulation +of Bob +Transcript of +algorithm A +Fig. 2. Illustration of one phase of the simulation protocol. Assuming that the players +agree on the simulation of algorithm A up to round t, each player runs an oblivious +simulation at the nodes they control. In the example of the figure, the next message +corresponds to a node controlled by Bob, who sends a message to a node in VA at +round rb. The oblivious simulation of Alice is not aware of this message, and incor- +rectly considers that a message is sent from VA to VB at round ra > rb. 
Using the +communication rounds in this phase, the players agree that the message of Bob was +correct. Thus the simulation is correct up to round rb, for both players. +Starting from round t, Alice runs an oblivious simulation of algorithm A over +all nodes that she controls. By oblivious, we mean that Alice assumes that no +node of VB communicates a message to a node in VA in any round at least t. The +oblivious simulation of Alice stops in one of the following two possible scenarios: +(1) All nodes that she controls either terminate or enter into a passive state that +quits only on an incoming message from VB. +(2) The simulation reaches a round ra where a message is sent from a node in +VA to a node in VB. +At the same time, Bob runs and oblivious simulation of A starting from +round t (i.e. assuming that no node of VA sends a message to a node in VB in +any round at least t). The oblivious simulation of Bob stops in one of the same +two scenarios analogous to the ones above. In this case, we call rb the round +reached by Bob in his version of scenario (2). +At the beginning of a phase, it is the turn of Alice to speak. Once the obliv- +ious simulation of Alice stops, she is ready to send a message to Bob. If the +simulation stops in the scenario (1), Alice sends a message "scenario 1" to Bob. +Otherwise, Alice sends ra together with all the messages sent from nodes in VA +to nodes in VB at round ra, to Bob. When Bob receives the message from Alice, +one of the following situations holds: +Case 1: the oblivious simulation of both Alice and Bob stopped in the first sce- +nario. In this case, since A is correct, there are no deadlocks. Therefore, all +vertices of G reached a terminal state, meaning that the oblivious simulation + +Energy-Efficient Distributed Algorithms +11 +of both players was in fact a real simulation of A, and the obtained states are +the output states. Therefore, Bob sends a message to Alice indicating that the +simulation is finished, and indeed Alice and Bob have correctly computed the +output of A for all the nodes they control. +Case 2: the oblivious simulation of Alice stopped in scenario (1), and the one of +Bob stopped in the scenario (2). In this case, Bob infers that his oblivious simu- +lation was correct. He sends rb and all the messages communicated in round rb +through the cut to Alice. When Alice receives the message of Bob, she updates +the state of the nodes she controls up to round rb. It follows that both players +have correctly simulated algorithm A up to round rb > t. +Case 3: the oblivious simulation of Alice stopped in scenario (2), and the one of +Bob stopped in scenario (1). In this case, Bob infres that the simulation of Alice +was correct up to round ra. He sends a message to Alice indicating that she has +correctly simulated A up to round ra, and he updates the states of all the nodes +he controls up to round ra. It follows that both players have correctly simulated +A up to round ra > t. +Case 4: the oblivious simulation of both players stopped in scenario (2), and +ra > rb. Bob infers that his oblivious simulation was correct up to rb, and that +the one of Alice was not correct after round rb. Then, the players act in the same +way as described in Case 2. Thus, both players have correctly simulated A up +to round rb. +Case 5: the oblivious simulation of both players stopped in scenario (2), and +rb > ra. Bob infers that his oblivious simulation was incorrect after round ra, +and that the one of Alice was correct up to round ra. 
Then, the players act in the +same way as described in Case 3. Thus, both players have correctly simulated A +up to round ra. +Case 6: the oblivious simulation of both players stopped in scenario (2), and +rb = ra. Bob assumes that both oblivious simulations were correct. He sends rb +together with all the messages communicated from his nodes at round rb through +the cut. Then, he updates the states of all the nodes he controls up to round rb. +When Alice receives the message from Bob, she updates the states of the nodes +she controls up to round rb. It follows that both players have correctly simulated +A up to round rb > t. +Observe that, except when the algorithm terminates, on each phase of the +protocol, at least one node controlled by Alice or Bob is activated. Since the +number of rounds of P is twice the number of phases, we deduce that the total +number of rounds is at most +2 · min(n(VA, VB) · nact(A), e(VA, VB) · eact(A)). + +12 +P. Fraigniaud, P. Montealegre, I. Rapaport, I. Todinca +Moreover, on each round of P, the players communicate O(log(RA(n)) · log n · +e(VA, VB)) bits. As a consequence, the total communication cost of P is +O(log(RA(n)) · log n · e(VA, VB)) · min(n(VA, VB) · nact(A), e(VA, VB) · eact(A))), +which completes the proof. +⊓⊔ +We use the simulation lemma to show that there are problems that cannot +be solved by a frugal algorithm in a polynomial number of rounds. In problem +C4-freeness, all nodes of the input graph G must accept if G has no cycle of +4 vertices, and at least one node must reject if such a cycle exists. Observe that +this problem is expressible in first-order logic, in particular it has en edge-frugal +algorithm with a polynomial number of rounds in graphs of bounded degree [8]. +We show that, in graphs of unbounded degree, this does not hold anymore. +We shall also consider problem Symmetry, where the input is a graph G with +2n nodes indexed from 1 to 2n, and with a unique edge {1, n + 1} between +GA = G[{1, . . . , n}] and GB = G[{n + 1, . . . , 2n}]. Our lower bounds holds +even if every node is identified by its index. All nodes must output accept if +the function f : {1, . . . , n} → {n + 1, . . . , 2n} defined by f(x) = x + n is an +isomorphism from GA to GB, otherwise at least one node must output reject. +The proof of the following theorem is based on classic reductions from com- +munication complexity problems Equality and Set Disjointness (see, e.g., +[9]), combined with Lemma 3. +Theorem 4. Any CONGEST algorithm solving Symmetry (resp., C4-free- +ness) in polynomially many rounds has node-activation and edge-activation at +least Ω +Ä +n2 +log2 n +ä +(resp., Ω +Ä +√n +log2 n +ä +). +Proof. In problem Equality, two players Alice and Bob have a boolean vector +of size k, xA for Alice and xB for Bob. Their goal is to answer true if xA = xB, +and false otherwise. The communication complexity of this problem is known to +be Θ(k) [9]. Let k = n2. We can interpret xA and xB as the adjacency matrix of +two graphs GA and GB in an instance of Symmetry. It is a mere technicality to +"shift" GB as if its vertices were indexed from 1 to n, such that Symmetry is true +for G iff xA = xB. Moreover, Alice can construct GA from its input xA, and Bob +can construct GB from xB. Both can simulate the unique edge joining the two +graphs in G. 
Therefore, by Lemma 3 applied to G, if Alice controls the vertices +of GA, and Bob controls the vertices of GB, then any CONGEST algorithm A +solving Symmetry in polynomially many rounds yields a two-party protocol for +Equality on n2 bits. Since graphs GA and GB are linked by a unique edge, the +total communication of the protocol is O(eact(A)·log2 n) and O(nact(A)·log2 n). +The result follows. +In Set Disjointness, each of the two players Alice and Bob has a Boolean +vector of size k, xA for Alice, and xB for Bob. Their goal is to answer true if +there is no index i ∈ [k] such that both xA[i] and xB[i] are true (in which case, +xA and xB are disjoint), and false otherwise. The communication complexity of + +Energy-Efficient Distributed Algorithms +13 +this problem is known to be Θ(k) [9]. We use the technique in [5] to construct an +instance G for C4 freeness, with a small cut, from two Boolean vectors xA, xB +of size k = Θ(n3/2). Consider a C4-free n-vertex graph H with a maximum +number of edges. Such a graph has k = Θ(n3/2) edges, as recalled in [5]. We +can consider the edges E(H) as indexed from 1 to k, and V (H) as [n]. Let now +xA and xB be two Boolean vectors of size k. These vectors can be interpreted +as edge subsets E(xA) and E(xB) of H, in the sense that the edge indexed i in +E(H) appears in E(xA) (resp. E(xB)) iff xA[i] (resp. xB[i]) is true. Graph G is +constructed to have 2n vertices, formed by two sub-graphs GA = G[{1, . . . , n}] +and GB = G[{n+1, . . . , 2n}]. The edges of E(GA) are exactly the ones of E(xA). +Similarly, the edges of E(GB) correspond to E(xA), modulo the fact that the +vertex indexes are shifted by n, i.e., for each edge {u, v} ∈ E(xB), we add edge +{u+n, v +n} to GB. Moreover we add a perfect matching to G, between V (GA) +and V (GB), by adding all edges {i, i + n}, for all i ∈ [n]. Note that G is C4- +free if and only if vectors xA and xB are disjoint. Indeed, since GA, GB are +isomorphic to sub-graphs of H, they are C4-free. Thus any C4 of G must contain +two vertices in GA and two in GB, in which case the corresponding edges in +GA and GB designate the same bit of xA and xB respectively. Moreover Alice +and Bob can construct GA and GB, as well as the edges in the matching, from +their respective inputs xA and xB. Therefore, thanks to Lemma 3, a CONGEST +algorithm A for C4-freeness running in a polynomial number of rounds can +be used to design a protocol P solving Set Disjointness on k = Θ(n3/2) +bits, where Alice controls V (GA) and Bob controls V (GB). The communication +complexity of the protocol is O(eact(A) · n · log2 n), and O(nact(A) · n · log2 n), +since the cut between GA and GB is a matching. The result follows. +⊓⊔ +5 +Node versus Edge Activation +In this section we exhibit a problem that admits an edge-frugal CONGEST +algorithm running in a polynomial number of rounds, for which any algorithm +running in a polynomial number of rounds has large node-activation complexity. +We proceed by reduction from a two-party communication complexity prob- +lem. However, unlike the previous section, we are now also interested in the +number of rounds of the two-party protocols. We consider protocols in which +the two players Alice and Bob do not communicate simultaneously. For such a +protocol P, a round is defined as a maximal contiguous sequence of messages +emitted by a same player. We denote by R(P) the number of rounds of P. +Let G be a graph, and S be a subset of nodes of G. We denote by ∂S the +number of vertices in S with a neighbor in V \ S. 
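For concreteness, the cut parameters used in Lemma 3 and in the rest of this section, namely e(VA, VB), n(VA, VB), ∂VA and ∂VB, can be computed as in the following small plain-Python sketch. The helper and its names are ours, added purely for illustration; they are not part of the paper.

```python
# Minimal sketch (ours): cut quantities for a partition (VA, V \ VA) of a graph.
# A graph is given as a dict mapping each node to the set of its neighbors.

def cut_quantities(adj, VA):
    """Return (e(VA,VB), n(VA,VB), |boundary of VA|, |boundary of VB|)."""
    VA = set(VA)
    VB = set(adj) - VA
    cut_edges = {frozenset((u, v)) for u in VA for v in adj[u] if v in VB}
    e_cut = len(cut_edges)                    # e(VA, VB): edges across the cut
    touched = {x for e in cut_edges for x in e}
    n_cut = len(touched)                      # n(VA, VB): nodes incident to the cut
    boundary_A = len(touched & VA)            # ∂VA: nodes of VA with a neighbor in VB
    boundary_B = len(touched & VB)            # ∂VB
    return e_cut, n_cut, boundary_A, boundary_B

if __name__ == "__main__":
    # 4-cycle 0-1-2-3-0 split into {0,1} and {2,3}: two cut edges, four incident nodes.
    adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
    print(cut_quantities(adj, {0, 1}))        # -> (2, 4, 2, 2)
```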
+Lemma 4 (Round-Efficient Simulation lemma). Let A be an algorithm in +the CONGEST model, let I = (G, id, x) be an input for A, and let VA, VB be a +partition of V (G). Let us assume that Alice controls all the nodes in VA, and +Bob controls all the nodes in VB, and both players know the value of nact(A). +Then, there exists a communication protocol P between Alice and Bob such + +14 +P. Fraigniaud, P. Montealegre, I. Rapaport, I. Todinca +that, in at most min(∂VA, ∂VB)·nact(A) rounds, and using total communication +O(((∂(VA) + ∂(VB)) · nact(A)))2 · log n · log RA(n)) bits, each player computes +the value of A(I) at all the nodes he or she controls. +Proof. In protocol P, Alice and Bob simulate the rounds of algorithm A at +all the nodes each player controls. Without loss of generality, we assume that +algorithm A satisfies that the nodes send messages at different rounds, by merely +multiplying by N the number of rounds. +Initially, Alice runs a oblivious simulation of A that stops when every node +in VA either has terminated, or entered into the passive state that it may leave +only after having received a message from a node in VB (this corresponds to +what we call the first scenario in the proof of Lemma 3). Then, Alice sends to +Bob the integer t1 = 0, and the set M 1 +A of all messages sent from nodes in VA +to nodes in VB in the communication rounds that she simulated, together with +their corresponding timestamps. If the number of messages communicated by +Alice exceeds nact(A) · ∂A, we trim the list up to this threshold. +Let us suppose that the protocol P has run for p rounds, and let us assume +that it is the turn of Bob to speak at round p + 1 — the case where Alice speaks +at round p + 1 can be treated in the same way. Moreover, we assume that P +satisfies the following two conditions: +1. At round p, Alice sents an integer tp ≥ 0, and a list of timestamped messages +M p +A corresponding to messages sent from nodes in VA to nodes in VB in an +oblivious simulation of A starting from a round tp. +2. Bob had correctly simulated A at all the nodes he controls, up to round tp. +We now describe round p+1 (see also Figure 3). Bob initiates a simulation of +A at all the nodes he controls. However, this simulation is not oblivious. Specif- +ically, Bob simulates A from round tp taking into account all the messages sent +from nodes in VA to nodes in VB, as listed in the messages M p +A. The simulation +stops when Bob reaches a round tp+1 > tp at which a node in VB sends a mes- +sage to a node in VA. Observe that, up to round tp+1, the oblivious simulation +of Alice was correct. At this point, Bob initiates an oblivious simulation of A at +all the nodes he controls, starting from tp+1. Finally, Bob sends to Alice tp+1, +and the list M p+1 +B +of all timestamped messages sent from nodes in VB to nodes +in VA resulting from the oblivious simulation of the nodes he controls during +rounds at least tp+1. Using this information, Alice infers that her simulation was +correct up to round tp+1, and she starts the next round for protocol P. +The simulation carries on until one of the two players runs an oblivious +simulation in which all the nodes he or she controls terminate, and no messages +were sent through the cut in at any intermediate round. In this case, this player +sends a message "finish" to the other player, and both infer that their current +simulations are correct. As a consequence, each player has correctly computed +the output of A at all the nodes he or she controls. 
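Before bounding the number of rounds, the following toy snippet (ours, not the authors' code) illustrates the bookkeeping of the phases: each phase pins down at least one additional message crossing the cut, so the number of phases is at most the number of cut messages plus one. For simplicity, the candidate cut events of each side are given here as fixed lists, whereas in the proof they are recomputed by an oblivious simulation of A from the last agreed round.

```python
# Toy phase bookkeeping of the simulation protocol (illustration only).

def count_phases(events_A, events_B):
    """events_A / events_B: sorted rounds at which Alice's / Bob's side would
    send a message across the cut; returns the number of phases of P."""
    agreed_round = 0   # both players agree on the transcript of A up to this round
    phases = 0
    while True:
        phases += 1    # one more phase: the current speaker sends its candidate event
        cand_A = next((r for r in events_A if r > agreed_round), None)
        cand_B = next((r for r in events_B if r > agreed_round), None)
        if cand_A is None and cand_B is None:
            return phases          # no side crosses the cut anymore: simulation done
        # The earlier candidate is the cut message that really occurs next; it
        # becomes common knowledge and the agreed prefix advances past it.
        agreed_round = min(r for r in (cand_A, cand_B) if r is not None)

print(count_phases([2, 7, 9], [4, 5]))   # -> 6 phases for 5 cut messages
```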
At every communication round during which Alice speaks, at least one vertex of VA with a neighbor in VB is activated. Therefore, the number of rounds of Alice is at most ∂VA · nact(A). By the same argument, the number of rounds of Bob is at most ∂VB · nact(A). It follows that

R(P) ≤ min(∂VA, ∂VB) · nact(A).

At each communication round, Alice sends at most ∂VA · nact(A) timestamped messages, which can be encoded using O(∂VA · nact(A) · log n · log RA(n)) bits. Similarly, Bob sends O(∂VB · nact(A) · log n · log RA(n)) bits. It follows that

C(P) = O(((∂VA + ∂VB) · nact(A))² · log n · log RA(n)),

which completes the proof. ⊓⊔

Fig. 3. Illustration of the round-efficient simulation protocol for algorithm A. After round p, Alice has correctly simulated the algorithm up to round t_p. It is the turn of Bob to speak in round p + 1. In round p, Alice sent to Bob the set of messages M_A^p, obtained from an oblivious simulation of A starting from t_p. Only the first three messages are correct, since at round t_{p+1} Bob communicates a message to Alice. Then, Bob runs an oblivious simulation of A starting from t_{p+1}, and communicates all the messages sent from nodes in VB to nodes in VA. In this case the first two messages are correct.

In order to separate the node-activation complexity from the edge-activation complexity, we consider a problem called Depth First Pointer Chasing, and we show that this problem can be solved by an edge-frugal CONGEST algorithm running in O(poly(n)) rounds, whereas the node-activation complexity of any algorithm running in O(poly(n)) rounds for this problem is Ω(∆), for any ∆ ∈ O(√n / log n). The lower bound is proved thanks to the Round-Efficient Simulation Lemma (Lemma 4), by reduction from the two-party communication complexity problem Pointer Chasing, for which too few rounds imply large communication complexity [10].

In Depth First Pointer Chasing, each node v of the graph is given as input its index DFS(v) ∈ [n] in a depth-first search ordering (as usual, we denote [n] = {1, . . . , n}). Moreover, the vertex indexed i is given a function f_i : [n] → [n], and the root (i.e., the node indexed 1) is given a value x ∈ [n] as part of its input. The goal is to compute the value of f_n ◦ f_{n−1} ◦ · · · ◦ f_1(x) at the root.

Lemma 5. There exists an edge-frugal CONGEST algorithm for Depth First Pointer Chasing running in a polynomial number of rounds.

Proof. The lemma is established using an algorithm that essentially traverses the DFS tree encoded by the indices of the nodes, and performs the due partial computation of the function at every node, that is, the node with index i computes f_i ◦ f_{i−1} ◦ · · · ◦ f_1(x), and forwards the result to the node with index i + 1.

At round 1, each node v transmits its depth-first search index DFS(v) to its neighbors. Therefore, after this round, every node knows its parent and its children in the DFS tree. Then the algorithm merely forwards messages of type m(i) = f_i ◦ f_{i−1} ◦ · · · ◦ f_1(x), corresponding to iterated computations for increasing values of i, along the DFS tree, using the DFS ordering. That is, for any node v, let MaxDFS(v) denote the maximum DFS index appearing in the subtree of the DFS tree rooted at v.
We will not explicitly compute this quantity, but it will ease the notation. At some round, vertex v of DFS index i will receive a message m(i − 1) from its parent (of index i − 1). Then node v will be in charge of computing the message m(MaxDFS(v)), by "calling" its children in the tree, and sending this message back to its parent. In this process, each edge in the subtree rooted at v is activated twice.

The vertex of DFS index 1 initiates the process at round 2, sending f_1(x) to its child of DFS index 2. Any other node v waits until it receives a message from its parent, at a round that we denote r(v). This message is precisely m(i − 1) = f_{i−1} ◦ f_{i−2} ◦ · · · ◦ f_1(x), for i = DFS(v). Then v computes the message m(i) = f_i ◦ f_{i−1} ◦ · · · ◦ f_1(x) using its local function f_i. If it has no children, then it sends this message m(i) to its parent at round r(v) + 1. Assume now that v has j children in the DFS tree, denoted u_1, u_2, . . . , u_j, sorted by increasing DFS index. Observe that, by definition of DFS trees, DFS(u_k) = MaxDFS(u_{k−1}) + 1 for each k ∈ {2, . . . , j}. Node v will be activated j times, once for each edge {v, u_k}, 1 ≤ k ≤ j, as follows. At round r(v) + 1 (right after receiving the message from its parent), v sends the message m(i) to its child u_1, then it waits until round r_1(v), when it gets back a message from u_1.

The process is repeated for k = 2, . . . , j: at round r_{k−1}(v) + 1, node v sends the message m(DFS(u_k) − 1) received from u_{k−1} to u_k, and waits until it gets back a message from u_k, at round r_k(v). Note that if k < j then this message is m(DFS(u_{k+1}) − 1), and if k = j then this message is m(MaxDFS(v)). At round r_j(v) + 1, after having received messages from all its children, v backtracks the message m(MaxDFS(v)) to its parent. If v is the root, then the process stops.

The process terminates in O(n) rounds, and, except for the first round, every edge of the DFS tree is activated twice: first going downwards, from the root towards the leaves, and second going upwards. At the end, the root obtains the requested message m(n) = f_n ◦ f_{n−1} ◦ · · · ◦ f_1(x). ⊓⊔

Let us recall the Pointer Chasing problem as defined in [10]. Alice is given a function fA : [n] → [n], and a number x_0 ∈ [n]. Bob is given a function fB : [n] → [n]. Both players are given a parameter k ∈ [n]. Note that the size of the input given to each player is Θ(n log n) bits. The goal is to compute (fA ◦ fB)^k(x_0), i.e., k successive iterations of fA ◦ fB applied to x_0. We state a slightly simplified version of the result in [10].

Lemma 6 (Nisan and Wigderson [10]). Any two-party protocol for Pointer Chasing using less than 2k rounds has communication complexity Ω(n − k log n).

We now have all the ingredients for proving the main result of this section.

Theorem 5. For every ∆ ∈ O(n^{1/4}/√log n), every CONGEST algorithm solving Depth First Pointer Chasing in graphs of maximum degree ∆ with polynomially many rounds has node-activation complexity Ω(∆).

Proof. Let k be the parameter of Pointer Chasing, whose value will be fixed later. The lower bound is established for this specific parameter k. Let us consider an arbitrary instance of Pointer Chasing given by fA, fB : [n] → [n] and x_0 ∈ [n], with parameter k. We reduce that instance to a particular instance of Depth First Pointer Chasing (see Fig. 4).

Fig. 4. Reduction from Pointer Chasing to Depth First Pointer Chasing.
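The instance depicted in Fig. 4, and described in detail in the next paragraph, can be sketched as follows. This is our own illustration, not code from the paper: it builds the functions assigned to the nodes of the tree and evaluates the value that the edge-frugal algorithm of Lemma 5 delivers at the root.

def build_dfpc_instance(n, k, fA, fB, x0):
    """fA, fB: functions from [n] to [n] (Python callables).
    Returns (functions, x0), where functions[i] is the input function of the
    node with DFS index i in the tree of Fig. 4."""
    identity = lambda x: x
    functions = {i: identity for i in range(1, n - 2 * k + 1)}  # path nodes v_1..v_{n-2k}
    for j in range(1, k + 1):
        functions[n - 2 * k + 2 * j - 1] = fA   # leaf a_j
        functions[n - 2 * k + 2 * j] = fB       # leaf b_j
    return functions, x0

def dfpc_output(functions, x0, n):
    """What the root computes (Lemma 5): f_n o ... o f_1 (x0), i.e. a fold
    of the functions in increasing DFS index."""
    x = x0
    for i in range(1, n + 1):
        x = functions[i](x)
    return x

Along the path the identity functions leave the value unchanged, and at the leaves the fold alternates fA and fB, so it reproduces the k alternations of fA and fB of the Pointer Chasing instance (fA, fB, x_0, k). In the two-party simulation below, Alice holds the path and the a_j leaves, while Bob holds the b_j leaves, so the cut between them is small.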
The graph is a tree T on n vertices, composed of a path (v_1, . . . , v_{n−2k}), and 2k leaves v_{n−2k+1}, . . . , v_n, all adjacent to v_{n−2k}. Node v_1 is called the root, and node v_{n−2k} is called central. Note that the ordering obtained by taking DFS(v_i) = i is a depth-first search of T rooted at v_1. The root v_1 is given the value x_0 as input. If i ≤ n − 2k, then the function f_i is merely the identity function f (i.e., f(x) = x for all x). For every j ∈ [k], let a_j = v_{n−2k+2j−1}, and b_j = v_{n−2k+2j}. All nodes b_j get as input the function fB, and all nodes a_j get the function fA. Observe that the output of Depth First Pointer Chasing on this instance is precisely the same as the output of the initial instance of Pointer Chasing. Indeed, f_{n−2k} ◦ f_{n−2k−1} ◦ · · · ◦ f_1 is the identity function, and the sequence f_n ◦ f_{n−1} ◦ · · · ◦ f_{n−2k+2} ◦ f_{n−2k+1} alternates nodes of "type" a_j with nodes of "type" b_j, for decreasing values of j ∈ [k], and thus corresponds to fA ◦ fB ◦ · · · ◦ fA ◦ fB, where the pair fA ◦ fB is repeated k times, exactly as in problem Pointer Chasing.

We can now apply the Round-Efficient Simulation Lemma. Let Alice control all vertices a_j, for all j ∈ [k], together with the vertices v_1, . . . , v_{n−2k}. Let Bob control the vertices b_j, for all j ∈ [k]. See Fig. 4. Note that Alice and Bob can construct the subgraph that they control based only on their input in the considered Pointer Chasing instance, and that they both know the value of k.

Claim. If there exists a CONGEST algorithm A for Depth First Pointer Chasing on n-node graphs performing in RA rounds with node-activation smaller than 2k, then Pointer Chasing can be solved by a two-party protocol P in less than 2k rounds, with communication complexity O(k⁴ log n log RA) bits.

The claim directly follows from Lemma 4. Indeed, by construction, ∂VA = 1 and ∂VB = k. Since we assumed nact(A) < 2k, the two-party protocol P provided by Lemma 4 solves the Pointer Chasing instance in less than 2k rounds, and uses O(k⁴ log n log RA) bits.

By Lemma 6, we must have k⁴ log n log RA ∈ Ω(n − k log n). Therefore, if our CONGEST algorithm A has polynomially many rounds, we must have k ∈ Ω(n^{1/4}/√log n). Since our graph has maximum degree ∆ = 2k + 1, the conclusion follows. ⊓⊔

6 Conclusion

In this paper, we have mostly focused on the round complexity of (deterministic) frugal algorithms solving general graph problems in the LOCAL or CONGEST model. It might be interesting to consider specific classical problems. As far as "local problems" are concerned, i.e., for locally checkable labeling (LCL) problems, we have shown that MIS and (∆+1)-coloring admit frugal algorithms with polynomial round complexities. It is easy to see, using the same arguments, that problems such as maximal matching share the same properties. It is however not clear that the same holds for (2∆ − 1)-edge coloring.

Open Problem 1 Is there a (node or edge) frugal algorithm solving (2∆ − 1)-edge-coloring with round complexity O(poly(n)) in the CONGEST model?

In fact, it would be desirable to design frugal algorithms with sub-polynomial round complexities for LCL problems in general. In particular:

Open Problem 2 Is there a (node or edge) frugal algorithm solving MIS or (∆ + 1)-coloring with round complexity O(polylog(n)) in the LOCAL model?

The same type of questions can be asked for global problems.
In particular, it is known that MST has no "awake frugal" algorithm, as MST has awake complexity Ω(log n), even in the LOCAL model. In contrast, frugal algorithms for MST do exist as far as node-activation complexity is concerned. The issue is about the round complexities of such algorithms.

Open Problem 3 Is there a (node or edge) frugal algorithm solving MST with round complexity O(poly(n)) in the CONGEST model?

Another intriguing global problem is depth-first search (DFS), say starting from an identified node. This can be performed by an edge-frugal algorithm running in a linear number of rounds in the CONGEST model. However, it is not clear whether the same can be achieved by a node-frugal algorithm.

Open Problem 4 Is there a node-frugal algorithm solving DFS with round complexity O(poly(n)) in the CONGEST model?

Finally, we have restricted our analysis to deterministic algorithms, and it might obviously be worth considering randomized frugal algorithms as well.

References

1. Augustine, J., Moses, W.K., Pandurangan, G.: Brief announcement: Distributed MST computation in the sleeping model: Awake-optimal algorithms and lower bounds. In: 41st ACM Symposium on Principles of Distributed Computing (PODC), pp. 51-53 (2022). https://doi.org/10.1145/3519270.3538459
2. Barenboim, L., Maimon, T.: Deterministic logarithmic completeness in the distributed sleeping model. In: 35th International Symposium on Distributed Computing (DISC). LIPIcs, vol. 209, pp. 10:1-10:19. Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021). https://doi.org/10.4230/LIPIcs.DISC.2021.10
3. Chang, Y., Dani, V., Hayes, T.P., He, Q., Li, W., Pettie, S.: The energy complexity of broadcast. In: 37th ACM Symposium on Principles of Distributed Computing (PODC), pp. 95-104 (2018). https://doi.org/10.1145/3212734.3212774
4. Chatterjee, S., Gmyr, R., Pandurangan, G.: Sleeping is efficient: MIS in O(1)-rounds node-averaged awake complexity. In: 39th ACM Symposium on Principles of Distributed Computing (PODC), pp. 99-108 (2020). https://doi.org/10.1145/3382734.3405718
5. Drucker, A., Kuhn, F., Oshman, R.: On the power of the congested clique model. In: 2014 ACM Symposium on Principles of Distributed Computing (PODC), pp. 367-376. ACM, New York, NY, USA (2014). https://doi.org/10.1145/2611462.2611493
6. Dufoulon, F., Moses, W.K., Pandurangan, G.: Sleeping is super-efficient: MIS in exponentially better awake complexity (2022). https://doi.org/10.48550/ARXIV.2204.08359
7. Ghaffari, M., Portmann, J.: Average awake complexity of MIS and matching. In: 34th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pp. 45-55 (2022). https://doi.org/10.1145/3490148.3538566
8. Grumbach, S., Wu, Z.: Logical locality entails frugal distributed computation over graphs. In: 35th International Workshop on Graph-Theoretic Concepts in Computer Science (WG). LNCS, vol. 5911, pp. 154-165. Springer (2009). https://doi.org/10.1007/978-3-642-11409-0
9. Kushilevitz, E., Nisan, N.: Communication complexity. Cambridge University Press (1997)
10. Nisan, N., Wigderson, A.: Rounds in communication complexity revisited. SIAM Journal on Computing 22(1), 211-219 (1993). https://doi.org/10.1137/0222016
11. Peleg, D.: Distributed computing: a locality-sensitive approach. SIAM (2000)
Acknowledgements. The authors are thankful to Benjamin Jauregui for helpful discussions about the sleeping model.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' We first show that every Turing-computable problem has a CONGEST algorithm with constant node-activation complexity, and therefore constant edge-activation complexity as well.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' That is, ev- ery node (resp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', edge) is active in sending (resp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', transmitting) messages for only O(1) rounds during the whole execution of the algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' In other words, every Turing-computable problem can be solved by an al- gorithm consuming the least possible energy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' In the LOCAL model, the same holds obviously, but with the additional feature that the algorithm runs in O(poly(n)) rounds in n-node networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' However, we show that insisting on algorithms running in O(poly(n)) rounds in the CONGEST model comes with a severe cost in terms of energy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Namely, there are problems requiring Ω(poly(n)) edge-activations (and thus Ω(poly(n)) node-activations as well) in the CONGEST model whenever solved by algorithms bounded to run in O(poly(n)) rounds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Finally, we demon- strate the existence of a sharp separation between the edge-activation complexity and the node-activation complexity in the CONGEST model, for algorithms bounded to run in O(poly(n)) rounds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Specifically, under this constraint, there is a problem with O(1) edge-activation complexity but ˜Ω(n1/4) node-activation complexity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Keywords: Synchronous distributed algorithms · LOCAL and CON- GEST models · Energy efficiency.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' ⋆ This work was performed during the visit of the first and last authors to Universidad de Chile, and to Universidad Adolfo Ibañez, Chile.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' ⋆⋆ Additional support from ANR project DUCAT (ref.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' ANR-20-CE48-0006).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' ⋆ ⋆ ⋆ Additional support from ANID via PIA/Apoyo a Centros Cientificos y Tecnológicos de Excelencia AFB 170001 and Fondecyt 1220142.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' 2 P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Fraigniaud, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Montealegre, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Rapaport, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Todinca 1 Introduction 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content='1 Objective Designing computing environments consuming a limited amount of energy while achieving computationally complex tasks is an objective of utmost importance, especially in distributed systems involving a large number of computing entities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' In this paper, we aim at designing energy-efficient algorithms for the standard LOCAL and CONGEST models of distributed computing in networks [11].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Both models assume a network modeled as an n-node graph G = (V, E), where each node is provided with an identifier, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', an integer that is unique in the network, which can be stored on O(log n) bits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' All nodes are assumed to run the same algorithm, and computation proceeds as a series of synchronous rounds (all nodes start simultaneously at round 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' During a round, every node sends a message to each of its neighbors, receives the messages sent by its neighbors, and performs some individual computation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' The two models LOCAL and CONGEST differ only in the amount of information that can be exchanged between nodes at each round.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' The LOCAL model does not bound the size of the messages, whereas the CONGEST model allows only messages of size O(log n) bits.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Initially, every node v ∈ V knows solely its identifier id(v), an upper bound of the number n of nodes, which is assumed to be polynomial in n and to be the same for all nodes, plus possibly some input bit-string x(v) depending on the task to be solved by the nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' In this paper, we denote by N the maximum between the largest identifier and the upper bound on n given to all nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Hence N = O(poly(n)), and is supposed to be known by all nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' After a certain number of rounds, every node outputs a bit-string y(v), where the correctness of the collection of outputs y = {y(v) : v ∈ V } is defined with respect to the specification of the task to be solved, and may depend on the collection of inputs x = {x(v) : v ∈ V } given to the nodes, as well as on the graph G (but not on the identifiers assigned to the nodes, nor on the upper bound N).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Activation complexity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' We measure the energy consumption of an algorithm A by counting how many times each node and each edge is activated during the execution of the algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' More specifically, a node v (resp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', an edge e) is said to be active at a given round r if v is sending a message to at least one of its neighbors at round r (resp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', if a message traverses e at round r).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' The node-activation and the edge-activation of an algorithm A running in a graph G = (V, E) are respectively defined as nact(A) := max v∈V #activation(v), and eact(A) := max e∈E #activation(e), where #activation(v) (resp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', #activation(e)) denotes the number of rounds dur- ing which node v (resp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', edge e) is active along the execution of the algorithm A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' By definition, we have that, in any graph of maximum degree ∆, eact(A) ≤ 2 · nact(A), and nact(A) ≤ ∆ · eact(A).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' (1) Energy-Efficient Distributed Algorithms 3 Objective.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Our goal is to design frugal algorithms, that is, algorithms with con- stant node-activation, or to the least constant edge-activation, independent of the number n of nodes and of the number m of edges.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Indeed, such algorithms can be viewed as consuming the least possible energy for solving a given task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Moreover, even if the energy requirement for solving the task naturally grows with the number of components (nodes or edges) of the network, it grows linearly with this number whenever using frugal algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' We refer to node-frugality or edge-frugality depending on whether we focus on node-activation or edge- activation, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content='2 Our Results We first show that every Turing-computable problem5 can thus be solved by a node-frugal algorithm in the LOCAL model as well as in the CONGEST model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' It follows from Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' 1 that every Turing-computable problem can be solved by an edge-frugal algorithm in both models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' In other words, every problem can be solved by an energy-efficient distributed algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' One important question remains: what is the round complexity of frugal algorithms?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' In the LOCAL model, our node-frugal algorithms run in O(poly(n)) rounds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' However, they may run in exponentially many rounds in the CONGEST model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' We show that this cannot be avoided.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Indeed, even if many symmetry-breaking problems such as computing a maximal-independent set (mis) and comput- ing a (∆ + 1)-coloring can be solved by a node-frugal algorithm performing in O(poly(n)) rounds, we show that there exist problems (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', deciding C4-freeness or deciding the presence of symmetries in the graph) that cannot be solved in O(poly(n)) rounds in the CONGEST model by any edge-frugal algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Finally, we discuss the relation between node-activation complexity and edge- activation complexity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' We show that the bounds given by Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' 1 are essentially the best that can be achieved in general.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Precisely, we identify a problem, namely Depth First Pointer Chasing (dfpc), which has edge-activation complexity O(1) for all graphs with an algorithm running in O(poly(n)) rounds in the CON- GEST model, but satisfying that, for every ∆ = O Ä n1/4 √log n ä , its node-activation complexity in graphs with maximum degree ∆ is Ω(∆) whenever solved by an algorithm bounded to run in O(poly(n)) rounds in the CONGEST model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' In particular, Depth First Pointer Chasing has constant edge-activation com- plexity but node-activation complexity ˜Ω(n1/4) in the CONGEST model (for O(poly(n))-round algorithms).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Our main results are summarized in Table 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Our Techniques.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Our upper bounds are mostly based on similar types of up- per bounds techniques used in the sleeping model [2,4] (cf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Section 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content='3), based 5 A problem is Turing-computable if there exists a Turing machine that, given any graph with identifiers and inputs assigned to the nodes, computes the output of each node in the graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' 4 P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Fraigniaud, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Montealegre, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Rapaport, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Todinca Awakeness Node-Activation Edge-Activation LOCAL ∀Π,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Π ∈ O(log n) with ∀Π,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Π ∈ O(1) with • ∀Π,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Π ∈ O(1) with O(poly(n)) rounds [2] O(poly(n)) rounds O(poly(n)) rounds st ∈ Ω(log n) [2] CONGEST • mis ∈ O(polyloglog(n)) ∀Π,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Π ∈ O(1) ∀Π,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Π ∈ O(1) with O(polylog(n)) poly(n) rounds poly(n) rounds rounds [6] (randomized) ⇒ ∃Π ∈ Ω(poly(n)) ⇒ ∃Π ∈ Ω(poly(n)) mst ∈ O(log n) poly(n) rounds dfpc ∈ O(1) with with O(poly(n)) ⇒ dfpc ∈ ˜Ω(n1/4) O(poly(n)) rounds rounds [1] Π ∈ FO and ∆ = O(1) ⇒ Π ∈ O(1) with O(poly(n)) rounds [8] Table 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Summary of our results where, for a problem Π, Π ∈ O(f(n)) means that the corresponding complexity of Π is O(f(n)) (same shortcut for Ω).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' on constructing spanning trees along with gathered and broadcasted informa- tion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' However, the models considered in this paper do not suffer from the same limitations as the sleeping model (cf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Section 2), and thus one can achieve acti- vation complexity O(1) in scenarios where the sleeping model limits the awake complexity to Ω(log n).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Our lower bounds for CONGEST are based on reductions from 2-party com- munication complexity.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' However, as opposed to the standard CONGEST model in which the simulation of a distributed algorithm by two players is straightfor- ward (each player performs the rounds sequentially, one by one, and exchanges the messages sent across the cut between the two subsets of nodes handled by the players at each round), the simulation of distributed algorithms in which only subsets of nodes are active at various rounds requires more care.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' This is especially the case when the simulation must not only control the amount of information exchanged between these players, but also the number of communication steps performed by the two players.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Indeed, there are 2-party communication com- plexity problems that are hard for k steps, but trivial for k + 1 steps [10], and some of our lower bounds rely on this fact.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content='3 Related Work The study of frugal algorithms has been initiated in [8], which focuses on the edge-frugality in the CONGEST model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' It is shown that for bounded-degree graphs, any problem expressible in first-order logic (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', C4-freeness) can be solved by an edge-frugal algorithm running in O(poly(n)) rounds in the CON- GEST model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' This also holds for planar graphs with no bounds on the maximum degree, whenever the nodes are provided with their local combinatorial embed- ding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Our results show that these statements cannot be extended to arbitrary graphs as we prove that any algorithm solving C4-freeness in O(poly(n)) rounds in the CONGEST model has edge-activation ˜Ω(√n).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Energy-Efficient Distributed Algorithms 5 More generally, the study of energy-efficient algorithms in the context of distributed computing in networks has been previously considered in the frame- work of the sleeping model, introduced in [4].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' This model assumes that nodes can be in two states: awake and asleep.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' A node in the awake state performs as in the LOCAL and CONGEST models, but may also decide to fall asleep, for a prescribed amount of rounds, controlled by each node, and depending on the algorithm executed at the nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' A sleeping node is totally inactive in the sense that it does not send messages, it cannot receive messages (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', if a message is sent to a sleeping node by an awake neighbor, then the message is lost), and it is computationally idle (apart from counting rounds).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' The main measure of interest in the sleeping model is the awake complexity, defined as the maximum, taken over all nodes, of the number of rounds at which each node is awake during the execution of the algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' In the LOCAL model, it is known [2] that all problems have awake complexity O(log n), using algorithms running in O(poly(n)) rounds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' This bound is tight in the sense that there are problems (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', spanning tree construction) with awake complexity Ω(log n) [2,3].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' In the CONGEST model, It was first shown [4] that mis has constant average awake complexity, thanks to a randomized algorithm running in O(polylog(n)) rounds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' The round complexity was improved in [7] with a randomized algo- rithm running in O(log n) rounds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' The (worst-case) awake complexity of mis was proved to be O(log log n) using a randomized Monte-Carlo algorithm run- ning in O(poly(n)) rounds [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' This (randomized) round complexity can even be reduced to O(log3 n · log log n · log⋆ n), to the cost of slightly increasing the awake complexity to O(log log n · log⋆ n).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' mst has also been considered, and it was proved [1] that its (worst-case) awake complexity is O(log n) thanks to a (deterministic) algorithm running in O(poly(n)) rounds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' The upper bound on the awake complexity of mst is tight, thank to the lower bound for spanning tree (st) in [2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' 2 Preliminaries In this section, we illustrate the difference between the standard LOCAL and CONGEST models, their sleeping variants, and our node- and edge-activation variants.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' 1(a) displays the automaton corresponding to the behavior of a node in the standard models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' A node is either active (A) or terminated (T).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' At each clock tick (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', round) a node is subject to message events corresponding to sending and receiving messages to/from neighbors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' A node remains active until it terminates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' 1(b) displays the automaton corresponding to the behavior of a node in the sleeping variant.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' In this variant, a node can also be in a passive (P) state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' In this state, the clock event can either leave the node passive, or awake the node, which then moves back to the active state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Finally, Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' 1(c) displays the automaton corresponding to the behavior of a node in our activation variants.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' It differs from the sleeping variant in that a 6 P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Fraigniaud, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Montealegre, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Rapaport, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Todinca A P T clock msg msg msg clock clock clock A P T clock msg clock clock clock A T clock msg (a) (b) (c) Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' (a) Classical model (b) Sleeping model, (c) Activation model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' passive node is also subject to message events, which can leave the node passive, but may also move the node to the active state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' In particular, a node does not need to be active for receiving messages, and incoming messages may not trigger an immediate response from the node (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', forwarding information).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Instead, a node can remain passive while collecting information from each of its neighbors, and eventually react by becoming active.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Example 1: Broadcast.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Assume that one node of the n-node cycle Cn has a token to be broadcast to all the nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Initially, all nodes are active.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' However, all nodes but the one with the token become immediately passive when the clock ticks for entering the second round.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' The node with the token sends the token to one of its neighbors, and becomes passive at the next clock tick.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Upon reception of the token, a passive node becomes active, forwards the token, and terminates.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' When the source node receives the token back, it becomes active, and terminates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' The node-activation complexity of broadcast is therefore O(1), whereas it is known that broadcasting has awake complexity Ω(log n) in the sleeping model [2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Example 2: At-least-one-leader.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Assume that each node of the cycle Cn has an input-bit specifying whether the node is leader or not, and the nodes must col- lectively check that there is at least one leader.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Every leader broadcasts a token, outputs accept, and terminates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Non-leader nodes become passive immediately after the beginning of the algorithm, and start waiting for N rounds (recall that N is an upper bound on the number n of nodes).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Whenever the “sleep” of a (pas- sive) non-leader is interrupted by the reception of a token, it becomes active, forwards the token, outputs accept, and terminates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' After N rounds, a passive node that has not been “awaken” by a token becomes active, outputs reject, and terminates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' This guarantees that there is at least one leader if and only if all nodes accept.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' The node-activation complexity of this algorithm is O(1), while the awake complexity of at-least-one-leader is Ω(log n) in the sleeping model, by reduction to broadcast.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' The following observation holds for LOCAL and CONGEST, by noticing that every algorithm for the sleeping model can be implemented with no overheads in terms of node-activation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Energy-Efficient Distributed Algorithms 7 Observation 1 In n-node graphs, every algorithm with awake complexity a(n) and round complexity r(n) has node-activation complexity a(n) and round com- plexity r(n).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' It follows from Observation 1 that all upper bound results for the awake complexity directly transfer to the node-activation complexity.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' However, as we shall show in this paper, in contrast to the sleeping model in which some problems (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', spanning tree) have awake complexity Ω(log n), even in the LOCAL model, all problems admit a frugal algorithm in the CONGEST model, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', an algorithm with node-activation O(1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Definition 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' A LOCAL or CONGEST algorithm is node-frugal (resp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', edge- frugal) if the activation of every node (resp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=', edge) is upper-bounded by a constant independent of the graph, and of the identifiers and inputs given to the nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' 3 Universality of Frugal Algorithms In this section we show that every Turing-computable problem can be solved by frugal algorithms, both in the LOCAL and CONGEST models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Thanks to Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' 1, it is sufficient to prove that this holds for node-frugality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Lemma 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' There exists a CONGEST algorithm electing a leader, and con- structing a BFS tree rooted at the leader, with node-activation complexity O(1), and performing in O(N 2) = O(poly(n)) rounds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' The algorithm elects as leader the node with smallest identifier, and initi- ates a breadth-first search from that node.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/E9FLT4oBgHgl3EQfFi_K/content/2301.11988v1.pdf'} +page_content=' At every node v, the protocol performs as follows.' 
– If v has received no messages until round id(v) · N, then v elects itself as leader, and starts a BFS by sending message (id(v), 0) to all its neighbors. Locally, v sets its parent in the BFS tree to ⊥, and the distance to the root to 0.
– Otherwise, let r be the first round at which vertex v receives a message. Such a message is of type (id(u), d), where u is the neighbor of v which sent the message to v, and d is the distance from u to the leader in the graph. Node v sets its parent in the BFS tree to id(u), its distance to the root to d + 1, and, at round r + 1, it sends the message (id(v), d + 1) to all its neighbors. (If v receives several messages at round r, from different neighbors, then v selects the message coming from the neighbor with smallest identifier.)

The node v with smallest identifier is indeed the node initiating the BFS, as the whole BFS is constructed between rounds id(v) · N and id(v) · N + N − 1, and N ≥ n. The algorithm terminates at round at most O(N²). ⊓⊔

An instance of a problem is a triple (G, id, x) where G = (V, E) is an n-node graph, id : V → [1, N] with N = O(poly(n)), and x : V → [1, ν] is the input assignment to the nodes. Note that the input range ν may depend on n, and even be exponential in n, even for classical problems, e.g., whenever weights assigned to the edges are part of the input.
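To make the schedule of Lemma 1 concrete, the following Python fragment is a minimal synchronous simulation of the protocol. It is our own illustrative sketch; the adjacency-list representation, the encoding of ⊥ as -1, and the global round loop are assumptions of the sketch rather than part of the paper.

# Illustrative simulation (ours) of the leader-election + BFS protocol of
# Lemma 1.  'adj' is an adjacency list, 'ident' gives distinct ids in [1, N].
def elect_and_bfs(adj, ident, N):
    n = len(adj)
    parent = [None] * n          # parent id in the BFS tree (-1 encodes ⊥)
    dist = [None] * n            # distance to the root
    inbox = [[] for _ in range(n)]

    max_round = (max(ident) + 1) * N       # safe upper bound on the rounds used
    for rnd in range(1, max_round + 1):
        outbox = [[] for _ in range(n)]
        for v in range(n):
            if dist[v] is not None:
                continue                   # v already joined the BFS tree
            if inbox[v]:
                # First message(s) received: keep the sender of smallest id.
                sender_id, d = min(inbox[v])
                parent[v], dist[v] = sender_id, d + 1
                for w in adj[v]:
                    outbox[w].append((ident[v], d + 1))
            elif rnd == ident[v] * N:
                # No message so far: v elects itself as leader and starts the BFS.
                parent[v], dist[v] = -1, 0
                for w in adj[v]:
                    outbox[w].append((ident[v], 0))
        inbox = outbox               # messages are delivered at the next round
    return parent, dist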
A solution to a graph problem is an output assignment y : V → [1, µ], and the correctness of y depends on G and x only, with respect to the specification of the problem. We assume that µ and ν are initially known to the nodes, as is the case for, e.g., mst, in which the weights of the edges can be encoded on O(log n) bits.

Theorem 1. Every Turing-computable problem has a LOCAL algorithm with O(1) node-activation complexity, and running in O(N²) = O(poly(n)) rounds.

Proof. Once the BFS tree T of Lemma 1 is constructed, the root can (1) gather the whole instance (G, id, x), (2) compute a solution y, and (3) broadcast y to all nodes. Specifically, every leaf v of T sends the set E(v) = { {(id(v), x(v)), (id(w), x(w))} : w ∈ N(v) } to its parent in T. An internal node v waits for receiving a set of edges S(u) from each of its children u in T, and then forwards the set S(v) = E(v) ∪ (∪u∈child(v) S(u)) to its parent. Each node of T is activated once during this phase, and thus the node-activation complexity of gathering is 1. Broadcasting the solution y from the leader to all the nodes is achieved along the edges of T, again with node-activation 1. ⊓⊔

The algorithm used in the proof of Theorem 1 cannot be implemented in CONGEST due to the size of the messages, which may require each node to be activated more than a constant number of times. To keep the node-activation constant, we increase the round complexity of the algorithm.
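Going back to the proof of Theorem 1, its gather-and-broadcast phases can be sketched as follows. This is our own illustration; the tree representation ('children' lists) and the centralised 'solve' callback standing for the root's local computation are assumptions of the sketch.

# Sketch (ours) of the two phases in the proof of Theorem 1, on a BFS tree
# rooted at 'root' and given by 'children' (lists of child indices).
def gather(v, children, adj, ident, x):
    # E(v): the edges incident to v, with identifiers and inputs of endpoints.
    edges = {frozenset({(ident[v], x[v]), (ident[w], x[w])}) for w in adj[v]}
    for u in children[v]:                       # S(v) = E(v) ∪ (∪_u S(u))
        edges |= gather(u, children, adj, ident, x)
    return edges                                # v is activated once

def broadcast(v, children, ident, y, output):
    output[v] = y[ident[v]]                     # v learns its own output value
    for u in children[v]:                       # and forwards y to its children
        broadcast(u, children, ident, y, output)

def run(root, children, adj, ident, x, solve):
    instance = gather(root, children, adj, ident, x)   # the whole (G, id, x)
    y = solve(instance)                  # centralised computation at the root
    output = [None] * len(adj)
    broadcast(root, children, ident, y, output)
    return output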
Lemma 2. Every node-frugal algorithm A performing in R rounds in the LOCAL model with messages of size at most M bits can be implemented by a node-frugal algorithm B performing in R · 2^M rounds in the CONGEST model.

Proof. Let v be a node sending a message m through an incident edge e at round r of A. Then, in B, v sends one “beep” through edge e at round r · 2^M + t, where t is the lexicographic rank of m among the at most 2^M messages generated by A. ⊓⊔

Theorem 2. Every Turing-computable problem has a CONGEST algorithm with O(1) node-activation complexity, and running in 2^{poly(n)+O((ν+µ) log n)} rounds for inputs in the range [1, ν] and outputs in the range [1, µ].

Proof. The algorithm used in the proof of Theorem 1 uses messages of size at most 2N² + ν log N bits during the gathering phase, and of size at most µ log N bits during the broadcast phase. The result follows from Lemma 2. ⊓⊔

Of course, there are many problems that can be solved in the CONGEST model by a frugal algorithm much faster than the bound from Theorem 2. This is typically the case of all problems that can be solved by a sequential greedy algorithm visiting the nodes in arbitrary order, and producing a solution at the currently visited node based only on the partial solution in the neighborhood of the node. Examples of such problems are mis, (∆ + 1)-coloring, etc. We call such problems sequential-greedy.
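For concreteness, here is a minimal sketch, in Python, of the node-frugal schedule formalized in Theorem 3 below, instantiated for greedy (∆ + 1)-coloring: the node with identifier id(v) acts only at round id(v), so every node is activated exactly once. The code is ours; processing identifiers in increasing order simulates the synchronous rounds sequentially.

# Sketch (ours) of a node-frugal CONGEST algorithm for a sequential-greedy
# problem, here greedy (Delta+1)-coloring.
def greedy_coloring(adj, ident):
    n = len(adj)
    color = [None] * n
    received = [set() for _ in range(n)]      # colors announced by neighbors

    for rnd in sorted(ident):                 # round id(v), in increasing order
        v = ident.index(rnd)
        # v picks the smallest color not yet announced by a neighbor ...
        c = 0
        while c in received[v]:
            c += 1
        color[v] = c
        # ... and announces it to all its neighbors (an O(log n)-bit message).
        for w in adj[v]:
            received[w].add(c)
    return color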
Theorem 3. Every sequential-greedy problem whose solution at every node can be encoded on O(log n) bits has a node-frugal CONGEST algorithm running in O(N) = O(poly(n)) rounds.

Proof. Every node v ∈ V generates its output at round id(v) according to its current knowledge about its neighborhood, and sends this output to all its neighbors. ⊓⊔

4 Limits of CONGEST Algorithms with Polynomially Many Rounds

Given a graph G = (V, E) such that V is partitioned into two sets VA, VB, the set of edges with one endpoint in VA and the other in VB is called the cut. We denote by e(VA, VB) the number of edges in the cut, and by n(VA, VB) the number of nodes incident to an edge of the cut. Consider the situation where there are two players, namely Alice and Bob. We say that a player controls a node v if it knows all its incident edges and its input. For a CONGEST algorithm A, we denote by A(I) the output of A on input I = (G, id, x). We denote by RA(n) the round complexity of A on inputs of size n.

Lemma 3 (Simulation lemma). Let A be an algorithm in the CONGEST model, let I = (G, id, x) be an input for A, and let VA, VB be a partition of V (G). Suppose that Alice controls all the nodes in VA, and Bob controls all the nodes in VB. Then, there exists a communication protocol P between Alice and Bob with at most 2 · min(n(VA, VB) · nact(A), e(VA, VB) · eact(A)) rounds, and using total communication O(min(n(VA, VB) · nact(A), e(VA, VB) · eact(A)) · log n · log(RA(n))) bits, such that each player computes the value of A(I) at all the nodes he or she controls.

Proof.
In protocol P, Alice and Bob simulate the rounds of algorithm A at all the nodes they control. The simulation runs in phases. Each phase is used to simulate up to a certain number of rounds t of algorithm A, and takes two rounds of protocol P (one round for Alice, and one round for Bob). By simulating A up to t rounds, we mean that Alice and Bob know all the states of all the nodes they control, at every round up to round t. In the first phase, the players start simulating A from the initial state. Let us suppose that both Alice and Bob have already executed p ≥ 0 phases, meaning that they have correctly simulated A up to round t = t(p) ≥ 0. Let us explain phase p + 1 (see also Figure 2).

Fig. 2. Illustration of one phase of the simulation protocol. Assuming that the players agree on the simulation of algorithm A up to round t, each player runs an oblivious simulation at the nodes they control. In the example of the figure, the next message corresponds to a node controlled by Bob, who sends a message to a node in VA at round rb.
The oblivious simulation of Alice is not aware of this message, and incorrectly considers that a message is sent from VA to VB at round ra > rb. Using the communication rounds in this phase, the players agree that the message of Bob was correct. Thus the simulation is correct up to round rb, for both players.

Starting from round t, Alice runs an oblivious simulation of algorithm A over all nodes that she controls. By oblivious, we mean that Alice assumes that no node of VB communicates a message to a node in VA in any round at least t. The oblivious simulation of Alice stops in one of the following two possible scenarios: (1) all the nodes that she controls either terminate or enter into a passive state that they leave only upon an incoming message from VB; (2) the simulation reaches a round ra at which a message is sent from a node in VA to a node in VB. At the same time, Bob runs an oblivious simulation of A starting from round t (i.e., assuming that no node of VA sends a message to a node in VB in any round at least t). The oblivious simulation of Bob stops in one of the two scenarios analogous to the ones above. In the second case, we call rb the round reached by Bob in his version of scenario (2).

At the beginning of a phase, it is the turn of Alice to speak. Once the oblivious simulation of Alice stops, she is ready to send a message to Bob. If the simulation stops in scenario (1), Alice sends a message "scenario 1" to Bob.
Otherwise, Alice sends ra, together with all the messages sent from nodes in VA to nodes in VB at round ra, to Bob. When Bob receives the message from Alice, one of the following situations holds.

Case 1: the oblivious simulation of both Alice and Bob stopped in the first scenario. In this case, since A is correct, there are no deadlocks. Therefore, all vertices of G reached a terminal state, meaning that the oblivious simulation of both players was in fact a real simulation of A, and the obtained states are the output states. Therefore, Bob sends a message to Alice indicating that the simulation is finished, and indeed Alice and Bob have correctly computed the output of A for all the nodes they control.

Case 2: the oblivious simulation of Alice stopped in scenario (1), and the one of Bob stopped in scenario (2). In this case, Bob infers that his oblivious simulation was correct. He sends rb and all the messages communicated in round rb through the cut to Alice. When Alice receives the message of Bob, she updates the state of the nodes she controls up to round rb. It follows that both players have correctly simulated algorithm A up to round rb > t.

Case 3: the oblivious simulation of Alice stopped in scenario (2), and the one of Bob stopped in scenario (1). In this case, Bob infers that the simulation of Alice was correct up to round ra. He sends a message to Alice indicating that she has correctly simulated A up to round ra, and he updates the states of all the nodes he controls up to round ra. It follows that both players have correctly simulated A up to round ra > t.
Case 4: the oblivious simulation of both players stopped in scenario (2), and ra > rb. Bob infers that his oblivious simulation was correct up to rb, and that the one of Alice was not correct after round rb. Then, the players act in the same way as described in Case 2. Thus, both players have correctly simulated A up to round rb.

Case 5: the oblivious simulation of both players stopped in scenario (2), and rb > ra. Bob infers that his oblivious simulation was incorrect after round ra, and that the one of Alice was correct up to round ra. Then, the players act in the same way as described in Case 3. Thus, both players have correctly simulated A up to round ra.

Case 6: the oblivious simulation of both players stopped in scenario (2), and rb = ra. Bob assumes that both oblivious simulations were correct. He sends rb together with all the messages communicated from his nodes at round rb through the cut. Then, he updates the states of all the nodes he controls up to round rb. When Alice receives the message from Bob, she updates the states of the nodes she controls up to round rb. It follows that both players have correctly simulated A up to round rb > t.
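The six cases boil down to a simple rule: the player who would be the first to send a message across the cut is the one whose oblivious simulation is correct so far, and both players advance their simulation to that round. The following Python fragment is our own compact restatement of this rule, not code from the paper; the tuple encoding of each player's report is an assumption made for the sketch.

# Summary (ours) of the phase resolution in the proof of Lemma 3.  Each player
# reports either ('terminated', None, None) or ('sends', r, msgs), where r is
# the first round at which a node it controls sends a message across the cut.
def resolve_phase(alice, bob):
    """Return the round (and cut messages) certified correct for both players,
    or ('done', None) when the whole execution of A has been simulated."""
    if alice[0] == 'terminated' and bob[0] == 'terminated':
        return 'done', None                     # Case 1: A has fully terminated
    if alice[0] == 'terminated':
        return bob[1], bob[2]                   # Case 2: Bob's messages at rb
    if bob[0] == 'terminated':
        return alice[1], alice[2]               # Case 3: Alice's messages at ra
    # Both would send across the cut: the earlier sender is the correct one
    # (Cases 4 and 5); on a tie both are correct (Case 6).
    if alice[1] < bob[1]:
        return alice[1], alice[2]
    if bob[1] < alice[1]:
        return bob[1], bob[2]
    return alice[1], alice[2] + bob[2]          # Case 6: exchange both sides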
Observe that, except when the algorithm terminates, on each phase of the protocol, at least one node controlled by Alice or Bob is activated. Since the number of rounds of P is twice the number of phases, we deduce that the total number of rounds is at most 2 · min(n(VA, VB) · nact(A), e(VA, VB) · eact(A)). Moreover, on each round of P, the players communicate O(log(RA(n)) · log n · e(VA, VB)) bits. As a consequence, the total communication cost of P is O(log(RA(n)) · log n · e(VA, VB) · min(n(VA, VB) · nact(A), e(VA, VB) · eact(A))), which completes the proof. ⊓⊔

We use the simulation lemma to show that there are problems that cannot be solved by a frugal algorithm in a polynomial number of rounds. In problem C4-freeness, all nodes of the input graph G must accept if G has no cycle of 4 vertices, and at least one node must reject if such a cycle exists. Observe that this problem is expressible in first-order logic; in particular, it has an edge-frugal algorithm with a polynomial number of rounds in graphs of bounded degree [8]. We show that, in graphs of unbounded degree, this does not hold anymore. We shall also consider problem Symmetry, where the input is a graph G with 2n nodes indexed from 1 to 2n, and with a unique edge {1, n + 1} between GA = G[{1, . . . , n}] and
GB = G[{n + 1, . . . , 2n}]. Our lower bounds hold even if every node is identified by its index. All nodes must output accept if the function f : {1, . . . , n} → {n + 1, . . . , 2n} defined by f(x) = x + n is an isomorphism from GA to GB; otherwise at least one node must output reject. The proof of the following theorem is based on classic reductions from the communication complexity problems Equality and Set Disjointness (see, e.g., [9]), combined with Lemma 3.

Theorem 4. Any CONGEST algorithm solving Symmetry (resp., C4-freeness) in polynomially many rounds has node-activation and edge-activation at least Ω(n²/log² n) (resp., Ω(√n/log² n)).

Proof. In problem Equality, the two players, Alice and Bob, each have a Boolean vector of size k, xA for Alice and xB for Bob. Their goal is to answer true if xA = xB, and false otherwise.
The communication complexity of this problem is known to be Θ(k) [9]. Let k = n². We can interpret xA and xB as the adjacency matrices of two graphs GA and GB in an instance of Symmetry. It is a mere technicality to "shift" GB as if its vertices were indexed from 1 to n, such that Symmetry is true for G iff xA = xB. Moreover, Alice can construct GA from its input xA, and Bob can construct GB from xB. Both can simulate the unique edge joining the two graphs in G. Therefore, by Lemma 3 applied to G, if Alice controls the vertices of GA, and Bob controls the vertices of GB, then any CONGEST algorithm A solving Symmetry in polynomially many rounds yields a two-party protocol for Equality on n² bits. Since graphs GA and GB are linked by a unique edge, the total communication of the protocol is O(eact(A) · log² n) and O(nact(A) · log² n). The result follows.

In Set Disjointness, each of the two players Alice and Bob has a Boolean vector of size k, xA for Alice, and xB for Bob. Their goal is to answer true if there is no index i ∈ [k] such that both xA[i] and xB[i] are true (in which case, xA and xB are disjoint), and false otherwise. The communication complexity of this problem is known to be Θ(k) [9]. We use the technique in [5] to construct an instance G for C4-freeness, with a small cut, from two Boolean vectors xA, xB of size k = Θ(n^{3/2}). Consider a C4-free n-vertex graph H with a maximum number of edges. Such a graph has k = Θ(n^{3/2}) edges, as recalled in [5].
We can consider the edges E(H) as indexed from 1 to k, and V (H) as [n]. Let now xA and xB be two Boolean vectors of size k. These vectors can be interpreted as edge subsets E(xA) and E(xB) of H, in the sense that the edge indexed i in E(H) appears in E(xA) (resp., E(xB)) iff xA[i] (resp., xB[i]) is true. Graph G is constructed to have 2n vertices, formed by two sub-graphs GA = G[{1, . . . , n}] and GB = G[{n + 1, . . . , 2n}]. The edges of E(GA) are exactly the ones of E(xA). Similarly, the edges of E(GB) correspond to E(xB), modulo the fact that the vertex indexes are shifted by n, i.e., for each edge {u, v} ∈ E(xB), we add edge {u + n, v + n} to GB. Moreover, we add a perfect matching to G, between V (GA) and V (GB), by adding all edges {i, i + n}, for all i ∈ [n]. Note that G is C4-free if and only if the vectors xA and xB are disjoint. Indeed, since GA, GB are isomorphic to sub-graphs of H, they are C4-free.
Thus any C4 of G must contain two vertices in GA and two in GB, in which case the corresponding edges in GA and GB designate the same bit of xA and xB, respectively. Moreover, Alice and Bob can construct GA and GB, as well as the edges in the matching, from their respective inputs xA and xB. Therefore, thanks to Lemma 3, a CONGEST algorithm A for C4-freeness running in a polynomial number of rounds can be used to design a protocol P solving Set Disjointness on k = Θ(n^{3/2}) bits, where Alice controls V (GA) and Bob controls V (GB). The communication complexity of the protocol is O(eact(A) · n · log² n), and O(nact(A) · n · log² n), since the cut between GA and GB is a matching. The result follows. ⊓⊔

5 Node versus Edge Activation

In this section we exhibit a problem that admits an edge-frugal CONGEST algorithm running in a polynomial number of rounds, for which any algorithm running in a polynomial number of rounds has large node-activation complexity. We proceed by reduction from a two-party communication complexity problem. However, unlike the previous section, we are now also interested in the number of rounds of the two-party protocols. We consider protocols in which the two players Alice and Bob do not communicate simultaneously. For such a protocol P, a round is defined as a maximal contiguous sequence of messages emitted by the same player. We denote by R(P) the number of rounds of P. Let G be a graph, and S be a subset of nodes of G. We denote by ∂S the number of vertices in S with a neighbor in V \ S.

Lemma 4 (Round-Efficient Simulation lemma).
Let A be an algorithm in the CONGEST model, let I = (G, id, x) be an input for A, and let VA, VB be a partition of V (G). Let us assume that Alice controls all the nodes in VA, that Bob controls all the nodes in VB, and that both players know the value of nact(A). Then, there exists a communication protocol P between Alice and Bob such that, in at most min(∂VA, ∂VB) · nact(A) rounds, and using total communication O(((∂VA + ∂VB) · nact(A))² · log n · log RA(n)) bits, each player computes the value of A(I) at all the nodes he or she controls.

Proof. In protocol P, Alice and Bob simulate the rounds of algorithm A at all the nodes each player controls. Without loss of generality, we assume that algorithm A satisfies that the nodes send messages at different rounds, by merely multiplying the number of rounds by N. Initially, Alice runs an oblivious simulation of A that stops when every node in VA either has terminated, or entered into the passive state that it may leave only after having received a message from a node in VB (this corresponds to what we call the first scenario in the proof of Lemma 3). Then, Alice sends to Bob the integer t1 = 0, and the set M^1_A of all messages sent from nodes in VA to nodes in VB in the communication rounds that she simulated, together with their corresponding timestamps. If the number of messages communicated by Alice exceeds nact(A) · ∂VA, we trim the list up to this threshold.
Let us suppose that the protocol P has run for p rounds, and let us assume that it is the turn of Bob to speak at round p + 1 (the case where Alice speaks at round p + 1 can be treated in the same way). Moreover, we assume that P satisfies the following two conditions:
1. At round p, Alice sent an integer tp ≥ 0, and a list of timestamped messages M^p_A corresponding to messages sent from nodes in VA to nodes in VB in an oblivious simulation of A starting from round tp.
2. Bob has correctly simulated A at all the nodes he controls, up to round tp.

We now describe round p + 1 (see also Figure 3). Bob initiates a simulation of A at all the nodes he controls. However, this simulation is not oblivious. Specifically, Bob simulates A from round tp taking into account all the messages sent from nodes in VA to nodes in VB, as listed in M^p_A. The simulation stops when Bob reaches a round tp+1 > tp at which a node in VB sends a message to a node in VA. Observe that, up to round tp+1, the oblivious simulation of Alice was correct. At this point, Bob initiates an oblivious simulation of A at all the nodes he controls, starting from tp+1. Finally, Bob sends to Alice tp+1, and the list M^{p+1}_B of all timestamped messages sent from nodes in VB to nodes in VA resulting from the oblivious simulation of the nodes he controls during rounds at least tp+1. Using this information, Alice infers that her simulation was correct up to round tp+1, and she starts the next round of protocol P.
The simulation carries on until one of the two players runs an oblivious simulation in which all the nodes he or she controls terminate, and no messages were sent through the cut at any intermediate round. In this case, this player sends a message "finish" to the other player, and both infer that their current simulations are correct. As a consequence, each player has correctly computed the output of A at all the nodes he or she controls.

Fig. 3. Illustration of the round-efficient simulation protocol for algorithm A. After round p, Alice has correctly simulated the algorithm up to round tp. It is the turn of Bob to speak in round p + 1. In round p, Alice sent to Bob the set of messages M^p_A, obtained from an oblivious simulation of A starting from tp. Only the first three messages are correct, since at round tp+1 Bob communicates a message to Alice. Then, Bob runs an oblivious simulation of A starting from tp+1, and communicates all the messages sent from nodes in VB to nodes in VA. In this case the first two messages are correct.

At every communication round during which Alice speaks, at least one vertex of VA which has a neighbor in VB is activated. Therefore, the number of rounds of Alice is at most ∂VA · nact(A). By the same argument, the number of rounds of Bob is at most ∂VB · nact(A).
It follows that R(P) ≤ min(∂VA, ∂VB) · nact(A). At each communication round, Alice sends at most ∂VA · nact(A) timestamped messages, which can be encoded using O(∂VA · nact(A) · log n · log RA(n)) bits. Similarly, Bob sends O(∂VB · nact(A) · log n · log RA(n)) bits. It follows that C(P) = O(((∂VA + ∂VB) · nact(A))² · log n · log RA(n)), which completes the proof. ⊓⊔

In order to separate the node-activation complexity from the edge-activation complexity, we consider a problem called Depth First Pointer Chasing, and we show that this problem can be solved by an edge-frugal CONGEST algorithm running in O(poly(n)) rounds, whereas the node-activation complexity of any algorithm running in O(poly(n)) rounds for this problem is Ω(∆), for any ∆ ∈ O(√n log n). The lower bound is proved thanks to the Round-Efficient Simulation Lemma (Lemma 4), by reduction from the two-party communication complexity problem Pointer Chasing, for which too few rounds imply large communication complexity [10].

In Depth First Pointer Chasing, each node v of the graph is given as input its index DFS(v) ∈ [n] in a depth-first search ordering (as usual we denote [n] = {1, . . . , n}). Moreover, the vertex indexed i is given a function fi : [n] → [n], and the root (i.e.
In order to separate the node-activation complexity from the edge-activation complexity, we consider a problem called Depth First Pointer Chasing, and we show that this problem can be solved by an edge-frugal CONGEST algorithm running in O(poly(n)) rounds, whereas the node-activation complexity of any algorithm running in O(poly(n)) rounds for this problem is Ω(∆), for any ∆ ∈ O(√n log n). The lower bound is proved thanks to the Round-Efficient Simulation Lemma (Lemma 3), by reduction from the two-party communication complexity problem Pointer Chasing, for which too few rounds imply large communication complexity [10].

In the Depth First Pointer Chasing problem, each node v of the graph is given as input its index DFS(v) ∈ [n] in a depth-first search ordering (as usual we denote [n] = {1, . . . , n}). Moreover, the vertex indexed i is given a function fi : [n] → [n], and the root (i.e., the node indexed 1) is given a value x ∈ [n] as part of its input. The goal is to compute the value of fn ◦ fn−1 ◦ · · · ◦ f1(x) at the root.

Lemma 5. There exists an edge-frugal CONGEST algorithm for problem Depth First Pointer Chasing, with a polynomial number of rounds.

Proof. The lemma is established using an algorithm that essentially traverses the DFS tree encoded by the indices of the nodes, and performs the due partial computation of the function at every node; that is, the node with index i computes fi ◦ fi−1 ◦ · · · ◦ f1(x), and forwards the result to the node with index i + 1. At round 1, each node v transmits its depth-first search index DFS(v) to its neighbors. Therefore, after this round, every node knows its parent and its children in the DFS tree. Then the algorithm merely forwards messages of type m(i) = fi ◦ fi−1 ◦ · · · ◦ f1(x), corresponding to iterated computations for increasing values i, along the DFS tree, using the DFS ordering. That is, for any node v, let MaxDFS(v) denote the maximum DFS index appearing in the subtree of the DFS tree rooted at v. We will not explicitly compute this quantity, but it will ease the notation.

At some round, vertex v of DFS index i will receive a message m(i − 1) from its parent (of index i − 1). Then node v will be in charge of computing message m(MaxDFS(v)), by "calling" its children in the tree, and sending this message back to its parent. In this process, each edge in the subtree rooted at v is activated twice. The vertex of DFS index 1 initiates the process at round 2, sending f1(x) to its child of DFS index 2. Any other node v waits until it receives a message from its parent, at a round that we denote r(v). This message is precisely m(i − 1) = fi−1 ◦ fi−2 ◦ · · · ◦ f1(x), for i = DFS(v). Then v computes message m(i) = fi ◦ fi−1 ◦ · · · ◦ f1(x) using its local function fi. If it has no children, then it sends this message m(i) to its parent at round r(v) + 1. Assume now that v has j children in the DFS tree, denoted u1, u2, . . . , uj, sorted by increasing DFS index. Observe that, by definition of DFS trees, DFS(uk) = MaxDFS(uk−1) + 1 for each k ∈ {2, . . . , j}. Node v will be activated j times, once for each edge {v, uk}, 1 ≤ k ≤ j, as follows. At round r(v) + 1 (right after receiving the message from its parent), v sends message m(i) to its child u1, then it waits until round r1(v), when it gets back a message from u1. The process is repeated for k = 2, . . . , j: at round rk−1(v) + 1, node v sends the message m(DFS(uk) − 1) received from uk−1 to uk, and waits until it gets back a message from uk, at round rk(v). Note that if k < j then this message is m(DFS(uk+1) − 1), and if k = j then this message is m(MaxDFS(v)). At round rj(v) + 1, after having received messages from all its children, v backtracks message m(MaxDFS(v)) to its parent. If v is the root, then the process stops. The process terminates in O(n) rounds and, except for the first round, every edge of the DFS tree is activated twice: first going downwards, from the root towards the leaves, and second going upwards. At the end, the root obtains the requested message m(n) = fn ◦ fn−1 ◦ · · · ◦ f1(x). ⊓⊔
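A centralized sketch of the message flow in this proof may help. The following Python fragment (our illustration with our own names; a sequential simulation, not CONGEST code) forwards the messages m(i) over a DFS-numbered tree given by parent pointers, counts how often each tree edge carries a message, and checks the root's result against the direct composition fn ◦ · · · ◦ f1(x). The preliminary round in which nodes learn their parent and children is not simulated, since the tree is given explicitly.

from functools import reduce

def run_dfs_pointer_chasing(parent, funcs, x):
    # parent[i] is the parent of the node with DFS index i (parent[1] is None),
    # funcs[i] is the local function f_i, x is the root's input value.
    n = len(funcs)
    children = {i: [] for i in range(1, n + 1)}
    for i in range(2, n + 1):
        children[parent[i]].append(i)          # already sorted by DFS index
    edge_use = {}

    def visit(v, incoming):                    # 'incoming' plays the role of m(DFS(v) - 1)
        m = funcs[v](incoming)                 # node v computes m(DFS(v))
        for u in children[v]:
            edge_use[(v, u)] = edge_use.get((v, u), 0) + 1   # downward message
            m = visit(u, m)                    # u returns m(MaxDFS(u))
            edge_use[(v, u)] += 1              # upward (backtrack) message
        return m                               # this is m(MaxDFS(v))

    return visit(1, x), edge_use

# Tiny check on a DFS-numbered tree with 7 nodes.
funcs = {i: (lambda y, i=i: (y + i) % 5) for i in range(1, 8)}
parent = {1: None, 2: 1, 3: 2, 4: 2, 5: 1, 6: 5, 7: 5}
got, edges = run_dfs_pointer_chasing(parent, funcs, x=3)
assert got == reduce(lambda y, i: funcs[i](y), range(1, 8), 3)
assert all(count == 2 for count in edges.values())             # each tree edge used twice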
Let us recall the Pointer Chasing problem as defined in [10]. Alice is given a function fA : [n] → [n], and a number x0 ∈ [n]. Bob is given a function fB : [n] → [n]. Both players have a parameter k ∈ [n]. Note that the size of the input given to each player is Θ(n log n) bits. The goal is to compute (fA ◦ fB)^k(x0), i.e., k successive iterations of fA ◦ fB applied to x0. We give a slightly simplified version of the result in [10].

Lemma 6 (Nisan and Wigderson [10]). Any two-party protocol for Pointer Chasing using less than 2k rounds has communication complexity Ω(n − k log n).
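For concreteness, the obvious protocol (our illustration, not taken from [10]) simply shuttles the current value back and forth: it uses exactly 2k rounds and O(k log n) bits, which is why Lemma 6 is stated for protocols with fewer than 2k rounds. A minimal sketch:

import math

def pointer_chasing_naive(fA, fB, x0, k, n):
    # Alice holds fA and x0, Bob holds fB; each message is one value in [n],
    # i.e. about log2(n) bits, so the 2k rounds cost O(k log n) bits in total.
    bits_per_msg = math.ceil(math.log2(n))
    rounds, bits, value = 0, 0, x0
    for _ in range(k):
        rounds += 1; bits += bits_per_msg      # Alice -> Bob: current value
        value = fB(value)                      # Bob applies fB
        rounds += 1; bits += bits_per_msg      # Bob -> Alice: fB(value)
        value = fA(value)                      # Alice applies fA locally
    return value, rounds, bits                 # value == (fA o fB)^k(x0), rounds == 2k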
We have now all ingredients for proving the main result of this section.

Theorem 5. For every ∆ ∈ O(n^{1/4}/√log n), every CONGEST algorithm solving Depth First Pointer Chasing in graphs of maximum degree ∆ with polynomially many rounds has node-activation complexity Ω(∆).

Proof. Let k be the parameter of Pointer Chasing that will be fixed later. The lower bound is established for this specific parameter k. Let us consider an arbitrary instance of Pointer Chasing fA, fB : [n] → [n], and x0 ∈ [n], with parameter k. We reduce that instance to a particular instance of Depth First Pointer Chasing (see Fig. 4).

[Fig. 4. Reduction from Pointer Chasing to Depth First Pointer Chasing.]

The graph is a tree T on n vertices, composed of a path (v1, . . . , vn−2k), and 2k leaves vn−2k+1, . . . , vn, all adjacent to vn−2k. Node v1 is called the root, and node vn−2k is said to be central. Note that the ordering obtained by taking DFS(vi) = i is a depth-first search of T, rooted at v1. The root v1 is given value x0 as input. If i ≤ n − 2k, then function fi is merely the identity function f (i.e., f(x) = x for all x). For every j ∈ [k], let aj = vn−2k+2j−1 and bj = vn−2k+2j. All nodes bj get as input the function fB, and all nodes aj get the function fA.
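The construction can be checked mechanically. The sketch below is our code (the exact aj/bj labelling of the leaves is our choice; only the alternation matters): it assigns the identity to the path nodes and alternates fB and fA on the 2k leaves, with fB applied first, and verifies that the Depth First Pointer Chasing output at the root is exactly the Pointer Chasing value (fA ◦ fB)^k(x0). Since DFS(vi) = i, that output is just the straight composition fn ◦ · · · ◦ f1(x0).

from functools import reduce

def reduced_instance_output(fA, fB, x0, k, n):
    assert n > 2 * k
    funcs = {i: (lambda y: y) for i in range(1, n - 2 * k + 1)}   # identity along the path
    for j in range(k):
        funcs[n - 2 * k + 2 * j + 1] = fB                          # applied first in the j-th pair
        funcs[n - 2 * k + 2 * j + 2] = fA
    return reduce(lambda y, i: funcs[i](y), range(1, n + 1), x0)   # f_n o ... o f_1(x0)

# Check against a direct evaluation of (fA o fB)^k(x0).
fA = lambda y: (3 * y + 1) % 11
fB = lambda y: (y + 7) % 11
x0, k, n = 5, 4, 20
want = x0
for _ in range(k):
    want = fA(fB(want))
assert reduced_instance_output(fA, fB, x0, k, n) == want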
Observe that the output of Depth First Pointer Chasing on this instance is precisely the same as the output of the initial instance of Pointer Chasing. Indeed, fn−2k ◦ fn−2k−1 ◦ · · · ◦ f1 is the identity function, and the sequence fn ◦ fn−1 ◦ · · · ◦ fn−2k+2 ◦ fn−2k+1 alternates nodes of "type" aj with nodes of "type" bj, for decreasing values of j ∈ [k], and thus corresponds to fA ◦ fB ◦ · · · ◦ fA ◦ fB, where the pair fA ◦ fB is repeated k times, exactly as in problem Pointer Chasing.

We can now apply the Round-Efficient Simulation Lemma. Let Alice control all vertices aj, for all j ∈ [k], and vertices v1, . . . , vn−2k. Let Bob control vertices bj, for all j ∈ [k]. See Fig. 4. Note that Alice and Bob can construct the subgraph that they control, based only on their input in the considered Pointer Chasing instance, and they both know the value k.

Claim. If there exists a CONGEST algorithm A for Depth First Pointer Chasing on n-node graphs performing in RA rounds with node-activation smaller than 2k, then Pointer Chasing can be solved by a two-party protocol P in less than 2k rounds, with communication complexity O(k^4 log n log RA) bits.
The claim directly follows from Lemma 4. Indeed, by construction, ∂VA = 1 and ∂VB = k. Since we assumed nact(A) < 2k, the two-party protocol P provided by Lemma 4 solves the Pointer Chasing instance in less than 2k rounds, and uses O(k^4 log n log RA) bits. By Lemma 6, we must have k^4 log n log RA ∈ Ω(n − k log n). Therefore, if our CONGEST algorithm A has polynomially many rounds, we must have k ∈ Ω(n^{1/4}/√log n). Since our graph has maximum degree ∆ = 2k + 1, the conclusion follows. ⊓⊔
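Spelling out the last step under the stated assumption that the number of rounds RA is polynomial in n (so log RA = O(log n)): if k were o(n^{1/4}/√log n), then k^4 log n log RA = O(k^4 log^2 n) = o(n) and also k log n = o(n), so n − k log n = Θ(n), contradicting k^4 log n log RA ∈ Ω(n − k log n). Hence k ∈ Ω(n^{1/4}/√log n), which is where the bound on ∆ = 2k + 1 in the statement of Theorem 5 comes from.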
6 Conclusion

In this paper, we have mostly focused on the round complexity of (deterministic) frugal algorithms solving general graph problems in the LOCAL or CONGEST model. It might be interesting to consider specific classical problems. As far as "local problems" are concerned, i.e., for locally checkable labeling (LCL) problems, we have shown that MIS and (∆+1)-coloring admit frugal algorithms with polynomial round complexities. It is easy to see, using the same arguments, that problems such as maximal matching share the same properties. It is however not clear that the same holds for (2∆ − 1)-edge coloring.

Open Problem 1. Is there a (node or edge) frugal algorithm solving (2∆ − 1)-edge-coloring with round complexity O(poly(n)) in the CONGEST model?

In fact, it would be desirable to design frugal algorithms with sub-polynomial round complexities for LCL problems in general. In particular:

Open Problem 2. Is there a (node or edge) frugal algorithm solving MIS or (∆ + 1)-coloring with round complexity O(polylog(n)) in the LOCAL model?

The same type of questions can be asked for global problems. In particular, it is known that MST has no "awake frugal" algorithms, as MST has awake complexity Ω(log n), even in the LOCAL model. In contrast, frugal algorithms for MST do exist as far as node-activation complexity is concerned. The issue is about the round complexities of such algorithms.

Open Problem 3. Is there a (node or edge) frugal algorithm solving MST with round complexity O(poly(n)) in the CONGEST model?

Another intriguing global problem is depth-first search (DFS), say starting from an identified node. This can be performed by an edge-frugal algorithm performing in a linear number of rounds in CONGEST. However, it is not clear whether the same can be achieved by a node-frugal algorithm.

Open Problem 4. Is there a node-frugal algorithm solving DFS with round complexity O(poly(n)) in the CONGEST model?

Finally, we have restricted our analysis to deterministic algorithms, and it might obviously be worth considering randomized frugal algorithms as well.

References
1. Augustine, J., Moses, W.K., Pandurangan, G.: Brief announcement: Distributed MST computation in the sleeping model: Awake-optimal algorithms and lower bounds. In: 41st ACM Symposium on Principles of Distributed Computing (PODC), pp. 51–53 (2022). https://doi.org/10.1145/3519270.3538459
2. Barenboim, L., Maimon, T.: Deterministic logarithmic completeness in the distributed sleeping model. In: 35th International Symposium on Distributed Computing (DISC), LIPIcs, vol. 209, pp. 10:1–10:19. Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021). https://doi.org/10.4230/LIPIcs.DISC.2021.10
3. Chang, Y., Dani, V., Hayes, T.P., He, Q., Li, W., Pettie, S.: The energy complexity of broadcast. In: 37th ACM Symposium on Principles of Distributed Computing (PODC), pp. 95–104 (2018). https://doi.org/10.1145/3212734.3212774
4. Chatterjee, S., Gmyr, R., Pandurangan, G.: Sleeping is efficient: MIS in O(1)-rounds node-averaged awake complexity. In: 39th ACM Symposium on Principles of Distributed Computing (PODC), pp. 99–108 (2020). https://doi.org/10.1145/3382734.3405718
5. Drucker, A., Kuhn, F., Oshman, R.: On the power of the congested clique model. In: Proceedings of the 2014 ACM Symposium on Principles of Distributed Computing (PODC '14), pp. 367–376. Association for Computing Machinery, New York, NY, USA (2014). https://doi.org/10.1145/2611462.2611493
6. Dufoulon, F., Moses, W.K., Pandurangan, G.: Sleeping is super-efficient: MIS in exponentially better awake complexity (2022). https://doi.org/10.48550/ARXIV.2204.08359
7. Ghaffari, M., Portmann, J.: Average awake complexity of MIS and matching. In: 34th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pp. 45–55 (2022). https://doi.org/10.1145/3490148.3538566
8. Grumbach, S., Wu, Z.: Logical locality entails frugal distributed computation over graphs. In: 35th International Workshop on Graph-Theoretic Concepts in Computer Science (WG), LNCS, vol. 5911, pp. 154–165. Springer (2009). https://doi.org/10.1007/978-3-642-11409-0
9. Kushilevitz, E., Nisan, N.: Communication Complexity. Cambridge University Press (1997)
10. Nisan, N., Wigderson, A.: Rounds in communication complexity revisited. SIAM Journal on Computing 22(1), 211–219 (1993). https://doi.org/10.1137/0222016
11. Peleg, D.: Distributed Computing: A Locality-Sensitive Approach. SIAM (2000)
Acknowledgements. The authors are thankful to Benjamin Jauregui for helpful discussions about the sleeping model.

diff --git a/EtE1T4oBgHgl3EQfEgOL/vector_store/index.faiss b/EtE1T4oBgHgl3EQfEgOL/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..6b3f29874ceff91dadd95c8e97b45a3ed63a2187
--- /dev/null
+++ b/EtE1T4oBgHgl3EQfEgOL/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:240cb08635f7c9bc86f2897b07707fd5194673fb8e34e83faf800efb733a4f43
+size 3473453
diff --git a/FNE0T4oBgHgl3EQfhAFe/content/2301.02425v1.pdf b/FNE0T4oBgHgl3EQfhAFe/content/2301.02425v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..71964eafc80aa29a95035b2aa3f02dc7bfda3842
Binary files /dev/null and b/FNE0T4oBgHgl3EQfhAFe/content/2301.02425v1.pdf differ
diff --git a/FNE0T4oBgHgl3EQfhAFe/content/tmp_files/2301.02425v1.pdf.txt b/FNE0T4oBgHgl3EQfhAFe/content/tmp_files/2301.02425v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..c36ffa9a550782928cebc1bb66837e6edeae7905
--- /dev/null
+++ b/FNE0T4oBgHgl3EQfhAFe/content/tmp_files/2301.02425v1.pdf.txt
@@ -0,0 +1,242 @@

arXiv:2301.02425v1 [hep-ph] 6 Jan 2023
January 2023

An SU(15) Approach to Bifermion Classification

Claudio Corianò*, Paul H. Frampton†, Dario Melle‡
Dipartimento di Matematica e Fisica "Ennio De Giorgi", Università del Salento and INFN-Lecce, Via Arnesano, 73100 Lecce, Italy
National Center for HPC, Big Data and Quantum Computing

Thomas W. Kephart§
Department of Physics and Astronomy, Vanderbilt University, Nashville, TN 37235, USA

Tzu-Chiang Yuan¶
Institute of Physics, Academia Sinica, Nangang, Taipei 11529, Taiwan

Abstract
One interesting way to extend the standard model is the hypothesis of bifermions, which are bosons which couple to pairs of quarks and leptons. We point out that SU(15) grand unification gives a natural way to classify bifermions and discuss leptoquarks, biquarks and bileptons. In fact, SU(15) provides an ideal covering group as it contains all possible bifermions within a single model.

*claudio.coriano@le.infn.it
†paul.h.frampton@gmail.com
‡dario.melle@studenti.unisalento.it
§tom.kephart@gmail.com
¶tcyuan@phys.sinica.edu.tw

The standard model (SM) of particle theory has remained robust and only occasionally tantalising hints have appeared from experiment about how to extend it. If and when these hints become more definite, they are likely to influence all of theoretical physics by clarifying the choices which Nature has made. A recent disappointment was that the anomalies in B decays which had stubbornly remained for the eight years 2014-2022 at the 3σ level have now been withdrawn [1]. The present article is intended to be useful for the time when further discrepancies from the standard model appear. One attempt at grand unification [2] involves the gauge group SU(15), where all 15 states of a quark-lepton family are in the defining representation and every possible leptoquark is present in the adjoint representation, which provides a useful classification.
The adjoint appears in 15 × 15∗ = 1 + 224 and contains 72 leptoquarks which transform in irreducible representations of the standard model gauge group (SU(3)C, SU(2)L)Y, with Q = T3 + Y/2, in four sets of 18 as follows:

B = +1/3, L = +1:
2(3, 2)−5/3    Q = (−1/3, −4/3)          ue−, de−
(3, 2)+1/3     Q = (2/3, −1/3)           uν, dν

B = −1/3, L = +1:
2(3∗, 1)−4/3   Q = (−2/3)                ¯uν
(3∗, 1)−10/3   Q = (−5/3)                ¯ue−
(3∗, 3)−4/3    Q = (−5/3, −2/3, +1/3)    ¯ue−, ¯uν, ¯dν

B = +1/3, L = −1:
2(3, 1)+4/3    Q = (2/3)                 e+d
(3, 1)+10/3    Q = (5/3)                 e+u
(3, 3)+4/3     Q = (−1/3, 2/3, 5/3)      νd, e+d, e+u

B = −1/3, L = −1:
2(3∗, 2)+5/3   Q = (1/3, 4/3)            e+¯u, e+¯d
(3∗, 2)−1/3    Q = (−2/3, 1/3)           ν¯u, e+¯u

The adjoint describes the spin-one gauge bosons of SU(15) and also a spin-zero Higgs if it is used [3] for symmetry breaking. A spin-one hypothesis would imply that a leptoquark is a gauge boson of SU(15). In that case, if at least the first two families are treated sequentially as 15's, then unless there is an ad hoc assumption motivated by the data [4], muon-electron LFU (Lepton Flavour Universality, meaning that the leptons e, µ have identical properties in every way except for their different masses) will be an inevitable consequence. A spin-zero hypothesis would imply bifermions in the product 15 × 15 = 105A + 120S as per their Yukawa interactions, hence we examine the decompositions of 15, 105 and 120 into their SM components, which is easily done with the Mathematica package LieART [5,6]:

15 = (3, 2)+1/3 + (3∗, 1)−4/3 + (3∗, 1)+2/3 + (1, 2)−1 + (1, 1)+2    (1)

105 = (3∗, 2)−1/3 + (3, 1)+4/3 + (1, 1)−2
    + (3∗, 3)+2/3 + (1, 2)−1 + (3∗, 1)+2/3 + (3, 1)−8/3
    + (3, 2)+7/3 + (6, 1)+2/3 + (8, 2)−1
    + (6∗, 1)−2/3 + (3∗, 2)−7/3 + (3∗, 1)+8/3 + (3, 1)−2/3 + (3, 3)−2/3 + (1, 2)+1
    + (8, 2)+1 + (3, 1)−2/3 + (1, 2)+1    (2)

120 = (6∗, 1)+4/3 + (3∗, 2)−1/3 + (1, 3)−2
    + (3∗, 1)+2/3 + (1, 2)−1
    + (1, 1)+4 + (3∗, 1)+2/3 + (3, 2)+7/3 + (6∗, 1)−8/3 + (6, 3)+2/3 + (8, 2)−1
    + (6∗, 1)−2/3 + (3∗, 2)−7/3 + (3∗, 1)+8/3 + (3, 1)−2/3 + (3, 3)−2/3 + (1, 2)+1
    + (8, 2)+1 + (3, 1)−2/3 + (1, 2)+1    (3)

The leptoquark (3∗, 1)+2/3 which could have fit the now non-existent B anomalies is seen in both 105 and 120. Being a weak singlet, it does not contribute to the oblique parameters [7] that are tightly constrained by electroweak precision data. The one disadvantage of SU(15), but only an aesthetic one and a stumbling block we must initially ignore, is that anomaly cancellation requires the addition of mirror fermions. An advantage of SU(15) is the absence of proton decay, because all of the adjoint components have well-defined B and L quantum numbers. Even if one rejects the SU(15) model for being vector-like, it is still an ideal testing ground and classification system for leptoquarks, diquarks and dileptons; i.e., it is a perfect umbrella model for models with incomplete lists of bifermions. Smoking guns for SU(15) include a predicted enhancement for B → K(∗)ν¯ν. Because of the lepton mass dependence in the Higgs Yukawas, it predicts significant LFU-violating enhancements relative to the SM for the decays B+ → K+τ+τ− and Bs → τ+τ−. In an ingenious argument [8], it has been convincingly shown that violation of LFU implies the occurrence of LFV decays which are vanishing in the standard model. These will include the decays τ → µγ, τ → µφ and Bs → τµ. The discovery of such LFV processes could lend support for the additional particles we have discussed.
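As a purely arithmetical cross-check of the classification above (our bookkeeping, not a group-theory computation in the sense of LieART), the following Python fragment adds up the dimensions of the standard model irreps quoted in the four sets of leptoquarks and in the decompositions (2) and (3); the sums come out to 4 × 18 = 72, 105 and 120, and 105 + 120 = 15 × 15, as they should. The representation lists are transcribed by hand from the text.

def dim(rep):
    # Dimension of an (SU(3), SU(2)) entry, e.g. ('3*', 2) -> 6.
    colour = {'1': 1, '3': 3, '3*': 3, '6': 6, '6*': 6, '8': 8}
    return colour[rep[0]] * rep[1]

# The four sets of 18 leptoquarks in the 224: (multiplicity, colour, weak isospin).
leptoquark_sets = [
    [(2, '3', 2), (1, '3', 2)],
    [(2, '3*', 1), (1, '3*', 1), (1, '3*', 3)],
    [(2, '3', 1), (1, '3', 1), (1, '3', 3)],
    [(2, '3*', 2), (1, '3*', 2)],
]
assert all(sum(m * dim((c, w)) for m, c, w in s) == 18 for s in leptoquark_sets)   # 4 x 18 = 72

# SM content of the 105 and 120 of SU(15), as listed in Eqs. (2) and (3).
rep105 = [('3*', 2), ('3', 1), ('1', 1), ('3*', 3), ('1', 2), ('3*', 1), ('3', 1),
          ('3', 2), ('6', 1), ('8', 2), ('6*', 1), ('3*', 2), ('3*', 1), ('3', 1),
          ('3', 3), ('1', 2), ('8', 2), ('3', 1), ('1', 2)]
rep120 = [('6*', 1), ('3*', 2), ('1', 3), ('3*', 1), ('1', 2), ('1', 1), ('3*', 1),
          ('3', 2), ('6*', 1), ('6', 3), ('8', 2), ('6*', 1), ('3*', 2), ('3*', 1),
          ('3', 1), ('3', 3), ('1', 2), ('8', 2), ('3', 1), ('1', 2)]
assert sum(map(dim, rep105)) == 105 and sum(map(dim, rep120)) == 120
assert sum(map(dim, rep105)) + sum(map(dim, rep120)) == 15 * 15                    # 105 + 120 = 225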
It will be exciting to learn from experiments about more violations of LFU, as well as any examples of LFV. Such additional input is necessary to further evolve the theory. There has been extensive discussion of leptoquarks because they were temporarily suggested by the now-defunct B anomalies. Bileptons are suggested by the 331-model. We are tempted to believe that the third and last type of bifermion, the biquark, appearing in the 224 of SU(15), may also exist in Nature.

The 224 has 76 components with B = L = 0. The remaining 148 include the 72 leptoquarks listed ut supra, 72 biquarks and 4 bileptons.

The 72 biquarks fall into two sets of 36:

B = +2/3, L = 0:
(3∗ + 6, 2)+5/3   Q = (4/3, 1/3)    uu, ud
(3∗ + 6, 2)−1/3   Q = (1/3, −2/3)   ud, dd

and

B = −2/3, L = 0:
(3 + 6∗, 2)−5/3   Q = (−4/3, −1/3)  ¯u¯u, ¯u¯d
(3 + 6∗, 2)+1/3   Q = (−1/3, 2/3)   ¯u¯d, ¯d¯d

In the phenomenological analysis of tetraquarks (first discovered in 2003) and pentaquarks (2015), the name "diquark" is used for two quarks behaving together like a molecule, so a diquark is definitely a bound state and not an elementary particle like a biquark. At present the study of tetraquarks and pentaquarks is successful [9] by using only diquarks, without biquarks. It will be interesting to discover whether biquarks become necessary in these analyses. The distinction between diquark and biquark could be made using the same criterion as used in [10] to decide whether the deuteron is a bound state or elementary.

Finally, we discuss the four bileptons in the 224, which are in two SU(2) doublets: (Y−−, Y−) with B = 0, L = 2, and (Y++, Y+) with B = 0, L = −2. In the context of the 331-model, they lead [11] to the prediction of a resonance in same-sign leptons with mass between 1 TeV and 4 TeV, and width ΓY ≃ 0.05 − 0.10 TeV. The bilepton resonance in µ±µ± has been the subject of searches by the ATLAS and CMS Collaborations at the LHC. In March 2022, ATLAS published an inconclusive result [12] about the existence of the bilepton, putting only a lower mass limit MY > 1.08 TeV. CMS may have better momentum resolution and charge identification than ATLAS and may therefore be able to investigate the bilepton resonance properly. At the time of writing, CMS began a search in earnest in October 2022, which is expected to be unblinded at some time in 2023. Of the three classes of elementary bifermion (biquark, leptoquark, bilepton), the one which appears nearest to confirmation at the present time is the bilepton.

Acknowledgements

The work of C. C. and D. M. is funded by the European Union, Next Generation EU, PNRR project "National Centre for HPC, Big Data and Quantum Computing", project code CN00000013, and by INFN iniziativa specifica QFT-HEP.

References

[1] LHCb Collaboration, arXiv:2212.09153 [hep-ex].
[2] P.H. Frampton and B.H. Lee, Phys. Rev. Lett. 64, 619 (1990).
[3] P.H. Frampton and T.W. Kephart, Phys. Rev. D42, 3892 (1990).
[4] C. Cornella, D.A. Faroughy, J. Fuentes-Martin, G. Isidori and M. Neubert, JCAP 08:050 (2021), arXiv:2103.16558 [hep-ph].
[5] R. Feger and T. W. Kephart, Comput. Phys. Commun. 192, 166 (2015).
[6] R. Feger, T. W. Kephart and R. J. Saskowski, Comput. Phys. Commun. 257, 107490 (2020).
[7] M. E. Peskin and T. Takeuchi, Phys. Rev. D 46, 381-409 (1992).
[8] S.L. Glashow, D. Guadagnoli and K. Lane, Phys. Rev. Lett. 114, 091801 (2014), arXiv:1411.0565 [hep-ph].
[9] L. Maiani and A. Pilloni, arXiv:2207.05141 [hep-ph].
[10] S. Weinberg, Phys. Rev.
137, B672 (1965).
[11] P.H. Frampton, Phys. Rev. Lett. 69, 2889 (1992).
[12] ATLAS Collaboration, ATLAS-CONF-2022-010 (11 March 2022).
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfhAFe/content/2301.02425v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfhAFe/content/2301.02425v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfhAFe/content/2301.02425v1.pdf'} +page_content=' 69, 2889 (1992).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfhAFe/content/2301.02425v1.pdf'} +page_content=' [12] ATLAS Collaboration, ATLAS-CONF-2022-010 (11 March 2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfhAFe/content/2301.02425v1.pdf'} +page_content=' 5' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfhAFe/content/2301.02425v1.pdf'} diff --git a/FNE2T4oBgHgl3EQfSwf4/content/tmp_files/2301.03797v1.pdf.txt b/FNE2T4oBgHgl3EQfSwf4/content/tmp_files/2301.03797v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..7923b9c754928597b1da4b8881201b830d8b269b --- /dev/null +++ b/FNE2T4oBgHgl3EQfSwf4/content/tmp_files/2301.03797v1.pdf.txt @@ -0,0 +1,2048 @@ +Recommending Root-Cause and Mitigation Steps +for Cloud Incidents using Large Language Models +Toufique Ahmed∗§, Supriyo Ghosh†, Chetan Bansal† +Thomas Zimmermann‡, Xuchao Zhang†, Saravan Rajmohan† +∗UC Davis +†Microsoft +‡Microsoft Research +Abstract—Incident management for cloud services is a complex +process involving several steps and has a huge impact on both +service health and developer productivity. On-call engineers +require significant amount of domain knowledge and manual +effort for root causing and mitigation of production incidents. +Recent advances in artificial intelligence has resulted in state-of- +the-art large language models like GPT-3.x (both GPT-3.0 and +GPT-3.5), which have been used to solve a variety of problems +ranging from question answering to text summarization. In this +work, we do the first large-scale study to evaluate the effectiveness +of these models for helping engineers root cause and mitigate +production incidents. We do a rigorous study at Microsoft, on +more than 40,000 incidents and compare several large language +models in zero-shot, fine-tuned and multi-task setting using +semantic and lexical metrics. Lastly, our human evaluation with +actual incident owners show the efficacy and future potential of +using artificial intelligence for resolving cloud incidents. +Index Terms—Incident Management, Service Quality, GPT-3.x, +Large Language Models +I. INTRODUCTION +Large IT enterprises such as Amazon, Google, Microsoft, +and Salesforce have replaced the traditional shrink-wrapped +software and moved towards deploying applications and ser- +vices on cloud platforms. In today’s cloud systems, production +incidents (e.g., outage or performance degradation, unplanned +interruptions) adversely impact the customers and can be +expensive in terms of penalty associated with service level +agreement violations and engineering efforts required to mit- +igate the incidents. For example, one hour of downtime is +estimated to cost Amazon US$100 million on major shopping +days [1]. Despite continuous reliability efforts over the years, +cloud services still experience inevitable severe incidents. +Artificial Intelligence (AI) for IT Operations, also known +as AIOps, has increased in popularity. 
+Data-driven and AI techniques have been leveraged for automating parts of the incident life-cycle, for example, incident prioritization [2], retrieval of incidents with similar symptoms [3], and reducing the time to mitigate incidents [4], [5]. However, on-call engineers (OCEs) still expend a significant amount of manual toil through multiple rounds of back-and-forth communication for identifying root causes and mitigation steps. Motivated by the recent successes of leveraging GPT-3 models for non-trivial tasks [6], [7] and code generation [8], we apply such models to incident management. We identified the following two scenarios:
+1) Find the incident's root cause. Diagnosing incidents typically requires significant time and communication before engineers can identify the root cause of the incident. We investigate how effective large language models are at suggesting root causes for incidents (RQ1).
+2) Suggest the mitigation steps for the incident. After a root cause has been located, engineers take actions to mitigate the problem. We investigate how effective large language models are at recommending the mitigation steps for incidents (RQ2).
+When applying large language models, several considerations and decisions need to be made. Since the models were not trained with incident management data, is fine-tuning of the models necessary (RQ3)? Is it more effective to build one model for each task (single-task) or one combined model that supports both root causes and mitigations (multi-task) (RQ4)? Does the root cause help language models to find better mitigation steps (RQ5)? Do the models perform better for certain types of incidents (RQ6)? We address these questions with a rigorous large-scale evaluation of 44,340 incidents from 1,759 services of Microsoft. In addition to lexical and semantic evaluation metrics that are typically reported for such experiments, we present the results from a human validation, where we asked incident owners to assess the correctness and readability of suggested root causes and mitigation steps. The original incident owners are the most qualified to assess the performance of the models on incidents. In this paper, we make the following contributions:
+1) This is the first work to demonstrate the usefulness of state-of-the-art large language models (LLMs) such as GPT-3.x (both GPT-3.0 and GPT-3.5) for resolving production incidents in a real-world setting. (Section III)
+2) We present a rigorous and large-scale study at Microsoft on over 40,000 incidents from 1000+ cloud services with six semantic and lexical metrics. (Section IV)
+• Fine-tuning significantly improves the effectiveness of LLMs for incident data.
+• GPT-3 and GPT-3.5 models significantly outperform encoder-decoder models in our experiments.
+• Metrics such as BLEU-4 are useful to measure the relative performance of models in different settings. However, manual inspection and validation with experts is needed to assess the actual performance.
+3) Our human study with the actual incident owners of production incidents helps prove the efficacy of the proposed approach. (Section V)
+II. OVERVIEW
+A. Incident management
+Production incidents are inevitable in large-scale cloud services and often severely affect the customer experience.
+Also, they can be extremely expensive in terms of the engineering resources required to root cause and mitigate them. An incident life-cycle typically has the following four stages: (1) Detection: The first step in the incident life-cycle is detection, where incidents are reported by internal or external customers of a given service after they notice anomalous behavior. Incidents can also be reported via automated monitors which are configured by the service owners. (2) Triaging: Once an incident is reported, a team of OCEs analyzes the problem and routes the incident ticket to the appropriate engineering team. This process is often referred to as incident triaging. (3) Diagnosis: The incident diagnosis and root cause identification process requires multiple iterations of back-and-forth communication between engineers inspecting different aspects to understand the broad nature of the incident and identify the root cause. (4) Mitigation: Based on the identified root causes, actions are taken to mitigate the problem so as to recover the service health and minimize the impact on the service users.
+Lately, AIOps (AI for IT Operations) has gained popularity for automating various parts of the incident life-cycle by combining data-driven and AI techniques with data sources like application logs, time-series performance metrics and service traces [2], [4], [5], [9]. Despite significant efforts, incident management in large cloud systems still requires a huge amount of engineering effort and cost. More specifically, even with a plethora of historical incident data, root cause identification and mitigation remain notoriously challenging and time-consuming tasks. In this work, we propose to use large language models such as GPT-3.x to automatically recommend root causes and mitigation for new incidents by leveraging historical incident data.
+B. The promise of LLMs/GPT-3.x models
+Large language models (LLMs) such as GPT-3.x [7] have emerged as one of the hottest trends in natural language processing over the last few years. With 175 billion parameters, the GPT-3.x language models, which held the record for being the largest neural network ever developed, are an order of magnitude larger than prior language models. Using this massive model architecture, GPT-3.x were trained on almost all accessible data from the Internet, including CommonCrawl [10], WebText [11], Wikipedia [12], and a corpus of books.
+Title: Attach vm fails with connection timeout
+Summary: The workspace is not associated with any vnet. Customer has a vm which is already running inside a vnet. They like to attach that vm into [product omitted]. We tried the UI and CLI route, but still fails with same connection timeout error. Error points that it resolves to some public ip [...]
+Reference root cause: It is not supported to attach a private vm to a public workspace directly.
+Reference mitigation: Open a task to provide better official document for customer on the topic of virtual machine.
+Fig. 1: A sample production incident.
+GPT-3.x models surpass the state-of-the-art models in a variety of NLP tasks, including machine translation, question-answering, and cloze tasks. Furthermore, the GPT-3.x models achieved a significant milestone by showing that unsupervised language models trained with adequate data can multi-task at the same level as fine-tuned models using just a few examples of the new tasks.
+As a result of their powerful text generation capabilities in new tasks, GPT-3.x models are used in a wide range of categories and industries, from productivity and education to creativity and gaming. For instance, GPT-3.x models are used to produce creative writing, including blog posts, advertisements, and poetry, that mimics the literary style of well-known writers like Shakespeare.
+C. Root-causing and mitigating incidents
+Incident root-causing and mitigation is a complex process which requires a significant amount of manual effort as well as domain knowledge about the services. Incidents can be caused by various kinds of issues such as code bugs, dependency failures, infrastructure issues, configuration bugs, etc. Due to the vast number of possibilities, it is non-trivial for the OCEs to root cause the incidents. Similarly, once the root cause is identified, various mitigation steps can be taken, such as code rollback, hotfix, infrastructure changes, configuration update, etc. Identifying the correct mitigation step is again non-trivial and requires domain knowledge and experience. Human errors in root causing or mitigation of incidents result in not just more effort and human toil but also impact on the customers and the revenue. Fig. 1 shows a real incident from a service where we can see the title and summary provided by the customer along with the actual root cause and mitigation.
+In this study, we evaluate the effectiveness of large language models like GPT-3.x and Codex for root causing and mitigating production incidents. When an incident is created, the author specifies a title for the incident and describes any relevant details, such as error messages, anomalous behavior and other details which could potentially help with resolution. Once the OCE starts investigating the incident, they might get more details by communicating with the incident author or by looking at telemetry and logs. During the course of the investigation, the OCE often updates the incident details. For our evaluation, we use the title and the summary of a given incident at the time of incident creation as input and generate the root cause and mitigation steps. This is to ensure that we only use the information which was available to the OCE when they started investigating the incident.
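+To make this input construction concrete, the sketch below shows one plausible way to turn an incident's title and creation-time summary into a prompt/completion pair for a completion-style model. The field names, separator strings, and the build_example helper are illustrative assumptions, not the exact serialization used in this study.
+# Minimal sketch: turning an incident into a prompt/completion pair.
+# Field names and separators are hypothetical; the paper does not
+# prescribe an exact serialization format.
+def build_example(incident: dict, target: str) -> dict:
+    """Build one fine-tuning example from an incident record.
+
+    `target` is either the reference root cause or the reference
+    mitigation, depending on which task the model is trained for.
+    """
+    prompt = (
+        f"Title: {incident['title']}\n"
+        f"Summary: {incident['summary']}\n"  # summary as written at creation time
+        "Root cause:"                        # or "Mitigation:" for the other task
+    )
+    # Completion-style APIs typically expect an explicit stop marker.
+    completion = " " + target.strip() + "\n###"
+    return {"prompt": prompt, "completion": completion}
+
+# Example usage with a toy incident record.
+example = build_example(
+    {"title": "Attach vm fails with connection timeout",
+     "summary": "The workspace is not associated with any vnet. ..."},
+    target="It is not supported to attach a private vm to a public workspace directly.",
+)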
+D. Research questions
+We investigated several OpenAI GPT-3.x models (i.e., Curie, Codex-cushman, Davinci, Code-davinci-002) to generate root causes and mitigation plans for incidents. This leads to several RQs.
+RQ1 Are fine-tuned GPT-3.x models effective at finding the incident's root cause?
+The OpenAI models are not trained with the incident management data since the data contain sensitive private information, and Microsoft follows standard protocols to ensure the security of the data. Therefore, the GPT-3.x models are not expected to perform well in zero-shot/few-shot settings. In this paper, we fine-tuned four different GPT-3.x models with different capacities and observed how the models performed at proposing the root causes of the incident.
+RQ2 Are fine-tuned GPT-3.x models capable of suggesting the mitigation plan for the incident?
+We are also interested in generating mitigation plans for the incident using GPT-3.x models. Like root cause generation, we fine-tune and evaluate the model using the input and criteria we use for RQ1.
+RQ3 How much does fine-tuning improve over the zero-shot learning performance of GPT-3.x models?
+Though we primarily focus on fine-tuning, GPT-3.x models are reported to be effective at various downstream tasks with zero-shot and few-shot training [7], [8]. In few-shot learning, we use a few examples in the prompt as input to the model, and the model generates the expected output. Zero-shot is similar to few-shot training, but no examples are given. These two settings are economically and environmentally beneficial (reduced carbon footprint) because we are not updating any parameters of the models. This paper investigates how the models perform in the zero-shot setting. Note that few-shot learning is unsuitable for our project because we have long sequences in our dataset, and we observe truncation of the sequences even when we infer only one sequence after fine-tuning.
+RQ4 Does multi-task learning improve the performance of GPT-3.x models at finding root causes and mitigation plans?
+Multi-task learning is effective for some pre-trained models [13]. So far, we have discussed training separate models and using the input independently to generate the incident's root cause and mitigation plans. We are interested in how GPT-3.x models react to multi-task learning in our specific setting. For this experiment, we combine all the training data for both tasks. However, during evaluation, we use the same test sets used in RQ1 and RQ2.
+RQ5 Do GPT-3.x models get better at proposing mitigation plans if the root cause is given?
+Mitigation plans for an incident depend on the specific root cause. Different root causes may lead to different mitigation plans. Moreover, the GPT-3.x models can be improved by making the input larger or more informative. We will also investigate whether providing the root cause in the input helps the models find the mitigation plans.
+RQ6 Do the models propose better mitigation plans for machine-detected incidents than for human-detected ones?
+Incidents can be machine-detected (by some monitors) or human-detected. Both types of incidents have specific characteristics. Machine-detected incidents are generally triggered when the monitor observes system changes like build failures, resource availability, request counts, etc. On the contrary, human-detected incidents are unique and may apply to a specific customer (e.g., a webpage is not loading). In this research question, we will investigate whether the models perform well for incidents belonging to a specific class.
+E. Human validation
+Root causes and mitigation plans can be written in different forms. Unlike natural language translation or code summarization, root causes and mitigation steps are much more open-ended. Depending on the author, the root causes and mitigation plans can vary from generic to specific. Automatic metrics may fail to accurately reflect the overall performance of the models because these metrics compare the models' suggestions with a single reference, which may be completely different from outputs of the models that are nevertheless correct and relevant. To better understand the models' performance, we went to the owners/resolvers of the specific incidents and presented the solutions from our models and baselines. They assigned correctness and readability scores to the models' output. We discuss our methodology and findings from the human validation in Section V.
+III. METHODOLOGY
+A. Dataset Preparation
+Thousands of incidents of varying severity are detected (by both machines and humans) every day at Microsoft.
+The on-call engineers (OCEs) work relentlessly to provide seamless service to the customers. To manage incidents at that scale, Microsoft has a well-designed website for reporting and managing incidents. A database also keeps track of the website's data insertions, modifications, and deletions from incident reporting to mitigation. One of the inputs to the model is the summary written at the time of incident reporting or creation, which prevents any data leakage from input to output.
+In most cases, the OCEs do not follow any specific format to write incident summaries, root causes, and mitigation plans. The fields, especially summaries, contain information in multiple forms, including tables, links to prior incidents, and images of individual monitor output or code snippets. This is because the incidents are very different from each other, and the utmost priority of the OCEs is to resolve the incident rather than document the symptoms. Also, some incidents are transient and auto-mitigated. No post-mortem is done if the severity is low. Since GPT-3.x are text models, we discarded the tables and images from the summaries. Hence, there is a chance that we lost some critical information while discarding that information.
+We collected data for incidents from the database with creation dates between January 1, 2018, and July 15, 2022. Initially, we collected 123,953 instances for root causes and 23,544 for mitigations from the "Resolved" or "Mitigated" incidents with severity levels 0-3 (the most severe incidents belong to level 0). The number of mitigation samples is lower because mitigations can only be found in the post-mortem of an incident, and post-mortems are not done for every incident. After collecting the data, we observed many incidents with duplicate root causes and mitigations. Some severe incidents or denial-of-service events trigger hundreds of incident reports for the same event, all of which have the exact same root causes and mitigations. To fairly evaluate the model, we remove the exact duplicates for root causes and mitigation plans and end up with 57,520 root causes and 8,300 mitigation plans. The average root cause and mitigation lengths are 87 and 12 tokens, respectively. Some root causes are very long, and it is difficult for the models and human evaluators to generate and evaluate such output. We kept root causes of up to 100 tokens, allowing us to retain 73% of the root cause instances. We also discarded root causes and mitigation plans with fewer than three tokens because those are not informative.
+After deduplication and filtering, we sorted the data according to the creation date to use only historical data for training the model. We selected 35,820, 3,000 and 2,000 root causes for training, testing and validation. We have fewer instances for mitigations; hence, the training, test and validation sets for mitigations have 5,455, 2,000 and 500 instances, respectively. Even after this rigorous filtering and deduplication of data, some root causes and mitigations do not carry any useful information (e.g., the root cause is only available via a different link, or the incident is transient and auto-mitigated). We manually went through 3,000 root causes and 2,000 mitigation plans from the test sets and selected 2,621 root causes and 1,780 mitigation plans.1
+1We cannot share the dataset because incident data can contain confidential and private data and sharing such data would violate the terms of service.
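+A rough sketch of this preparation pipeline is shown below. It assumes a pandas DataFrame with hypothetical column names (status, severity, root_cause, created_at) and a simple whitespace token count; the actual schema, tokenizer, and filtering code used internally are not described in the paper.
+# Illustrative sketch of the deduplication, filtering, and chronological
+# split described above. Column names and the tokenizer are assumptions.
+import pandas as pd
+
+def prepare_root_cause_data(df: pd.DataFrame):
+    # Keep only resolved/mitigated incidents with severity levels 0-3.
+    df = df[df["status"].isin(["Resolved", "Mitigated"]) & (df["severity"] <= 3)]
+
+    # Drop exact duplicate targets (e.g., one outage fanning out into
+    # hundreds of identical reports).
+    df = df.drop_duplicates(subset=["root_cause"])
+
+    # Discard uninformative or overly long targets (3 to 100 tokens kept),
+    # using a whitespace tokenizer as a stand-in.
+    n_tokens = df["root_cause"].str.split().str.len()
+    df = df[(n_tokens >= 3) & (n_tokens <= 100)]
+
+    # Sort chronologically so that only historical data is used for training.
+    df = df.sort_values("created_at")
+
+    train = df.iloc[:35820]
+    test = df.iloc[35820:38820]
+    valid = df.iloc[38820:40820]
+    return train, test, valid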
+B. OpenAI models and baselines
+The recent advancement of deep neural network models has been greatly influenced by the introduction of Transformer models [14]. Prior approaches (i.e., LSTM [15] and GRU [16]) modeled the sequential dependencies of the generated text using recurrent architectures. These recurrent models use "Back-Propagation Through Time" (BPTT) to recursively propagate loss values over gradients within the same recurrent units, prohibiting parallel computation while capturing the long-distance dependencies of the tokens in the sequence. Bahdanau et al. introduced an attention mechanism that works on top of the recurrent architecture and improves the performance of recurrent neural models by providing an attention vector that indicates the relevant part of the input for the target output [17]. The Transformer model completely removes the recurrence unit and relies entirely on the attention mechanism. It uses a multi-layer, multi-head self-attention architecture where the attention mechanism can relate different positions of a single sequence to compute a sequence representation.
+Pre-trained models currently achieve state-of-the-art performance for various natural language and code tasks. These pre-trained models work in two stages (i.e., pre-training and fine-tuning). In the pre-training stage, we train the model to learn the statistics of language (or code) in a self-supervised fashion from large-scale corpora. After that, we use a smaller labeled dataset to fine-tune the model for specific tasks. It is nearly infeasible to obtain sufficient labeled data to train such high-capacity deep learning models; pre-training enables us to train such big models with unlabeled data in a self-supervised way. All the recent pre-trained encoder-only and encoder-decoder models (e.g., BERT [18], RoBERTa [19], BART [20], T5 [21]) and decoder-only generative models (e.g., GPT [22], GPT-2 [23], GPT-3 [7], OPT [24]) are basically Transformer models of various capacities trained with different pre-training objectives. The following subsections briefly discuss the baselines and OpenAI models we used for our experiments.
+1) Baseline encoder-decoder models: We can apply encoder-decoder models for both root cause and mitigation generation. The encoder encodes the input, and the decoder generates the root cause or mitigation using the encoded representation provided by the encoder.
+Pre-trained NLP models (e.g., BERT [18], RoBERTa [19], BART [20], T5 [21]) use different self-supervised pre-training objectives to learn robust language representations. NLP models have programming language counterparts (e.g., CodeBERT [25], GraphCodeBERT [26], PLBART [27], CodeT5 [13], NatGen [28]) where the models are initialized with the NLP models' weights and pre-training is continued with code and, in most cases, associated natural language comments. Though root causes and mitigations are natural language descriptions, their vocabulary (e.g., identifiers) overlaps more with the comments used in code models. Therefore, we picked both NLP and code models, for the OpenAI models as well as the baselines, to see if the performance differs depending on the domain used for pre-training. For baselining, we pick the RoBERTa [19] and CodeBERT [25] models for two reasons: i) the two models are architecturally identical, with 125M parameters, and ii) both models are widely used as baselines (in fact, CodeBERT is the primary baseline model of the CodeXGLUE [29] dataset, which is a popular benchmark of 10 SE tasks including encoder-decoder tasks like code summarization and code translation). Note that many Transformer-based encoder-decoder models can be applied to this problem; however, comparing with all such models is beyond the scope of the paper.
+RoBERTa: BERT was the first model to introduce a pre-training strategy that outperforms traditional Transformer models. It applied two pre-training objectives: Masked Language Modeling (MLM) and Next Sentence Prediction (NSP). In MLM pre-training, we randomly mask out 15% of the tokens and ask the model to recover those tokens, whereas in NSP, we train the model to predict the next sentence following an input sentence. Liu et al. [19] propose RoBERTa (A Robustly Optimized BERT Pre-training Approach), which outperforms the BERT model with a few changes, such as
For baselining, we pick RoBERTa [19] and +CodeBERT [25] models because of two reasons: i) these two +models are architecturally identical with 125M parameters, ii) +Both models are widely used as baselines (in fact, CodeBERT +is the primary baseline model of the CodeXGLUE [29] dataset, +which is a popular benchmark of 10 SE tasks including +encoder-decoder tasks like code summarization and code trans- +lation). Note that many transformer-based encoder-decoder +models can be applied to this problem. However, comparing +with all the models is beyond the scope of the paper. +RoBERTa: BERT is the first model that introduced the pre- +training strategy that outperforms the traditional Transformer +models. It applied two pre-training strategies: Masked Lan- +guage Modeling (MLM) and NSP (Next Sentence Prediction). +In MLM pre-training, we randomly mask out 15% of the +tokens and ask the model to recover those tokens, whereas, in +NSP, we train the model to learn to predict the next sentence +following an input sentence. Liu et al. [19] propose RoBERTa +(A Robustly Optimized BERT Pre-training Approach), which +outperforms the BERT model with a few changes, such as +4 + +dynamic masking and dropping NSP, achieves better perfor- +mance. We apply RoBERTa as NLP baseline model. +CodeBERT: +CodeBERT +is +architecturally +identical +to +RoBERTa model that uses two pre-training objectives: MLM +and Replaced Token Detection (RTD) [30]. We can define RTD +as a binary classification problem where two data generators +(i.e., NL and PL) generate plausible alternatives for a set +of randomly masked positions. A discriminator is trained +to determine whether a word is the original one or not. +CodeBERT is pre-trained on CodeSerachNet [31] dataset. +2) OpenAI generative models: Radford et al. introduced +general task-agnostic generative pre-training of language mod- +els (GPT) and outperformed 9 out of 12 discriminatively +trained models that use architectures designed for the spe- +cific task [22]. In generative pre-training, we autoregressively +predict the probability of a token given the previous tokens +moving from left to right. This left-to-right autoregressive +training prevents the model from retrieving information from +future tokens. All the subsequent generative models (e.g., GPT- +2, GPT-3) use very similar pre-training objectives but have +a higher capacity than previous ones and are pre-trained on +a much larger dataset. Very large language models (LLMs) +like GPT-3.x have 175 billion parameters and are found to +be effective with few-shot learning replacing the need for +fine-tuning for a specific set of tasks. However, fine-tuning +GPT-3 models are still beneficial for some tasks [7]. This +paper evaluates our approach using four OpenAI [32] GPT- +3.x models: Curie, Codex, Davinci, and Code-davinci-002. +Curie: Curie is the fastest GPT-3 model with 6.7B parameters. +This model is trained with natural language data and performs +well on language translation, complex classification, text sen- +timent, and summarization tasks. This is the smallest model +we use for our experiments. +Codex: The Codex models are also GPT-3 models trained for +understanding and generating code. The training data contains +both natural language and billions of lines of public code +from GitHub. We use one model, Codex-cushman from Codex +family, with 12 billion parameters. Though the models are +pre-trained for code-related tasks, it somehow relevant to +incident management. 
Davinci: Davinci is the biggest GPT-3 model (175 billion parameters) we use in our experiments. It can perform tasks with fewer instructions than the other GPT-3 models. Davinci usually performs better at understanding content and at creative content generation, and it is also very good at solving logic problems. However, training the 175-billion-parameter model is costly: it requires a much longer period (almost four times longer than Curie on the same dataset) and more resources. Davinci is not trained to understand or generate code.

Code-davinci-002: Code-davinci-002 is the 175-billion-parameter GPT-3.5 model we use in our experiments. Code-davinci-002 is an upgraded and more capable version of the Codex model, trained on a more recent corpus of text and code.

C. Model configuration

One of the limitations of the pre-trained encoder-decoder models is that they can only encode 512 tokens. We observe that several samples from our test set are truncated even in the GPT-3 models, although the GPT-3 models support from 2048 tokens (e.g., Curie, Codex) to 4000 tokens (e.g., Code-davinci-002). Therefore, we can assume that the traditional encoder-decoder models do not have enough capacity to encode our sequences.

Encoder-decoder models have been successful for problems like code summarization [13], [25], [27], code translation [29], and natural language translation [14], [20], [21]. For these problems, we usually generate one sample per input using beam search and compare the result with the reference. Generating one sample is sufficient because the target text is less open-ended; besides, most of the information needed for successful generation can be found in the input. For code translation, the models need to learn the syntactic alignment between two programming languages; learning to transform conditional statements and loops from one language to another may be enough for a successful translation, and this is learnable from a few thousand samples. For natural language translation, learning the mapping between words of different natural languages is essential to generate good-quality translations. Code summarization is slightly different from these two in that the input is much longer than the output; however, Ahmed and Devanbu found that all the necessary information for code summarization is extracted from the identifiers, and obfuscating the identifiers hurts the models [33]. Generating root causes and mitigation plans is much more complex than these problems because the input may not contain readily usable information; the models need to generate more diverse and creative solutions. Our problem is more aligned with code generation problems, where the input does not carry most of the information. For these types of problems, decoder-only models (e.g., GPT-3.x), which predict each following token conditioned on the prior tokens they have generated, have been found to be more successful than encoder-decoder models. It is well established that encoder-decoder models are not as successful as decoder-only models in code generation tasks. However, we still apply encoder-decoder models to our problems and discuss our findings in the following sections.
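The left-to-right objective that characterizes these decoder-only models can be written down compactly. The PyTorch sketch below shows the shifted next-token cross-entropy loss; the tensor shapes, the padding convention, and the function name are our assumptions for illustration and are not taken from any model's actual training code.

```python
import torch.nn.functional as F

def causal_lm_loss(logits, input_ids, pad_id=0):
    """logits: (batch, seq_len, vocab) from a decoder-only model; input_ids: (batch, seq_len)."""
    # Predict token t+1 from tokens up to t: drop the last logit, drop the first label.
    shift_logits = logits[:, :-1, :]
    shift_labels = input_ids[:, 1:]
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=pad_id,        # padded positions do not contribute to the loss
    )
```

Because each position is conditioned only on earlier positions, the same objective serves for both pre-training and the task-specific fine-tuning described next.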
For RoBERTa [19] and CodeBERT [25], we use the exact setup that is used for the code summarization task [31], [34]. We set the input length to 512 tokens with a batch size of 8 to provide as much information as possible to the model.

Full fine-tuning, which retrains all the parameters, is very costly and challenging for the OpenAI models with billions of parameters. We therefore use LoRA (Low-Rank Adaptation), an approach that significantly reduces the number of trainable parameters by freezing the pre-trained model weights and injecting trainable rank decomposition matrices into each layer of the Transformer architecture [35]. Even though LoRA reduces the number of trainable parameters, it performs on par with or better than full fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3. We fine-tuned the OpenAI GPT-3 (i.e., Curie, Codex, Davinci) and GPT-3.5 (Code-davinci-002) models for root cause and mitigation plan generation. We train the models for both tasks for 2000 steps (4 epochs), as OpenAI recommends. For fine-tuning the smaller models (i.e., Curie and Codex), we use one NVIDIA V100 GPU; for Davinci, we use four NVIDIA V100 GPUs; and for fine-tuning the Code-davinci-002 model, we use four NVIDIA A100 GPUs. We evaluated the models on the validation set after every 100 steps and chose the checkpoint that showed the minimum loss on the validation set.

As discussed earlier, the model needs to generate diverse and creative recommendations to solve problems like the prediction of root causes and mitigation plans. Two critical parameters that control the quality of the generated outputs are temperature and top_p, and it is recommended to adjust only one of them. Following prior work [8], [36], we decided to adjust the temperature. A higher temperature encourages the model to take more risks, which is necessary for creative applications [32]; a lower value approaches argmax sampling, which is very similar to the decoding used by encoder-decoder models like CodeBERT. Typically, a temperature between 0.50 and 0.90 is the most common choice for creative tasks; however, too high a temperature is hurtful because it makes the output diverge too much [36]. We performed a grid search and chose 0.7 for the Curie, Codex, and Davinci models and 0.5 for the Code-davinci-002 experiments to minimize the divergence issue when generating five samples.

D. Evaluation Metrics

We briefly describe the evaluation metrics used for the two downstream tasks, root cause and mitigation generation.

1) Lexical Metrics: For lexical metrics, we employ the smoothed sentence-level BLEU-4 (Bilingual Evaluation Understudy) [37] metric to calculate n-gram overlap, for n from 1 to 4, between the reference and generated texts. In addition, the ROUGE metric (Recall Oriented Understudy for Gisting Evaluation) [38] is used to compare a candidate document to a set of reference texts. Specifically, we choose ROUGE-L [38], which takes into account sentence-level structural similarity and identifies the longest co-occurring in-sequence n-grams based on Longest Common Subsequence (LCS) [39] statistics. METEOR (Metric for Evaluation of Translation with Explicit Ordering) [40] is the final lexical metric we selected; it is based on the harmonic mean of unigram precision and recall, with stemming and synonymy matching as extra features.

2) Semantic Metrics: Since the lexical metrics only perform exact word matches and disregard the meaning of words, we choose three semantic metrics to evaluate our outcomes according to their semantic meanings.
We use the +BERTScore [41], which leverages the pre-trained contextual +embeddings from the BERT [18] model and matches candidate +and reference sentence words based on cosine similarity. Then, +the BLEURT score [42] is selected to demonstrate the degree +to what extent the candidate is fluent and conveys the meaning +of the reference. Last, we select NUBIA (NeUral Based Inter- +changeability Assessor) [43], a recent neural-based measure +that incorporates the semantic similarity, logical inference +and sentence legibility from exposing layers of pre-trained +language models, including RoBERTa STS [19], RoBERTa +MNLI and GPT-2 [23]. +The semantic metric calculation takes significant time and +requires expensive GPU resources (Tables I and II took two +days on a single GPU). Therefore, we reported semantic met- +rics for the first two research questions, and for the remaining +research questions, we restricted ourselves to lexical ones that +are computationally less expensive. +IV. RESULT +A. How effective are fine-tuned GPT-3.x models in generating +incidents’ root cause recommendation? (RQ1) +Table I presents the effectiveness of our baseline encoder- +decoder models and fine-tuned GPT-3.x models for root cause +recommendation. We have 2621 test samples for evaluating the +models. We generated ten samples for the OpenAI models for +two reasons: i) using temperature, we can generate very diverse +and creative samples from GPT-3.x models. ii) we found that +GPT-3.x models can generate valuable suggestions even with +lower ranks. We observed the average BLEU-4 of all the +samples at a particular rank, and we found that all the OpenAI +GPT-3.x models produce examples with higher BLEU-4 even +at rank eight or lower. However, ten examples are too many for +a human OCE, and we restrict ourselves to five top suggestions +from the model. In Table I, for each metric, we have Top 1 +and Top 5. Top 1 presents the mean of the first candidates +for all the test samples; while calculating Top 5, we take the +maximum value from the first five candidates and then find +the average for all samples. This Top 5 gives an overall view +of how the models are performing. For our baseline encoder- +decoder models, we have only one sample for each model. +Surprisingly, the encoder-decoder models are doing really +good compared to GPT-3 models in all six automatic metrics. +In fact, all six metrics fail to distinguish significant differences +between the OpenAI models. The reason behind the success +of encoder-decoder models in automatic metrics is that these +models are less explorative and try to maximize the success de- +pending on argmax probabilities during decoding. Now “There +is a bug in the code” is a very common and generic sentence +that can be a part of any root causes. The models maximize the +success just by copying that particular segment, and automatic +metrics also fail here. We tried three semantic metrics to +resolve that issue, but the encoder-decoder models still benefit +from the automatic metric. Table III presents the number of +unique samples generated by the models. For OpenAI models +we only consider the first candidate to make a fair comparison. +We observe that the unique candidate count for RoBERTa +and CodeBERT are 6.10% and 16.67% of the total count, +whereas, for all the OpenAI GPT-3.x models, the percentages +are above 97%. Remember that we deduplicated the dataset, +and repeatedly generating the same samples should not help +here. 
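The Top 1 / Top 5 aggregation described above is easy to misread, so a small sketch may help; score_matrix is assumed to hold one row of per-candidate metric scores for each test sample, ordered by the rank at which the model produced them. The helper and its name are our own illustration of the computation, not the evaluation scripts used in this work.

```python
import numpy as np

def aggregate_scores(score_matrix, k=5):
    """Return (Top 1, Top k): mean of the first candidate, and mean of the best of the first k."""
    scores = np.asarray(score_matrix, dtype=float)
    top1 = scores[:, 0].mean()                  # average the first-ranked candidate per sample
    topk = scores[:, :k].max(axis=1).mean()     # best of the first k candidates, then averaged
    return top1, topk

# Two test samples with five ranked candidates each (illustrative values).
print(aggregate_scores([[3.1, 6.0, 2.2, 1.0, 4.5],
                        [4.0, 1.5, 7.2, 0.8, 2.0]]))   # (3.55, 6.6)
```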
In Section V, we interviewed the incident owners, and +the majority of them complained about the generic nature of +encoder-decoder models’ recommendations, and these models +6 + +TABLE I: Effectiveness of fine-tuned GPT-3.x models at finding root causes of the incidents +Model +BLEU-4 +ROUGE-L +METEOR +BERTScore +BLEURT +NUBIA +Top1 +Top5 +Top1 +Top5 +Top1 +Top5 +Top1 +Top5 +Top1 +Top5 +Top1 +Top5 +RoBERTa +4.21 +NA +12.83 +NA +9.89 +NA +85.38 +NA +35.66 +NA +33.94 +NA +CodeBERT +3.38 +NA +10.17 +NA +6.58 +NA +84.88 +NA +33.19 +NA +39.05 +NA +Curie +3.40 +6.29 +9.04 +15.44 +7.21 +13.65 +84.90 +86.36 +32.62 +40.08 +33.52 +49.76 +Codex +3.44 +6.25 +8.98 +15.51 +7.33 +13.82 +84.85 +86.33 +32.50 +40.11 +33.64 +49.77 +Davinci +3.34 +5.94 +8.53 +15.10 +6.67 +12.95 +83.13 +84.41 +31.06 +38.61 +35.28 +50.79 +Davinci-002 +4.24 +7.15 +11.43 +17.2 +10.42 +16.8 +85.42 +86.78 +36.77 +42.87 +32.3 +51.34 +%gain for Davinci-002 +23.26 +13.67 +26.44 +10.90 +42.16 +21.56 +0.61 +0.49 +12.72 +6.88 +-8.45 +1.08 +underperform at correctness criteria. Among OpenAI models, +GPT-3.5 (i.e., Code-davinci-002) model significantly outper- +forms all GPT-3 models as well as other baselines in terms of +all the 6 automated metrics. +Though the automatic metrics fail to detect the weaknesses +of the encoder-decoder models, these metrics are still widely +used. Human evaluation is hard to perform in every scenario, +and these metrics can be useful to find the models’ relative +performance. Therefore, even though we achieve a low score +on these metrics, these are useful while trying to capture the +relative performance of the model in different settings. Also, +getting a lower score with lexical metrics is not surprising +because lexical metrics only consider token overlaps and +root cause and mitigation are open-ended, and the same root +cause/mitigation can be written differently. In Section V, from +the interviews with OCEs, we found that suggestions with +lower BLEU-4 or other metrics are still helpful. +B. How effective are fine-tuned GPT-3.x models in recom- +mending mitigation plans for an incident? (RQ2) +Table II shows that we achieved a slightly higher mitigation +score (4.44-6.76 BLEU-4) than the root cause recommendation +(3.38-4.24 BLEU-4).We observed a similar and consistent +pattern (Table III) of the output as observed with root causes. +The encoder-decoder models generate generic comments (e.g., +“the issue is self-mitigated”, “fix deployed to all regions”) +like before, and those recommendations are mostly useless +for the OCEs. For both RQ1 and RQ2, the fine-tuned Davinci +model (even with 175 Billion parameters) is significantly un- +derperforming other baseline methods according to automatic +metrics. However, the Davinci and Code-davinci-002 models +are the best performing models according to the incident +owners (see Section V) +C. How much fine-tuning improves over zero-shot learning +performance of GPT-3.x models? (RQ3) +As discussed in Section II-D, we will investigate the per- +formance of OpenAI models in the zero-shot setting. Table IV +presents the performance of the OpenAI models for root cause +and mitigation. As expected, the model did not perform well in +this setting since the models were not trained on confidential +data from the incident management space. The models achieve +0.80-2.18 BLEU-4 for the top candidate, which is much lower +(210%) than what we achieved with fine-tuning the models +(5.47-6.76) while recommending mitigation steps. 
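The smoothed sentence-level BLEU-4 scores compared throughout these tables can be reproduced with off-the-shelf tooling; the sketch below uses NLTK. The choice of smoothing method and the whitespace tokenization are our assumptions, since those details are not spelled out here.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu4(reference: str, candidate: str) -> float:
    smooth = SmoothingFunction().method4        # smoothing avoids zero scores on short texts
    return sentence_bleu(
        [reference.split()],                    # list of tokenized references
        candidate.split(),                      # tokenized candidate
        weights=(0.25, 0.25, 0.25, 0.25),       # uniform weights over 1- to 4-grams
        smoothing_function=smooth,
    )

print(bleu4("restart the frontend service and clear the cache",
            "restart the affected frontend service"))
```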
Though we +achieved a higher score for mitigation than root cause during +fine-tuning, in the zero-shot setting, the numbers for root cause +are slightly high (1.18-2.83 for the top candidates). The model +tries to complete the sequence depending on the given input. +Copying a few tokens from input may help the model because +the root cause is usually longer than mitigation and tends +to share more tokens with the input. Because of unigram +overlaps METEOR is doing better compared to other metrics +(BLEU-4 and ROUGE-L) because it looks for the unigram +precision and recall, making it lenient compared to BLEU-4 +and ROUGE-L. We observe another interesting phenomenon +here. Though the Davinci model was underperforming in RQ1 +and RQ2, it significantly outperforms the other OpenAI models +at zero-shot settings for both root cause and mitigation. This +is because the model has higher parameters and is trained on +more data enabling it to infer better without explicit training. +D. Does multi-task learning improve the performance of GPT- +3.x models at finding root causes and mitigation plans? (RQ4) +To evaluate the results of multi-task training in the root +cause recommendation and mitigating planning tasks, we com- +bine the training set of the two tasks for GPT-3.x models. The +models are then individually tested using the corresponding +test sets. Table V shows the results of root cause and mitigation +with multi-task training. Overall, we observe that multi-task +training does not significantly outperform training for a single +task. The performance of Curie and Codex models has fallen +by an average of 2.8% for BLEU-4, 2.0% for Rouge-L and +7.2% for Meteor. Only the Davinci model is marginally 6.2% +better than single task training in terms of BLEU-4 metric. +The performance of Code-davinci-002 is almost always lower +across all lexical metrics in a multi-task setting. Similar +to this, the results of mitigation generation reveals a 4.1% +performance decline in average for all the four models. The +lack of connection between the root cause and mitigation is +what mostly contributes to the decline in performance. It is +challenging to transfer knowledge from one task to the other +because of the distinct distribution in their answer spaces, +such as the variations in root cause and mitigation length and +concreteness. +E. Do GPT-3.x models get better at proposing mitigation +plans if the root cause is given? (RQ5) +We assess the performance of the mitigation generation +while the root cause is being revealed. 
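One way such a root-cause-augmented input could be assembled is sketched below. The field names, the prompt template, and the placeholder strings are hypothetical; this is not the exact fine-tuning format used in this work.

```python
def build_mitigation_example(title, summary, root_cause, mitigation):
    """Pack incident fields into a prompt/completion pair for 'mitigation given root cause'."""
    prompt = (
        f"Title: {title}\n"
        f"Summary: {summary}\n"
        f"Root cause: {root_cause}\n"
        "Mitigation:"
    )
    return {"prompt": prompt, "completion": " " + mitigation}

example = build_mitigation_example(
    "<incident title>", "<cleaned incident summary>",
    "<root cause written by the owning team>", "<mitigation steps>",
)
```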
Our training set of +mitigation is reduced from 5,455 to 2,973 as a result of the +missing root causes in the incidents, and we have 166 test +7 + +TABLE II: Effectiveness of fine-tuned GPT-3.x models at finding mitigation plans of the incidents +Model +BLEU-4 +ROUGE-L +METEOR +BERTScore +BLEURT +NUBIA +Top1 +Top5 +Top1 +Top5 +Top1 +Top5 +Top1 +Top5 +Top1 +Top5 +Top1 +Top5 +RoBERTa +4.44 +NA +7.10 +NA +4.52 +NA +86.33 +NA +26.80 +NA +14.90 +NA +CodeBERT +6.02 +NA +4.40 +NA +3.37 +NA +86.83 +NA +28.44 +NA +27.89 +NA +Curie +5.47 +10.62 +8.03 +16.31 +6.22 +12.75 +85.65 +87.13 +27.20 +37.23 +15.30 +25.46 +Codex +5.53 +10.62 +8.15 +16.23 +6.19 +13.15 +85.68 +87.35 +28.43 +37.92 +15.77 +26.33 +Davinci +5.54 +10.66 +8.10 +15.96 +6.08 +12.49 +85.72 +87.19 +27.15 +37.00 +15.71 +25.61 +Davinci-002 +6.76 +11.66 +10.22 +18.14 +8.23 +15.13 +86.17 +87.65 +30.19 +38.96 +17.58 +28.81 +%gain for Davinci-002 +22.02 +9.38 +25.40 +11.22 +32.32 +15.06 +0.52 +0.34 +6.19 +2.74 +11.48 +9.42 +TABLE III: Uniqueness of the models’ suggestions +Model +Root cause +Mitigation +# of unique +recommendations +In % of +total +# of unique +recommendations +In % of +total +RoBERTa +160 +6.10 +4 +0.22 +CodeBERT +437 +16.67 +2 +0.1 +Curie +2612 +99.65 +1669 +93.76 +Codex +2614 +99.73 +1743 +97.92 +Davinci +2587 +98.70 +1731 +97.24 +Davinci-002 +2614 +99.73 +1696 +95.28 +TABLE IV: Effectiveness of OpenAI models for recommend- +ing root causes and mitigation steps at zero-shot setting +Objective +Model +BLEU-4 +ROUGE-L +METEOR +Top1 +Top5 +Top1 +Top5 +Top1 +Top5 +Root cause +Curie +1.26 +2.01 +4.75 +7.80 +7.94 +13.30 +Codex +1.18 +1.94 +3.80 +7.07 +6.58 +12.20 +Davinci +2.83 +4.37 +6.11 +11.55 +6.04 +11.87 +Davinci- +002 +1.35 +2.5 +4.89 +8.58 +7.65 +13.55 +Finetuned- +Davinci- +002 +4.24 +7.15 +11.43 +17.2 +10.42 +16.8 +% gain for +Finetuning +49.82 +63.62 +87.07 +48.92 +31.23 +23.99 +Mitigation +Curie +0.81 +1.50 +2.45 +4.59 +5.33 +9.40 +Codex +0.80 +1.57 +1.97 +4.05 +4.56 +8.55 +Davinci +2.18 +3.67 +3.84 +7.84 +4.99 +10.44 +Davinci- +002 +0.92 +1.89 +2.31 +4.52 +4.92 +9.2 +Finetuned- +Davinci- +002 +6.76 +11.66 +10.22 +18.14 +8.23 +15.13 +% gain for +Finetuning +210.1 +217.7 +166.2 +131.4 +54.4 +44.9 +samples to evaluate the model. Despite the sample reduction +in the training set, Table V reveals a considerable performance +gain with the additional root cause information: the average +for all three metrics is improved by 9.8% for the Curie +model, 8.3% for the Codex model, 5.4% for the Davinci +model and 26% for the Code-davinci-002. Nevertheless, we +observe that the performance gain of the Code-davinci-002 +model’s Top-5 recommendations is modest compared to the +improvement of the Top-1 results. Despite this, the overall +promising results highlight the significance of root cause +information in generating mitigation plans. +F. Do the models better propose mitigation plans for machine- +detected incidents than human-detected ones? (RQ6) +We analyze the mitigation generation performance of GPT- +3.x models for both machine and human detected incidents in +Table VII. We employ the same training set but separate the +test samples by the categories of human and machine detected +incidents. The testing samples consist of 592 incidents rec- +ognized by machines and 1188 incidents detected by humans. +TABLE V: Effectiveness of multi-task learning +Objective +Model +Multi- +tasking? 
+BLEU-4 +ROUGE-L +METEOR +Top1 +Top5 +Top1 +Top5 +Top1 +Top5 +Root +Cause +Curie +No +3.40 +6.29 +9.04 +15.44 +7.21 +13.65 +Yes +3.30 +6.13 +8.66 +15.51 +6.60 +12.97 +Codex +No +3.44 +6.25 +8.98 +15.51 +7.33 +13.82 +Yes +3.42 +6.11 +8.64 +15.24 +6.53 +12.81 +Davinci +No +3.34 +5.94 +8.53 +15.10 +6.67 +12.95 +Yes +3.60 +6.27 +9.11 +15.66 +7.31 +13.64 +Davinci-002 No +4.24 +7.15 +11.43 +17.2 +10.42 +16.8 +Yes +4.24 +7.09 +11.32 +17.14 +10.32 +16.34 +Mitigation +Curie +No +5.47 +10.62 +8.03 +16.31 +6.22 +12.75 +Yes +5.49 +10.89 +7.98 +16.14 +5.92 +12.54 +Codex +No +5.53 +10.62 +8.15 +16.23 +6.19 +13.15 +Yes +5.15 +10.88 +7.49 +15.87 +5.55 +11.85 +Davinci +No +5.54 +10.66 +8.10 +15.96 +6.18 +12.49 +Yes +5.64 +10.74 +7.88 +15.97 +6.13 +12.99 +Davinci-002 No +6.76 +11.66 +10.22 +18.14 +8.23 +15.13 +Yes +6.58 +11.36 +10.04 +17.76 +7.91 +14.36 +TABLE VI: Effectiveness of GPT-3 models at proposing +mitigation plans given root causes +Model +Root-cause +given? +BLEU-4 +ROUGE-L +METEOR +Top1 +Top5 +Top1 +Top5 +Top1 +Top5 +Curie +No +5.92 +11.29 +9.46 +17.76 +7.34 +13.35 +Yes +6.59 +12.40 +10.25 +18.61 +8.24 +16.00 +Codex +No +6.25 +11.23 +8.94 +17.62 +6.46 +13.00 +Yes +6.23 +12.03 +9.32 +18.48 +7.73 +15.96 +Davinci +No +6.35 +12.05 +8.75 +18.21 +7.28 +15.07 +Yes +7.02 +11.47 +9.49 +18.20 +8.40 +16.17 +Davinci-002 +No +6.8 +12 +9.48 +17.37 +8.15 +15.53 +Yes +8.6 +13.28 +11.56 +19.46 +10.9 +18.08 +%gain +26.47 +10.21 +21.94 +6.86 +33.74 +16.42 +Table VII demonstrates that machine-recognized incidents can +outperform those detected by humans by a factor of 9.5% +for BLEU-4, 20% for ROUGE-L and 23% for METEOR in +the context of Top-1 recommendations of Code-davinci-002 +model. It is due to the fact that machine detected incidents +usually adhere to certain patterns, which are easier for machine +learning models to recognize. +V. LOOKING THROUGH THE INCIDENT OWNERS’ EYES +A. Methodology +From our test sets for root causes and mitigation plans, we +selected the incidents with both root causes and mitigation, +so that each incident owner could evaluate both the models +in the same interview. Incident resolution is a complex task +requiring significant context and domain knowledge about +the service and also about the specific incidents. Hence, +we conducted this human evaluation with the actual owners +who root caused and mitigated the incidents. We chose 50 +recent incidents which occurred in the last two months, to +evaluate the models’ performance so that the incident owners +8 + +TABLE VII: Models’ performance on machine vs human +detected incidents +Model +Machine +detected? +BLEU-4 +ROUGE-L +METEOR +Top1 +Top5 +Top1 +Top5 +Top1 +Top5 +Curie +Yes +5.49 +10.54 +8.54 +16.63 +6.45 +13.13 +No +5.45 +10.65 +7.78 +16.15 +6.10 +12.56 +Codex +Yes +5.76 +10.54 +9.10 +16.84 +6.80 +13.88 +No +5.41 +10.67 +7.68 +15.93 +5.88 +12.78 +Davinci +Yes +5.56 +10.51 +8.49 +16.17 +6.34 +12.59 +No +5.52 +10.74 +7.91 +15.86 +5.95 +12.44 +Davinci-002 +Yes +7.18 +11.83 +11.5 +18.59 +9.41 +15.66 +No +6.56 +11.57 +9.58 +17.92 +7.65 +14.87 +%gain +9.45 +2.25 +20.04 +3.74 +23.01 +5.31 +could precisely remember what happened during managing +particular incidents. We reached out to all the incident owners +and 25 incident owners responded and each interview took +around 20-30 minutes. +We presented the outputs from all the models under con- +sideration. For both root causes and mitigation plans, we have +six pools of candidates. 
The first four pools are for OpenAI +models, each with six options (including “none”), and the last +two are for RoBERTa and CodeBERT, which has only one +candidate. For the OpenAI models, we ask the OCEs to select +the best option that might be relevant to the incident. After +that, we ask the OCEs to assign correctness and readability for +the chosen candidate on a scale of 1-5, with 5 being the best +score. Please note that for RoBERTa and CodeBERT, we only +have one option. Hence, we only ask to assign correctness and +readability scores to those candidates. We define correctness +and readability as follows: +Correctness: For this metric, we ask the incident owner to +check whether the model provides a helpful and relevant +suggestion compared to the actual root cause/mitigation. +Readability: Readability is the ease with which a reader +can understand a generated text. A text is readable if it is +grammatically correct, meaningful and easy to understand. +Note that a readable text does not need to be correct. +At the end, we asked the incident owners to assign an overall +score (1-5) indicating their perception about the usefulness of +LLMs for incident resolution and, also, asked them to share +their thoughts and comments regarding this. +B. Results +Table VIII presents the correctness and readability scores +assigned by the incident owners. We can see that candidates +from the Davinci and Code-davinci-002 pools have achieved +higher mean correctness scores than those selected from Curie +and Codex models for both root causes (2.88 and 2.56) and +mitigation plans (3.04 and 3.16). The mean readability score +ranges from 2.52 to 4.08 for all the models. The incident +owners expressed positive opinions about the readability of +the outputs, and all the models achieved higher readability +than correctness scores. We received a few recommendations +on how to improve the readability in the future (e.g., avoiding +use of acronyms and generating more specific or informative +comments). +As discussed before, the baseline encoder-decoder models +generate very generic comments, and the automatic metrics +fail to detect that. We can see the incident owners assign a +lower correctness score to RoBERTa and CodeBERT model, +and several OCEs pointed out the generic nature of the +recommendations generated by the encoder-decoder models. +Though the correctness score of the OpenAI models ranges +from 2.28 to 3.16, several OCEs pointed out that the models +recommend beneficial root causes and mitigation plans. For +example, the models succeeded in pinpointing some hard to +detect root causes: +“I am very impressed because one model found the right +root cause, which was very hard to detect. We found it in the +postmortem phase. However, I am a little worried that there +would not be enough information on the incident website. +Overall, I am impressed with the efficacy of the models.” +“Even if not always correct, these suggestions can guide +the OCE towards actual root cause. ML model can give +directions and can be valuable suggestions.” +We also took the maximum score assigned by the OpenAI +models and reported the average correctness and readability +score. The mean correctness and readability score ranges from +3.52 to 4.64 (median score 3-5), presenting the overall strength +of the models. We asked for the overall scores (1-5), and +Table IX shows that the incident owners found the overall +contribution promising and useful. 
More than 70% of incident +owners gave three or above for the recommendations of the +models. We found that at least one model is effective for most +incidents. We also found out why the automatic metrics fail +to provide valuable insights. +There is always another side to the coin, and we observe +that the models’ outputs are not helpful for some incidents. +The OCEs assigned lower scores to those incidents and here +are some of the concerns they mentioned: +“Based on just incident data it is difficult for the model to +predict root-cause and mitigation because not all data are +recorded in the database and some of them are classified.” +“Major concern is if the suggestion is incorrect, on-call +engineers may take longer time to investigate the problem.” +We observed some negative samples for the model because +a lack of discussion or other information results in the de- +privation of valuable signals from the input. However, the +model’s overall performance is quite promising, which can +be considered a stepping stone toward the automation of root +causes and mitigation plans in the future. +VI. DISCUSSION & THREATS +A. Do automatic metrics reflect human perception? +Automatic evaluation metrics are known to be representative +of human perception and are widely used in problems like nat- +ural language translation [14], [20], [21]. Though some recent +works looked into the effectiveness of these metrics in code +summarization and reported many pitfalls and weaknesses +of these metrics [44]–[47], researchers are still using them +for benchmarking. The best possible alternative to automatic +metrics is human validation or some form of automatic test +9 + +TABLE VIII: Correctness and readability scores assigned by the incident owners +Objective +Criteria +RoBERTA +CodeBERT +Curie +Codex +Davinci +Davinci-002 +Max +OpenAI +Mean +Median +Mean +Median +Mean +Median +Mean +Median +Mean +Median +Mean +Median +Mean +Median +Root cause +Correctness +1.56 +1 +1.72 +1 +2.40 +2 +2.40 +2 +2.88 +3 +2.56 +2 +3.52 +3 +Readability +3.56 +5 +3.68 +5 +3.08 +4 +3.52 +4 +3.56 +5 +3.8 +4 +4.52 +5 +Mitigation +Correctness +1.6 +1 +1.52 +1 +2.28 +2 +2.28 +1 +3.04 +3 +3.16 +3 +4.04 +4 +Readability +2.88 +2 +3.04 +4 +2.52 +2 +2.8 +3 +3.52 +4 +4.08 +4 +4.64 +5 +TABLE IX: Usefulness of LLMs for incident resolution +Score +# of incident +owners +In percent (%) +of total +5 +2 +7.41 +4 +9 +33.33 +3 +8 +29.63 +2 +6 +22.22 +1 +2 +7.41 +case evaluation (done in code generation tasks). The main +challenge in incident management is that even experts face +difficulties evaluating the incidents if they are not involved +in resolving particular incidents. In some cases, the OCEs +could not clearly remember the incidents if they happened +two months ago. Thus conducting a large-scale study is +quite challenging in this area. However, we interviewed 25 +incident owners and found that the models perform pretty +well even after achieving lower scores with automatic metrics. +We calculated the Pearson coefficient for all three lexical +metrics (i.e., BLEU-4, ROUGE-L, and METEOR) with the +correctness and readability score assigned by the OCEs. We +observed that the co-efficient varies from -0.42 to +0.62, +preventing us from getting specific patterns in the value. That +also indicates that these automatic metrics may not be coherent +with human perception for resolving cloud incidents. However, +more sample cases are needed to reach any concrete resolution. +B. Natural language or code? 
Which family of models are +better for incident management? +While choosing the models, we selected both natural lan- +guage (i.e., RoBERTa, Curie, Davinci) and code models (i.e., +CodeBERT, Codex-cushman, Code-davinci-002) to see which +family of models is beneficial for incident management. We +did not find any winners from these two groups. Davinci and +Code-davinci-002 models are found to be producing correct +and readable suggestions compared to other models. Note that +both of them have 175 billion parameters. We leave fine-tuning +larger code models or pre-training a model from scratch with +incident data for future research. +C. How the models’ performance can be improved? +We received several recommendations from the incident +owners. The main recommendation is to incorporate the dis- +cussions among the OCEs into the model. This will guide +the model to locate better suggestions. We also dropped many +incidents with summaries that written or updated at the time of +incident resolution. To fairly evaluate the model and prevent +possible data leakage (root cause and mitigation can be written +in summary if updated later), we discarded them from our +dataset. Incorporating them into our dataset after preventing +data leakage may improve the performance of the models. +We also lost some critical information while cleaning the +summaries (e.g., discarding images and tables). Incorporating +that information may also help. +D. Threats to Validity +There are several threats to our study. The semantic metrics +use pre-trained models at the core, and we use the default, +natural language models for the evaluation. A model pre- +trained with incident management text may result in some +changes in the performance evaluation. Also, we train and +evaluate the models with the services available within our +organization. These models may show unexpected behaviors +if evaluated on a different set of services from other organi- +zations. Some incidents owners expressed concerns about the +models’ efficacy with rare incidents, and rare incidents are +frequently reported at Microsoft. Another threat to our study +is the sample size of our human subject study. It is difficult to +achieve statistical significance on correctness and readability +scores with such small samples. However, it is challenging to +scale depending on the nature of the study. +VII. RELATED WORK +A. Incident management +Incident management in large cloud services has become +a popular topic of research in the Systems and Software +Engineering communities. Prior work in this space has focused +on two main directions. First, there has been several empirical +studies on analyzing incidents and outages in production +systems which have focused on studying incidents caused +by certain type of issues [48]–[51] or issues from specific +services and systems [52]–[54]. Second and more related to +our work is the use of machine learning and data driven +techniques for automating different aspects of incident life- +cycle such as triaging [55], [56], diagnosis [57]–[59] and +mitigation [5]. Different from prior work, this is the first effort +on leveraging state-of-the art language models for assisting +OCEs with incident resolution. We hope that this work will +also motivate future work which will merge traditional task- +specific discriminative models with LLMs to do end-to-end +automation of production incidents. +B. 
LLMs in Software Engineering +Even though this is the first work leveraging LLMs for +AIOps, several works in Software Engineering have tried to +solve other challenging problems with LLMs. Github Copi- +lot uses GPT-3 for automated code generation from natural +language inputs [8]. Several researchers have addressed code +generation [8], [36], docstring generation [8], [60], and code +10 + +repair [61], [62] problems. Bareiß et al. [63] show how few- +shot learning can be effective at (i) code mutation; (ii) test +oracle generation from natural language documentation; and +(iii) test case generation task. Jain et al. propose an approach +to augment large language models with post-processing steps +based on program analysis and synthesis techniques and +achieve better performance [64]. However, unlike code gener- +ation where we have both lexical and structural information +along with massive amount of training data, we explore the +problem of incident resolution using state-of-the-art LLMs +which has not been done before. +VIII. CONCLUSION +With this work, we show that state-of-the-art large language +models such as GPT-3 and GPT-3.5 are effective to help with +incident management, specifically, to identify root causes and +mitigation steps. To compare the effectiveness of the models, +we conducted a rigorous and large-scale study at Microsoft, +on over 40,000 incidents. To assess the actual usefulness of +the approach, we involved the actual owners of production +incidents. We expect that this paper is the first of many +studies that leverage LLMs to make incident management +more effective. Our next steps are to deploy the models in +production to assist the OCEs with incident resolution. We +are also planning to explore other usage scenarios for LLMs +such as incident summarization. +IX. ACKNOWLEDGEMENTS +We would like to thank the engineers who participated in the +validation of root causes and mitigation steps. We would like +to also acknowledge the contributors of the following people +across Microsoft: Oleg Losinets, Jim Kleewein. +REFERENCES +[1] S. +Wolfe, +“Amazon’s +one +hour +of +downtime +on +prime +day +may +have +cost +it +up +to +$100 +million +in +lost +sales,” +2018. +[Online]. +Available: +https://www.businessinsider.com/ +amazon-prime-day-website-issues-cost-it-millions-in-lost-sales-2018-7 +[2] J. Chen, S. Zhang, X. He, Q. Lin, H. Zhang, D. Hao, Y. Kang, F. Gao, +Z. Xu, Y. Dang et al., “How incidental are the incidents? characterizing +and prioritizing incidents for large-scale online service systems,” in Pro- +ceedings of the 35th IEEE/ACM International Conference on Automated +Software Engineering, 2020, pp. 373–384. +[3] A. Saha and S. C. Hoi, “Mining root cause knowledge from cloud service +incident investigations for aiops,” arXiv preprint arXiv:2204.11598, +2022. +[4] J. Chen, X. He, Q. Lin, H. Zhang, D. Hao, F. Gao, Z. Xu, Y. Dang, and +D. Zhang, “Continuous incident triage for large-scale online service sys- +tems,” in 2019 34th IEEE/ACM International Conference on Automated +Software Engineering (ASE). +IEEE, 2019, pp. 364–375. +[5] J. Jiang, W. Lu, J. Chen, Q. Lin, P. Zhao, Y. Kang, H. Zhang, Y. Xiong, +F. Gao, Z. Xu et al., “How to mitigate the incident? an effective +troubleshooting guide recommendation technique for online service +systems,” in Proceedings of the 28th ACM Joint Meeting on European +Software Engineering Conference and Symposium on the Foundations +of Software Engineering, 2020, pp. 1410–1420. +[6] J. Wei, X. Wang, D. Schuurmans, M. 
Bosma, E. Chi, Q. Le, and D. Zhou, +“Chain of thought prompting elicits reasoning in large language models,” +arXiv preprint arXiv:2201.11903, 2022. +[7] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, +A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., “Language mod- +els are few-shot learners,” Advances in neural information processing +systems, vol. 33, pp. 1877–1901, 2020. +[8] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, +H. Edwards, Y. Burda, N. Joseph, G. Brockman et al., “Evaluating large +language models trained on code,” arXiv preprint arXiv:2107.03374, +2021. +[9] Z. Chen, Y. Kang, L. Li, X. Zhang, H. Zhang, H. Xu, Y. Zhou, L. Yang, +J. Sun, Z. Xu et al., “Towards intelligent incident management: why +we need it and how we make it,” in Proceedings of the 28th ACM Joint +Meeting on European Software Engineering Conference and Symposium +on the Foundations of Software Engineering, 2020, pp. 1487–1497. +[10] “Common Crawl.” [Online]. Available: https://commoncrawl.org/ +[11] S. Kulkarni, A. Singh, G. Ramakrishnan, and S. Chakrabarti, “Collective +annotation of wikipedia entities in web text,” in Proceedings of the 15th +ACM SIGKDD international conference on Knowledge discovery and +data mining, 2009, pp. 457–466. +[12] “Wikipedia.” [Online]. Available: https://www.wikipedia.org/ +[13] Y. Wang, W. Wang, S. Joty, and S. C. Hoi, “Codet5: Identifier-aware +unified pre-trained encoder-decoder models for code understanding and +generation,” arXiv preprint arXiv:2109.00859, 2021. +[14] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, +Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances +in neural information processing systems, 2017, pp. 5998–6008. +[15] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural +computation, vol. 9, no. 8, pp. 1735–1780, 1997. +[16] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, “Empirical evaluation of +gated recurrent neural networks on sequence modeling,” arXiv preprint +arXiv:1412.3555, 2014. +[17] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by +jointly learning to align and translate,” arXiv preprint arXiv:1409.0473, +2014. +[18] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training +of deep bidirectional transformers for language understanding,” arXiv +preprint arXiv:1810.04805, 2018. +[19] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, +L. Zettlemoyer, and V. Stoyanov, “Roberta: A robustly optimized bert +pretraining approach,” arXiv preprint arXiv:1907.11692, 2019. +[20] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, +V. Stoyanov, and L. Zettlemoyer, “Bart: Denoising sequence-to-sequence +pre-training for natural language generation, translation, and comprehen- +sion,” arXiv preprint arXiv:1910.13461, 2019. +[21] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, +Y. Zhou, W. Li, and P. J. Liu, “Exploring the limits of trans- +fer learning with a unified text-to-text transformer,” arXiv preprint +arXiv:1910.10683, 2019. +[22] A. Radford, K. Narasimhan, T. Salimans, I. Sutskever et al., “Improving +language understanding by generative pre-training,” 2018. +[23] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever et al., +“Language models are unsupervised multitask learners,” OpenAI blog, +vol. 1, no. 8, p. 9, 2019. +[24] S. Zhang, S. Roller, N. Goyal, M. Artetxe, M. Chen, S. Chen, C. Dewan, +M. Diab, X. Li, X. V. 
Lin et al., “Opt: Open pre-trained transformer +language models,” arXiv preprint arXiv:2205.01068, 2022. +[25] Z. Feng, D. Guo, D. Tang, N. Duan, X. Feng, M. Gong, L. Shou, B. Qin, +T. Liu, D. Jiang et al., “Codebert: A pre-trained model for programming +and natural languages,” in Proceedings of the 2020 Conference on +Empirical Methods in Natural Language Processing: Findings, 2020, +pp. 1536–1547. +[26] D. Guo, S. Ren, S. Lu, Z. Feng, D. Tang, L. Shujie, L. Zhou, +N. Duan, A. Svyatkovskiy, S. Fu et al., “Graphcodebert: Pre-training +code representations with data flow,” in International Conference on +Learning Representations, 2020. +[27] W. Ahmad, S. Chakraborty, B. Ray, and K.-W. Chang, “Unified +pre-training for program understanding and generation,” in Proceedings +of +the +2021 +Conference +of +the +North +American +Chapter +of +the Association for Computational Linguistics: Human Language +Technologies. +Online: Association for Computational Linguistics, Jun. +2021, pp. 2655–2668. [Online]. Available: https://www.aclweb.org/ +anthology/2021.naacl-main.211 +[28] S. Chakraborty, T. Ahmed, Y. Ding, P. T. Devanbu, and B. Ray, “Natgen: +generative pre-training by “naturalizing” source code,” in Proceedings +of the 30th ACM Joint European Software Engineering Conference and +Symposium on the Foundations of Software Engineering, 2022, pp. 18– +30. +[29] S. Lu, D. Guo, S. Ren, J. Huang, A. Svyatkovskiy, A. Blanco, C. B. +Clement, D. Drain, D. Jiang, D. Tang, G. Li, L. Zhou, L. Shou, L. Zhou, +11 + +M. Tufano, M. Gong, M. Zhou, N. Duan, N. Sundaresan, S. K. Deng, +S. Fu, and S. Liu, “Codexglue: A machine learning benchmark dataset +for code understanding and generation,” CoRR, vol. abs/2102.04664, +2021. +[30] K. Clark, M.-T. Luong, Q. V. Le, and C. D. Manning, “Electra: Pre- +training text encoders as discriminators rather than generators,” arXiv +preprint arXiv:2003.10555, 2020. +[31] H. Husain, H.-H. Wu, T. Gazit, M. Allamanis, and M. Brockschmidt, +“Codesearchnet challenge: Evaluating the state of semantic code search,” +arXiv preprint arXiv:1909.09436, 2019. +[32] “Openai.” [Online]. Available: https://openai.com/ +[33] T. Ahmed and P. Devanbu, “Multilingual training for software engineer- +ing,” in Proceedings of the 44th International Conference on Software +Engineering, 2022, pp. 1443–1455. +[34] “Codexglue – code-to-text.” [Online]. Available: https://github.com/ +microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text +[35] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, +and W. Chen, “Lora: Low-rank adaptation of large language models,” +arXiv preprint arXiv:2106.09685, 2021. +[36] F. F. Xu, U. Alon, G. Neubig, and V. J. Hellendoorn, “A systematic +evaluation of large language models of code,” in Proceedings of the +6th ACM SIGPLAN International Symposium on Machine Programming, +2022, pp. 1–10. +[37] C.-Y. Lin and F. J. Och, “Orange: a method for evaluating automatic +evaluation metrics for machine translation,” in COLING 2004: Proceed- +ings of the 20th International Conference on Computational Linguistics, +2004, pp. 501–507. +[38] C.-Y. Lin, “Rouge: A package for automatic evaluation of summaries,” +in Text summarization branches out, 2004, pp. 74–81. +[39] D. S. Hirschberg, “Algorithms for the longest common subsequence +problem,” Journal of the ACM (JACM), vol. 24, no. 4, pp. 664–675, +1977. +[40] S. Banerjee and A. 
Lavie, “Meteor: An automatic metric for mt evalua- +tion with improved correlation with human judgments,” in Proceedings +of the acl workshop on intrinsic and extrinsic evaluation measures for +machine translation and/or summarization, 2005, pp. 65–72. +[41] T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi, “Bertscore: +Evaluating text generation with bert,” arXiv preprint arXiv:1904.09675, +2019. +[42] T. Sellam, D. Das, and A. P. Parikh, “Bleurt: Learning robust metrics +for text generation,” arXiv preprint arXiv:2004.04696, 2020. +[43] H. Kane, M. Y. Kocyigit, A. Abdalla, P. Ajanoh, and M. Coulibali, +“Nubia: Neural based interchangeability assessor for text generation,” +2020. +[44] E. Shia, Y. Wangb, L. Dub, J. Chenc, S. Hanb, H. Zhangd, D. Zhangb, +and H. Suna, “On the evaluation of neural code summarization,” in Pro- +ceedings of the 44th International Conference on Software Engineering +(ICSE), 2022. +[45] D. Roy, S. Fakhoury, and V. Arnaoudova, “Reassessing automatic +evaluation metrics for code summarization tasks,” in Proceedings of the +29th ACM Joint Meeting on European Software Engineering Conference +and Symposium on the Foundations of Software Engineering, 2021, pp. +1105–1116. +[46] D. Gros, H. Sezhiyan, P. Devanbu, and Z. Yu, “Code to comment ?trans- +lation?: Data, metrics, baselining & evaluation,” in 2020 35th IEEE/ACM +International Conference on Automated Software Engineering (ASE). +IEEE, 2020, pp. 746–757. +[47] S. Haque, Z. Eberhart, A. Bansal, and C. McMillan, “Semantic similarity +metrics for evaluating source code summarization,” arXiv preprint +arXiv:2204.01632, 2022. +[48] T. Leesatapornwongsa, J. F. Lukman, S. Lu, and H. S. Gunawi, “Taxdc: +A taxonomy of non-deterministic concurrency bugs in datacenter dis- +tributed systems,” in Proceedings of the Twenty-First International +Conference on Architectural Support for Programming Languages and +Operating Systems, 2016, pp. 517–530. +[49] A. Alquraan, H. Takruri, M. Alfatafta, and S. Al-Kiswany, “An analysis +of {Network-Partitioning} failures in cloud systems,” in 13th USENIX +Symposium on Operating Systems Design and Implementation (OSDI +18), 2018, pp. 51–68. +[50] Y. Gao, W. Dou, F. Qin, C. Gao, D. Wang, J. Wei, R. Huang, L. Zhou, +and Y. Wu, “An empirical study on crash recovery bugs in large-scale +distributed systems,” in Proceedings of the 2018 26th ACM Joint Meeting +on European Software Engineering Conference and Symposium on the +Foundations of Software Engineering, 2018, pp. 539–550. +[51] Y. Zhang, J. Yang, Z. Jin, U. Sethi, K. Rodrigues, S. Lu, and D. Yuan, +“Understanding and detecting software upgrade failures in distributed +systems,” in Proceedings of the ACM SIGOPS 28th Symposium on +Operating Systems Principles, 2021, pp. 116–131. +[52] S. Ghosh, M. Shetty, C. Bansal, and S. Nath, “How to fight produc- +tion incidents? an empirical study on a large-scale cloud service,” in +Proceedings of the 13th Symposium on Cloud Computing, 2022, pp. +126–141. +[53] H. Liu, S. Lu, M. Musuvathi, and S. Nath, “What bugs cause production +cloud incidents?” in Proceedings of the Workshop on Hot Topics in +Operating Systems, 2019, pp. 155–162. +[54] D. Yuan, Y. Luo, X. Zhuang, G. R. Rodrigues, X. Zhao, Y. Zhang, P. U. +Jain, and M. Stumm, “Simple testing can prevent most critical failures: +An analysis of production failures in distributed {Data-Intensive} sys- +tems,” in 11th USENIX Symposium on Operating Systems Design and +Implementation (OSDI 14), 2014, pp. 249–265. +[55] J. Chen, X. He, Q. 
Lin, Y. Xu, H. Zhang, D. Hao, F. Gao, Z. Xu, Y. Dang, and D. Zhang, “An empirical investigation of incident triage for online service systems,” in 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), 2019, pp. 111–120.
[56] J. Chen, X. He, Q. Lin, H. Zhang, D. Hao, F. Gao, Z. Xu, Y. Dang, and D. Zhang, “Continuous incident triage for large-scale online service systems,” in 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), 2019, pp. 364–375.
[57] V. Nair, A. Raul, S. Khanduja, V. Bahirwani, Q. Shao, S. Sellamanickam, S. Keerthi, S. Herbert, and S. Dhulipalla, “Learning a hierarchical monitoring system for detecting and diagnosing service issues,” in Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015, pp. 2029–2038.
[58] C. Bansal, S. Renganathan, A. Asudani, O. Midy, and M. Janakiraman, “Decaf: Diagnosing and triaging performance issues in large-scale cloud services,” in 2020 IEEE/ACM 42nd International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), 2020.
[59] C. Luo, J.-G. Lou, Q. Lin, Q. Fu, R. Ding, D. Zhang, and Z. Wang, “Correlating events with time series for incident diagnosis,” in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2014, pp. 1583–1592.
[60] T. Ahmed and P. Devanbu, “Few-shot training LLMs for project-specific code-summarization,” arXiv preprint arXiv:2207.04237, 2022.
[61] Z. Fan, X. Gao, A. Roychoudhury, and S. H. Tan, “Improving automatically generated code from Codex via automated program repair,” arXiv preprint arXiv:2205.10583, 2022.
[62] H. Joshi, J. Cambronero, S. Gulwani, V. Le, I. Radicek, and G. Verbruggen, “Repair is nearly generation: Multilingual program repair with LLMs,” arXiv preprint arXiv:2208.11640, 2022.
[63] P. Bareiß, B. Souza, M. d’Amorim, and M. Pradel, “Code generation tools (almost) for free? A study of few-shot, pre-trained language models on code,” arXiv preprint arXiv:2206.01335, 2022.
[64] N. Jain, S. Vaidyanath, A. Iyer, N. Natarajan, S. Parthasarathy, S. Rajamani, and R. Sharma, “Jigsaw: Large language models meet program synthesis,” in Proceedings of the 44th International Conference on Software Engineering, 2022, pp. 1219–1231.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Recent advances in artificial intelligence has resulted in state-of- the-art large language models like GPT-3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='x (both GPT-3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='0 and GPT-3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='5), which have been used to solve a variety of problems ranging from question answering to text summarization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' In this work, we do the first large-scale study to evaluate the effectiveness of these models for helping engineers root cause and mitigate production incidents.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' We do a rigorous study at Microsoft, on more than 40,000 incidents and compare several large language models in zero-shot, fine-tuned and multi-task setting using semantic and lexical metrics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Lastly, our human evaluation with actual incident owners show the efficacy and future potential of using artificial intelligence for resolving cloud incidents.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Index Terms—Incident Management, Service Quality, GPT-3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='x, Large Language Models I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' INTRODUCTION Large IT enterprises such as Amazon, Google, Microsoft, and Salesforce have replaced the traditional shrink-wrapped software and moved towards deploying applications and ser- vices on cloud platforms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' In today’s cloud systems, production incidents (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=', outage or performance degradation, unplanned interruptions) adversely impact the customers and can be expensive in terms of penalty associated with service level agreement violations and engineering efforts required to mit- igate the incidents.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' For example, one hour of downtime is estimated to cost Amazon US$100 million on major shopping days [1].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Despite continuous reliability efforts over the years, cloud services still experience inevitable severe incidents.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Artificial Intelligence (AI) for IT Operations, also known as AIOps, has increased in popularity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Data-driven and AI techniques have been leveraged for automating parts of the incident life-cycle, for example, incident prioritization [2], retrieval of incidents with similar symptoms [3], and reducing the time to mitigate incidents [4], [5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' However, on-call engineers (OCEs) still spend a significant amount of manual toil through multiple rounds of back and forth communication for identifying root causes and mitigation steps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Motivated by the recent successes of leveraging GPT-3 models for non- trivial tasks [6], [7] and code generation [8], we apply such §This work is done during the author’s internship at Microsoft Research.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' models to incident management.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' We identified the following two scenarios: 1) Find the incident’s root cause.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Diagnosing incidents typically requires significant time and communication be- fore engineers can identify the root cause of the incident.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' We investigate how effective large language models are at suggesting root causes for incidents (RQ1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 2) Suggest the mitigation steps for the incident.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' After a root cause has been located, engineers take actions to mitigate the problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' We investigate how effective large language models are at recommending the mitigation steps for incidents (RQ2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' When applying large language models several considera- tions and decisions need to be taken.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Since the models were not trained with incident management data, is fine-tuning of the models necessary (RQ3)?' 
Is it more effective to build one model for each task (single-task) or one combined model that supports both root cause and mitigation generation (multi-task) (RQ4)? Does the root cause help language models to find better mitigation steps (RQ5)? Do the models perform better for certain types of incidents (RQ6)? We address these questions with a rigorous large-scale evaluation of 44,340 incidents from 1,759 services of Microsoft. In addition to the lexical and semantic evaluation metrics that are typically reported for such experiments, we present the results from a human validation, where we asked incident owners to assess the correctness and readability of suggested root causes and mitigation steps. The original incident owners are the most qualified to assess the performance of the models on incidents.
In this paper, we make the following contributions:
1) This is the first work to demonstrate the usefulness of state-of-the-art large language models (LLMs) such as GPT-3.x (both GPT-3.0 and GPT-3.5) for resolving production incidents in a real-world setting. (Section III)
2) We present a rigorous and large-scale study at Microsoft on over 40,000 incidents from 1000+ cloud services with six semantic and lexical metrics. (Section IV) Our study shows that fine-tuning significantly improves the effectiveness of LLMs for incident data, that GPT-3 and GPT-3.5 models significantly outperform encoder-decoder models in our experiments, and that metrics such as BLEU-4 are useful to measure the relative performance of models in different settings, although manual inspection and validation with experts is needed to assess the actual performance.
3) Our human study with the actual incident owners of production incidents helps prove the efficacy of the proposed approach. (Section V)
II. OVERVIEW
A. Incident management
Production incidents are inevitable in large-scale cloud services and often severely affect the customer experience. They can also be extremely expensive in terms of the engineering resources required to root cause and mitigate them. An incident life-cycle typically has the following four stages: (1) Detection: The first step in the incident life-cycle is detection, where incidents are reported by internal or external customers of a given service after they notice anomalous behavior. Incidents can also be reported via automated monitors which are configured by the service owners. (2) Triaging: Once an incident is reported, a team of OCEs analyzes the problem and routes the incident ticket to the appropriate engineering team; this process is often referred to as incident triaging. (3) Diagnosis: The incident diagnosis and root cause identification process requires multiple iterations of back-and-forth communication between engineers inspecting different aspects to understand the broad nature of the incident and identify the root cause. (4) Mitigation: Based on the identified root causes, actions are taken to mitigate the problem so as to recover the service health and minimize the impact on the service users.
Lately, AIOps (AI for IT Operations) has gained popularity for automating various parts of the incident life-cycle by combining data-driven and AI techniques with data sources like application logs, time-series performance metrics, and service traces [2], [4], [5], [9]. Albeit significant efforts, incident management in large cloud systems still requires a huge amount of engineering effort and cost. More specifically, even with a plethora of historical incident data, root cause identification and mitigation remain notoriously challenging and time-consuming tasks. In this work, we propose to use large language models such as GPT-3.x to automatically recommend root causes and mitigation for new incidents by leveraging historical incident data.
B. The promise of LLMs/GPT-3.x models
Large language models (LLMs) such as GPT-3.x [7] have emerged as one of the hottest trends in natural language processing over the last few years. With 175 billion parameters, the GPT-3.x language models, which held the record for being the largest neural network ever developed, are an order of magnitude larger than prior language models. Using this massive model architecture, the GPT-3.x models were trained using almost all accessible data from the Internet, including CommonCrawl [10], WebText [11], Wikipedia [12], and a corpus of books.
Fig. 1: A sample production incident.
  Title: Attach vm fails with connection timeout
  Summary: The workspace is not associated with any vnet. Customer has a vm which is already running inside a vnet. They like to attach that vm into [product omitted]. We tried the UI and CLI route, but still fails with same connection timeout error. Error points that it resolves to some public ip [...]
  Reference root cause: It is not supported to attach a private vm to a public workspace directly.
  Reference mitigation: Open a task to provide better official document for customer on the topic of virtual machine.
GPT-3.x models surpass the state-of-the-art models in a variety of NLP tasks, including machine translation, question answering, and cloze tasks. Furthermore, the GPT-3.x models achieved a significant milestone by showing that unsupervised language models trained with adequate data can multi-task to the same level as fine-tuned models using just a few examples of the new tasks. As a result of their powerful text generation capabilities on new tasks, GPT-3.x models are used in a wide range of categories and industries, from productivity and education to creativity and gaming. For instance, GPT-3.x models are used to produce creative writing, including blog posts, advertisements, and poetry, that mimics the literary style of well-known writers like Shakespeare.
C. Root-causing and mitigating incidents
Incident root-causing and mitigation is a complex process which requires a significant amount of manual effort and also domain knowledge about the services. Incidents can be caused by various kinds of issues such as code bugs, dependency failures, infrastructure issues, configuration bugs, etc. Due to the vast number of possibilities, it is non-trivial for the OCEs to root cause the incidents. Similarly, once the root cause is identified, various mitigation steps can be taken, such as code rollback, hotfix, infrastructure changes, configuration update, etc. Identifying the correct mitigation step is again non-trivial and requires domain knowledge and experience. Human errors in root-causing or mitigation of incidents result in not just more effort and human toil but also impact on the customers and the revenue. Fig. 1 shows a real incident from a service where we can see the title and summary provided by the customer along with the actual root cause and mitigation.
In this study, we evaluate the effectiveness of large language models like GPT-3.x and Codex for root-causing and mitigating production incidents. When an incident is created, the author specifies a title for the incident and describes any relevant details, such as error messages, anomalous behavior, and other details which could potentially help with resolution. Once the OCE starts investigating the incident, they might get more details by communicating with the incident author or by looking at telemetry and logs. During the course of the investigation, the OCE might often update the incident details. For our evaluation, we use the title and the summary of a given incident at the time of incident creation as input and generate the root cause and mitigation steps. This is to ensure that we only use the information which was available to the OCE when they started investigating the incident.
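As a concrete, hypothetical illustration of this input/output setup, the sketch below pairs an incident's creation-time title and summary (the prompt) with one reference target (root cause or mitigation) as the completion. The JSON-lines prompt/completion layout follows the format accepted by OpenAI's fine-tuning API; the field names of the incident record and the separator token are our own assumptions, not details reported here.

import json

def to_finetune_record(incident: dict, target: str) -> dict:
    """Pair creation-time fields (title, summary) with one target field
    ('root_cause' or 'mitigation'); later investigation notes are excluded."""
    prompt = f"Title: {incident['title']}\nSummary: {incident['summary']}\n\n###\n\n"
    completion = " " + incident[target]  # leading space helps GPT-style tokenization
    return {"prompt": prompt, "completion": completion}

incident = {
    "title": "Attach vm fails with connection timeout",
    "summary": "The workspace is not associated with any vnet. Customer has a vm ...",
    "root_cause": "It is not supported to attach a private vm to a public workspace directly.",
    "mitigation": "Open a task to provide better official document for customer ...",
}

with open("root_cause_train.jsonl", "w") as f:
    f.write(json.dumps(to_finetune_record(incident, "root_cause")) + "\n")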
D. Research questions
We investigated several OpenAI GPT-3.x models (i.e., Curie, Codex-cushman, Davinci, Code-davinci-002) to generate root causes and mitigation plans for the incident. This leads to several RQs.
RQ1: Are fine-tuned GPT-3.x models effective at finding the incident's root cause?
The OpenAI models are not trained with the incident management data, since the data contains sensitive, private information and Microsoft follows standard protocols to ensure the security of the data. Therefore, the GPT-3.x models are not expected to perform well in zero-shot/few-shot settings. In this paper, we fine-tuned four different GPT-3.x models with different capacities and observed how the models performed at proposing the root causes of the incident.
RQ2: Are fine-tuned GPT-3.x models capable of suggesting the mitigation plan for the incident?
We are also interested in generating mitigation plans for the incident using GPT-3.x models. Like root cause generation, we fine-tune and evaluate the model using the same input and criteria we use for RQ1.
RQ3: How much does fine-tuning improve over the zero-shot learning performance of GPT-3.x models?
Though we primarily focus on fine-tuning, GPT-3.x models are reported to be effective at various downstream tasks with zero-shot and few-shot training [7], [8]. In few-shot learning, we provide a few examples in the prompt as input to the model, and the model generates the expected output. Zero-shot is similar to few-shot training, but none of the examples are given. These two settings are economically and environmentally beneficial (reduced carbon footprint) because we are not updating any parameters of the models. This paper will investigate how the models perform in zero-shot settings. Note that few-shot learning is unsuitable for our project because we have long sequences in our dataset, and we observe truncation of the sequences when we infer only one sequence after fine-tuning.
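To make the zero-shot versus few-shot distinction concrete, the sketch below assembles both kinds of prompts for root-cause suggestion. The instruction wording and separators are illustrative assumptions only; the exact prompt templates used in the experiments are not specified here.

def zero_shot_prompt(title: str, summary: str) -> str:
    # No examples: the model must answer from its pre-trained knowledge alone.
    return f"Title: {title}\nSummary: {summary}\nRoot cause:"

def few_shot_prompt(examples: list[tuple[str, str, str]], title: str, summary: str) -> str:
    # A handful of solved incidents are prepended; with long incident summaries
    # this quickly exceeds the model's context window, which is why few-shot
    # prompting is impractical for this dataset.
    shots = "\n\n".join(
        f"Title: {t}\nSummary: {s}\nRoot cause: {rc}" for t, s, rc in examples
    )
    return shots + "\n\n" + zero_shot_prompt(title, summary)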
RQ4: Does multi-task learning improve the performance of GPT-3.x models at finding root causes and mitigation plans?
Multi-task learning is effective for some pre-trained models [13]. So far, we have discussed training separate models and using the input independently to generate the incident's root cause and mitigation plans. We are interested in how GPT-3.x models react to multi-task learning in our specific setting. We combine the training data of both tasks for this experiment. However, during evaluation, we use the same test sets used in RQ1 and RQ2.
RQ5: Do GPT-3.x models get better at proposing mitigation plans if the root cause is given?
Mitigation plans for an incident depend on the specific root cause, and different root causes may lead to different mitigation plans. Moreover, the GPT-3.x models can be improved by making the input larger or more informative. We therefore also investigate whether providing the root cause in the input helps the models find the mitigation plans.
RQ6: Do the models propose better mitigation plans for machine-detected incidents than for human-detected ones?
Incidents can be machine-detected (by some monitors) or human-detected, and both types of incidents have specific characteristics. Machine-detected incidents are generally triggered when a monitor observes system changes like build failures, resource availability, request counts, etc.
On the contrary, human-detected incidents are unique and may apply to a specific customer (e.g., a webpage is not loading). In this research question, we investigate whether the model performs well for incidents belonging to a specific class.
E. Human validation
Root causes and mitigation plans can be written in different forms. Unlike natural language translation or code summarization, root causes and mitigation steps are much more open-ended. Depending on the author, the root causes and mitigation plans can vary from generic to specific. Automatic metrics may fail to reflect the overall performance of the models because these metrics compare the models' suggestions with a single reference, which may be completely different from a correct and relevant output produced by the models. To better understand the models' performance, we went to the owner/resolver of the specific incidents and presented the solutions from our models and baselines. They assigned correctness and readability scores to the models' output. We discuss our methodology and findings from the human validation in Section V.
III. METHODOLOGY
A. Dataset Preparation
Thousands of incidents with different severity levels are detected (by both machines and humans) every day at Microsoft. The on-call engineers (OCEs) work relentlessly to provide seamless service to the customers.
To manage incidents at that scale, Microsoft has a well-designed website for reporting and managing incidents. A database also keeps track of the website's data insertions, modifications, and deletions from incident reporting to mitigation. One of the inputs to the model is the summary written at the time of incident reporting or creation, which prevents any data leakage from input to output. In most cases, the OCEs do not follow any specific format to write incident summaries, root causes, and mitigation plans. The fields, especially summaries, contain information in multiple forms, including tables, links to prior incidents, and images of individual monitor output or code snippets. This is because the incidents are very different from each other, and the utmost priority of the OCEs is to resolve the incident rather than document the symptoms. Also, some incidents are transient and auto-mitigated, and no post-mortem is done if the severity is low. Since GPT-3.x are text models, we discarded the tables and images from the summaries. Hence, there is a chance that we lost some critical information while discarding that content.
We collected data for incidents from the database with creation dates between January 1, 2018, and July 15, 2022. Initially, we collected 123,953 instances for root causes and 23,544 for mitigations from the "Resolved" or "Mitigated" incidents with severity levels 0-3 (the most severe incidents belong to level 0). The number of samples for mitigation is low because mitigations can only be found in the post-mortem of an incident, and post-mortems are not done for every incident.
After collecting the data, we observed many incidents with duplicate root causes and mitigations. Some severe incidents or denial-of-service events trigger hundreds of incident reports for the same event, all of which have exactly the same root causes and mitigations. To fairly evaluate the model, we remove the exact duplicates of root causes and mitigation plans and end up with 57,520 root causes and 8,300 mitigation plans. The average root cause and mitigation lengths are 87 and 12 tokens, respectively. Some root causes are very long, and it is difficult for the models to generate, and for human evaluators to evaluate, such output. We kept root causes of up to 100 tokens, allowing us to retain 73% of the root cause instances. We also discarded root causes and mitigation plans with fewer than three tokens because those are not informative. After deduplication and filtering, we sorted the data according to the creation date so that only historical data is used for training the model. We selected 35,820, 3,000, and 2,000 root causes for training, testing, and validation, respectively. We have fewer instances for mitigations; hence, the training, test, and validation sets for mitigations have 5,455, 2,000, and 500 instances, respectively. Even after this rigorous filtering and deduplication of the data, some root causes and mitigations do not carry any useful information (e.g., the root cause is in a different link, or the incident is transient and auto-mitigated).
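The automated preprocessing steps described above (exact deduplication, length filtering, chronological ordering, and the train/test/validation split for root causes) can be summarized with the sketch below. It is a simplified illustration under assumptions: the record field names and the whitespace token count are ours, and the real pipeline operates on the incident database rather than an in-memory list. The manual curation of the test sets, discussed next, happens after this step.

def preprocess(records, min_tokens=3, max_tokens=100, split=(35820, 3000, 2000)):
    """records: list of dicts with 'created_at' and 'root_cause' keys (assumed names)."""
    seen, kept = set(), []
    for r in records:
        text = r["root_cause"].strip()
        if text in seen:                                   # drop exact duplicates
            continue
        n_tokens = len(text.split())                       # crude whitespace token count
        if n_tokens < min_tokens or n_tokens > max_tokens:
            continue                                       # drop uninformative or overly long targets
        seen.add(text)
        kept.append(r)
    kept.sort(key=lambda r: r["created_at"])               # chronological order: train only on history
    n_train, n_test, n_val = split
    train = kept[:n_train]
    test = kept[n_train:n_train + n_test]
    val = kept[n_train + n_test:n_train + n_test + n_val]
    return train, test, val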
We manually went through the 3,000 root causes and 2,000 mitigation plans from the test sets and selected 2,621 root causes and 1,780 mitigation plans.¹
¹We cannot share the dataset because incident data can contain confidential and private data, and sharing such data would violate the terms of service.
B. OpenAI models and baselines
The recent advancement of deep neural network models is greatly influenced by the introduction of Transformer models [14]. Prior approaches (i.e., LSTM [15] and GRU [16]) modeled the sequential dependencies of the generated text using recurrent architectures. These recurrent models use "Back-Propagation Through Time" (BPTT) to recursively propagate loss values over gradients within the same recurrent units, prohibiting the possibility of parallel computation while capturing the long-distance dependencies of the tokens in the sequence. Bahdanau et al. introduced an attention mechanism that works on top of the recurrent architecture and improves the performance of recurrent neural models by providing an attention vector that indicates the relevant part of the input for the target output [17]. The Transformer model completely removes the recurrence unit and relies entirely on the attention mechanism. It uses a multi-layer, multi-head self-attention architecture where the attention mechanism can relate different positions of a single sequence to compute a sequence representation. Pre-trained models are currently achieving state-of-the-art performance for various natural language and code tasks.
These pre-trained models work in two stages: pre-training and fine-tuning. In the pre-training stage, we train the model to learn the statistics of language (or code) in a self-supervised fashion from large-scale corpora. After that, we use a smaller labeled dataset to fine-tune the model for specific tasks. It is nearly infeasible to have sufficient labeled data to train such high-capacity deep learning models; pre-trained models enable us to train such big models with unlabeled data in a self-supervised way in the pre-training stage. All the recent pre-trained encoder-only and encoder-decoder models (e.g., BERT [18], RoBERTa [19], BART [20], T5 [21]) and decoder-only generative models (e.g., GPT [22], GPT-2 [23], GPT-3 [7], OPT [24]) are basically Transformer models of various capacities trained with different pre-training objectives. The following subsections briefly discuss the baselines and OpenAI models we used for our experiments.
1) Baseline encoder-decoder models: We can apply encoder-decoder models to both root cause and mitigation generation. The encoder encodes the input, and the decoder generates the root cause or mitigation using the encoded representation provided by the encoder.
Pre-trained NLP models (e.g., BERT [18], RoBERTa [19], BART [20], T5 [21]) use different self-supervised pre-training objectives to learn robust language representations. These NLP models have programming language counterparts (e.g., CodeBERT [25], GraphCodeBERT [26], PLBART [27], CodeT5 [13], NatGen [28]) where the models are initialized with the NLP models' weights and, in most cases, pre-training is continued with code and the associated natural language comments. Though root causes and mitigations are natural language descriptions, their vocabulary (e.g., identifiers) overlaps more with the comments used in code models. Therefore, we picked both NLP and code models, from OpenAI and as baselines, to see whether the performance differs depending on the domain used for pre-training. For baselining, we pick the RoBERTa [19] and CodeBERT [25] models for two reasons: i) the two models are architecturally identical with 125M parameters, and ii) both models are widely used as baselines (in fact, CodeBERT is the primary baseline model of the CodeXGLUE [29] dataset, a popular benchmark of 10 SE tasks including encoder-decoder tasks like code summarization and code translation). Note that many Transformer-based encoder-decoder models could be applied to this problem; however, comparing against all of them is beyond the scope of this paper.
RoBERTa: BERT is the first model that introduced the pre-training strategy that outperforms traditional Transformer models. It applied two pre-training strategies: Masked Language Modeling (MLM) and Next Sentence Prediction (NSP).
In MLM pre-training, we randomly mask out 15% of the tokens and ask the model to recover those tokens, whereas in NSP, we train the model to predict the next sentence following an input sentence. Liu et al. [19] propose RoBERTa (A Robustly Optimized BERT Pre-training Approach), which, with a few changes such as dynamic masking and dropping NSP, achieves better performance than the BERT model. We apply RoBERTa as our NLP baseline model.
CodeBERT: CodeBERT is architecturally identical to the RoBERTa model and uses two pre-training objectives: MLM and Replaced Token Detection (RTD) [30]. RTD can be defined as a binary classification problem where two data generators (i.e., NL and PL) generate plausible alternatives for a set of randomly masked positions, and a discriminator is trained to determine whether a word is the original one or not. CodeBERT is pre-trained on the CodeSearchNet [31] dataset.
2) OpenAI generative models: Radford et al. introduced general task-agnostic generative pre-training of language models (GPT) and outperformed 9 out of 12 discriminatively trained models that use architectures designed for the specific task [22]. In generative pre-training, we autoregressively predict the probability of a token given the previous tokens, moving from left to right. This left-to-right autoregressive training prevents the model from retrieving information from future tokens.
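For concreteness, the two families of pre-training objectives discussed in this subsection can be written as follows. This is the standard formulation rather than notation taken from the cited papers: x = (x_1, ..., x_T) is a token sequence, M is the set of masked positions (roughly 15% of the tokens), and \tilde{x} is the corrupted input with those positions masked out.

\mathcal{L}_{\mathrm{MLM}}(\theta) = -\sum_{t \in M} \log P_{\theta}(x_t \mid \tilde{x}), \qquad \mathcal{L}_{\mathrm{LM}}(\theta) = -\sum_{t=1}^{T} \log P_{\theta}(x_t \mid x_{<t}).

Encoder-style baselines such as RoBERTa and CodeBERT are trained with the first objective (CodeBERT adds RTD on top), while GPT-style models are trained with the second, left-to-right objective, which is what makes free-form generation of root causes and mitigation plans a natural fit for them.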
All the subsequent generative models (e.g., GPT-2, GPT-3) use very similar pre-training objectives but have a higher capacity than previous ones and are pre-trained on much larger datasets. Very large language models (LLMs) like GPT-3.x have 175 billion parameters and are found to be effective with few-shot learning, replacing the need for fine-tuning for a specific set of tasks. However, fine-tuning GPT-3 models is still beneficial for some tasks [7]. This paper evaluates our approach using four OpenAI [32] GPT-3.x models: Curie, Codex, Davinci, and Code-davinci-002.
Curie: Curie is the fastest GPT-3 model, with 6.7B parameters. This model is trained with natural language data and performs well on language translation, complex classification, text sentiment, and summarization tasks. This is the smallest model we use for our experiments.
Codex: The Codex models are GPT-3 models trained for understanding and generating code. The training data contains both natural language and billions of lines of public code from GitHub. We use one model from the Codex family, Codex-cushman, with 12 billion parameters. Though these models are pre-trained for code-related tasks, they are still somewhat relevant to incident management: root causes and mitigations contain a lot of terminology (e.g., filenames, identifiers) that relates more to the comments used in software development projects.
Davinci: Davinci is the biggest GPT-3 model (175 billion parameters) we use for our experiments. It can perform tasks with fewer instructions than the other GPT-3 models. Davinci usually performs better at understanding content and at creative content generation tasks, and it is also very good at solving logic problems. However, training the 175-billion-parameter model is costly and requires a much longer time (almost four times that of Curie on the same dataset) and more resources. Davinci is not trained to understand or generate code.
Code-davinci-002: Code-davinci-002 is the 175-billion-parameter GPT-3.5 model we use for our experiments. Code-davinci-002 is an upgraded and more capable version of the Codex model that was trained on a more recent corpus of text and code.
C. Model configuration
One of the limitations of the pre-trained encoder-decoder models is that they can only encode 512 tokens. We observe that several samples from our test set are truncated even in the GPT-3 models, which support from 2,048 tokens (e.g., Curie, Codex) to 4,000 tokens (e.g., Code-davinci-002).
Therefore, we can assume that the traditional encoder-decoder models do not have enough capacity to encode our sequences. Encoder-decoder models have been successful for problems like code summarization [13], [25], [27], code translation [29], and natural language translation [14], [20], [21]. We usually generate one sample using beam search for each input and compare the results with the reference. Generating one sample is sufficient for these problems because the target text is less open-ended. Besides, most of the information needed for successful generation can be found in the input for this set of problems. For code translation, the models need to learn the syntactic alignment between two programming languages. Learning to transform conditional statements and loops from one programming language to another may be enough for a successful translation, which is learnable from a few thousand samples. For natural language translation, learning the mapping between the words of different natural languages is essential to generate a good quality translation. Code summarization is slightly different from these two, where the input is much longer than the output. However, Ahmed and Devanbu found that all the necessary information for code summarization is extracted from the identifiers, and obfuscating the identifiers hurts the models [33]. Generating root causes and mitigation plans is much more complex than these problems, where the input may not contain handy information. The models need to be able to generate more diverse and creative solutions to answer the question. Our problem is more aligned with code generation problems, where the input does not carry most of the information.
For these types of problems, it has been found that instead of encoder-decoder models, decoder-only models (e.g., GPT-3.x) are more successful, since generation focuses only on the next tokens given the prior tokens produced by the model. It is well-established that encoder-decoder models are not as successful as decoder-only models in code generation tasks. However, we still apply encoder-decoder models to our problems and discuss our findings in the following sections. For RoBERTa [19] and CodeBERT [25], we use the exact setup that is used for the code summarization task [31], [34]. We adjust the length to 512 tokens with a batch size of 8 to provide as much information as possible to the model. Full fine-tuning that retrains all the parameters is very costly and challenging for the OpenAI models with billions of parameters. We use LoRA (Low-Rank Adaptation), a novel approach that significantly reduces the number of trainable parameters by freezing the pre-trained model weights and injecting trainable rank decomposition matrices into each layer of the Transformer architecture [35]. Even though LoRA reduces the number of trainable parameters, it performs on par with or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3. We fine-tuned the OpenAI GPT-3 (i.e., Curie, Codex, Davinci) and GPT-3.5 (Code-davinci-002) models for root cause and mitigation plan generation.
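As a brief reminder of how the injected rank decomposition works, the LoRA update of [35] can be written as

\[
h = W_0 x + \Delta W x = W_0 x + B A x, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k),
\]

where $W_0 \in \mathbb{R}^{d \times k}$ is the frozen pre-trained weight matrix and only $A$ and $B$ are trained, so the number of trainable parameters per layer drops from $dk$ to $r(d + k)$. The rank $r$ used in our experiments is not restated in this excerpt.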
We train the models for 2,000 steps (4 epochs), which OpenAI recommends. For fine-tuning the smaller models (i.e., Curie and Codex), we use one NVIDIA V100 GPU, and for Davinci, we use four NVIDIA V100 GPUs. For fine-tuning the Code-davinci-002 model, we use four NVIDIA A100 GPUs. We evaluated the models on the validation set after every 100 steps and chose the checkpoint that showed the minimum loss on the validation set. As discussed earlier, the model needs to generate more diverse and creative recommendations to solve problems like the prediction of root causes and mitigation plans. Two critical parameters that control the quality of the generated outputs are temperature and top_p, and it is recommended to update only one of them. Following prior works [8], [36], we decided to update the value of temperature. A higher temperature encourages the model to take more risk, which is necessary for creative applications [32]; a lower value approaches argmax sampling, which is very similar to what we do in encoder-decoder models like CodeBERT. Typically, a temperature between 0.50 and 0.90 is most common for creative tasks; however, a temperature that is too high is hurtful (it makes the output diverge too much) [36]. We perform a grid search and choose 0.7 for the Curie, Codex, and Davinci models and 0.5 for the Code-davinci-002 experiments to minimize the divergence issue while generating five samples.
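For intuition, the sketch below shows how temperature rescales a next-token distribution before sampling. It is an illustrative NumPy example with made-up logits, not the decoding code used by the OpenAI API.

import numpy as np

def sample_with_temperature(logits: np.ndarray, temperature: float,
                            rng: np.random.Generator) -> int:
    """Sample a token index from softmax(logits / temperature)."""
    scaled = logits / max(temperature, 1e-8)   # T -> 0 approaches argmax
    probs = np.exp(scaled - scaled.max())      # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, -1.0])       # hypothetical next-token logits
low = [sample_with_temperature(logits, 0.1, rng) for _ in range(5)]   # nearly deterministic
high = [sample_with_temperature(logits, 0.9, rng) for _ in range(5)]  # more diverse outputs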
D. Evaluation Metrics
We briefly describe the evaluation metrics used for the two downstream tasks, root cause and mitigation generation.
1) Lexical Metrics: For lexical metrics, we employ the smoothed sentence-level BLEU-4 (Bilingual Evaluation Understudy) [37] metric to calculate the n-gram overlap, from 1-grams to 4-grams, between the reference and generated texts. In addition, the ROUGE metric (Recall Oriented Understudy for Gisting Evaluation) [38] is used to compare a candidate document to a set of reference texts. Specifically, we choose ROUGE-L [38], which takes sentence-level structural similarity into account and identifies the longest co-occurring in-sequence n-grams based on Longest Common Subsequence (LCS) [39] statistics. METEOR (Metric for Evaluation of Translation with Explicit Ordering) [40] is the final lexical metric we selected; it is based on the harmonic mean of unigram precision and recall, with stemming and synonymy matching as extra features.
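A minimal sketch of how these lexical scores can be computed for one candidate/reference pair is shown below. It assumes the nltk and rouge_score Python packages, which are common choices but are not named in the paper, and the two example strings are hypothetical.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "the certificate on the frontend nodes expired"
candidate = "an expired certificate on the frontend caused the outage"

# Smoothed sentence-level BLEU-4 over 1- to 4-gram overlaps.
bleu4 = sentence_bleu([reference.split()], candidate.split(),
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)

# ROUGE-L F-measure based on the longest common subsequence.
rouge_l = rouge_scorer.RougeScorer(["rougeL"]).score(reference, candidate)["rougeL"].fmeasure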
2) Semantic Metrics: Since the lexical metrics usually perform exact word matching and disregard the meaning of words, we choose three semantic metrics to evaluate our outcomes according to their semantic meanings. We use BERTScore [41], which leverages the pre-trained contextual embeddings from the BERT [18] model and matches candidate and reference sentence words based on cosine similarity. Then, the BLEURT score [42] is selected to indicate to what extent the candidate is fluent and conveys the meaning of the reference. Last, we select NUBIA (NeUral Based Interchangeability Assessor) [43], a recent neural-based measure that incorporates semantic similarity, logical inference, and sentence legibility by exposing layers of pre-trained language models, including RoBERTa STS [19], RoBERTa MNLI, and GPT-2 [23]. The semantic metric calculation takes significant time and requires expensive GPU resources (Tables I and II took two days on a single GPU). Therefore, we report semantic metrics for the first two research questions, and for the remaining research questions we restrict ourselves to the lexical ones, which are computationally less expensive.
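For reference, BERTScore can be computed roughly as in the sketch below. The bert-score package and the English-language setting are assumptions for illustration, since the implementation used here is not named, and the example strings are hypothetical.

from bert_score import score  # assumption: the bert-score reference implementation

candidates = ["an expired certificate on the frontend caused the outage"]
references = ["the certificate on the frontend nodes expired"]

# Precision, recall, and F1 from cosine similarity of BERT token embeddings.
precision, recall, f1 = score(candidates, references, lang="en")
print(float(f1.mean()))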
IV. RESULT
A. How effective are fine-tuned GPT-3.x models in generating incidents' root cause recommendations? (RQ1)
Table I presents the effectiveness of our baseline encoder-decoder models and the fine-tuned GPT-3.x models for root cause recommendation. We have 2621 test samples for evaluating the models. We generated ten samples from the OpenAI models for two reasons: i) using temperature, we can generate very diverse and creative samples from the GPT-3.x models; ii) we found that GPT-3.x models can generate valuable suggestions even at lower ranks. We observed the average BLEU-4 of all the samples at a particular rank and found that all the OpenAI GPT-3.x models produce examples with a higher BLEU-4 even at rank eight or lower. However, ten examples are too many for a human OCE, so we restrict ourselves to the five top suggestions from the model. In Table I, for each metric, we report Top 1 and Top 5. Top 1 presents the mean of the first candidates over all the test samples; for Top 5, we take the maximum value among the first five candidates and then average over all samples. This Top 5 score gives an overall view of how the models are performing.
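The Top 1 / Top 5 aggregation described above can be stated compactly as in the following sketch, where scores[i][k] is assumed to hold the metric value of the k-th ranked candidate for test sample i and the listed values are hypothetical.

def top_k_aggregate(scores, k):
    """Mean over samples of the best metric value among each sample's first k candidates."""
    best_per_sample = [max(candidate_scores[:k]) for candidate_scores in scores]
    return sum(best_per_sample) / len(best_per_sample)

# Hypothetical BLEU-4 values for three test samples, five ranked candidates each.
scores = [[3.1, 5.0, 2.2, 4.8, 1.9],
          [4.4, 4.1, 6.0, 3.3, 2.7],
          [2.0, 2.5, 2.1, 3.9, 3.0]]
top1 = top_k_aggregate(scores, 1)   # mean of the first candidates
top5 = top_k_aggregate(scores, 5)   # mean of the per-sample maxima over five candidates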
For our baseline encoder-decoder models, we have only one sample per model. Surprisingly, the encoder-decoder models perform quite well compared to the GPT-3 models on all six automatic metrics. In fact, all six metrics fail to distinguish significant differences between the OpenAI models. The reason behind the success of the encoder-decoder models on automatic metrics is that these models are less explorative and try to maximize success based on argmax probabilities during decoding. Now, "There is a bug in the code" is a very common and generic sentence that can be part of any root cause. The models maximize their score just by copying that particular segment, and the automatic metrics fail here as well. We tried three semantic metrics to resolve that issue, but the encoder-decoder models still benefit from the automatic metrics. Table III presents the number of unique samples generated by the models. For the OpenAI models we only consider the first candidate to make a fair comparison. We observe that the unique candidate counts for RoBERTa and CodeBERT are 6.10% and 16.67% of the total count, whereas for all the OpenAI GPT-3.x models the percentages are above 97%. Remember that we deduplicated the dataset, so repeatedly generating the same samples should not help here. In Section V, we interviewed the incident owners, and the majority of them complained about the generic nature of the encoder-decoder models' recommendations; these models underperform on the correctness criteria.

TABLE I: Effectiveness of fine-tuned GPT-3.x models at finding root causes of the incidents (Top 1 / Top 5 for each metric)

Model                  BLEU-4        ROUGE-L       METEOR        BERTScore     BLEURT        NUBIA
                       Top1  Top5    Top1  Top5    Top1  Top5    Top1  Top5    Top1  Top5    Top1  Top5
RoBERTa                4.21  NA      12.83 NA      9.89  NA      85.38 NA      35.66 NA      33.94 NA
CodeBERT               3.38  NA      10.17 NA      6.58  NA      84.88 NA      33.19 NA      39.05 NA
Curie                  3.40  6.29    9.04  15.44   7.21  13.65   84.90 86.36   32.62 40.08   33.52 49.76
Codex                  3.44  6.25    8.98  15.51   7.33  13.82   84.85 86.33   32.50 40.11   33.64 49.77
Davinci                3.34  5.94    8.53  15.10   6.67  12.95   83.13 84.41   31.06 38.61   35.28 50.79
Davinci-002            4.24  7.15    11.43 17.2    10.42 16.8    85.42 86.78   36.77 42.87   32.3  51.34
%gain for Davinci-002  23.26 13.67   26.44 10.90   42.16 21.56   0.61  0.49    12.72 6.88    8.45  1.08

Among the OpenAI models, the GPT-3.5 (i.e., Code-davinci-002) model significantly outperforms all GPT-3 models as well as the other baselines in terms of all six automated metrics. Though the automatic metrics fail to detect the weaknesses of the encoder-decoder models, these metrics are still widely used. Human evaluation is hard to perform in every scenario, and these metrics can be useful for finding the models' relative performance. Therefore, even though we achieve low scores on these metrics, they are useful for capturing the relative performance of the models in different settings. Also, getting a lower score with lexical metrics is not surprising because lexical metrics only consider token overlaps, while root causes and mitigations are open-ended and the same root cause or mitigation can be written in many different ways. In Section V, from the interviews with the OCEs, we found that suggestions with a lower BLEU-4 or other metric scores are still helpful.
B. How effective are fine-tuned GPT-3.x models in recommending mitigation plans for an incident? (RQ2)
Table II shows that we achieve a slightly higher mitigation score (4.44-6.76 BLEU-4) than for root cause recommendation (3.38-4.24 BLEU-4). We observed a similar and consistent pattern in the output (Table III) as with the root causes. The encoder-decoder models generate generic comments (e.g., "the issue is self-mitigated", "fix deployed to all regions") as before, and those recommendations are mostly useless for the OCEs. For both RQ1 and RQ2, the fine-tuned Davinci model (even with 175 billion parameters) significantly underperforms the other baseline methods according to the automatic metrics. However, the Davinci and Code-davinci-002 models are the best performing models according to the incident owners (see Section V).
C. How much does fine-tuning improve over the zero-shot learning performance of GPT-3.x models? (RQ3)
As discussed in Section II-D, we investigate the performance of the OpenAI models in the zero-shot setting. Table IV presents the performance of the OpenAI models for root cause and mitigation. As expected, the models do not perform well in this setting since they were not trained on confidential data from the incident management space. The models achieve 0.80-2.18 BLEU-4 for the top candidate, which is much lower (210%) than what we achieved by fine-tuning the models (5.47-6.76) for recommending mitigation steps. Though we achieved a higher score for mitigation than for root cause during fine-tuning, in the zero-shot setting the numbers for root cause are slightly higher (1.18-2.83 for the top candidates). The model tries to complete the sequence depending on the given input, and copying a few tokens from the input may help because the root cause is usually longer than the mitigation and tends to share more tokens with the input. Because of unigram overlaps, METEOR does better than the other metrics (BLEU-4 and ROUGE-L), since it looks at unigram precision and recall, making it more lenient than BLEU-4 and ROUGE-L. We observe another interesting phenomenon here: though the Davinci model underperforms in RQ1 and RQ2, it significantly outperforms the other OpenAI models in the zero-shot setting for both root cause and mitigation. This is because the model has more parameters and is trained on more data, enabling it to infer better without explicit training.
D. Does multi-task learning improve the performance of GPT-3.x models at finding root causes and mitigation plans? (RQ4)
To evaluate the results of multi-task training on the root cause recommendation and mitigation planning tasks, we combine the training sets of the two tasks for the GPT-3.x models. The models are then tested individually on the corresponding test sets. Table V shows the results of root cause and mitigation generation with multi-task training. Overall, we observe that multi-task training does not significantly outperform training for a single task. The performance of the Curie and Codex models falls by an average of 2.8% for BLEU-4, 2.0% for ROUGE-L, and 7.2% for METEOR. Only the Davinci model is marginally (6.2%) better than single-task training in terms of the BLEU-4 metric. The performance of Code-davinci-002 is almost always lower across all lexical metrics in the multi-task setting. Similarly, the results for mitigation generation reveal a 4.1% performance decline on average for all four models. The lack of connection between root cause and mitigation is what mostly contributes to the decline in performance: it is challenging to transfer knowledge from one task to the other because of the distinct distributions of their answer spaces, such as the variations in root cause and mitigation length and concreteness.
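As an illustration of the combination step, the sketch below merges the two tasks' training sets into a single file in a prompt/completion JSONL layout. The file names and the record layout are assumptions for illustration only; the exact prompt format used in our fine-tuning is not restated in this excerpt.

import json
import random

def load_jsonl(path):
    with open(path) as f:
        return [json.loads(line) for line in f]

# Hypothetical per-task training files with {"prompt": ..., "completion": ...} records.
root_cause = load_jsonl("root_cause_train.jsonl")
mitigation = load_jsonl("mitigation_train.jsonl")

combined = root_cause + mitigation
random.shuffle(combined)  # interleave the two tasks before fine-tuning

with open("multitask_train.jsonl", "w") as f:
    for record in combined:
        f.write(json.dumps(record) + "\n")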
E. Do GPT-3.x models get better at proposing mitigation plans if the root cause is given? (RQ5)
We assess the performance of mitigation generation when the root cause is also revealed. Our training set for mitigation is reduced from 5,455 to 2,973 as a result of the missing root causes in the incidents, and we have 166 test samples.

TABLE II: Effectiveness of fine-tuned GPT-3.x models at finding mitigation plans of the incidents (Top 1 / Top 5 for each metric)

Model                  BLEU-4        ROUGE-L       METEOR        BERTScore     BLEURT        NUBIA
                       Top1  Top5    Top1  Top5    Top1  Top5    Top1  Top5    Top1  Top5    Top1  Top5
RoBERTa                4.44  NA      7.10  NA      4.52  NA      86.33 NA      26.80 NA      14.90 NA
CodeBERT               6.02  NA      4.40  NA      3.37  NA      86.83 NA      28.44 NA      27.89 NA
Curie                  5.47  10.62   8.03  16.31   6.22  12.75   85.65 87.13   27.20 37.23   15.30 25.46
Codex                  5.53  10.62   8.15  16.23   6.19  13.15   85.68 87.35   28.43 37.92   15.77 26.33
Davinci                5.54  10.66   8.10  15.96   6.08  12.49   85.72 87.19   27.15 37.00   15.71 25.61
Davinci-002            6.76  11.66   10.22 18.14   8.23  15.13   86.17 87.65   30.19 38.96   17.58 28.81
%gain for Davinci-002  22.02 9.38    25.40 11.22   32.32 15.06   0.52  0.34    6.19  2.74    11.48 9.42

TABLE III: Uniqueness of the models' suggestions

Model         Root cause                             Mitigation
              # unique recommendations  % of total   # unique recommendations  % of total
RoBERTa       160                       6.10         4                         0.22
CodeBERT      437                       16.67        2                         0.1
Curie         2612                      99.65        1669                      93.76
Codex         2614                      99.73        1743                      97.92
Davinci       2587                      98.70        1731                      97.24
Davinci-002   2614                      99.73        1696                      95.28

TABLE IV: Effectiveness of OpenAI models for recommending root causes and mitigation steps in the zero-shot setting

Objective    Model                  BLEU-4        ROUGE-L       METEOR
                                    Top1  Top5    Top1  Top5    Top1  Top5
Root cause   Curie                  1.26  2.01    4.75  7.80    7.94  13.30
             Codex                  1.18  1.94    3.80  7.07    6.58  12.20
             Davinci                2.83  4.37    6.11  11.55   6.04  11.87
             Davinci-002            1.35  2.5     4.89  8.58    7.65  13.55
             Finetuned-Davinci-002  4.24  7.15    11.43 17.2    10.42 16.8
             % gain for Finetuning  49.82 63.62   87.07 48.92   31.23 23.99
Mitigation   Curie                  0.81  1.50    2.45  4.59    5.33  9.40
             Codex                  0.80  1.57    1.97  4.05    4.56  8.55
             Davinci                2.18  3.67    3.84  7.84    4.99  10.44
             Davinci-002            0.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='92 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='89 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='31 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='52 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='92 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='2 Finetuned- Davinci- 002 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='76 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='66 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='22 18.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='14 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='23 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='13 % gain for Finetuning 210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='1 217.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='7 166.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='2 131.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='4 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='4 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='9 samples to evaluate the model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Despite the sample reduction in the training set, Table V reveals a considerable performance gain with the additional root cause information: the average for all three metrics is improved by 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='8% for the Curie model, 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='3% for the Codex model, 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='4% for the Davinci model and 26% for the Code-davinci-002.' 
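The "% gain for Finetuning" rows in Table IV can be reproduced from the scores above; the figures match a comparison against the strongest zero-shot model for each metric, which is our reading of the table rather than something stated in this excerpt. A minimal sketch of that arithmetic:

    # Gain of the fine-tuned model over the best zero-shot model (values from Table IV).
    def gain(finetuned, best_zero_shot):
        return (finetuned - best_zero_shot) / best_zero_shot * 100

    # Root cause, BLEU-4 Top-1: best zero-shot score is Davinci's 2.83.
    print(f"{gain(4.24, 2.83):.2f}%")   # ~49.82%, as in the "% gain for Finetuning" row
    # Mitigation, BLEU-4 Top-1: best zero-shot score is Davinci's 2.18.
    print(f"{gain(6.76, 2.18):.1f}%")   # ~210.1%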
F. Do the models better propose mitigation plans for machine-detected incidents than human-detected ones? (RQ6)

We analyze the mitigation generation performance of GPT-3.x models for both machine and human detected incidents in Table VII. We employ the same training set but separate the test samples by the categories of human and machine detected incidents. The testing samples consist of 592 incidents recognized by machines and 1188 incidents detected by humans.

TABLE V: Effectiveness of multi-task learning

                                                BLEU-4            ROUGE-L           METEOR
Objective     Model         Multi-tasking?   Top1    Top5      Top1    Top5      Top1    Top5
Root Cause    Curie         No               3.40    6.29      9.04   15.44      7.21   13.65
                            Yes              3.30    6.13      8.66   15.51      6.60   12.97
              Codex         No               3.44    6.25      8.98   15.51      7.33   13.82
                            Yes              3.42    6.11      8.64   15.24      6.53   12.81
              Davinci       No               3.34    5.94      8.53   15.10      6.67   12.95
                            Yes              3.60    6.27      9.11   15.66      7.31   13.64
              Davinci-002   No               4.24    7.15     11.43   17.2      10.42   16.8
                            Yes              4.24    7.09     11.32   17.14     10.32   16.34
Mitigation    Curie         No               5.47   10.62      8.03   16.31      6.22   12.75
                            Yes              5.49   10.89      7.98   16.14      5.92   12.54
              Codex         No               5.53   10.62      8.15   16.23      6.19   13.15
                            Yes              5.15   10.88      7.49   15.87      5.55   11.85
              Davinci       No               5.54   10.66      8.10   15.96      6.18   12.49
                            Yes              5.64   10.74      7.88   15.97      6.13   12.99
              Davinci-002   No               6.76   11.66     10.22   18.14      8.23   15.13
                            Yes              6.58   11.36     10.04   17.76      7.91   14.36

TABLE VI: Effectiveness of GPT-3 models at proposing mitigation plans given root causes

                                     BLEU-4            ROUGE-L           METEOR
Model         Root-cause given?   Top1    Top5      Top1    Top5      Top1    Top5
Curie         No                  5.92   11.29      9.46   17.76      7.34   13.35
              Yes                 6.59   12.40     10.25   18.61      8.24   16.00
Codex         No                  6.25   11.23      8.94   17.62      6.46   13.00
              Yes                 6.23   12.03      9.32   18.48      7.73   15.96
Davinci       No                  6.35   12.05      8.75   18.21      7.28   15.07
              Yes                 7.02   11.47      9.49   18.20      8.40   16.17
Davinci-002   No                  6.8    12         9.48   17.37      8.15   15.53
              Yes                 8.6    13.28     11.56   19.46     10.9    18.08
%gain                            26.47   10.21     21.94    6.86     33.74   16.42

TABLE VII: Models' performance on machine vs human detected incidents

                                     BLEU-4            ROUGE-L           METEOR
Model         Machine detected?   Top1    Top5      Top1    Top5      Top1    Top5
Curie         Yes                 5.49   10.54      8.54   16.63      6.45   13.13
              No                  5.45   10.65      7.78   16.15      6.10   12.56
Codex         Yes                 5.76   10.54      9.10   16.84      6.80   13.88
              No                  5.41   10.67      7.68   15.93      5.88   12.78
Davinci       Yes                 5.56   10.51      8.49   16.17      6.34   12.59
              No                  5.52   10.74      7.91   15.86      5.95   12.44
Davinci-002   Yes                 7.18   11.83     11.5    18.59      9.41   15.66
              No                  6.56   11.57      9.58   17.92      7.65   14.87
%gain                             9.45    2.25     20.04    3.74     23.01    5.31

Table VII demonstrates that machine-recognized incidents can outperform those detected by humans by 9.5% for BLEU-4, 20% for ROUGE-L and 23% for METEOR in the context of Top-1 recommendations of the Code-davinci-002 model. It is due to the fact that machine detected incidents usually adhere to certain patterns, which are easier for machine learning models to recognize.
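The %gain row of Table VII lines up with the rounded figures quoted above; as a quick check of that arithmetic for the Code-davinci-002 Top-1 scores:

    # Relative gain of machine-detected over human-detected incidents
    # (Code-davinci-002, Top-1 scores taken from Table VII).
    machine = {"BLEU-4": 7.18, "ROUGE-L": 11.5, "METEOR": 9.41}
    human   = {"BLEU-4": 6.56, "ROUGE-L": 9.58, "METEOR": 7.65}
    for metric, m in machine.items():
        gain = (m - human[metric]) / human[metric] * 100
        print(metric, f"{gain:.2f}%")   # 9.45%, 20.04%, 23.01%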
V. LOOKING THROUGH THE INCIDENT OWNERS' EYES

A. Methodology

From our test sets for root causes and mitigation plans, we selected the incidents with both root causes and mitigation, so that each incident owner could evaluate both the models in the same interview. Incident resolution is a complex task requiring significant context and domain knowledge about the service and also about the specific incidents. Hence, we conducted this human evaluation with the actual owners who root caused and mitigated the incidents. We chose 50 recent incidents which occurred in the last two months to evaluate the models' performance, so that the incident owners could precisely remember what happened during managing particular incidents. We reached out to all the incident owners; 25 of them responded, and each interview took around 20-30 minutes.

We presented the outputs from all the models under consideration. For both root causes and mitigation plans, we have six pools of candidates. The first four pools are for the OpenAI models, each with six options (including “none”), and the last two are for RoBERTa and CodeBERT, each of which has only one candidate. For the OpenAI models, we ask the OCEs to select the best option that might be relevant to the incident. After that, we ask the OCEs to assign correctness and readability for the chosen candidate on a scale of 1-5, with 5 being the best score.
Please note that for RoBERTa and CodeBERT, we only have one option. Hence, we only ask to assign correctness and readability scores to those candidates. We define correctness and readability as follows:

Correctness: For this metric, we ask the incident owner to check whether the model provides a helpful and relevant suggestion compared to the actual root cause/mitigation.

Readability: Readability is the ease with which a reader can understand a generated text. A text is readable if it is grammatically correct, meaningful and easy to understand. Note that a readable text does not need to be correct.

At the end, we asked the incident owners to assign an overall score (1-5) indicating their perception about the usefulness of LLMs for incident resolution and, also, asked them to share their thoughts and comments regarding this.
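For concreteness, one interview's judgments could be captured in a record like the following; the field names and values are purely illustrative and are not the study's actual data format:

    # Hypothetical record for a single incident-owner interview, mirroring
    # the scoring procedure above (all names and values are illustrative).
    interview = {
        "incident_id": "INC-0001",              # placeholder identifier
        "root_cause": {
            "chosen_candidate": "Davinci-002",  # best option picked from the pools
            "correctness": 4,                   # 1-5 scale, 5 is best
            "readability": 5,
        },
        "mitigation": {
            "chosen_candidate": "Davinci",
            "correctness": 3,
            "readability": 4,
        },
        "overall_usefulness": 4,                # 1-5 overall perception of LLM help
    }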
B. Results

Table VIII presents the correctness and readability scores assigned by the incident owners. We can see that candidates from the Davinci and Code-davinci-002 pools have achieved higher mean correctness scores than those selected from the Curie and Codex models for both root causes (2.88 and 2.56) and mitigation plans (3.04 and 3.16). The mean readability score ranges from 2.52 to 4.08 for all the models. The incident owners expressed positive opinions about the readability of the outputs, and all the models achieved higher readability than correctness scores. We received a few recommendations on how to improve the readability in the future (e.g., avoiding use of acronyms and generating more specific or informative comments). As discussed before, the baseline encoder-decoder models generate very generic comments, and the automatic metrics fail to detect that. We can see the incident owners assign a lower correctness score to the RoBERTa and CodeBERT models, and several OCEs pointed out the generic nature of the recommendations generated by the encoder-decoder models. Though the correctness score of the OpenAI models ranges from 2.28 to 3.16, several OCEs pointed out that the models recommend beneficial root causes and mitigation plans. For example, the models succeeded in pinpointing some hard-to-detect root causes: “I am very impressed because one model found the right root cause, which was very hard to detect. We found it in the postmortem phase. However, I am a little worried that there would not be enough information on the incident website. Overall, I am impressed with the efficacy of the models.” “Even if not always correct, these suggestions can guide the OCE towards actual root cause. ML model can give directions and can be valuable suggestions.”

We also took the maximum score assigned by the OpenAI models and reported the average correctness and readability score. The mean correctness and readability scores range from 3.52 to 4.64 (median score 3-5), presenting the overall strength of the models. We asked for the overall scores (1-5), and Table IX shows that the incident owners found the overall contribution promising and useful. More than 70% of incident owners gave three or above for the recommendations of the models. We found that at least one model is effective for most incidents. We also found out why the automatic metrics fail to provide valuable insights. There is always another side to the coin, and we observe that the models' outputs are not helpful for some incidents. The OCEs assigned lower scores to those incidents, and here are some of the concerns they mentioned: “Based on just incident data it is difficult for the model to predict root-cause and mitigation because not all data are recorded in the database and some of them are classified.” “Major concern is if the suggestion is incorrect, on-call engineers may take longer time to investigate the problem.” We observed some negative samples for the model because a lack of discussion or other information results in the deprivation of valuable signals from the input. However, the model's overall performance is quite promising, which can be considered a stepping stone toward the automation of root causes and mitigation plans in the future.

TABLE VIII: Correctness and readability scores assigned by the incident owners

                              RoBERTa        CodeBERT       Curie          Codex          Davinci        Davinci-002    Max OpenAI
Objective     Criteria        Mean  Median   Mean  Median   Mean  Median   Mean  Median   Mean  Median   Mean  Median   Mean  Median
Root cause    Correctness     1.56  1        1.72  1        2.40  2        2.40  2        2.88  3        2.56  2        3.52  3
              Readability     3.56  5        3.68  5        3.08  4        3.52  4        3.56  5        3.8   4        4.52  5
Mitigation    Correctness     1.6   1        1.52  1        2.28  2        2.28  1        3.04  3        3.16  3        4.04  4
              Readability     2.88  2        3.04  4        2.52  2        2.8   3        3.52  4        4.08  4        4.64  5

TABLE IX: Usefulness of LLMs for incident resolution

Score   # of incident owners   In percent (%) of total
5        2                      7.41
4        9                     33.33
3        8                     29.63
2        6                     22.22
1        2                      7.41
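The percentage column of Table IX follows directly from the counts, and is consistent with the statement that more than 70% of incident owners gave a score of three or above; a minimal check:

    # Reproduce the "In percent (%) of total" column of Table IX from the raw counts.
    counts = {5: 2, 4: 9, 3: 8, 2: 6, 1: 2}
    total = sum(counts.values())                      # total scored responses in the table
    for score, n in counts.items():
        print(score, f"{100 * n / total:.2f}%")       # 7.41, 33.33, 29.63, 22.22, 7.41
    print(f"{100 * (counts[5] + counts[4] + counts[3]) / total:.1f}% scored 3 or above")  # ~70.4%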
VI. DISCUSSION & THREATS

A. Do automatic metrics reflect human perception?

Automatic evaluation metrics are known to be representative of human perception and are widely used in problems like natural language translation [14], [20], [21]. Though some recent works looked into the effectiveness of these metrics in code summarization and reported many pitfalls and weaknesses of these metrics [44]–[47], researchers are still using them for benchmarking. The best possible alternative to automatic metrics is human validation or some form of automatic test case evaluation (done in code generation tasks). The main challenge in incident management is that even experts face difficulties evaluating the incidents if they are not involved in resolving particular incidents. In some cases, the OCEs could not clearly remember the incidents if they happened two months ago. Thus conducting a large-scale study is quite challenging in this area.
However, we interviewed 25 incident owners and found that the models perform quite well even after achieving lower scores with the automatic metrics. We calculated the Pearson coefficient for all three lexical metrics (i.e., BLEU-4, ROUGE-L, and METEOR) against the correctness and readability scores assigned by the OCEs. We observed that the coefficient varies from -0.42 to +0.62, preventing us from identifying any specific pattern in the values. This also indicates that these automatic metrics may not be coherent with human perception for resolving cloud incidents. However, more sample cases are needed to reach any concrete conclusion.
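For readers who want to rerun this kind of agreement check, the sketch below shows one way to compute a per-metric Pearson coefficient against the human-assigned scores. It is a minimal illustration using scipy, and the listed score values are hypothetical placeholders rather than the study's data.

from scipy.stats import pearsonr

# Hypothetical per-incident values; substitute the real BLEU-4 / ROUGE-L / METEOR
# scores and the matching correctness (or readability) ratings from the OCEs.
lexical_scores = {
    "BLEU-4":  [0.05, 0.12, 0.02, 0.30, 0.18],
    "ROUGE-L": [0.21, 0.35, 0.10, 0.44, 0.28],
    "METEOR":  [0.15, 0.22, 0.08, 0.37, 0.25],
}
human_ratings = [2, 4, 1, 5, 3]  # 1-5 scores assigned by the incident owners

for metric, values in lexical_scores.items():
    r, p = pearsonr(values, human_ratings)
    print(f"{metric}: Pearson r = {r:+.2f} (p = {p:.3f})")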
B. Natural language or code? Which family of models is better for incident management?
While choosing the models, we selected both natural language models (i.e., RoBERTa, Curie, Davinci) and code models (i.e., CodeBERT, Codex-cushman, Code-davinci-002) to see which family of models is more beneficial for incident management. We did not find a clear winner between these two groups. The Davinci and Code-davinci-002 models are found to produce more correct and readable suggestions than the other models. Note that both of them have 175 billion parameters. We leave fine-tuning larger code models or pre-training a model from scratch with incident data for future research.
C. How can the models' performance be improved?
We received several recommendations from the incident owners. The main recommendation is to incorporate the discussions among the OCEs into the model, which will guide the model toward better suggestions. We also dropped many incidents whose summaries were written or updated at the time of incident resolution. To evaluate the models fairly and prevent possible data leakage (the root cause and mitigation can appear in the summary if it is updated later), we discarded those incidents from our dataset. Incorporating them into our dataset after preventing data leakage may improve the performance of the models. We also lost some critical information while cleaning the summaries (e.g., discarding images and tables). Incorporating that information may also help.
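A simple way to realize the leakage guard mentioned above is a timestamp filter over the incident records; the pandas sketch below is purely illustrative, and column names such as summary_updated_at and resolved_at are assumptions rather than fields from the actual incident database.

import pandas as pd

# Hypothetical incident export; the column names are placeholders.
incidents = pd.read_csv("incidents.csv",
                        parse_dates=["summary_updated_at", "resolved_at"])

# Drop incidents whose summary was rewritten at or after resolution time, so the
# model input cannot already contain the root cause or mitigation being predicted.
clean = incidents[incidents["summary_updated_at"] < incidents["resolved_at"]]

print(f"kept {len(clean)} of {len(incidents)} incidents after the leakage filter")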
D. Threats to Validity
There are several threats to our study. The semantic metrics use pre-trained models at their core, and we use the default natural language models for the evaluation; a model pre-trained with incident management text may change the performance evaluation. Also, we train and evaluate the models on the services available within our organization, and these models may show unexpected behavior if evaluated on a different set of services from other organizations. Some incident owners expressed concerns about the models' efficacy with rare incidents, and rare incidents are frequently reported at Microsoft. Another threat to our study is the sample size of our human subject study. It is difficult to achieve statistical significance on the correctness and readability scores with such small samples; however, scaling the study further is challenging given its nature.

VII. RELATED WORK
A. Incident management
Incident management in large cloud services has become a popular topic of research in the Systems and Software Engineering communities. Prior work in this space has focused on two main directions. First, there have been several empirical studies on analyzing incidents and outages in production systems, focusing on incidents caused by certain types of issues [48]–[51] or on issues from specific services and systems [52]–[54]. Second, and more related to our work, is the use of machine learning and data-driven techniques for automating different aspects of the incident life-cycle such as triaging [55], [56], diagnosis [57]–[59], and mitigation [5].
Different from prior work, this is the first effort on leveraging state-of-the-art language models for assisting OCEs with incident resolution. We hope that this work will also motivate future work that merges traditional task-specific discriminative models with LLMs to achieve end-to-end automation of production incidents.
B. LLMs in Software Engineering
Even though this is the first work leveraging LLMs for AIOps, several works in Software Engineering have tried to solve other challenging problems with LLMs. GitHub Copilot uses GPT-3 for automated code generation from natural language inputs [8]. Several researchers have addressed code generation [8], [36], docstring generation [8], [60], and code repair [61], [62] problems. Bareiß et al. [63] show how few-shot learning can be effective at (i) code mutation; (ii) test oracle generation from natural language documentation; and (iii) test case generation tasks. Jain et al. propose an approach to augment large language models with post-processing steps based on program analysis and synthesis techniques and achieve better performance [64]. However, unlike code generation, where both lexical and structural information are available along with a massive amount of training data, we explore the problem of incident resolution using state-of-the-art LLMs, which has not been done before.

VIII. CONCLUSION
With this work, we show that state-of-the-art large language models such as GPT-3 and GPT-3.5 are effective in helping with incident management, specifically in identifying root causes and mitigation steps. To compare the effectiveness of the models, we conducted a rigorous and large-scale study at Microsoft on over 40,000 incidents. To assess the actual usefulness of the approach, we involved the actual owners of production incidents. We expect this paper to be the first of many studies that leverage LLMs to make incident management more effective. Our next steps are to deploy the models in production to assist the OCEs with incident resolution. We are also planning to explore other usage scenarios for LLMs, such as incident summarization.

IX. ACKNOWLEDGEMENTS
We would like to thank the engineers who participated in the validation of root causes and mitigation steps. We would also like to acknowledge the contributions of the following people across Microsoft: Oleg Losinets and Jim Kleewein.

REFERENCES
[1] S. Wolfe, "Amazon's one hour of downtime on prime day may have cost it up to $100 million in lost sales," 2018. [Online]. Available: https://www.businessinsider.com/amazon-prime-day-website-issues-cost-it-millions-in-lost-sales-2018-7
[2] J. Chen, S. Zhang, X. He, Q. Lin, H. Zhang, D. Hao, Y. Kang, F. Gao, Z. Xu, Y. Dang et al., "How incidental are the incidents? characterizing and prioritizing incidents for large-scale online service systems," in Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering, 2020, pp. 373–384.
[3] A. Saha and S. C. Hoi, "Mining root cause knowledge from cloud service incident investigations for aiops," arXiv preprint arXiv:2204.11598, 2022.
[4] J. Chen, X. He, Q. Lin, H. Zhang, D. Hao, F. Gao, Z. Xu, Y. Dang, and D. Zhang, "Continuous incident triage for large-scale online service systems," in 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2019, pp. 364–375.
[5] J. Jiang, W. Lu, J. Chen, Q. Lin, P. Zhao, Y. Kang, H. Zhang, Y. Xiong, F. Gao, Z. Xu et al., "How to mitigate the incident? an effective troubleshooting guide recommendation technique for online service systems," in Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2020, pp. 1410–1420.
[6] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou, "Chain of thought prompting elicits reasoning in large language models," arXiv preprint arXiv:2201.11903, 2022.
[7] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," Advances in neural information processing systems, vol. 33, pp. 1877–1901, 2020.
[8] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman et al., "Evaluating large language models trained on code," arXiv preprint arXiv:2107.03374, 2021.
[9] Z. Chen, Y. Kang, L. Li, X. Zhang, H. Zhang, H. Xu, Y. Zhou, L. Yang, J. Sun, Z. Xu et al., "Towards intelligent incident management: why we need it and how we make it," in Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2020, pp. 1487–1497.
[10] "Common Crawl." [Online]. Available: https://commoncrawl.org/
[11] S. Kulkarni, A. Singh, G. Ramakrishnan, and S. Chakrabarti, "Collective annotation of wikipedia entities in web text," in Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, 2009, pp. 457–466.
[12] "Wikipedia." [Online]. Available: https://www.wikipedia.org/
[13] Y. Wang, W. Wang, S. Joty, and S. C. Hoi, "Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation," arXiv preprint arXiv:2109.00859, 2021.
[14] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in neural information processing systems, 2017, pp. 5998–6008.
[15] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[16] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," arXiv preprint arXiv:1412.3555, 2014.
[17] D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.
[18] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "Bert: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[19] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, "Roberta: A robustly optimized bert pretraining approach," arXiv preprint arXiv:1907.11692, 2019.
[20] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension," arXiv preprint arXiv:1910.13461, 2019.
[21] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, "Exploring the limits of transfer learning with a unified text-to-text transformer," arXiv preprint arXiv:1910.10683, 2019.
[22] A. Radford, K. Narasimhan, T. Salimans, I. Sutskever et al., "Improving language understanding by generative pre-training," 2018.
[23] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever et al., "Language models are unsupervised multitask learners," OpenAI blog, vol. 1, no. 8, p. 9, 2019.
[24] S. Zhang, S. Roller, N. Goyal, M. Artetxe, M. Chen, S. Chen, C. Dewan, M. Diab, X. Li, X. V. Lin et al., "Opt: Open pre-trained transformer language models," arXiv preprint arXiv:2205.01068, 2022.
[25] Z. Feng, D. Guo, D. Tang, N. Duan, X. Feng, M. Gong, L. Shou, B. Qin, T. Liu, D. Jiang et al., "Codebert: A pre-trained model for programming and natural languages," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, 2020, pp. 1536–1547.
[26] D. Guo, S. Ren, S. Lu, Z. Feng, D. Tang, L. Shujie, L. Zhou, N. Duan, A. Svyatkovskiy, S. Fu et al., "Graphcodebert: Pre-training code representations with data flow," in International Conference on Learning Representations, 2020.
[27] W. Ahmad, S. Chakraborty, B. Ray, and K.-W. Chang, "Unified pre-training for program understanding and generation," in Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Online: Association for Computational Linguistics, Jun. 2021, pp. 2655–2668. [Online]. Available: https://www.aclweb.org/anthology/2021.naacl-main.211
[28] S. Chakraborty, T. Ahmed, Y. Ding, P. T. Devanbu, and B. Ray, "Natgen: generative pre-training by 'naturalizing' source code," in Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2022, pp. 18–30.
[29] S. Lu, D. Guo, S. Ren, J. Huang, A. Svyatkovskiy, A. Blanco, C. B. Clement, D. Drain, D. Jiang, D. Tang, G. Li, L. Zhou, L. Shou, L. Zhou, M. Tufano, M. Gong, M. Zhou, N. Duan, N. Sundaresan, S. K. Deng, S. Fu, and S. Liu, "Codexglue: A machine learning benchmark dataset for code understanding and generation," CoRR, vol. abs/2102.04664, 2021.
[30] K. Clark, M.-T.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Luong, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Le, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Manning, “Electra: Pre- training text encoders as discriminators rather than generators,” arXiv preprint arXiv:2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='10555, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [31] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Husain, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Wu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Gazit, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Allamanis, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Brockschmidt, “Codesearchnet challenge: Evaluating the state of semantic code search,” arXiv preprint arXiv:1909.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='09436, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [32] “Openai.” [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Available: https://openai.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='com/ [33] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Ahmed and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Devanbu, “Multilingual training for software engineer- ing,” in Proceedings of the 44th International Conference on Software Engineering, 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 1443–1455.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [34] “Codexglue – code-to-text.” [Online].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Available: https://github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='com/ microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text [35] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Hu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Shen, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Wallis, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Allen-Zhu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Li, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Wang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Wang, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Chen, “Lora: Low-rank adaptation of large language models,” arXiv preprint arXiv:2106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='09685, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [36] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Xu, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Alon, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Neubig, and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Hellendoorn, “A systematic evaluation of large language models of code,” in Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 1–10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [37] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='-Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Lin and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Och, “Orange: a method for evaluating automatic evaluation metrics for machine translation,” in COLING 2004: Proceed- ings of the 20th International Conference on Computational Linguistics, 2004, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 501–507.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [38] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Lin, “Rouge: A package for automatic evaluation of summaries,” in Text summarization branches out, 2004, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 74–81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [39] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Hirschberg, “Algorithms for the longest common subsequence problem,” Journal of the ACM (JACM), vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 24, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 4, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 664–675, 1977.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [40] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Banerjee and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Lavie, “Meteor: An automatic metric for mt evalua- tion with improved correlation with human judgments,” in Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, 2005, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 65–72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [41] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Zhang, V.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Kishore, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Wu, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Weinberger, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Artzi, “Bertscore: Evaluating text generation with bert,” arXiv preprint arXiv:1904.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='09675, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [42] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Sellam, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Das, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Parikh, “Bleurt: Learning robust metrics for text generation,” arXiv preprint arXiv:2004.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='04696, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [43] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Kane, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Kocyigit, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Abdalla, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Ajanoh, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Coulibali, “Nubia: Neural based interchangeability assessor for text generation,” 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [44] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Shia, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Wangb, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Dub, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Chenc, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Hanb, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Zhangd, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Zhangb, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Suna, “On the evaluation of neural code summarization,” in Pro- ceedings of the 44th International Conference on Software Engineering (ICSE), 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [45] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Roy, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Fakhoury, and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Arnaoudova, “Reassessing automatic evaluation metrics for code summarization tasks,” in Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 1105–1116.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [46] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Gros, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Sezhiyan, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Devanbu, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Yu, “Code to comment ?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='trans- lation?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' : Data, metrics, baselining & evaluation,” in 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' IEEE, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 746–757.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [47] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Haque, Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Eberhart, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Bansal, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' McMillan, “Semantic similarity metrics for evaluating source code summarization,” arXiv preprint arXiv:2204.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='01632, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [48] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Leesatapornwongsa, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Lukman, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Lu, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Gunawi, “Taxdc: A taxonomy of non-deterministic concurrency bugs in datacenter dis- tributed systems,” in Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 517–530.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [49] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Alquraan, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Takruri, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Alfatafta, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Al-Kiswany, “An analysis of {Network-Partitioning} failures in cloud systems,” in 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), 2018, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 51–68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [50] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Gao, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Dou, F.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Qin, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Gao, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Wang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Wei, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Huang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Zhou, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Wu, “An empirical study on crash recovery bugs in large-scale distributed systems,” in Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2018, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 539–550.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [51] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Zhang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Yang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Jin, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Sethi, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Rodrigues, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Lu, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Yuan, “Understanding and detecting software upgrade failures in distributed systems,” in Proceedings of the ACM SIGOPS 28th Symposium on Operating Systems Principles, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 116–131.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [52] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Ghosh, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Shetty, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Bansal, and S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Nath, “How to fight produc- tion incidents?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' an empirical study on a large-scale cloud service,” in Proceedings of the 13th Symposium on Cloud Computing, 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 126–141.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [53] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Liu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Lu, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Musuvathi, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Nath, “What bugs cause production cloud incidents?”' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' in Proceedings of the Workshop on Hot Topics in Operating Systems, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 155–162.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [54] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Yuan, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Luo, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Zhuang, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Rodrigues, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Zhao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Zhang, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Jain, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Stumm, “Simple testing can prevent most critical failures: An analysis of production failures in distributed {Data-Intensive} sys- tems,” in 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), 2014, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 249–265.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [55] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Chen, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' He, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Lin, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Xu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Zhang, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Hao, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Gao, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Xu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Dang, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Zhang, “An empirical investigation of incident triage for online service systems,” in 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 111–120.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [56] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Chen, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' He, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Lin, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Zhang, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Hao, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Gao, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Xu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Dang, and D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Zhang, “Continuous incident triage for large-scale online service sys- tems,” in 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 364–375.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [57] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Nair, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Raul, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Khanduja, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Bahirwani, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Shao, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Sellamanickam, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Keerthi, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Herbert, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Dhulipalla, “Learning a hierarchical monitoring system for detecting and diagnosing service issues,” in Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 2029–2038.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [58] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Bansal, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Renganathan, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Asudani, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Midy, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Janakiraman, “Decaf: Diagnosing and triaging performance issues in large-scale cloud services,” in 2020 IEEE/ACM 42nd International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [59] C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Luo, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='-G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Lou, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Lin, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Fu, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Ding, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Zhang, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Wang, “Correlating events with time series for incident diagnosis,” in Proceed- ings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, 2014, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' 1583–1592.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [60] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Ahmed and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Devanbu, “Few-shot training llms for project-specific code-summarization,” arXiv preprint arXiv:2207.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='04237, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [61] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Fan, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Gao, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Roychoudhury, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Tan, “Improving automat- ically generated code from codex via automated program repair,” arXiv preprint arXiv:2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='10583, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [62] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Joshi, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Cambronero, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Gulwani, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Le, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Radicek, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Ver- bruggen, “Repair is nearly generation: Multilingual program repair with llms,” arXiv preprint arXiv:2208.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='11640, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [63] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Bareiß, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Souza, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' d’Amorim, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Pradel, “Code generation tools (almost) for free?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' a study of few-shot, pre-trained language models on code,” arXiv preprint arXiv:2206.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content='01335, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' [64] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Jain, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Vaidyanath, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Iyer, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Natarajan, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Parthasarathy, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Ra- jamani, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE2T4oBgHgl3EQfSwf4/content/2301.03797v1.pdf'} +page_content=' Sharma, “Jigsaw: Large language models meet program synthesis,” in Proceedings of the 44th International Conference on Software Engineering, 2022, pp.' 
diff --git a/GdFIT4oBgHgl3EQfWysJ/content/tmp_files/2301.11240v1.pdf.txt b/GdFIT4oBgHgl3EQfWysJ/content/tmp_files/2301.11240v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ae2136b6a893d09e286b902405aa69bd00e8e3c1
--- /dev/null
+++ b/GdFIT4oBgHgl3EQfWysJ/content/tmp_files/2301.11240v1.pdf.txt
@@ -0,0 +1,1565 @@
+Draft version January 27, 2023
+Typeset using LATEX twocolumn style in AASTeX63
+Hubble Constant Measurement from Three Large-Separation Quasars Strongly Lensed by Galaxy Clusters
+Kate Napier,1 Keren Sharon,1 Håkon Dahle,2 Matthew Bayliss,3 Michael D. Gladders,4 Guillaume Mahler,5, 6
+Jane R. Rigby,7 and Michael Florian8
+1Department of Astronomy, University of Michigan, 1085 S University Ave, Ann Arbor, MI 48109, USA
+2Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029, Blindern, NO-0315 Oslo, Norway
+3Department of Physics, University of Cincinnati, Cincinnati, OH 45221, USA
+4Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA
+5Centre for Extragalactic Astronomy, Durham University, South Road, Durham DH1 3LE, UK
+6Institute for Computational Cosmology, Durham University, South Road, Durham DH1 3LE, UK
+7Observational Cosmology Lab, Code 665, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
+8Steward Observatory, University of Arizona, 933 North Cherry Ave., Tucson, AZ 85721, USA
+(Received ; Revised ; Accepted )
+Submitted to ApJ
+ABSTRACT
+Tension between cosmic microwave background-based and distance ladder-based determinations of the Hubble constant H0 motivates pursuit of independent methods that are not subject to the same systematic effects. A promising alternative, proposed by Refsdal in 1964, relies on the inverse scaling of H0 with the delay between the arrival times of at least two images of a strongly-lensed variable source such as a quasar. To date, Refsdal’s method has mostly been applied to quasars lensed by individual galaxies rather than by galaxy clusters. Using the three quasars strongly lensed by galaxy clusters (SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745) that have both multiband Hubble Space Telescope data and published time delay measurements, we derive H0, accounting for the systematic and statistical sources of uncertainty. While a single time delay measurement does not yield a well-constrained H0 value, analyzing the systems together tightens the constraint. Combining the six time delays measured in the three cluster-lensed quasars gives H0 = 71.5 ± 6.1 km s−1 Mpc−1. To reach 1% uncertainty in H0, we estimate that a sample size of order of 500 time delay measurements of similar quality as those from SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745 would be needed. Improving the lens modeling uncertainties by a factor of two may reduce the needed sample size to 120 time delays, potentially reachable in the next decade.
+Keywords: galaxy clusters; quasars; time delay; Hubble constant
+Corresponding author: Kate Napier
+kanapier@umich.edu
+1. INTRODUCTION
+The Hubble parameter H0, which describes the current expansion rate of the Universe, has been sought since the discovery in the 1920s that the Universe is expanding (Lemaître 1927; Hubble 1929). At the turn of the last century, measurements of H0 started converging around H0 = 70 km s−1 Mpc−1. However, as H0 measurements have become increasingly precise, the so-called ‘Hubble Tension’ has arisen between the estimates from early- and late-Universe probes. The Planck Collaboration reported H0 = 67.4 ± 0.5 km s−1 Mpc−1 (Planck Collaboration et al. 2020). They used density fluctuations encoded in the Cosmic Microwave Background (CMB) at the surface of last scattering to determine H at that epoch, then used a spatially flat cosmological model to extrapolate to H0. By contrast, the “Supernovae, H0, for the Equation of State of Dark Energy” (SH0ES) collaboration combined Gaia parallaxes and multi-band HST photometry of Milky Way Cepheids to calibrate the extragalactic distance scale and derive H0 = 73.2 ± 1.3 km s−1 Mpc−1 (Riess et al. 2021). The Planck and SH0ES values, which respectively capture the early and late-time physics of the Universe, differ by 4.2σ. Freedman (2021) applied an updated Tip of the Red Giant Branch (TRGB) calibration to a distant sample of Type Ia supernovae from the Carnegie Supernova Project and obtained H0 = 69.8 ± 0.6 (stat) ± 1.6 (sys) km s−1 Mpc−1, consistent with the CMB value, and within 2σ of the SH0ES value, owing to the larger uncertainties. The discrepancy between different H0 methods may indicate a deviation from the standard Λ Cold Dark Matter (ΛCDM) model, and therefore new physics, or the presence of unknown or underestimated systematics. Either way, this tension remotivates the pursuit of other H0 determination methods that are not prone to the same systematics.
+An alternative H0 determination method, proposed by Refsdal (1964), uses the fact that H0 scales inversely with the delay between the arrival times of at least two images of a strongly-lensed variable source, such as a quasar or a supernova. Due to the rarity of galaxy clusters lensing quasars or supernovae, the Refsdal H0 technique has primarily been sought with galaxy-scale lenses (see e.g., the recent reviews by Moresco et al. 2022; Birrer et al. 2022).
+Of the >300 known lensed quasars, the vast majority are lensed by individual galaxies (Lemon et al. 2019, 2022). Quasars lensed by individual galaxies have been used to obtain H0. For example, the H0 Lenses in COSMOGRAIL’s Wellspring (H0LiCOW) collaboration performed a joint analysis of six galaxy-lensed quasars to obtain H0 = 73.3 +1.7/−1.8 km s−1 Mpc−1 (Wong et al. 2020). This value seems to be consistent with the Cepheid-calibrated measurement from the SH0ES collaboration. Birrer et al. (2020) found a smaller H0 value, and a larger uncertainty, H0 = 67.4 +4.1/−3.2 km s−1 Mpc−1, statistically consistent with the CMB and TRGB measurements. The smaller H0 value was driven by making an assumption that the H0 lens galaxy population is drawn from a parent population with the same statistical properties as the Sloan Lens ACS lenses.
+Kochanek (2020) argued that although the uncertain- +ties of H0 values from galaxy-lensed quasars are typ- +ically reported as 4 - 8% for individual gravitational +lenses, it is likely that any current estimate of H0 from +time delays has an uncertainty of at least 10%. As dis- +cussed in Kochanek (2020, 2021), the main uncertainty +with galaxy lenses is the mean surface mass density of +the lens within the Einstein radius where most lensed +images are found. +The distribution of baryonic mat- +ter in the lens galaxy significantly contributes to the +mass. +Most galaxy-scale lenses are early-type galax- +ies, and local measurements show that these galaxies +exhibit color gradients. Color gradients indicate spatial +variation in age and metallicity, and thus, produce corre- +sponding gradients in the mass-to-light ratio of the bary- +onic mass. A galaxy’s evolutionary history and growth +through mergers will complexly affect these gradients. +Resolved JWST and Extremeley Large Telescope ob- +servations of the stellar kinematics in the lens galaxies +may significantly reduce these sources of systematic er- +rors (Birrer & Treu 2021). +What has remained largely unexplored until now is de- +termining H0 by using quasars that are strongly lensed +by galaxy clusters. For several reasons, cluster-lensed +quasars can potentially overcome some of the difficul- +ties faced by individual galaxy lenses. First, since galaxy +clusters have deeper potential wells than galaxies, clus- +ter lenses produce longer time delays of order months to +years compared to typically a month in galaxy lenses. +Consequently, the observationally measured time delay +values will have smaller fractional uncertainty, which +then will propagate to reduced uncertainty in H0 due +to the inverse scaling of H0 with time delays. Second, +the light curves of cluster-lensed quasars are less likely +to be affected by microlensing from stars in the lens +plane, because the mass distribution is dominated by +dark matter at the projected radius at which the im- +ages appear. Third, galaxy cluster mass distributions +are less affected by complex baryonic physics than those +of galaxy lenses; the complex baryonic surface density of +galaxy-scale lenses may be a significant source of system- +atic uncertainty. A challenge that must be contended +with, however, is the complexity of cluster lenses. +Two inputs are necessary to use cluster-lensed quasars +to determine H0. The first is an observational measure- +ment of the time delay between the multiple quasar im- +ages, and the second is an accurate mapping of the pro- +jected density of the dark and luminous mass at the clus- +ter core. High accuracy lens models require space-based +resolution and spectroscopic follow-up. Of the six pub- +lished cluster-lensed quasars to date (Inada et al. 2003, +2006; Dahle et al. 2013; Shu et al. 2018, 2019; Martinez +et al. 2022), only three have the necessary data available +to determine H0: SDSS J1004+4112, SDSS J1029+2623, +and SDSS J2222+2745. In this paper, we use the avail- +able archival HST data and the latest measurements of +time delay and spectroscopic redshifts of background +sources from the literature to obtain an independent +measurement of H0 from these three systems. +This paper is organized as follows: In Section 2, we +outline the theory of observational gravitational lensing +time delay and its dependence on H0. In Section 3 we +detail the lens modeling procedure. 
In Sections 4, 5, and 6, we give an overview of the three cluster-lensed quasar systems used in this H0 analysis and provide details about their HST and spectroscopic data, time delays, and lens models. In Section 7, we present our constraints on H0. We conclude in Section 8 with a discussion of our H0 result and the future prospects of the time delay H0 method.
Throughout the paper, we adopt the standard ΛCDM flat cosmological model with Ωm = 0.3 and ΩΛ = 0.7.

2. TIME DELAY ANALYSIS

The Refsdal H0 method is possible due to the measurable delay between the arrival time of two or more images of a variable source such as a quasar. Under the thin lens approximation, a packet of light that travels from the source to the observer will be delayed by time t given by the arrival time surface (Schneider 1985):

t(\vec{\theta}, \vec{\beta}) = \frac{1+z_l}{c} \frac{d_l d_s}{d_{ls}} \left[ \frac{1}{2}(\vec{\theta} - \vec{\beta})^2 - \psi(\vec{\theta}) \right],   (1)

where z_l is the redshift of the lens; d_l, d_s, and d_ls are angular diameter distances from the observer to the lens, to the source, and between the lens and the source, respectively; \vec{\theta} is the image position in the image plane; \vec{\beta} is the unobserved source position; and \psi(\vec{\theta}) is the gravitational lensing potential. The arrival time t is a combination of the path length and the gravitational time delay (t = t_geometric + t_grav). The last term, \tau(\vec{\theta}; \vec{\beta}) = \frac{1}{2}(\vec{\theta} - \vec{\beta})^2 - \psi(\vec{\theta}), is also known as the Fermat potential. The multiple images of a strongly-lensed source appear at the stationary points of the arrival time surface, that is, at its minima, maxima, and saddle points. H0 is incorporated in Eq. 1 because of its inverse scaling with the angular diameter distances:

d_A(z_1, z_2) = \frac{1}{1+z_2} \frac{c}{H_0} \int_{z_1}^{z_2} \frac{dz}{E(z; \Omega_m, \Omega_\Lambda)},   (2)

where E(z; \Omega_m, \Omega_\Lambda) is a dimensionless function given by

E(z; \Omega_m, \Omega_\Lambda) = \sqrt{\Omega_m (1+z)^3 + \Omega_\Lambda + (1 - \Omega_m - \Omega_\Lambda)(1+z)^2}.

The matter density and vacuum energy density parameters are Ωm and ΩΛ, respectively. Conveniently, H0 is disentangled from the other cosmological parameters in the angular diameter distance equation (Eq. 2). After substituting Eq. 2 into d_l d_s/d_ls in Eq. 1, the time delay is determined by solving Eq. 1 for two image positions corresponding to the same source position and taking the difference. The time delay between the images thus becomes:

\Delta t = \left( \frac{1}{H_0} \right) \left( \frac{1+z_l}{1+z_s} \right) \left[ \frac{\int_0^{z_l} \frac{dz}{E(z)} \int_0^{z_s} \frac{dz}{E(z)}}{\int_{z_l}^{z_s} \frac{dz}{E(z)}} \right] \times \left\{ \frac{1}{2}\left[ (\vec{\theta}_1 - \vec{\beta})^2 - (\vec{\theta}_2 - \vec{\beta})^2 \right] - \left[ \psi(\vec{\theta}_1) - \psi(\vec{\theta}_2) \right] \right\}   (3)

The first term on the right-hand side of the time delay equation gives the Hubble parameter; the second term is a direct observable; the third term contains the dependence on cosmological parameters other than H0; and the last term is solved by the strong gravitational lens model. We neglect the higher order effects of the cosmological parameters and take the third term in Eq. 3 to be constant. The left-hand side of the equation is the measurement of the time delay, e.g., from monitoring and comparing the observed light curves of two images of the variable source.
Once we compute a model of the lensing mass distribution (see Section 3), we determine the model-predicted excess arrival time surface (Eq. 3) with respect to one of the quasar images. Assuming our lens model is a correct description of the matter distribution, we then leverage the fact that the time delay scales inversely with H0.
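As a concrete illustration of how Eqs. 1 and 2 tie the predicted delay to H0, the following minimal Python sketch (not part of the paper's analysis pipeline; the function names, the constant definitions, and the use of scipy are our own assumptions) evaluates E(z), the angular diameter distances of Eq. 2, and the distance combination (1+z_l) d_l d_s / d_ls that multiplies the Fermat potential difference, showing explicitly that the predicted Δt scales as 1/H0.

```python
# Minimal numerical sketch (not from the paper): evaluates the cosmological
# distance factors of Eqs. 1-2 for a fiducial H0, illustrating that the
# predicted time delay scales as 1/H0.  Assumes flat LCDM with
# Omega_m = 0.3, Omega_L = 0.7, as adopted in this paper.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458          # speed of light [km/s]
OMEGA_M, OMEGA_L = 0.3, 0.7

def E(z):
    """Dimensionless expansion rate E(z) of Eq. 2."""
    return np.sqrt(OMEGA_M * (1 + z)**3 + OMEGA_L
                   + (1 - OMEGA_M - OMEGA_L) * (1 + z)**2)

def comoving_integral(z1, z2):
    """Integral of dz/E(z) between z1 and z2."""
    return quad(lambda z: 1.0 / E(z), z1, z2)[0]

def d_A(z1, z2, H0):
    """Angular diameter distance of Eq. 2 [Mpc], valid for a flat universe."""
    return (C_KM_S / H0) * comoving_integral(z1, z2) / (1 + z2)

def time_delay_distance(z_l, z_s, H0):
    """(1+z_l) d_l d_s / d_ls, the distance combination entering Eq. 1 [Mpc]."""
    d_l = d_A(0.0, z_l, H0)
    d_s = d_A(0.0, z_s, H0)
    d_ls = d_A(z_l, z_s, H0)
    return (1 + z_l) * d_l * d_s / d_ls

if __name__ == "__main__":
    # SDSS J1004+4112: lens at z = 0.68, quasar at z = 1.734
    for H0 in (60.0, 70.0, 80.0):
        D_dt = time_delay_distance(0.68, 1.734, H0)
        # D_dt, and hence any model-predicted delay, is proportional to 1/H0,
        # which is the scaling exploited by Eq. 4 below.
        print(f"H0 = {H0:5.1f} -> time-delay distance = {D_dt:8.1f} Mpc, "
              f"H0 * D_dt = {H0 * D_dt:.1f} (constant)")
```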
We compare the model-predicted time delays between images to the observational measurements of the time delays to obtain H0 via:

H_0 = H_{0,\mathrm{model}} \times \frac{\Delta t_\mathrm{model}}{\Delta t_\mathrm{measured}},   (4)

where H0,model is the H0 value used to generate the Fermat potential from the lensing analysis, Δt_model is the model-predicted time delay between the quasar images, and Δt_measured is the observational measurement of the time delay between the pair of quasar images.

3. LENS MODELING

We computed the lens models with the publicly available software Lenstool (Jullo et al. 2007). Lenstool is a 'parametric' modeling algorithm which describes the lensing mass distribution as a linear combination of galaxy-scale, group-scale, and cluster-scale halos, each of which is parameterized as a pseudo-isothermal ellipsoidal mass distribution (PIEMD, also called dPIE; Elíasdóttir et al. 2007). A PIEMD halo has seven parameters whose values can either be fixed or varied: position (x, y); ellipticity e = (a²−b²)/(a²+b²), where a and b are the semi-major and semi-minor axes, respectively; position angle θ; core radius r_core; truncation radius r_cut; and effective velocity dispersion σ0. The parameters of the group-scale and cluster-scale halos are typically allowed to vary. The exception is r_cut for the cluster-scale halos, as this radius usually occurs outside the region where strong lensing evidence is found and thus cannot be constrained.
Lenstool uses a Markov Chain Monte Carlo (MCMC) sampling of parameter space. The best-fit model is identified as the one that minimizes the scatter between the model-predicted and observed image locations in the image plane ("image plane minimization") or minimizes the scatter between the predicted source locations of multiple images in the source plane ("source plane minimization"). The lens models employ the strong lensing evidence of multiply-imaged galaxies (arcs), whose positions and redshifts are used as model constraints. The availability of lensing constraints strongly affects the accuracy of lens models, as they are used as local solutions of the lensing equations and constrain the projected mass density distribution at the cluster's core. The mass distribution and magnification are sensitive to the accurate identifications and positions of multiple images and to the redshifts of the lensed galaxies. It is necessary to include a few spectroscopic redshifts in the lens model in order to avoid incorrect results (Johnson & Sharon 2016).
To select cluster-member galaxies, we followed the procedure of Gladders & Yee (2000), selecting galaxies that fall on the cluster red sequence in a color-magnitude diagram. For SDSS J1029+2623 we also incorporated spectroscopic redshift information (see Section 5). The galaxy-scale halos' positional parameters (x, y, e, θ) are measured with Source Extractor (Bertin & Arnouts 1996) and fixed. The r_core, r_cut, and σ0 of the galaxy-scale halos are scaled to their observed luminosity following the scaling relations in Limousin et al. (2005).

4. SDSS J1004+4112

SDSS J1004+4112 was the first discovered galaxy cluster strongly lensing a quasar (Inada et al. 2003). The cluster at z = 0.68 strongly lenses a quasar at z = 1.734 into five images, with a maximum image separation of 14.″6 (Table 1). The cluster also strongly lenses several background sources at z = 2.74 (Sharon et al. 2005),
z = 3.288 (Sharon 2008; Oguri 2010), and z = 3.332 (Sharon et al. 2005) (Table 2).
We used archival HST multi-color imaging from the Advanced Camera for Surveys (ACS). The SDSS J1004+4112 imaging data include GO-10509 (PI: Kochanek) ACS/F814W, F555W, F435W (10 orbits); GO-9744 (PI: Kochanek) ACS/F814W, F555W (2 orbits); and GO-10793 (PI: Gal-Yam) ACS/F814W (1 orbit). These data were originally proposed to identify multiply-imaged galaxies to construct a mass model (Sharon et al. 2005), search for the fifth quasar image (Inada et al. 2005), derive ΩΛ, perform a weak lensing analysis, and search for supernovae in massive high-redshift clusters (Sharon et al. 2010). These data also enabled studies of the spectral energy distribution of the quasar host galaxy (Ross et al. 2009), the ultraviolet upturn in red sequence galaxies (Ali et al. 2018), and active galactic nuclei (AGN) in massive clusters (Klesman & Sarajedini 2012).
We modeled SDSS J1004+4112 using one cluster-scale halo, one brightest cluster galaxy (BCG)-scale halo, and a galaxy-scale halo for each of the cluster member galaxies, four of which have their parameters optimized instead of adopting the scaling relations from Limousin et al. (2005).
We modeled the cluster using both source-plane minimization and image-plane minimization, and evaluated the quality of the models obtained by each approach. While formally the image-plane minimization resulted in a better image-plane scatter, these models produced additional quasar images that are not observed. Therefore, we proceeded with the source-plane minimization for SDSS J1004+4112 for the remainder of the analysis. We note that the best-fit lens model produced large scatter between the observed and model-predicted positions in the image plane for quasar image C. In our results, we checked what happens when image C is removed from the H0 measurement.
The model consists of 27 free parameters and 78 constraints. The HST data and the lens model for SDSS J1004+4112 are shown in Figure 1. The redshifts of the arcs in our lens model are the same as those used by Forés-Toribio et al. (2022). The strong lensing mass model parameters are reported in Table 3.
The measured time delay between images A and B (Δt_AB = −38.4 ± 2.0 days) was first published in Fohlmeister et al. (2007). In this notation, a positive value of the time delay means image A leads the other image. In addition to reporting a refined value of Δt_AB = −40.6 ± 1.8 days, Fohlmeister et al. (2008) measured the time delay between images A and C (Δt_AC = −821.6 ± 2.1 days) and set a lower limit of Δt_AD > 1250 days. After the completion of a 14.5 year monitoring campaign at the 1.2m Fred Lawrence Whipple Observatory (FLWO), Muñoz et al. (2022) recently presented new light curves for the four brightest images in SDSS J1004+4112, resulting in updated time delay values of Δt_AB = −43.01 ± 0.27 days, Δt_AC = −825.23 ± 0.46 days, and Δt_AD = 1633.23 ± 0.97 days (Table 4).
Figure 1. Hubble Space Telescope imaging of the three cluster-lensed quasars used to derive H0. We computed the lens models for SDSS J1004+4112 and SDSS J1029+2623. SDSS J2222+2745 is reproduced from Sharon et al. (2017). The positions of the quasar images are denoted with the cyan letters.
The critical curves, the loci of maximum magnification at a specified source redshift, are generated at the quasar redshifts (z = 1.734, z = 2.1992, and z = 2.805 for SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745, respectively) and are plotted in red.

5. SDSS J1029+2623

SDSS J1029+2623 is a cluster at z = 0.588 that is strongly lensing a quasar at z = 2.1992 into three images (Inada et al. 2006; Oguri et al. 2008). The quasar images are in a naked cusp configuration with a maximum image separation of 22.″5 (Table 1).
Acebron et al. (2022) reported spectroscopic redshifts of several galaxies in the field, based on Multi Unit Spectroscopic Explorer (MUSE) spectroscopy from the Very Large Telescope. They refined the redshift measurement of the quasar to z = 2.1992 (formerly reported as z = 2.197; Inada et al. 2006). The other spectroscopically confirmed objects from MUSE include a doubly-imaged galaxy at z = 2.1812, a septuply-imaged galaxy at z = 3.0275, a quadruply-imaged galaxy at z = 3.0278, a doubly-imaged galaxy at z = 1.0232, and a quadruply-imaged galaxy at z = 5.0622 (Acebron et al. 2022) (Table 2).
We used archival HST multi-color imaging from GO-12195 (PI: Oguri): WFC3/F160W (2 orbits), ACS/F814W (3 orbits), and ACS/F475W (2 orbits). These data were originally proposed to identify multiply-imaged galaxies to construct a mass model that could be used to better understand the anomalous flux ratios between two of the quasar images and the dynamical state of the cluster (Oguri et al. 2013). These HST data also enabled a weak lensing analysis and a morphology study of the quasar host galaxy (Oguri et al. 2013).
Our lens model, which builds on the results from Acebron et al. (2022) and Oguri et al. (2013), contains 48 constraints and 33 free parameters. All of the model constraints are taken from Acebron et al. (2022). The model includes two cluster-scale dark matter halos that were allowed to vary in position around the two BCGs, as well as two galaxy-scale halos that were fixed to the BCGs' positions. Additionally, a foreground galaxy (z = 0.5111 from MUSE) and a background galaxy (z = 0.6735 from MUSE) along the line of sight are both modeled at the cluster redshift, since Lenstool does not yet implement a multi-plane lensing framework. This approach improves the accuracy of the lensing analysis outputs compared to omitting these interlopers from the model (Raney et al. 2020).
Our lens model differs from Acebron et al. (2022) in the following ways. Whereas Acebron et al. (2022) include a model (Model 1) with an external shear component, we opted not to include this component, as its physical effect on the measured time delay is not well understood. Additionally, for consistency with the other clusters modeled in this paper, our galaxy-scale halos have ellipticities, whereas Acebron et al. (2022) use spherical halos. We constructed our galaxy catalog as described in Section 3, taking into account the MUSE spectroscopy to determine the red sequence (see Sharon et al. 2022). We used ACS F814W vs. F475W photometry for the selection. We identified the red sequence by fitting a line to the spectroscopic members in this phase space, with four iterations of sigma clipping.
We found that the source-plane minimization did a better job at predicting the quasar image positions in this cluster than the image-plane minimization, possibly due to the close proximity of quasar images B and C.
Once a best-fit model was obtained, we examined the posterior distribution of image predictions and rejected from the MCMC sampling steps that did not produce this lensing configuration, i.e., steps not producing two separate images for A and B on either side of the critical curve. Since these two images lie very close to the critical curve, some parameter combinations produce solutions in which these two images merge and only image A of the quasar remains, in contrast to the observed lensing evidence.
The HST data and the lens model for SDSS J1029+2623 are shown in Figure 1. The strong lensing mass model parameters are reported in Table 5.
Fohlmeister et al. (2013) published the time delay measurement between images A and B (Δt_AB = 744 ± 10 days) based on a photometric monitoring campaign at the FLWO 1.2m.

6. SDSS J2222+2745

SDSS J2222+2745, discovered by Dahle et al. (2013), is a cluster at z = 0.49 that strongly lenses a quasar at z = 2.805. The quasar is imaged six times (Sharon et al. 2017) with a maximum image separation of 15.″1 (Table 1).
Spectroscopy of other lensed galaxies was obtained by the Gemini North Telescope. These data include triply-imaged and doubly-imaged knots from a galaxy at z = 4.562 and a doubly-imaged galaxy at z = 2.2963 (Sharon et al. 2017).
We used archival HST multi-color imaging from GO-13337 (PI: Sharon): WFC3/F160W, F110W (1 orbit) and ACS/F814W, F606W, F435W (6 orbits). These data were originally proposed to detect any additional quasar images and to compute a mass model (Sharon et al. 2017). Additionally, these HST data have enabled a spatially resolved study of the Lyman-alpha emission in the quasar host galaxy (Bayliss et al. 2017).
We adopted the lens model from Sharon et al. (2017) with 32 constraints and 31 free parameters. SDSS J2222+2745 is modeled with one cluster-scale halo and 167 galaxy-scale halos. Sharon et al. (2017) included as constraints triply-imaged and doubly-imaged knots at the quasar's redshift of z = 2.805, and triply-imaged and doubly-imaged knots from a galaxy at z = 4.562. Two separate triply-imaged galaxies had their redshifts left as free parameters, with priors of 2.0 ≤ z ≤ 4.0 and 3.8 ≤ z ≤ 5.0, respectively, based on photometric redshift analysis. The HST data and the lens model for SDSS J2222+2745 are shown in Figure 1. Table 5 of Sharon et al. (2017) lists the strong lensing mass model parameters.
Dahle et al. (2015) first published the time delay measurements between images A and B (Δt_AB = 47.7 ± 6.0 days) and A and C (Δt_AC = −722 ± 24 days). Then Dyrland (2019) reported updated values for the time delays between images A and B (Δt_AB = 42.44 +1.36/−1.44 days) and images A and C (Δt_AC = −696.65 +2.00/−2.10 days). These measurements were based on data from a monitoring campaign at the 2.5m Nordic Optical Telescope.
In the analysis that follows, we used the most up-to-date time delay values for SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745, which are listed in Table 4.
Figure 2. Constraints on H0 from three cluster-lensed quasars, SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745. The histograms are created from 100 random models sampled from the MCMC. Overplotted are Gaussian fits to the distributions. Whereas individual time delay measurements produce H0 values with an average of 32% error, the error is decreased to 8.8% when the systems are analyzed together.
The inverse-variance weighted mean of H0 is 71.5 km s−1 Mpc−1 (solid gray line) and the standard error of the weighted mean is 6.1 km s−1 Mpc−1.

7. RESULTS

Using the outputs of the lens models described in the previous sections, we computed the model-predicted time delay values for each of the quasar images in each cluster field with respect to image A of the quasar (Equation 3 and Table 6).
The time delay is a sensitive function of the positions of the source (\vec{\beta}) and its multiple images (\vec{\theta}_1, \vec{\theta}_2). The unobservable source position and the locations of its multiple images are strongly coupled to the time delay, since stationary points in the arrival time surface determine the image-plane positions of multiple images of any given source-plane position (see Section 2). It is therefore important to measure time delays self-consistently, by obtaining the time delay at the image positions predicted by the same lensing potential. Lens models are never perfect, and small scatter between observed and predicted positions is expected. To maintain this self-consistency, we calculated the source position \vec{\beta} by ray-tracing the observed position of image A (\vec{\theta}_A) through the lens equation, and used the same lens model to predict the image-plane positions of its counter images (\vec{\theta}_2, \vec{\theta}_3, ...). The time delay was then calculated from these predicted positions, rather than the observed positions, which may be slightly shifted from the stationary points in the Fermat potential. The scatter in the image or source plane contributes to the error budget through the MCMC exploration of the parameter space. An alternative approach to determining the source position would be averaging the predicted source locations from all the quasar images, and calculating the predicted image locations of the average source.
Using Equation 4, we computed the H0 value corresponding to each independent published time delay value and the corresponding predicted time delays. To generate the 1σ uncertainties in H0, we used 100 random models from the MCMC sampling of the parameter space for each cluster.
The number of measured time delays in each field determines the number of H0 measurements derived from each cluster: three from SDSS J1004+4112, one from SDSS J1029+2623, and two from SDSS J2222+2745, for a total of six H0 measurements. Table 7 lists the derived H0 values and uncertainties, obtained for the 'best' lens model, i.e., the one producing the smallest scatter, and for the full posterior distribution.
The resulting H0 measurement from each quasar pair has large uncertainties due to the complexity of the lens and systematic uncertainties in the lens modeling process. However, given that all three of these systems reside in the same universe, they all must have the same H0; we can leverage these three independent lines of sight, with six time delays, to obtain a tighter constraint than what is possible from a single time delay. We combine the results from the six time delays by taking the inverse-variance weighted mean of the six H0 measurements, sampled from their posterior distributions, making sure to account for the correlation between measurements made in the same line of sight.
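For reference, the combination step amounts to the following minimal sketch (not the paper's code), using the per-pair means and 1σ widths listed in Table 7. It lands close to the quoted 71.5 ± 6.1 km s−1 Mpc−1; it does not match exactly because the paper samples the full posteriors and accounts for correlated lines of sight, which this sketch ignores.

```python
# Minimal sketch (not the paper's code) of the inverse-variance weighted mean
# and its standard error, applied to the six H0 values of Table 7 (mean, 1-sigma).
# Note: the paper additionally accounts for correlations between measurements
# that share a line of sight, which this simple sketch ignores.
import numpy as np

h0  = np.array([56.4, 55.8, 69.3, 93.6, 109.0, 74.8])   # km/s/Mpc (Table 7)
sig = np.array([35.0, 17.9,  8.2, 37.8,  24.1, 15.8])   # 1-sigma uncertainties

w = 1.0 / sig**2                       # inverse-variance weights
h0_mean = np.sum(w * h0) / np.sum(w)   # weighted mean
h0_err  = np.sqrt(1.0 / np.sum(w))     # standard error of the weighted mean

print(f"H0 = {h0_mean:.1f} +/- {h0_err:.1f} km/s/Mpc")
```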
We note that the observational time delay measurement uncertainties are negligible compared to the lens modeling uncertainties. The inverse-variance weighted mean and the standard error of the weighted mean of H0 is 71.5 ± 6.1 km s−1 Mpc−1 (Fig. 2). Combining the H0 values derived from multiple time delay values improves the constraints on H0, decreasing the uncertainty from ∼32% for an individual H0 measurement to 8.8% for the sample. If SDSS J1004+4112's quasar image C is excluded from the analysis (see Section 4), we obtain H0 = 73.7 ± 7.5 km s−1 Mpc−1.

8. DISCUSSION

Our analysis provides an independent H0 measurement that is not sensitive to the same systematics as other methods. Albeit with a larger fractional uncertainty, our H0 measurement (71.5 ± 6.1 km s−1 Mpc−1) falls between the lower H0 values from the CMB (67.4 ± 0.5 km s−1 Mpc−1; Planck Collaboration et al. 2020) and TRGB (69.8 ± 0.6 (stat) ± 1.6 (sys) km s−1 Mpc−1; Freedman 2021) and the higher H0 value from Cepheids (73.2 ± 1.3 km s−1 Mpc−1; Riess et al. 2021), and is consistent with all three.
Increasing the number of systems used for a combined time-delay measurement of H0 will improve this method's competitiveness with CMB-based and distance ladder-based methods. Although three other cluster-lensed quasars are published in the literature, none has all the necessary time delay measurements, space-based imaging resolution, and spectroscopic redshifts of secondary arcs for a measurement of H0. All three of the other published cluster-lensed quasars have ongoing photometric monitoring campaigns to measure their time delays. Additionally, one of the other three systems, COOL J0542-2125 (Martinez et al. 2022), will be observed by HST in Cycle 30 (GO-17243; PI: Napier).
To estimate the improvement in the H0 constraint from a sample of twice as many time delay measurements, we simulated H0 distributions guided by the existing sample, as follows (a minimal numerical sketch of this procedure is given below). We randomly selected six integer H0 values between 50 and 150, as this is the range spanned by the peaks of the six H0 distributions from SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745. We then randomly assigned to these six H0 values the standard deviation of one of the six H0 distributions (Table 7), and produced the corresponding Gaussian distributions. We repeated this simulation process 100 times. Incorporating these new six H0 distributions for a total of 12 constraints, and averaging the 100 iterations, gave a standard error of the weighted mean of 4.5 km s−1 Mpc−1. Therefore, doubling the number of systems results in a ∼30% improvement in the constraint on H0, reducing the uncertainty on H0 from 8.8% to 6.3%.
A 1% uncertainty measurement of H0 from cluster-lensed quasars would be competitive with the current precision level of CMB and distance ladder methods. Extending the simulation described above to a larger number of systems, we estimated that ∼500 time delay measurements from cluster-lensed quasars would achieve a 1% uncertainty level on H0. Based on SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745 each having an average of two time delay measurements, a sample size of 250 cluster-lensed quasars would be needed to produce 500 time delay measurements. Future surveys are expected to detect of order ∼50 such systems in the next decade (Robertson et al. 2020).
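The sketch below is our own reconstruction of the extrapolation just described, not the authors' code. Because the standard error of an inverse-variance weighted mean depends only on the individual uncertainties, the sketch omits the randomly drawn peak values and simply resamples the six measured widths from Table 7; the 100-delay case is our own extension to illustrate why sample size alone falls short of the ∼0.7 km s−1 Mpc−1 (1% of H0) target.

```python
# Minimal sketch (our reconstruction, not the authors' code) of the sample-size
# extrapolation: resample the widths of the six measured H0 distributions to
# represent hypothetical new time delay measurements, then recompute the
# standard error of the inverse-variance weighted mean.
import numpy as np

rng = np.random.default_rng(0)                                    # arbitrary seed
measured_sigmas = np.array([35.0, 17.9, 8.2, 37.8, 24.1, 15.8])   # Table 7 widths

def weighted_mean_error(sigmas):
    """Standard error of an inverse-variance weighted mean."""
    return np.sqrt(1.0 / np.sum(1.0 / sigmas**2))

def simulate(n_new, n_iter=100):
    """Average weighted-mean error after adding n_new simulated time delays."""
    errors = []
    for _ in range(n_iter):
        new_sigmas = rng.choice(measured_sigmas, size=n_new, replace=True)
        all_sigmas = np.concatenate([measured_sigmas, new_sigmas])
        errors.append(weighted_mean_error(all_sigmas))
    return np.mean(errors)

print("12 time delays :", round(simulate(6), 1), "km/s/Mpc")    # ~4.5 in the text
print("100 time delays:", round(simulate(94), 1), "km/s/Mpc")   # still above ~0.7
```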
Therefore, this increase in sample size alone will not achieve 1% uncertainty in H0; to reach 1% with of order 50 systems (100 time delays) will require a decrease in the lens modeling uncertainties by about a factor of two, on average. Future work will explore whether this decrease in the uncertainties is feasible.

ACKNOWLEDGMENTS

Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Multimission Archive (MAST) at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS 5-26555. These archival observations are associated with programs GO-10509, GO-9744, GO-10793, GO-12195, and GO-13337. Support for HST program AR-16150, which enabled this work, was provided through grants from the STScI under NASA contract NAS5-26555. Co-author GM acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. MARACHAS - DLV-896778. We thank Ana Acebron for her useful discussions about SDSS J1029+2623.

Facilities: HST(ACS); HST(WFC3); HST(MAST)
Software: Lenstool (Jullo et al. 2007); Source Extractor (Bertin & Arnouts 1996)

REFERENCES

Acebron, A., Grillo, C., Bergamini, P., et al. 2022, ApJ, 926, 86, doi: 10.3847/1538-4357/ac3d35
Ali, S. S., Bremer, M. N., Phillipps, S., & De Propris, R. 2018, MNRAS, 480, 2236, doi: 10.1093/mnras/sty1988
Bayliss, M. B., Sharon, K., Acharyya, A., et al. 2017, ApJL, 845, L14, doi: 10.3847/2041-8213/aa831a
Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393, doi: 10.1051/aas:1996164
Birrer, S., Millon, M., Sluse, D., et al. 2022, arXiv e-prints, arXiv:2210.10833, doi: 10.48550/arXiv.2210.10833
Birrer, S., & Treu, T. 2021, A&A, 649, A61, doi: 10.1051/0004-6361/202039179
Birrer, S., Shajib, A. J., Galan, A., et al. 2020, A&A, 643, A165, doi: 10.1051/0004-6361/202038861
Dahle, H., Gladders, M. D., Sharon, K., Bayliss, M. B., & Rigby, J. R. 2015, ApJ, 813, 67, doi: 10.1088/0004-637X/813/1/67
Dahle, H., Gladders, M. D., Sharon, K., et al. 2013, ApJ, 773, 146, doi: 10.1088/0004-637X/773/2/146
Dyrland, K. 2019, Master's thesis, University of Oslo
Elíasdóttir, Á., Limousin, M., Richard, J., et al. 2007, arXiv e-prints, arXiv:0710.5636. https://arxiv.org/abs/0710.5636
Fohlmeister, J., Kochanek, C. S., Falco, E. E., Morgan, C. W., & Wambsganss, J. 2008, ApJ, 676, 761, doi: 10.1086/528789
Fohlmeister, J., Kochanek, C. S., Falco, E. E., et al. 2013, ApJ, 764, 186, doi: 10.1088/0004-637X/764/2/186
—. 2007, ApJ, 662, 62, doi: 10.1086/518018
Forés-Toribio, R., Muñoz, J. A., Kochanek, C. S., & Mediavilla, E. 2022, ApJ, 937, 35, doi: 10.3847/1538-4357/ac8c40
Freedman, W. L. 2021, ApJ, 919, 16, doi: 10.3847/1538-4357/ac0e95
Gladders, M. D., & Yee, H. K. C. 2000, AJ, 120, 2148, doi: 10.1086/301557
Hubble, E. 1929, Proceedings of the National Academy of Science, 15, 168, doi: 10.1073/pnas.15.3.168
Inada, N., Oguri, M., Pindor, B., et al. 2003, Nature, 426, 810, doi: 10.1038/nature02153
Inada, N., Oguri, M., Keeton, C. R., et al. 2005, PASJ, 57, L7, doi: 10.1093/pasj/57.3.L7
Inada, N., Oguri, M., Morokuma, T., et al. 2006, ApJL, 653, L97, doi: 10.1086/510671
Johnson, T. L., & Sharon, K. 2016, ApJ, 832, 82, doi: 10.3847/0004-637X/832/1/82
Jullo, E., Kneib, J.-P., Limousin, M., et al. 2007, New Journal of Physics, 9, 447, doi: 10.1088/1367-2630/9/12/447
Klesman, A. J., & Sarajedini, V. L. 2012, MNRAS, 425, 1215, doi: 10.1111/j.1365-2966.2012.21508.x
Kochanek, C. S. 2020, MNRAS, 493, 1725, doi: 10.1093/mnras/staa344
—. 2021, MNRAS, 501, 5021, doi: 10.1093/mnras/staa4033
Lemaître, G. 1927, Annales de la Société Scientifique de Bruxelles, 47, 49
Lemon, C., Anguita, T., Auger, M., et al. 2022, arXiv e-prints, arXiv:2206.07714. https://arxiv.org/abs/2206.07714
Lemon, C. A., Auger, M. W., & McMahon, R. G. 2019, MNRAS, 483, 4242, doi: 10.1093/mnras/sty3366
Limousin, M., Kneib, J.-P., & Natarajan, P. 2005, MNRAS, 356, 309, doi: 10.1111/j.1365-2966.2004.08449.x
Martinez, M. N., Napier, K. A., Cloonan, A. P., et al. 2022, arXiv e-prints, arXiv:2209.03972. https://arxiv.org/abs/2209.03972
Moresco, M., Amati, L., Amendola, L., et al. 2022, arXiv e-prints, arXiv:2201.07241. https://arxiv.org/abs/2201.07241
Muñoz, J. A., Kochanek, C. S., Fohlmeister, J., et al. 2022, arXiv e-prints, arXiv:2206.08597. https://arxiv.org/abs/2206.08597
Oguri, M. 2010, PASJ, 62, 1017, doi: 10.1093/pasj/62.4.1017
Oguri, M., Ofek, E. O., Inada, N., et al. 2008, ApJL, 676, L1, doi: 10.1086/586897
Oguri, M., Schrabback, T., Jullo, E., et al. 2013, MNRAS, 429, 482, doi: 10.1093/mnras/sts351
Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, A&A, 641, A6, doi: 10.1051/0004-6361/201833910
Raney, C. A., Keeton, C. R., & Brennan, S. 2020, MNRAS, 492, 503, doi: 10.1093/mnras/stz3116
Refsdal, S. 1964, MNRAS, 128, 307, doi: 10.1093/mnras/128.4.307
Riess, A. G., Casertano, S., Yuan, W., et al. 2021, ApJL, 908, L6, doi: 10.3847/2041-8213/abdbaf
Robertson, A., Smith, G. P., Massey, R., et al. 2020, MNRAS, 495, 3727, doi: 10.1093/mnras/staa1429
Ross, N. R., Assef, R. J., Kochanek, C. S., Falco, E., & Poindexter, S. D. 2009, ApJ, 702, 472, doi: 10.1088/0004-637X/702/1/472
Schneider, P. 1985, A&A, 143, 413
Sharon, K. 2008, PhD thesis, Tel Aviv University, Israel
Sharon, K., Chen, M. C., Mahler, G., Coe, D., & the RELICS Collaboration. 2022, arXiv e-prints, arXiv:2208.08483. https://arxiv.org/abs/2208.08483
Sharon, K., Ofek, E. O., Smith, G. P., et al. 2005, ApJL, 629, L73, doi: 10.1086/452633
Sharon, K., Gal-Yam, A., Maoz, D., et al. 2010, ApJ, 718, 876, doi: 10.1088/0004-637X/718/2/876
Sharon, K., Bayliss, M. B., Dahle, H., et al. 2017, ApJ, 835, 5, doi: 10.3847/1538-4357/835/1/5
Shu, Y., Koposov, S. E., Evans, N. W., et al. 2019, MNRAS, 489, 4741, doi: 10.1093/mnras/stz2487
Shu, Y., Marques-Chaves, R., Evans, N. W., & Pérez-Fournon, I. 2018, MNRAS, 481, L136, doi: 10.1093/mnrasl/sly174
Wong, K. C., Suyu, S. H., Chen, G. C. F., et al. 2020, MNRAS, 498, 1420, doi: 10.1093/mnras/stz3094
Target
QSO Image
QSO z
RA [J2000]
Decl.
[J2000] +µ +SDSS J1004+4112 +A +1.734 +151.1450074 +41.2109193 +26.0±5.4 +B +1.734 +151.1454888 +41.2119003 +9.2±1.0 +C +1.734 +151.1409266 +41.2096668 +6.0±0.5 +D +1.734 +151.1419060 +41.2136092 +9.2±1.9 +E +1.734 +151.1423413 +41.2122017 +0.3±0.05 +SDSS J1029+2623 +A +2.1992 +157.3081009 +26.3883044 +6.1±0.4 +B +2.1992 +157.3093619 +26.39446237 +24.7±4.2 +C +2.1992 +157.3095755 +26.3939894 +3.7±8.0 +SDSS J2222+2745 +A +2.805 +335.537707 +27.760543 +15.4±5.7 +B +2.805 +335.53669 +27.761119 +8.0±4.3 +C +2.805 +335.53296 +27.760505 +7.1±2.3 +D +2.805 +335.536205 +27.758901 +1.3±0.4 +E +2.805 +335.536007 +27.758248 +0.8±0.2 +F +2.805 +335.535874 +27.759723 +1.0±0.4 +Table 1. The quasar image positions and redshifts. Also included are the magnifications at the observed positions of the quasar +images. +System +ID +R.A. [J2000] +Decl. [J2000] +z +SDSS J1004+4112 +QSO-A +151.1450074 +41.2109193 +1.734 +QSO-B +151.1454888 +41.2119003 +1.734 +QSO-C +151.1409266 +41.2096668 +1.734 +QSO-D +151.1419060 +41.2136092 +1.734 +QSO-E +151.1423413 +41.2122017 +1.734 +2.1 +151.1418821 +41.2102917 +2.74 +2.2 +151.1468800 +41.2153908 +2.74 +21.1 +151.1417325 +41.2103272 +2.74 +21.2 +151.1470383 +41.2153011 +2.74 +21.3 +151.1419526 +41.2116044 +2.74 +22.1 +151.1416225 +41.2103033 +2.74 +22.2 +151.1471250 +41.2152436 +2.74 +3.1 +151.1414121 +41.2099250 +3.288 +3.2 +151.1476847 +41.2152121 +3.288 +31.1 +151.1413250 +41.2099825 +3.288 +31.2 +151.1477393 +41.2151976 +3.288 +32.1 +151.1412104 +41.2100544 +3.288 +32.2 +151.1478065 +41.2151979 +3.288 +33.1 +151.1411279 +41.2101547 +3.288 +33.2 +151.1478809 +41.2151884 +3.288 +33.3 +151.1418864 +41.2116948 +3.288 +4.1 +151.1439081 +41.2165866 +3.332 +4.2 +151.1382517 +41.2153846 +3.332 +4.3 +151.1379048 +41.2149959 +3.332 +4.4 +151.1434099 +41.2103752 +3.332 + +12 +Napier et al. +41.1 +151.1441118 +41.2165193 +3.332 +41.2 +151.1383309 +41.2153297 +3.332 +41.3 +151.1378932 +41.2148820 +3.332 +41.4 +151.1434562 +41.2102573 +3.332 +42.1 +151.1444522 +41.2163884 +3.332 +42.2 +151.1383940 +41.2153469 +3.332 +42.3 +151.1378407 +41.2148091 +3.332 +42.4 +151.1434818 +41.2101761 +3.332 +43.1 +151.1445319 +41.2162919 +3.332 +43.2 +151.1384506 +41.2154232 +3.332 +43.3 +151.1376594 +41.2145747 +3.332 +43.4 +151.1435603 +41.2101349 +3.332 +43.5 +151.1424833 +41.2118271 +3.332 +SDSS J1029+2623 +QSO-A +157.3081009 +26.38830445 +2.1992 +QSO-B +157.3093619 +26.39446237 +2.1992 +QSO-C +157.3095755 +26.3939894 +2.1992 +1.1 +157.2980611 +26.3907404 +· · · +1.2 +157.2978817 +26.3924467 +· · · +1.3 +157.3008758 +26.3974054 +· · · +2.1 +157.2981743 +26.3915325 +2.1812 +2.3 +157.3014749 +26.3977063 +2.1812 +3.1 +157.2990642 +26.3923892 +3.0275 +3.2 +157.3074114 +26.3913469 +3.0275 +3.3 +157.3041512 +26.3982630 +3.0275 +3.4 +157.3015481 +26.3880193 +3.0275 +3.5 +157.3017377 +26.3879213 +3.0275 +3.6 +157.3018385 +26.3878900 +3.0275 +3.7 +157.3032208 +26.3919632 +3.0275 +4.1 +157.2992278 +26.3925219 +3.0278 +4.2 +157.3076382 +26.3913247 +3.0278 +4.3 +157.3043869 +26.3981437 +3.0278 +4.4 +157.3023985 +26.3877048 +3.0278 +4.5 +157.3035100 +26.3920169 +3.0278 +5.1 +157.3019777 +26.3946563 +1.0232 +5.3 +157.3008781 +26.3917377 +1.0232 +7.1 +157.3075794 +26.3951262 +5.0622 +7.2 +157.3064130 +26.3960500 +5.0622 +7.3 +157.3014210 +26.3936610 +5.0622 +7.4 +157.3012420 +26.3938020 +5.0622 +Table 2. Positions and spectroscopic redshifts of the multiply-imaged +background sources used as constraints in the strong lens models for +SDSS J1004+4112 and SDSS J1029+2623. See Table 1 from Sharon et al. 
+(2017) for the lensing constraints for SDSS J2222+2745. + +H0 from Cluster-Lensed Quasars +13 +Component No. +∆ R.A. [′′] +∆ Decl. [′′] +e +θ [deg] +σ0 [km s−1] +rcut [kpc] +rcore [kpc] +1 +-0.085+2.56 +−0.53 +3.07+5.83 +−1.30 +0.17+0.022 +−0.030 +66.39+3.70 +−3.22 +987+245 +−84 +[1500] +126.27+112.43 +−33.97 +2 +[0] +[0] +[0.40] +63.98+4.34 +−5.31 +461+48 +−52 +181.42+13.77 +−28.04 +5.65+0.99 +−1.62 +3 +[1.963] +[-1.832] +0.42+0.25 +−0.19 +[349.480] +235+10 +−14 +30.30+7.045 +−12.29 +2.68+0.99 +−0.68 +4 +[7.659] +[-9.821] +0.43+0.22 +−0.29 +[131.13] +127+33 +−29 +20.13+6.64 +−8.33 +1.62+1.48 +−1.06 +5 +[-8.463] +[-3.877] +0.44+0.24 +−0.27 +[133.89] +114+31 +−28 +13.28+2.97 +−2.97 +2.26+0.92 +−1.20 +6 +[11.220] +[11.401] +0.42+0.29 +−0.29 +150.24+22.22 +−34.44 +76+9 +−7 +22.465.79 +−6.85 +3.18+0.85 +−0.85 +Table 3. Strong lensing mass model parameters for SDSS J1004+4112. Median values and the 1σ confidence level from the +MCMC are reported. The coordinates ∆ R.A. and ∆ Decl. are listed in arcseconds measured east and north from the core +of Component No. 2 at [RA, Dec] = [151.142381, 41.212131]. The other parameters are the ellipticity e, the position angle +θ, the velocity dispersion σ0, the cut radius rcut, and the core radius rcore. The parameters listed in square brackets were not +optimized. +Target Name +z clus- +ter +z QSO +no. +QSO +im +widest +sepa- +ration +[′′] +no. +of +lensed +sources +no. +of +spec- +zs +time delay (days) +Reference +SDSS J1004+4112 +0.68 +1.734 +5 +14.6 +4 +4 +∆tAB = −43.01 ± 0.27 +Mu˜noz+(2022) +∆tAC = −825.23 ± 0.46 +∆tAD = 1633.23 ± 0.97 +SDSS J1029+2623 +0.58 +2.1992 +3 +22.5 +7 +6 +∆tAB = 744 ± 10 +Fohlmeister+(2013) +SDSS J2222+2745 +0.49 +2.805 +6 +15.1 +5 +3 +∆tAB = 42.44+1.36 +−1.44 +Dyrland (2019) +∆tAC = −696.65+2.00 +−2.10 +Table 4. The three large separation lensed QSOs in the HST archive. The listed time delays are the most up-to-date values +from the literature. See Fohlmeister et al. (2008) and Dahle et al. (2015) for previous measurements for SDSS J1004+4112 and +SDSS J2222+2745, respectively. +Component No. +∆ R.A. [′′] +∆ Decl. [′′] +e +θ [deg] +σ0 [km s−1] +rcut [kpc] +rcore [kpc] +1 +-10.01+0.53 +−0.62 +0.71+0.25 +−0.23 +0.53+0.031 +−0.034 +172.80+2.24 +−2.27 +650+21 +−20 +[1500] +31.39+4.37 +−3.78 +2 +3.04+1.16 +−1.38 +3.62+0.46 +−0.58 +0.55+0.052 +−0.055 +17.25+4.87 +−5.10 +528+30 +−20 +[1500] +37.95+6.42 +−6.62 +3 +2.48+1.35 +−1.25 +-0.11+1.83 +−2.35 +0.61+0.10 +−0.062 +45.57+7.24 +−9.24 +385+43 +−52 +[1500] +57.82+9.47 +−11.86 +4 +[-3.808] +[-1.354] +0.51+0.19 +−0.21 +69.07+19.26 +−15.61 +202+20 +−19 +33.64+7.88 +−6.82 +1.92+0.52 +−0.86 +5 +[-19.7] +[-8.8] +[0.0] +[0.0] +169+30 +−24 +89.94+19.27 +−19.47 +[0.0] +6 +-23.87+0.13 +−0.11 +6.50+0.14 +−0.12 +0.30+0.29 +−0.20 +52.06+26.58 +−38.88 +64+7 +−5 +32.65+11.13 +−16.82 +0.51+0.30 +−0.31 +Table 5. Strong lensing mass model parameters for SDSS J1029+2623. Median values and the 1σ confidence level from the +MCMC are reported. The coordinates ∆ R.A. and ∆ Decl. are listed in arcseconds measured east and north from [RA, Dec] += [157.302047, 26.392209]. The other parameters are the ellipticity e, the position angle θ, the velocity dispersion σ0, the cut +radius rcut, and the core radius rcore. The parameters listed in square brackets were not optimized. +System +∆tAB +∆tAC +∆tAD +∆tAE +∆tAF +SDSS J1004+4112 +-11 +-783 +1294 +1776 +N/A +SDSS J1029+2623 +1060 +1054 +N/A +N/A +N/A +SDSS J2222+2745 +54 +-693 +485 +564 +431 +Table 6. 
Predicted time delay (in days) from the 'best' lens model for each cluster. The values are measured at the model-predicted locations of the quasar images, assuming H0 = 70 km s−1 Mpc−1.

System           | Image pair | H0 (km s−1 Mpc−1) from best model | H0 (km s−1 Mpc−1) mean ± 1σ
SDSS J1004+4112  | AB         | 17.4                              | 56.4 ± 35.0
SDSS J1004+4112  | AC         | 66.4                              | 55.8 ± 17.9
SDSS J1004+4112  | AD         | 55.5                              | 69.3 ± 8.2
SDSS J1029+2623  | AB         | 99.7                              | 93.6 ± 37.8
SDSS J2222+2745  | AB         | 89.1                              | 109.0 ± 24.1
SDSS J2222+2745  | AC         | 69.6                              | 74.8 ± 15.8

Table 7. H0 constraints from the time delay measurements in SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745. The middle column is the H0 value from the 'best' lens model for each cluster. The right column lists the mean and 1σ from the Gaussian distribution fit to the H0 values determined from 100 random models drawn from the MCMC.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Box 1029,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Blindern,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' NO-0315 Oslo,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Norway 3Department of Physics,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' University of Cincinnati,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Cincinnati,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' OH 45221,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' USA 4Department of Astronomy and Astrophysics,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' University of Chicago,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' 5640 South Ellis Avenue,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Chicago,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' IL 60637,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' USA 5Centre for Extragalactic Astronomy,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Durham University,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' South Road,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Durham DH1 3LE,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' UK 6 Institute for Computational Cosmology,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Durham University,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' South Road,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Durham DH1 3LE,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' UK 7Observational Cosmology Lab,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Code 665,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' NASA Goddard Space 
Flight Center,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Greenbelt,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' MD 20771,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' USA 8Steward Observatory,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' University of Arizona,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' 933 North Cherry Ave.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=', Tucson, AZ 85721, USA (Received ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Revised ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Accepted ) Submitted to ApJ ABSTRACT Tension between cosmic microwave background-based and distance ladder-based determinations of the Hubble constant H0 motivates pursuit of independent methods that are not subject to the same systematic effects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' A promising alternative, proposed by Refsdal in 1964, relies on the inverse scaling of H0 with the delay between the arrival times of at least two images of a strongly-lensed variable source such as a quasar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' To date, Refsdal’s method has mostly been applied to quasars lensed by individual galaxies rather than by galaxy clusters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Using the three quasars strongly lensed by galaxy clus- ters (SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745) that have both multiband Hubble Space Telescope data and published time delay measurements, we derive H0, accounting for the sys- tematic and statistical sources of uncertainty.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' While a single time delay measurement does not yield a well-constrained H0 value, analyzing the systems together tightens the constraint.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Combining the six time delays measured in the three cluster-lensed quasars gives H0 = 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='5 ± 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='1 km s−1 Mpc−1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' To reach 1% uncertainty in H0, we estimate that a sample size of order of 500 time delay measurements of similar quality as those from SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745 would be needed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Improving the lens modeling uncertainties by a factor of two may reduce the needed sample size to 120 time delays, potentially reachable in the next decade.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Keywords: galaxy clusters;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' quasars;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' time delay;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Hubble constant 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' INTRODUCTION The Hubble parameter H0, which describes the cur- rent expansion rate of the Universe, has been sought since the discovery in the 1920s that the Universe is expanding (Lemaˆıtre 1927;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Hubble 1929).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' At the turn of the last century, measurements of H0 started con- verging around H0 = 70 km s−1 Mpc−1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' However, as H0 measurements have become increasingly precise, the Corresponding author: Kate Napier kanapier@umich.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='edu so-called ‘Hubble Tension’ has arisen between the esti- mates from early- and late-Universe probes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The Planck Collaboration reported H0 = 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='4 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='5 km s−1 Mpc−1 (Planck Collaboration et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' 2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' They used density fluctuations encoded in the Cosmic Microwave Back- ground (CMB) at the surface of last scattering to deter- mine H at that epoch, then used a spatially flat cosmo- logical model to extrapolate to H0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' By contrast, the “Su- pernovae, H0, for the Equation of State of Dark Energy” (SH0ES) collaboration combined Gaia parallaxes and multi-band HST photometry of Milky Way Cepheids to calibrate the extragalactic distance scale and derive H0 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='11240v1 [astro-ph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='CO] 26 Jan 2023 2 Napier et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' = 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='2 ± 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='3 km s−1 Mpc−1 (Riess et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' 2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The Planck and SH0ES values, which respectively capture the early and late-time physics of the Universe, differ by 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='2σ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Freedman (2021) applied an updated Tip of the Red Giant Branch (TRGB) calibration to a distant sample of Type Ia supernovae from the Carnegie Su- pernova Project and obtained H0 = 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='8 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='6 (stat) ± 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='6 (sys) km s−1 Mpc−1, consistent with the CMB value, and within 2σ of the SH0ES value, owing to the larger uncertainties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The discrepancy between different H0 methods may indicate a deviation from the standard Λ Cold Dark Matter (ΛCDM) model, and therefore new physics, or the presence of unknown or underestimated systematics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Either way, this tension remotivates the pursuit of other H0 determination methods that are not prone to the same systematics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' An alternative H0 determination method, proposed by Refsdal (1964), uses the fact that H0 scales inversely with the delay between the arrival times of at least two images of a strongly-lensed variable source, such as a quasar or a supernova.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Due to the rarity of galaxy clus- ters lensing quasars or supernovae, the Refsdal H0 tech- nique has primarily been sought with galaxy-scale lenses (see e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=', the recent reviews by Moresco et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' 2022;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Bir- rer et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' 2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Of the >300 known lensed quasars, the vast major- ity are lensed by individual galaxies (Lemon et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' 2019, 2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Quasars lensed by individual galaxies have been used to obtain H0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' For example, the H0 Lenses in COS- MOGRAIL’s Wellspring (H0LiCOW) collaboration per- formed a joint analysis of six galaxy-lensed quasars to obtain H0 = 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='3+1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='7 −1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='8 km s−1 Mpc−1 (Wong et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' 2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' This value seems to be consistent with the Cepheid- calibrated measurement from the SH0ES collaboration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Birrer et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' (2020) found a smaller H0 value, and a larger uncertainty, H0 = 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='4+4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='1 −3.' 
The smaller H0 value was driven by the assumption that the H0 lens galaxy population is drawn from a parent population with the same statistical properties as the Sloan Lens ACS lenses. Kochanek (2020) argued that although the uncertainties of H0 values from galaxy-lensed quasars are typically reported as 4–8% for individual gravitational lenses, it is likely that any current estimate of H0 from time delays has an uncertainty of at least 10%. As discussed in Kochanek (2020, 2021), the main uncertainty with galaxy lenses is the mean surface mass density of the lens within the Einstein radius, where most lensed images are found. The distribution of baryonic matter in the lens galaxy significantly contributes to the mass. Most galaxy-scale lenses are early-type galaxies, and local measurements show that these galaxies exhibit color gradients. Color gradients indicate spatial variation in age and metallicity, and thus produce corresponding gradients in the mass-to-light ratio of the baryonic mass. A galaxy’s evolutionary history and growth through mergers will affect these gradients in complex ways. Resolved JWST and Extremely Large Telescope observations of the stellar kinematics in the lens galaxies may significantly reduce these sources of systematic errors (Birrer & Treu 2021).

What has remained largely unexplored until now is determining H0 by using quasars that are strongly lensed by galaxy clusters. For several reasons, cluster-lensed quasars can potentially overcome some of the difficulties faced by individual galaxy lenses. First, since galaxy clusters have deeper potential wells than galaxies, cluster lenses produce longer time delays, of order months to years, compared to typically a month in galaxy lenses.
Consequently, the observationally measured time delay values will have smaller fractional uncertainty, which then will propagate to reduced uncertainty in H0 due to the inverse scaling of H0 with time delays. Second, the light curves of cluster-lensed quasars are less likely to be affected by microlensing from stars in the lens plane, because the mass distribution is dominated by dark matter at the projected radius at which the images appear. Third, galaxy cluster mass distributions are less affected by complex baryonic physics than those of galaxy lenses; the complex baryonic surface density of galaxy-scale lenses may be a significant source of systematic uncertainty. A challenge that must be contended with, however, is the complexity of cluster lenses.

Two inputs are necessary to use cluster-lensed quasars to determine H0. The first is an observational measurement of the time delay between the multiple quasar images, and the second is an accurate mapping of the projected density of the dark and luminous mass at the cluster core. High accuracy lens models require space-based resolution and spectroscopic follow-up. Of the six published cluster-lensed quasars to date (Inada et al. 2003, 2006; Dahle et al. 2013; Shu et al. 2018, 2019; Martinez et al. 2022), only three have the necessary data available to determine H0: SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745.
In this paper, we use the available archival HST data and the latest measurements of time delay and spectroscopic redshifts of background sources from the literature to obtain an independent measurement of H0 from these three systems.

This paper is organized as follows: In Section 2, we outline the theory of observational gravitational lensing time delay and its dependence on H0. In Section 3 we detail the lens modeling procedure. In Sections 4, 5, and 6, we give an overview of the three cluster-lensed quasar systems used in this H0 analysis and provide details about their HST and spectroscopic data, time delays, and lens models. In Section 7, we present our constraints on H0. We conclude in Section 8 with a discussion of our H0 result and the future prospects of the time delay H0 method. Throughout the paper, we adopt the standard ΛCDM flat cosmological model with Ωm = 0.3 and ΩΛ = 0.7.

2. TIME DELAY ANALYSIS

The Refsdal H0 method is possible due to the measurable delay between the arrival time of two or more images of a variable source such as a quasar.
Under the thin lens approximation, a packet of light that travels from the source to the observer will be delayed by time t given by the arrival time surface (Schneider 1985):

$$ t(\vec{\theta}, \vec{\beta}) = \frac{1+z_l}{c}\,\frac{d_l\,d_s}{d_{ls}}\left[\frac{1}{2}\left(\vec{\theta}-\vec{\beta}\right)^2 - \psi(\vec{\theta})\right], \quad (1) $$

where zl is the redshift of the lens; dl, ds, and dls are angular diameter distances from the observer to the lens, to the source, and between the lens and the source, respectively; ⃗θ is the image position in the image plane; ⃗β is the unobserved source position; and ψ(⃗θ) is the gravitational lensing potential. The arrival time t is a combination of the path length and the gravitational time delay (t = tgeometric + tgrav). The last term, τ(⃗θ; ⃗β) = ½(⃗θ − ⃗β)² − ψ(⃗θ), is also known as the Fermat potential. The multiple images of a strongly-lensed source appear in the stationary points of the arrival time surface, that is, in the minima, maxima, and saddle points.

H0 is incorporated in Eq. 1 because of its inverse scaling with the angular diameter distances:

$$ d_A(z_1, z_2) = \frac{1}{1+z_2}\,\frac{c}{H_0}\int_{z_1}^{z_2}\frac{dz}{E(z;\,\Omega_m,\Omega_\Lambda)}, \quad (2) $$

where E(z; Ωm, ΩΛ) is a dimensionless function given by

$$ E(z;\,\Omega_m,\Omega_\Lambda) = \sqrt{\Omega_m(1+z)^3 + \Omega_\Lambda + (1-\Omega_m-\Omega_\Lambda)(1+z)^2}. $$

The matter density and vacuum energy density parameters are Ωm and ΩΛ, respectively. Conveniently, H0 is disentangled from the other cosmological parameters in the angular diameter distance equation (Eq. 2).
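The inverse scaling with H0 in Eq. 2 is straightforward to evaluate numerically. The following minimal Python sketch (our own illustration, not part of the published analysis; the function names and the fiducial H0 = 70 km s−1 Mpc−1 are assumptions) computes E(z) and d_A(z1, z2) by direct integration for the flat cosmology adopted above.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def E(z, om=0.3, ol=0.7):
    """Dimensionless expansion rate E(z; Omega_m, Omega_Lambda)."""
    return np.sqrt(om * (1 + z)**3 + ol + (1 - om - ol) * (1 + z)**2)

def angular_diameter_distance(z1, z2, h0=70.0, om=0.3, ol=0.7):
    """Eq. 2: d_A(z1, z2) = 1/(1+z2) * (c/H0) * integral_{z1}^{z2} dz / E(z), in Mpc."""
    integral, _ = quad(lambda z: 1.0 / E(z, om, ol), z1, z2)
    return integral * (C_KM_S / h0) / (1.0 + z2)

# Example: the three distances entering Eq. 1 for a lens at z_l = 0.68
# and a source at z_s = 1.734 (the SDSS J1004+4112 configuration).
d_l = angular_diameter_distance(0.0, 0.68)
d_s = angular_diameter_distance(0.0, 1.734)
d_ls = angular_diameter_distance(0.68, 1.734)
print(f"d_l = {d_l:.0f} Mpc, d_s = {d_s:.0f} Mpc, d_ls = {d_ls:.0f} Mpc")
```

Doubling H0 simply halves every distance returned by this function, which is the scaling that the time delay method exploits.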
After substituting Eq. 2 into dl ds/dls in Eq. 1, the time delay is determined by solving Eq. 1 for two image positions corresponding to the same source position and taking the difference. The time delay between the images thus becomes:

$$ \Delta t = \left(\frac{1}{H_0}\right)\left(\frac{1+z_l}{1+z_s}\right)\left[\frac{\displaystyle\int_0^{z_l}\frac{dz}{E(z)}\,\int_0^{z_s}\frac{dz}{E(z)}}{\displaystyle\int_{z_l}^{z_s}\frac{dz}{E(z)}}\right]\times\left\{\frac{1}{2}\left[(\vec{\theta}_1-\vec{\beta})^2-(\vec{\theta}_2-\vec{\beta})^2\right]-\left[\psi(\vec{\theta}_1)-\psi(\vec{\theta}_2)\right]\right\} \quad (3) $$

The first term on the right-hand side of the time delay equation gives the Hubble parameter; the second term is a direct observable; the third term contains the dependence on cosmological parameters other than H0; and the last term is solved by the strong gravitational lens model. We neglect the higher order effects of the cosmological parameters and take the third term in Eq. 3 to be constant. The left-hand side of the equation is the measurement of the time delay, e.g., from monitoring and comparing the observed light curves of two images of the variable source. Once we compute a model of the lensing mass distribution (see Section 3), we determine the model-predicted excess arrival time surface (Eq. 3) with respect to one of the quasar images. Assuming our lens model is a correct description of the matter distribution, we then leverage the fact that the time delay scales inversely with H0. We compare the model-predicted time delays between images to the observational measurements of the time delays to obtain H0 via:

$$ H_0 = H_{0,\mathrm{model}} \times \frac{\Delta t_{\mathrm{model}}}{\Delta t_{\mathrm{measured}}}, \quad (4) $$

where H0,model is the H0 value used to generate the Fermat potential from the lensing analysis, ∆tmodel is the model-predicted time delay between the quasar images, and ∆tmeasured is the observational measurement of the time delay between the pair of quasar images.
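As a concrete illustration of how Eqs. 1–4 chain together, the sketch below (our own, not the paper's pipeline) predicts a time delay by differencing Eq. 1 between two images and then rescales H0 with Eq. 4 against a measured delay. The Fermat potential difference and the "measured" delay are made-up placeholders; in practice the former comes from the lens model and the latter from the monitoring campaigns described below.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458                       # speed of light [km/s]
MPC_KM = 3.0857e19                        # kilometres per megaparsec
C_MPC_PER_DAY = C_KM_S / MPC_KM * 86400.0 # speed of light [Mpc/day]

def E(z, om=0.3, ol=0.7):
    return np.sqrt(om * (1 + z)**3 + ol + (1 - om - ol) * (1 + z)**2)

def d_A(z1, z2, h0, om=0.3, ol=0.7):
    integral, _ = quad(lambda z: 1.0 / E(z, om, ol), z1, z2)
    return integral * (C_KM_S / h0) / (1.0 + z2)

def predicted_delay_days(z_l, z_s, h0_model, fermat_diff_rad2):
    """Difference of Eq. 1 between two images: (1+z_l)/c * d_l d_s/d_ls * delta-tau,
    with distances in Mpc and the Fermat potential difference in radians^2."""
    d_l, d_s, d_ls = d_A(0, z_l, h0_model), d_A(0, z_s, h0_model), d_A(z_l, z_s, h0_model)
    return (1.0 + z_l) * d_l * d_s / (d_ls * C_MPC_PER_DAY) * fermat_diff_rad2

# Eq. 4: rescale the fiducial H0 by the ratio of predicted to measured delay.
h0_model = 70.0
dt_model = predicted_delay_days(0.68, 1.734, h0_model, fermat_diff_rad2=2.0e-11)  # placeholder
dt_measured = 120.0  # placeholder measured delay [days] for the same image pair
print("H0 =", h0_model * dt_model / dt_measured, "km/s/Mpc")
```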
3. LENS MODELING

We computed the lens models with the publicly available software Lenstool (Jullo et al. 2007). Lenstool is a ‘parametric’ modeling algorithm which describes the lensing mass distribution as a linear combination of galaxy-scale, group-scale, and cluster-scale halos, each of which is parameterized as a pseudo-isothermal ellipsoidal mass distribution (PIEMD, also called dPIE; Elíasdóttir et al. 2007). A PIEMD halo has seven parameters whose values can either be fixed or varied: position (x, y); ellipticity e = (a² − b²)/(a² + b²), where a and b are the semi-major and semi-minor axes, respectively; position angle θ; core radius rc; truncation radius rcut; and effective velocity dispersion σ0. The parameters of the group-scale and cluster-scale halos are typically allowed to vary.
The exception is rcut for the cluster-scale halos, as this radius usually occurs outside the region where strong lensing evidence is found, and thus cannot be constrained. Lenstool uses a Markov Chain Monte Carlo (MCMC) sampling of parameter space. The best-fit model is identified as the one that minimizes the scatter between the model-predicted and observed image locations in the image plane (“image plane minimization”) or minimizes the scatter between the predicted source locations of multiple images in the source plane (“source plane minimization”). The lens models employ the strong lensing evidence of multiply-imaged galaxies (arcs), whose positions and redshifts are used as model constraints. The availability of lensing constraints strongly affects the accuracy of lens models, as they are used as local solutions of the lensing equations and constrain the projected mass density distribution at the cluster’s core. The mass distribution and magnification are sensitive to the accurate identifications and positions of multiple images and to the redshifts of the lensed galaxies. It is necessary to include a few spectroscopic redshifts in the lens model in order to avoid incorrect results (Johnson & Sharon 2016).

To select cluster-member galaxies, we followed the procedure of Gladders & Yee (2000), by selecting galaxies that fall on the cluster red sequence in a color-magnitude diagram. For SDSS J1029+2623 we also incorporated spectroscopic redshift information (see Section 5). The galaxy-scale halos’ positional parameters (x, y, e, θ) are measured with Source Extractor (Bertin & Arnouts 1996) and fixed. The rcore, rcut, and σ0 of the galaxy-scale halos are scaled to their observed luminosity following the scaling relations in Limousin et al. (2005).
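As a rough illustration of the red-sequence selection just described, the sketch below (our own simplification with hypothetical column names, not the paper's code; where spectroscopic members are available the fit would be anchored on them) fits a line in color-magnitude space with iterative sigma clipping and keeps galaxies that lie close to the final fit.

```python
import numpy as np

def red_sequence_mask(mag, color, n_iter=4, clip_sigma=3.0, keep_sigma=2.0):
    """
    Fit color = a*mag + b to candidate cluster galaxies, iteratively rejecting
    outliers (sigma clipping), then flag galaxies within keep_sigma of the fit.
    """
    mag = np.asarray(mag, float)
    color = np.asarray(color, float)
    use = np.ones(mag.size, dtype=bool)
    for _ in range(n_iter):
        a, b = np.polyfit(mag[use], color[use], 1)   # linear red-sequence fit
        resid = color - (a * mag + b)
        sigma = resid[use].std()
        use = np.abs(resid) < clip_sigma * sigma     # clip outliers and refit
    return np.abs(resid) < keep_sigma * sigma

# e.g. members = red_sequence_mask(f814w_mag, f475w_mag - f814w_mag)
```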
4. SDSS J1004+4112

SDSS J1004+4112 was the first discovered galaxy cluster strongly lensing a quasar (Inada et al. 2003). The cluster at z = 0.68 strongly lenses a quasar at z = 1.734 into five images, with a maximum image separation of 14.″6 (Table 1). The cluster also strongly lenses several background sources at z = 2.74 (Sharon et al. 2005), z = 3.288 (Sharon 2008; Oguri 2010), and z = 3.332 (Sharon et al. 2005) (Table 2).

We used archival HST multi-color imaging from the Advanced Camera for Surveys (ACS). The SDSS J1004+4112 imaging data include GO-10509 (PI: Kochanek) ACS/F814W, F555W, F435W (10 orbits); GO-9744 (PI: Kochanek) ACS/F814W, F555W (2 orbits); and GO-10793 (PI: Gal-Yam) ACS/F814W (1 orbit).
These data were originally proposed to identify multiply-imaged galaxies to construct a mass model (Sharon et al. 2005), search for the fifth quasar image (Inada et al. 2005), derive ΩΛ, perform a weak lensing analysis, and search for supernovae in massive high-redshift clusters (Sharon et al. 2010). These data also enabled studies of the spectral energy distribution of the quasar host galaxy (Ross et al. 2009), the ultraviolet upturn in red sequence galaxies (Ali et al. 2018), and active galactic nuclei (AGN) in massive clusters (Klesman & Sarajedini 2012).

We modeled SDSS J1004+4112 using one cluster-scale halo, one brightest cluster galaxy (BCG)-scale halo, and a galaxy-scale halo for each of the cluster member galaxies, four of which have their parameters optimized instead of adopting the scaling relations from Limousin et al. (2005). We modeled the cluster using both source-plane minimization and image-plane minimization, and evaluated the quality of the models obtained by each approach. While formally the image-plane minimization resulted in a better image-plane scatter, these models produced additional quasar images that are not observed. Therefore, we proceeded with the source-plane minimization for SDSS J1004+4112 for the remainder of the analysis. We note that the best-fit lens model produced large scatter between the observed and model-predicted positions in the image plane for quasar image C. In our results, we checked what happens when image C is removed from the H0 measurement. The model consists of 27 free parameters and 78 constraints. The HST data and the lens model for SDSS J1004+4112 are shown in Figure 1.
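The image-plane scatter referred to above can be summarized as a simple root-mean-square offset between observed and model-predicted positions; the snippet below is our own shorthand for that figure of merit, not Lenstool's internal definition.

```python
import numpy as np

def rms_offset(observed_xy, predicted_xy):
    """RMS of the 2-D offsets between observed and predicted positions (e.g. arcsec)."""
    delta = np.asarray(observed_xy, float) - np.asarray(predicted_xy, float)
    return np.sqrt(np.mean(np.sum(delta**2, axis=1)))

# Image-plane scatter: predicted vs. observed image positions of a family.
# Source-plane scatter: spread of the source positions obtained by ray-tracing
# each observed image of the same family back through the lens equation.
```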
The redshifts of the arcs in our lens model are the same as those used by Forés-Toribio et al. (2022). The strong lensing mass model parameters are reported in Table 3.

The measured time delay between images A and B (∆tAB = −38.4 ± 2.0 days) was first published in Fohlmeister et al. (2007). In this notation, a positive value of the time delay means image A leads the other image. In addition to reporting a refined value of ∆tAB = −40.6 ± 1.8 days, Fohlmeister et al. (2008) measured the time delay between images A and C (∆tAC = −821.6 ± 2.1 days) and set a lower limit of ∆tAD > 1250 days. After the completion of a 14.5 year monitoring campaign at the 1.2m Fred Lawrence Whipple Observatory (FLWO), Muñoz et al. (2022) recently presented new light curves for the four brightest images in SDSS J1004+4112, resulting in updated time delay values of ∆tAB = −43.01 ± 0.27 days, ∆tAC = −825.23 ± 0.46 days, and ∆tAD = 1633.23 ± 0.97 days (Table 4).
Figure 1. Hubble Space Telescope imaging of the three cluster-lensed quasars used to derive H0. We computed the lens models for SDSS J1004+4112 and SDSS J1029+2623. SDSS J2222+2745 is reproduced from Sharon et al. (2017). The positions of the quasar images are denoted with the cyan letters. The critical curves, the loci of maximum magnification at a specified source redshift, are generated at the quasar redshifts – z = 1.734, z = 2.1992, and z = 2.805, for SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745, respectively – and are plotted in red.

5. SDSS J1029+2623

SDSS J1029+2623 is a cluster at z = 0.588 that is strongly lensing a quasar at z = 2.1992 into three images (Inada et al. 2006; Oguri et al. 2008).
The quasar images are in a naked cusp configuration with a maximum image separation of 22.″5 (Table 1). Acebron et al. (2022) reported spectroscopic redshifts of several galaxies in the field, based on Multi Unit Spectroscopic Explorer (MUSE) spectroscopy from the Very Large Telescope. They refined the redshift measurement of the quasar to z = 2.1992 (formerly reported as z = 2.197, Inada et al. (2006)). The other spectroscopically confirmed objects from MUSE include a doubly-imaged galaxy at z = 2.1812, a septuply-imaged galaxy at z = 3.0275, a quadruply-imaged galaxy at z = 3.0278, a doubly-imaged galaxy at z = 1.0232, and a quadruply-imaged galaxy at z = 5.0622 (Acebron et al. 2022) (Table 2).

We used archival HST multi-color imaging from GO-12195 (PI: Oguri): WFC3/F160W (2 orbits), ACS/F814W (3 orbits), and ACS/F475W (2 orbits).
These data were originally proposed to identify multiply-imaged galaxies to construct a mass model that could be used to better understand the anomalous flux ratios between two of the quasar images and the dynamical state of the cluster (Oguri et al. 2013). These HST data also enabled a weak lensing analysis and a morphology study of the quasar host galaxy (Oguri et al. 2013).

Our lens model, which builds on the results from Acebron et al. (2022) and Oguri et al. (2013), contains 48 constraints and 33 free parameters. All of the model constraints are taken from Acebron et al. (2022). The model includes two cluster-scale dark matter halos that were allowed to vary in position around the two BCGs, as well as two galaxy-scale halos that were fixed to the BCGs’ positions. Additionally, a foreground galaxy (z = 0.5111 from MUSE) and a background galaxy (z = 0.6735 from MUSE) along the line of sight are both modeled at the cluster redshift, since Lenstool does not yet implement a multi-plane lensing framework. This approach improves the accuracy of the lensing analysis outputs compared to omitting these interlopers from the model (Raney et al. 2020). Our lens model differs from Acebron et al. (2022) in the following ways.
Whereas Acebron et al. (2022) include a model (Model 1) with an external shear component, we opted not to include this component, as its physical effect on the measured time delay is not well understood. Additionally, for consistency with the other clusters modeled in this paper, our galaxy-scale halos have ellipticities, whereas Acebron et al. (2022) use spherical halos. We constructed our galaxy catalog as described in Section 3, taking into account the MUSE spectroscopy to determine the red sequence (see Sharon et al. 2022). We used the ACS F814W vs. F475W for selection. We identified the red sequence by fitting a line to the spectroscopic members in this phase space, with four iterations of sigma clipping.

We found that the source-plane minimization did a better job at predicting the quasar image positions in this cluster than the image-plane minimization, possibly due to the close proximity of quasar images B and C. Once a best-fit model was obtained, we examined the posterior distribution of image predictions and rejected from the MCMC sampling those steps that did not produce this lensing configuration, i.e., steps not producing two separate images for A and B on either side of the critical curve. Since these two images lie very close to the critical curve, some parameter combinations produce solutions in which these two images merge and only image A of the quasar remains, in contrast to the observed lensing evidence.
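The rejection of MCMC steps that fail to reproduce the observed configuration can be thought of as a simple posterior filter. The sketch below is illustrative only; `predict_images` is a hypothetical stand-in for the Lenstool prediction of image positions for one sampled set of lens parameters.

```python
def filter_configuration(samples, source_position, predict_images, n_expected=3):
    """Keep only MCMC samples whose lens model predicts the observed number of quasar images."""
    kept = []
    for sample in samples:
        images = predict_images(sample, source_position)  # hypothetical helper
        if len(images) == n_expected:  # e.g. A and B not merged across the critical curve
            kept.append(sample)
    return kept
```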
The HST data and the lens model for SDSS J1029+2623 are shown in Figure 1. The strong lensing mass model parameters are reported in Table 5.

Fohlmeister et al. (2013) published the time delay measurement between images A and B (∆tAB = 744 ± 10 days) based on a photometric monitoring campaign at the FLWO 1.2m.

6. SDSS J2222+2745

SDSS J2222+2745, discovered by Dahle et al. (2013), is a cluster at z = 0.49 that strongly lenses a quasar at z = 2.805. The quasar is imaged six times (Sharon et al. 2017) with a maximum image separation of 15.″1 (Table 1). Spectroscopy of other lensed galaxies was obtained by the Gemini North Telescope. These data include triply-imaged and doubly-imaged knots from a galaxy at z = 4.562 and a doubly-imaged galaxy at z = 2.2963 (Sharon et al. 2017).

We used archival HST multi-color imaging from GO-13337 (PI: Sharon): WFC3/F160W, F110W (1 orbit) and ACS/F814W, F606W, F435W (6 orbits).
These data were originally proposed to detect any additional quasar images and to compute a mass model (Sharon et al. 2017). Additionally, these HST data have enabled a spatially resolved study of the Lyman-alpha emission in the quasar host galaxy (Bayliss et al. 2017).

We adopted the lens model from Sharon et al. (2017), with 32 constraints and 31 free parameters. SDSS J2222+2745 is modeled with one cluster-scale halo and 167 galaxy-scale halos. Sharon et al. (2017) included as constraints triply-imaged and doubly-imaged knots at the quasar’s redshift of z = 2.805, and triply-imaged and doubly-imaged knots from a galaxy at z = 4.562. Two separate triply-imaged galaxies had their redshifts left as free parameters, with priors of 2.0 ≤ z ≤ 4.0 and 3.8 ≤ z ≤ 5.0, respectively, based on photometric redshift analysis. The HST data and the lens model for SDSS J2222+2745 are shown in Figure 1. Table 5 of Sharon et al. (2017) lists the strong lensing mass model parameters.
Dahle et al. (2015) first published the time delay measurements between images A and B (∆tAB = 47.7 ± 6.0 days) and A and C (∆tAC = −722 ± 24 days). Then Dyrland (2019) reported updated values for the time delays between images A and B (∆tAB = 42.44 +1.36 −1.44 days) and images A and C (∆tAC = −696.65 +2.00 −2.10 days). These measurements were based on data from a monitoring campaign at the 2.5m Nordic Optical Telescope. In the analysis that follows, we used the most up-to-date time delay values for SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745, which are listed in Table 4.

Figure 2. Constraints on H0 from three cluster-lensed quasars, SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745. The histograms are created from 100 random models sampled from the MCMC. Overplotted are Gaussian fits to the distributions. Whereas individual time delay measurements produce H0 values with an average of 32% error, the error is decreased to 8.8% when the systems are analyzed together. The inverse-variance weighted mean of H0 is 71.5 km s−1 Mpc−1 (solid gray line) and the standard error of the weighted mean is 6.1 km s−1 Mpc−1.
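The combination quoted in the Figure 2 caption is an inverse-variance weighted mean. One common convention for it and for the standard error of the weighted mean is sketched below; this is our own illustration, the exact convention used by the authors may differ, and the numbers in the example are placeholders rather than the values in Table 7.

```python
import numpy as np

def inverse_variance_mean(values, sigmas):
    """Weighted mean sum(w*x)/sum(w) with w = 1/sigma^2, and err = sqrt(1/sum(w))."""
    values = np.asarray(values, float)
    w = 1.0 / np.asarray(sigmas, float)**2
    return np.sum(w * values) / np.sum(w), np.sqrt(1.0 / np.sum(w))

# Placeholder example with six H0 estimates and their 1-sigma errors [km/s/Mpc]:
print(inverse_variance_mean([70, 74, 68, 72, 65, 80], [20, 25, 18, 22, 15, 30]))
```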
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='8% when the systems are analyzed together.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The inverse-variance weighted mean of H0 is 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='5 km s−1 Mpc−1 (solid gray line) and the standard error of the weighted mean is 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='1 km s−1 Mpc−1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' RESULTS Using the outputs of the lens models described in the previous sections, we computed the model-predicted time delay values for each of the quasar images in each cluster field with respect to image A of the quasar (Equation 3 and Table 6).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The time delay is a sensitive function of the posi- tions of the source (⃗β) and its multiple images (⃗θ1,⃗θ2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The unobservable source position and the locations of its multiple images are strongly coupled to the time delay, since stationary points in the arrival time sur- Predicted Ho from time delays 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='06 2222 AB 2222 AC 1029 AB 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='05 1004 AB 1004 AC 1004 AD 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='04 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='03 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='02 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='01 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='00 0 50 100 150 200 250 300 Hokm/s/MpcH0 from Cluster-Lensed Quasars 7 face determine the image-plane positions of multiple im- ages of any given source-plane position (see Section 2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' It is therefore important to measure time delays self- consistently, by obtaining the time delay at the image positions predicted by the same lensing potential.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Lens models are never perfect, and small scatter between ob- served and predicted position is expected.' 
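For reference, the arrival-time surface and time-delay distance invoked here take the standard form sketched below (a hedged sketch of the usual relations; Equation 3 itself is not reproduced in this excerpt), with ψ the lensing potential, z_l the lens redshift, and D_l, D_s, D_ls the usual angular diameter distances:

```latex
% Sketch of the standard time-delay relations assumed in this discussion.
\begin{align}
  \phi(\vec{\theta},\vec{\beta}) &= \frac{(\vec{\theta}-\vec{\beta})^{2}}{2} - \psi(\vec{\theta}),
  \qquad
  \Delta t_{ij} = \frac{D_{\Delta t}}{c}\,
      \bigl[\phi(\vec{\theta}_{i},\vec{\beta}) - \phi(\vec{\theta}_{j},\vec{\beta})\bigr], \\
  D_{\Delta t} &= (1+z_{\mathrm{l}})\,\frac{D_{\mathrm{l}}\,D_{\mathrm{s}}}{D_{\mathrm{ls}}}
      \;\propto\; H_0^{-1}.
\end{align}
```

Images form at the stationary points of the arrival-time surface, ∇φ = 0, which recovers the lens equation β = θ − ∇ψ(θ); this is why the time delays must be evaluated at image positions predicted by the same potential.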
To maintain this self-consistency, we calculated the source position β by ray-tracing the observed position of image A (θ_A) through the lens equation, and used the same lens model to predict the image-plane positions of its counter images (θ_2, θ_3, ...). The time delay was then calculated from these predicted positions, rather than the observed positions, which may be slightly shifted from the stationary points in the Fermat potential. The scatter in the image or source plane contributes to the error budget through the MCMC exploration of the parameter space. An alternative approach to determining the source position would be averaging the predicted source locations from all the quasar images, and calculating the predicted image locations of the average source.

Using Equation 4, we computed the H0 value corresponding to each independent published time delay value and corresponding predicted time delays. To generate the 1σ uncertainties in H0, we used 100 random models from the MCMC sampling of the parameter space for each cluster. The number of measured time delays in each field determines the number of H0 measurements derived from each cluster: three from SDSS J1004+4112, one from SDSS J1029+2623, and two from SDSS J2222+2745, for a total of six H0 measurements. Table 7 lists the derived H0 values and uncertainties, obtained for the "best" lens model, i.e., the one producing the smallest scatter, and for the full posterior distribution. The resulting H0 measurement from each quasar pair has large uncertainties due to the complexity of the lens and systematic uncertainties in the lens modeling process.
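Because the predicted time delay scales inversely with H0 (through D_Δt), each published delay maps to an H0 value by rescaling a prediction made at a fiducial H0. The sketch below illustrates that step under this assumed scaling; it is not the paper's Equation 4 (not reproduced in this excerpt), and the variable names and numbers are hypothetical placeholders standing in for the Lenstool outputs.

```python
import numpy as np

H0_FID = 70.0  # fiducial H0 (km/s/Mpc) assumed when computing model delays

def h0_from_time_delay(dt_observed, dt_predicted_fid):
    """H0 implied by one observed delay, given the delay predicted by a lens
    model evaluated at the fiducial H0 (uses Delta_t proportional to 1/H0)."""
    return H0_FID * dt_predicted_fid / dt_observed

# Predicted A-B delay (days) from each of 100 random MCMC models
# (illustrative numbers only).
rng = np.random.default_rng(0)
dt_pred_mcmc = rng.normal(45.0, 12.0, size=100)
dt_obs = 42.44  # measured delay for the same image pair

h0_samples = h0_from_time_delay(dt_obs, dt_pred_mcmc)
print(h0_samples.mean(), h0_samples.std())  # peak and 1-sigma width of H0
```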
However, given that all three of these systems reside in the same universe, they all must have the same H0; we can leverage these three independent lines of sight, with six time delays, to obtain a tighter constraint than what is possible from a single time delay. We combine the results from the six time delays by taking the inverse-variance weighted mean of the six H0 measurements, sampled from their posterior distributions, making sure to account for the correlation between measurements made in the same line of sight. We note that the observational time delay measurement uncertainties are negligible compared to the lens modeling uncertainties. The inverse-variance weighted mean and the standard error of the weighted mean of H0 is 71.5 ± 6.1 km s−1 Mpc−1 (Fig. 2). Combining the H0 values derived from multiple time delay values improves the constraints on H0, decreasing the uncertainty from ∼32% for an individual H0 measurement to 8.8% for the sample. If SDSS J1004+4112's quasar image C is excluded from the analysis (see Section 4), we obtain H0 = 73.7 ± 7.5 km s−1 Mpc−1.
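A minimal numerical sketch of the combination step just described, assuming the six measurements are independent (the analysis in the text additionally accounts for correlations between delays sharing a line of sight); the input values are placeholders, not the Table 7 numbers.

```python
import numpy as np

def weighted_mean_h0(h0_values, h0_sigmas):
    """Inverse-variance weighted mean of H0 and the standard error of the
    weighted mean, treating the measurements as independent."""
    x = np.asarray(h0_values, dtype=float)
    w = 1.0 / np.asarray(h0_sigmas, dtype=float) ** 2   # inverse-variance weights
    mean = np.sum(w * x) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))                       # standard error of the weighted mean
    return mean, err

# Illustrative placeholder values for the six per-delay H0 measurements (km/s/Mpc).
h0_vals = [74.0, 68.0, 79.0, 65.0, 72.0, 70.0]
h0_sigs = [22.0, 25.0, 30.0, 18.0, 20.0, 24.0]
print(weighted_mean_h0(h0_vals, h0_sigs))
```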
8. DISCUSSION

Our analysis provides an independent H0 measurement that is not sensitive to the same systematics as other methods. Albeit with a larger fractional uncertainty, our H0 measurement (71.5 ± 6.1 km s−1 Mpc−1) falls between the lower H0 values from the CMB (67.4 ± 0.5 km s−1 Mpc−1, Planck Collaboration et al. 2020) and TRGB (69.8 ± 0.6 (stat) ± 1.6 (sys), Freedman 2021) and the higher H0 value from Cepheids (73.2 ± 1.3 km s−1 Mpc−1, Riess et al. 2021), and is consistent with all three.

Increasing the number of systems used for a combined time-delay measurement of H0 will improve this method's competitiveness with CMB-based and distance ladder-based methods. Although three other cluster-lensed quasars are published in the literature, none has all the necessary time delay measurements, space-resolution imaging, and spectroscopic redshifts of secondary arcs for a measurement of H0. All three of the other published cluster-lensed quasars have ongoing photometric monitoring campaigns to measure their time delays. Additionally, one of the other three systems, COOL J0542-2125 (Martinez et al. 2022), will be observed by HST in Cycle 30 (GO-17243; PI: Napier).
To estimate the improvement in the H0 constraint from a sample of twice as many time delay measurements, we simulated H0 distributions guided by the existing sample, as follows. We randomly selected six integer H0 values between 50 and 150, as this is the range spanned by the peaks of the six H0 distributions from SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745. We then randomly assigned to these six H0 values the standard deviation of one of the six H0 distributions (Table 7), and produced the corresponding Gaussian distributions. We repeated this simulation process 100 times. Incorporating these new six H0 distributions for a total of 12 constraints, and averaging the 100 iterations, gave a standard error of the weighted mean of 4.5 km s−1 Mpc−1. Therefore, doubling the number of systems results in a ∼30% improvement in the constraint on H0, reducing the uncertainty on H0 from 8.8% to 6.3%.
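A minimal sketch of this forecast under an independent-measurement approximation; the widths are placeholders standing in for the Table 7 values, and under this approximation the randomly drawn peak values do not change the standard error of the weighted mean.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder 1-sigma widths of the six existing per-delay H0 distributions
# (km/s/Mpc), standing in for the values tabulated in Table 7.
existing_sigmas = np.array([22.0, 25.0, 30.0, 18.0, 20.0, 24.0])

def forecast_sem(n_new, n_trials=100):
    """Standard error of the weighted mean after adding n_new simulated
    time-delay constraints to the six existing ones."""
    sems = []
    for _ in range(n_trials):
        # Peaks drawn uniformly over 50-150, as in the described procedure;
        # they do not affect the SEM in this independent approximation.
        _peaks = rng.integers(50, 151, size=n_new)
        new_sigmas = rng.choice(existing_sigmas, size=n_new)  # reuse existing widths
        sigmas = np.concatenate([existing_sigmas, new_sigmas])
        sems.append(np.sqrt(1.0 / np.sum(1.0 / sigmas**2)))
    return float(np.mean(sems))

print(forecast_sem(6))  # doubled sample: 12 constraints in total
# The same function can be evaluated for larger samples (e.g. hundreds of delays).
```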
A 1% uncertainty measurement of H0 from cluster-lensed quasars would be competitive with the current precision level of CMB and distance ladder methods. Extending the simulation described above to a larger number of systems, we estimated that ∼500 time delay measurements from cluster-lensed quasars would achieve a 1% uncertainty level on H0. Based on SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745 each having an average of two time delay measurements, a sample size of 250 cluster-lensed quasars would be needed to produce 500 time delay measurements. Future surveys are expected to detect of order ∼50 such systems in the next decade (Robertson et al. 2020). Therefore, this increase in sample size alone will not achieve 1% uncertainty in H0; to reach 1% with of order 50 systems (100 time delays) will require a decrease in the lens modeling uncertainties by about a factor of two, on average. Future work will explore whether this decrease in the uncertainties is feasible.

ACKNOWLEDGMENTS

Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Multimission Archive at the Space Telescope Science Institute (MAST), which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS 5-26555. These archival observations are associated with programs GO-10509, GO-9744, GO-10793, GO-12195, and GO-13337. Support for HST program AR-16150, which enabled this work, was provided through grants from the STScI under NASA contract NAS5-26555. Co-author GM acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. MARACHAS - DLV-896778. We thank Ana Acebron for her useful discussions about SDSS J1029+2623.

Facilities: HST(ACS); HST(WFC3); HST(MAST)

Software: Lenstool (Jullo et al. 2007); Source Extractor (Bertin & Arnouts 1996)

REFERENCES

Acebron, A., Grillo, C., Bergamini, P., et al. 2022, ApJ, 926, 86, doi: 10.3847/1538-4357/ac3d35
Ali, S. S., Bremer, M. N., Phillipps, S., & De Propris, R. 2018, MNRAS, 480, 2236, doi: 10.1093/mnras/sty1988
Bayliss, M. B., Sharon, K., Acharyya, A., et al. 2017, ApJL, 845, L14, doi: 10.3847/2041-8213/aa831a
Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393, doi: 10.1051/aas:1996164
Birrer, S., Millon, M., Sluse, D., et al. 2022, arXiv e-prints, arXiv:2210.10833, doi: 10.48550/arXiv.2210.10833
Birrer, S., & Treu, T. 2021, A&A, 649, A61, doi: 10.1051/0004-6361/202039179
Birrer, S., Shajib, A. J., Galan, A., et al. 2020, A&A, 643, A165, doi: 10.1051/0004-6361/202038861, 10.48550/arXiv.2007.02941
Dahle, H., Gladders, M. D., Sharon, K., Bayliss, M. B., & Rigby, J. R. 2015, ApJ, 813, 67, doi: 10.1088/0004-637X/813/1/67
Dahle, H., Gladders, M. D., Sharon, K., et al. 2013, ApJ, 773, 146, doi: 10.1088/0004-637X/773/2/146
Dyrland, K. 2019, Master's thesis, University of Oslo
Elíasdóttir, Á., Limousin, M., Richard, J., et al. 2007, arXiv e-prints, arXiv:0710.5636. https://arxiv.org/abs/0710.5636
Fohlmeister, J., Kochanek, C. S., Falco, E. E., Morgan, C. W., & Wambsganss, J. 2008, ApJ, 676, 761, doi: 10.1086/528789
Fohlmeister, J., Kochanek, C. S., Falco, E. E., et al. 2013, ApJ, 764, 186, doi: 10.1088/0004-637X/764/2/186
—. 2007, ApJ, 662, 62, doi: 10.1086/518018
Forés-Toribio, R., Muñoz, J. A., Kochanek, C. S., & Mediavilla, E. 2022, ApJ, 937, 35, doi: 10.3847/1538-4357/ac8c40
Freedman, W. L. 2021, ApJ, 919, 16, doi: 10.3847/1538-4357/ac0e95, 10.48550/arXiv.2106.15656
Gladders, M. D., & Yee, H. K. C. 2000, AJ, 120, 2148, doi: 10.1086/301557
Hubble, E. 1929, Proceedings of the National Academy of Science, 15, 168, doi: 10.1073/pnas.15.3.168
Inada, N., Oguri, M., Pindor, B., et al. 2003, Nature, 426, 810, doi: 10.1038/nature02153
Inada, N., Oguri, M., Keeton, C. R., et al. 2005, PASJ, 57, L7, doi: 10.1093/pasj/57.3.L7
Inada, N., Oguri, M., Morokuma, T., et al. 2006, ApJL, 653, L97, doi: 10.1086/510671
Johnson, T. L., & Sharon, K. 2016, ApJ, 832, 82, doi: 10.3847/0004-637X/832/1/82
Jullo, E., Kneib, J. P., Limousin, M., et al. 2007, New Journal of Physics, 9, 447, doi: 10.1088/1367-2630/9/12/447
Klesman, A. J., & Sarajedini, V. L. 2012, MNRAS, 425, 1215, doi: 10.1111/j.1365-2966.2012.21508.x
Kochanek, C. S. 2020, MNRAS, 493, 1725, doi: 10.1093/mnras/staa344
—. 2021, MNRAS, 501, 5021, doi: 10.1093/mnras/staa4033
Lemaître, G. 1927, Annales de la Société Scientifique de Bruxelles, 47, 49
Lemon, C., Anguita, T., Auger, M., et al. 2022, arXiv e-prints, arXiv:2206.07714. https://arxiv.org/abs/2206.07714
Lemon, C. A., Auger, M. W., & McMahon, R. G. 2019, MNRAS, 483, 4242, doi: 10.1093/mnras/sty3366
Limousin, M., Kneib, J.-P., & Natarajan, P. 2005, MNRAS, 356, 309, doi: 10.1111/j.1365-2966.2004.08449.x
Martinez, M. N., Napier, K. A., Cloonan, A. P., et al. 2022, arXiv e-prints, arXiv:2209.03972. https://arxiv.org/abs/2209.03972
Moresco, M., Amati, L., Amendola, L., et al. 2022, arXiv e-prints, arXiv:2201.07241. https://arxiv.org/abs/2201.07241
Muñoz, J. A., Kochanek, C. S., Fohlmeister, J., et al. 2022, arXiv e-prints, arXiv:2206.08597. https://arxiv.org/abs/2206.08597
Oguri, M. 2010, PASJ, 62, 1017, doi: 10.1093/pasj/62.4.1017
Oguri, M., Ofek, E. O., Inada, N., et al. 2008, ApJL, 676, L1, doi: 10.1086/586897
Oguri, M., Schrabback, T., Jullo, E., et al. 2013, MNRAS, 429, 482, doi: 10.1093/mnras/sts351
Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, A&A, 641, A6, doi: 10.1051/0004-6361/201833910
Raney, C. A., Keeton, C. R., & Brennan, S. 2020, MNRAS, 492, 503, doi: 10.1093/mnras/stz3116
Refsdal, S. 1964, MNRAS, 128, 307, doi: 10.1093/mnras/128.4.307
Riess, A. G., Casertano, S., Yuan, W., et al. 2021, ApJL, 908, L6, doi: 10.3847/2041-8213/abdbaf
Robertson, A., Smith, G. P., Massey, R., et al. 2020, MNRAS, 495, 3727, doi: 10.1093/mnras/staa1429
Ross, N. R., Assef, R. J., Kochanek, C. S., Falco, E., & Poindexter, S. D. 2009, ApJ, 702, 472, doi: 10.1088/0004-637X/702/1/472
Schneider, P. 1985, A&A, 143, 413
Sharon, K. 2008, PhD thesis, Tel Aviv University, Israel
Sharon, K., Chen, M. C., Mahler, G., Coe, D., & the RELICS Collaboration. 2022, arXiv e-prints, arXiv:2208.08483. https://arxiv.org/abs/2208.08483
Sharon, K., Ofek, E. O., Smith, G. P., et al. 2005, ApJL, 629, L73, doi: 10.1086/452633
Sharon, K., Gal-Yam, A., Maoz, D., et al. 2010, ApJ, 718, 876, doi: 10.1088/0004-637X/718/2/876
Sharon, K., Bayliss, M. B., Dahle, H., et al. 2017, ApJ, 835, 5, doi: 10.3847/1538-4357/835/1/5
Shu, Y., Koposov, S. E., Evans, N. W., et al. 2019, MNRAS, 489, 4741, doi: 10.1093/mnras/stz2487
Shu, Y., Marques-Chaves, R., Evans, N. W., & Pérez-Fournon, I. 2018, MNRAS, 481, L136, doi: 10.1093/mnrasl/sly174
Wong, K. C., Suyu, S. H.
Target            QSO Image   QSO z    RA [J2000]    Decl. [J2000]   µ
SDSS J1004+4112   A           1.734    151.1450074   41.2109193      26.0±5.4
                  B           1.734    151.1454888   41.2119003      9.2±1.0
                  C           1.734    151.1409266   41.2096668      6.0±0.5
                  D           1.734    151.1419060   41.2136092      9.2±1.9
                  E           1.734    151.1423413   41.2122017      0.3±0.05
SDSS J1029+2623   A           2.1992   157.3081009   26.3883044      6.1±0.4
                  B           2.1992   157.3093619   26.39446237     24.7±4.2
                  C           2.1992   157.3095755   26.3939894      3.7±8.0
SDSS J2222+2745   A           2.805    335.537707    27.760543       15.4±5.7
                  B           2.805    335.53669     27.761119       8.0±4.3
                  C           2.805    335.53296     27.760505       7.1±2.3
                  D           2.805    335.536205    27.758901       1.3±0.4
                  E           2.805    335.536007    27.758248       0.8±0.2
                  F           2.805    335.535874    27.759723       1.0±0.4
Table 1. The quasar image positions and redshifts. Also included are the magnifications at the observed positions of the quasar images.
System ID   R.A. [J2000]   Decl. [J2000]   z
SDSS J1004+4112
QSO-A   151.1450074   41.2109193   1.734
QSO-B   151.1454888   41.2119003   1.734
QSO-C   151.1409266   41.2096668   1.734
QSO-D   151.1419060   41.2136092   1.734
QSO-E   151.1423413   41.2122017   1.734
2.1     151.1418821   41.2102917   2.74
2.2     151.1468800   41.2153908   2.74
21.1    151.1417325   41.2103272   2.74
21.2    151.1470383   41.2153011   2.74
21.3    151.1419526   41.2116044   2.74
22.1    151.1416225   41.2103033   2.74
22.2    151.1471250   41.2152436   2.74
3.1     151.1414121   41.2099250   3.288
3.2     151.1476847   41.2152121   3.288
31.1    151.1413250   41.2099825   3.288
31.2    151.1477393   41.2151976   3.288
32.1    151.1412104   41.2100544   3.288
32.2    151.1478065   41.2151979   3.288
33.1    151.1411279   41.2101547   3.288
33.2    151.1478809   41.2151884   3.288
33.3    151.1418864   41.2116948   3.288
4.1     151.1439081   41.2165866   3.332
4.2     151.1382517   41.2153846   3.332
4.3     151.1379048   41.2149959   3.332
4.4     151.1434099   41.2103752   3.332
41.1    151.1441118   41.2165193   3.332
41.2    151.1383309   41.2153297   3.332
41.3    151.1378932   41.2148820   3.332
41.4    151.1434562   41.2102573   3.332
42.1    151.1444522   41.2163884   3.332
42.2    151.1383940   41.2153469   3.332
42.3    151.1378407   41.2148091   3.332
42.4    151.1434818   41.2101761   3.332
43.1    151.1445319   41.2162919   3.332
43.2    151.1384506   41.2154232   3.332
43.3    151.1376594   41.2145747   3.332
43.4    151.1435603   41.2101349   3.332
43.5    151.1424833   41.2118271   3.332
SDSS J1029+2623
QSO-A   157.3081009   26.38830445   2.1992
QSO-B   157.3093619   26.39446237   2.1992
QSO-C   157.3095755   26.3939894    2.1992
1.1     157.2980611   26.3907404    · · ·
1.2     157.2978817   26.3924467    · · ·
1.3     157.3008758   26.3974054    · · ·
2.1     157.2981743   26.3915325    2.1812
2.3     157.3014749   26.3977063    2.1812
3.1     157.2990642   26.3923892    3.0275
3.2     157.3074114   26.3913469    3.0275
3.3     157.3041512   26.3982630    3.0275
3.4     157.3015481   26.3880193    3.0275
3.5     157.3017377   26.3879213    3.0275
3.6     157.3018385   26.3878900    3.0275
3.7     157.3032208   26.3919632    3.0275
4.1     157.2992278   26.3925219    3.0278
4.2     157.3076382   26.3913247    3.0278
4.3     157.3043869   26.3981437    3.0278
4.4     157.3023985   26.3877048    3.0278
4.5     157.3035100   26.3920169    3.0278
5.1     157.3019777   26.3946563    1.0232
5.3     157.3008781   26.3917377    1.0232
7.1     157.3075794   26.3951262    5.0622
7.2     157.3064130   26.3960500    5.0622
7.3     157.3014210   26.3936610    5.0622
7.4     157.3012420   26.3938020    5.0622
Table 2. Positions and spectroscopic redshifts of the multiply-imaged background sources used as constraints in the strong lens models for SDSS J1004+4112 and SDSS J1029+2623. See Table 1 from Sharon et al. (2017) for the lensing constraints for SDSS J2222+2745.
Component No.   ∆ R.A. [′′]          ∆ Decl. [′′]        e                     θ [deg]               σ0 [km s−1]   rcut [kpc]            rcore [kpc]
1               0.085 +2.56 −0.53    3.07 +5.83 −1.30    0.17 +0.022 −0.030    66.39 +3.70 −3.22     987 +245 −84  [1500]                126.27 +112.43 −33.97
2               [0]                  [0]                 [0.40]                63.98 +4.34 −5.31     461 +48 −52   181.42 +13.77 −28.04  5.65 +0.99 −1.62
3               [1.963]              [-1.832]            0.42 +0.25 −0.19      [349.480]             235 +10 −14   30.30 +7.045 −12.29   2.68 +0.99 −0.68
4               [7.659]              [-9.821]            0.43 +0.22 −0.29      [131.13]              127 +33 −29   20.13 +6.64 −8.33     1.62 +1.48 −1.06
5               [-8.463]             [-3.877]            0.44 +0.24 −0.27      [133.89]              114 +31 −28   13.28 +2.97 −2.97     2.26 +0.92 −1.20
6               [11.220]             [11.401]            0.42 +0.29 −0.29      150.24 +22.22 −34.44  76 +9 −7      22.46 +5.79 −6.85     3.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='18+0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='85 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='85 Table 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Strong lensing mass model parameters for SDSS J1004+4112.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Median values and the 1σ confidence level from the MCMC are reported.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The coordinates ∆ R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' and ∆ Decl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' are listed in arcseconds measured east and north from the core of Component No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' 2 at [RA, Dec] = [151.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='142381, 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='212131].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The other parameters are the ellipticity e, the position angle θ, the velocity dispersion σ0, the cut radius rcut, and the core radius rcore.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The parameters listed in square brackets were not optimized.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Target Name z clus- ter z QSO no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' QSO im widest sepa- ration [′′] no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' of lensed sources no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' of spec- zs time delay (days) Reference SDSS J1004+4112 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='68 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='734 5 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='6 4 4 ∆tAB = −43.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='01 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='27 Mu˜noz+(2022) ∆tAC = −825.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='23 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='46 ∆tAD = 1633.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='23 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='97 SDSS J1029+2623 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='58 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='1992 3 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='5 7 6 ∆tAB = 744 ± 10 Fohlmeister+(2013) SDSS J2222+2745 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='49 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='805 6 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='1 5 3 ∆tAB = 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='44+1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='36 −1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='44 Dyrland (2019) ∆tAC = −696.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='65+2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='00 −2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='10 Table 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The three large separation lensed QSOs in the HST archive.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The listed time delays are the most up-to-date values from the literature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' See Fohlmeister et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' (2008) and Dahle et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' (2015) for previous measurements for SDSS J1004+4112 and SDSS J2222+2745, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Component No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' ∆ R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' [′′] ∆ Decl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' [′′] e θ [deg] σ0 [km s−1] rcut [kpc] rcore [kpc] 1 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='01+0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='53 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='62 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='71+0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='25 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='23 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='53+0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='031 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='034 172.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='80+2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='24 −2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='27 650+21 −20 [1500] 31.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='39+4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='37 −3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='78 2 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='04+1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='16 −1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='38 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='62+0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='46 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='58 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='55+0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='052 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='055 17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='25+4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='87 −5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='10 528+30 −20 [1500] 37.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='95+6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='42 −6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='62 3 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='48+1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='35 −1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='25 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='11+1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='83 −2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='35 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='61+0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='10 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='062 45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='57+7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='24 −9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='24 385+43 −52 [1500] 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='82+9.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='47 −11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='86 4 [-3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='808] [-1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='354] 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='51+0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='19 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='21 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='07+19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='26 −15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='61 202+20 −19 33.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='64+7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='88 −6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='82 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='92+0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='52 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='86 5 [-19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='7] [-8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='8] [0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='0] [0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='0] 169+30 −24 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='94+19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='27 −19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='47 [0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='0] 6 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='87+0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='13 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='11 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='50+0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='14 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='12 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='30+0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='29 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='20 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='06+26.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='58 −38.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='88 64+7 −5 32.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='65+11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='13 −16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='82 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='51+0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='30 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='31 Table 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Strong lensing mass model parameters for SDSS J1029+2623.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Median values and the 1σ confidence level from the MCMC are reported.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The coordinates ∆ R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' and ∆ Decl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' are listed in arcseconds measured east and north from [RA, Dec] = [157.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='302047, 26.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='392209].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The other parameters are the ellipticity e, the position angle θ, the velocity dispersion σ0, the cut radius rcut, and the core radius rcore.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The parameters listed in square brackets were not optimized.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' System ∆tAB ∆tAC ∆tAD ∆tAE ∆tAF SDSS J1004+4112 11 783 1294 1776 N/A SDSS J1029+2623 1060 1054 N/A N/A N/A SDSS J2222+2745 54 693 485 564 431 Table 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' Predicted time delay (in days) from the ‘best’ lens model for each cluster.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The values are measured at the model- predicted locations of the quasar images, assuming H0= 70 km s−1 Mpc−1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' 14 Napier et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' System H0 (km s−1 Mpc−1) H0 (km s−1 Mpc−1) (from best model) (mean ± 1σ) SDSS J1004+4112 AB 17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='4 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='4 ± 35.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='0 AC 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='4 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='8 ± 17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='9 AD 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='5 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='3 ± 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='2 SDSS J1029+2623 AB 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='7 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='6 ± 37.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='8 SDSS J2222+2745 AB 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='1 109.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='0 ± 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='1 AC 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='6 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='8 ± 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content='8 Table 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' H0 constraints from the time delay measurements in SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The middle column is the H0 value from the ‘best’ lens model for each cluster.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} +page_content=' The right column lists the mean and 1σ from the Gaussian distribution fit to the H0 values determined from 100 random models drawn from the MCMC.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdFIT4oBgHgl3EQfWysJ/content/2301.11240v1.pdf'} diff --git a/H9E3T4oBgHgl3EQfuQuc/content/2301.04683v1.pdf b/H9E3T4oBgHgl3EQfuQuc/content/2301.04683v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2e95b653e578d09ee7f92ff22331300bbf03faa1 --- /dev/null +++ b/H9E3T4oBgHgl3EQfuQuc/content/2301.04683v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7561dd5c2cee88d8893ed27d5821e2756c35a61e41acc893b1f89f7ac38ac0db +size 665242 diff --git a/H9E3T4oBgHgl3EQfuQuc/vector_store/index.faiss b/H9E3T4oBgHgl3EQfuQuc/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..0b652eb7ba9a783e7a4c34f5fc47de83ce9c8d9c --- /dev/null +++ b/H9E3T4oBgHgl3EQfuQuc/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:622aa1ff94403eb9953f47fda612427887cf64fe6e8f5ed3652a6df55ce9f2c2 +size 2687021 diff --git a/H9E3T4oBgHgl3EQfuQuc/vector_store/index.pkl b/H9E3T4oBgHgl3EQfuQuc/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..0563c6fc56442696f396e23025890285650ae5af --- /dev/null +++ b/H9E3T4oBgHgl3EQfuQuc/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:caac45291d0d87a25256eca2b3df4011a56a4bc99699aab3669273b930a9b6d2 +size 112749 diff --git a/INE3T4oBgHgl3EQfWwrb/vector_store/index.pkl b/INE3T4oBgHgl3EQfWwrb/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..370313525d36fed22abbb5f204867cb68685b262 --- /dev/null +++ b/INE3T4oBgHgl3EQfWwrb/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:49edb3200612f1be3961339ba9c5f918a8bcb99d1a08e509ce74615660eae3e5 +size 88625 diff --git a/INE3T4oBgHgl3EQfugsS/content/tmp_files/2301.04684v1.pdf.txt b/INE3T4oBgHgl3EQfugsS/content/tmp_files/2301.04684v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..f7cbe39b9f198c85237914dd62eba0cedc7c957d --- /dev/null +++ b/INE3T4oBgHgl3EQfugsS/content/tmp_files/2301.04684v1.pdf.txt @@ -0,0 +1,853 @@ +Design and Characterization of Viscoelastic McKibben Actuators with +Tunable Force-Velocity Curves +Michael J. Bennington* [1], Tuo Wang* [1], Jiaguo Yin[2], +Sarah Bergbreiter[1], Carmel Majidi[1], Victoria A. Webster-Wood+ [1,3,4] +Abstract— The McKibben pneumatic artificial muscle is a +commonly studied soft robotic actuator, and its quasistatic +force-length properties have been well characterized and mod- +eled. However, its damping and force-velocity properties are +less well studied. Understanding these properties will allow +for more robust dynamic modeling of soft robotic systems. +The force-velocity response of these actuators is of particular +interest because these actuators are often used as hardware +models of skeletal muscles for bioinspired robots, and this +force-velocity relationship is fundamental to muscle physiology. +In this work, we investigated the force-velocity response of +McKibben actuators and the ability to tune this response +through the use of viscoelastic polymer sheaths. These vis- +coelastic McKibben actuators (VMAs) were characterized using +iso-velocity experiments inspired by skeletal muscle physiology +tests. A simplified 1D model of the actuators was developed +to connect the shape of the force-velocity curve to the material +parameters of the actuator and sheaths. Using these viscoelastic +materials, we were able to modulate the shape and magnitude +of the actuators’ force-velocity curves, and using the developed +model, these changes were connected back to the material +properties of the sheaths. +I. INTRODUCTION +Originally introduced in the 1930s-1940s [1], and popu- +larized by Joseph McKibben in the 1950s [2]–[4], pneumatic +artificial muscles are a commonly studied soft robotic actu- +ator and have been used in traditional rigid robotics [1], [5], +[6], soft robotic platforms [7], [8], and wearable and assistive +devices [9]–[13]. Consisting of an inner rubber bladder and +an outer constraining mesh, the McKibben actuator is able +to achieve high actuator strains and large force relative to +its light weight [14]. McKibben actuators are of particular +interest in bioinspired robotics and prosthetics because of +their functional similarity to biological muscle in terms of +contracting in response to activation and introducing com- +pliance into the system. As a consequence, they can serve +as first-order hardware models of skeletal muscle [4], [14]– +[16]. Current experimental characterizations and models of +these actuators tend to focus on their quasistatic properties, +relating their inflation pressure, length, and axial force [3], +[17]–[20], but less attention has been given to their dynamics +properties. These properties are important both to the design +and modeling of the dynamics of a robotic system composed +*: +The +authors +contributed +equally. ++: +Corresponding +author +vwebster@andrew.cmu.edu. Departments of [1] Mechanical +Engineering, [2] Materials Science and Engineering, [3] Biomedical +Engineering. [4] McGowan Institute for Regenerative Medicine. 
All departments and institutes are part of Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA, 15213, USA.
+by these actuators and to the use of McKibben muscles as biomimetic actuators.
+Fig. 1. Viscoelastic McKibben Actuator (VMA): (a) Plain McKibben Actuator (control), (b) Ecoflex-30 sheath, (c) Urethane sheath, (d) Ecoflex-30 and Carbopol composite sheath (10 mm diameter shown for all). Each VMA contains a plain McKibben actuator at its core, fabricated in the same method as the control.
+While few studies have been reported on the dynamic properties of McKibben actuators, those that have done so have often focused on the force-velocity relationship. For example, Tondu et al. performed isotonic quick-release experiments on McKibben actuators and showed that, for a particular combination of rubber bladder and mesh materials, the force-velocity relationship can resemble that of the Hill muscle model [14]. Other works have shown that the velocity dependence of the McKibben actuator's force is minimal compared to that of biological muscle [15], [21]. The authors instead augmented the muscle with parallel hydraulic damping elements to better mimic the biological tissue [15]. However, these solutions either rely on very particular woven mesh materials or on large auxiliary equipment to tune the shape of the actuator's force-velocity response.
+In this work, we begin to investigate the force-velocity relationship of McKibben actuators and the ability to tune these relationships using simple viscoelastic material sheaths. Four different actuator architectures are investigated using actuators of three different diameters. The force-velocity response of these viscoelastic McKibben actuators (VMA) is measured using iso-velocity tests adapted from the muscle physiology literature [22]. To connect the measured force-velocity response to the material properties of the sheath and the mechanics of the underlying McKibben actuator, a simplified 1D model, consisting of parallel chains of standard linear solid elements (SLSEs), is formulated.
+This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
+arXiv:2301.04684v1 [cs.RO] 11 Jan 2023
+Fig. 2. Fabrication, Characterization, and Modeling. (a) Each viscoelastic muscle actuator consists of a standard McKibben actuator (fabricated following [7]) and a viscoelastic polymer sheath. (b) To characterize the dynamic properties of the actuators, iso-velocity experiments were performed on an Instron 5969 at various velocities and inflation pressures. (c) The dynamics of the actuators were modeled using parallel chains of Standard Linear Solid elements (SLSE), with one arm capturing the dynamics of the McKibben actuator and the other the dynamics of the sheath material. Using this model, an analytical expression for the force-velocity curves can be obtained (d), and the shape of the curve can be related to the material properties of the constituents. The height of this curve above the v = 0 point, ∆FV(v), can be related to two material properties of the actuator. Here, shortening velocity (negative of the extension rate) is reported in alignment with standard muscle physiology experiments.
+II. MATERIALS AND METHODS
+A.
Actuator Design and Fabrication +Each viscoelastic muscle actuator consists of a traditional +McKibben actuator, serving as the contractile element, and +a viscoelastic sheath around the McKibben, serving as a +passive damper (Fig. 2a). Four 90 mm long McKibben +actuators each of three different diameters (6 mm, 10 mm, +12 mm nominal mesh diameter) were fabricated. The design +of the actuator was adapted from [7]. Briefly, a latex balloon +inner bladder is connected to two barbed tube ends and is +constrained by commercially available overexpanded cable +meshes (PET Expandable Sleeving, Alex Tech). Kevlar fibers +and cyanoacrylate glue were used to seal and connect the +bladder and mesh to the end caps of the actuator. +Thin +hollow +sheaths +of +different +viscoelastic +and +thixotropic materials were attached to the outside of the +McKibben to act as the damping element of the actuator. +To create the outer viscoelastic sheaths, 2 single-layered, +concentric-cylindrical molds were 3D printed (Object 30, +Stratasys), with inner diameters of 9 mm and 12 mm. For +both diameters, the resulting sheath has a thickness of 2 +mm. Polyurethane (Vytaflex, Smooth-On Inc.), Ecoflex-30 +(Ecoflex 00-30, Smooth-On Inc.) and 5% Carbopol (Car- +bomer 940, Sanare) gel were used to fabricate the McKibben +sheaths. For both the Ecoflex-30 and polyurethane sheaths, +the liquid elastomer was prepared by mixing the 2-part +polymer in a 1:1 ratio. The mixed polymer was placed in +a vacuum chamber for 5 minutes to remove air bubbles. +The 3D-printed molds were prepared by spraying a thin +layer of mold release (Ease Release 200, Mann Release +Technologies) on the inner surfaces of the mold. The elas- +tomer was then injected into the mold and cured at room +temperature (25◦C) for 12 hours. The Carbopol gel used in +this project was adapted from [23]. First, 10g of Carbopol +940 powder (Carbomer) was mixed with 190g of deionized +water. The mixture was then mechanically stirred for 4 hours. +After stirring, 4g of 10M NaOH solution was added to the +mixture. The new mixture was then mechanically stirred for +30 min. Finally, the gel was injected in between an Ecoflex- +30 sheath and the McKibben actuator. The resulting sheaths +were connected at the ends of the actuator using silicone +epoxy (Sil-poxy, Smooth-On Inc.) and Kevlar threads. +The geometric and material parameters of all 12 actuators +fabricated for experimental characterization, with and with- +out sheaths, are provided in Table I. The fabricated length +of each actuator was measured by a digital caliper. The Max +Contraction Ratio is defined as the ratio of the length of the +actuator at 20 psi to the length of the actuator at 0 psi (initial +length). +B. Experimental Characterization +Inspired by biological muscle testing [22], iso-velocity +tests were performed at different pressure levels for all sam- +ple actuators on a universal material testing system (5969, In- +stron, 1 kN load cell). Inflation pressure was measured with a +digital pressure sensor (ELVH-030G-HAND-C-PSA4, ALL +SENSORS, maximum pressure 30 psi, resolution 0.1 psi) +and recorded using a microcontroller (Teensy 3.6, PJRC). +Two pairs of 3D printed holders were designed to hold both +ends of the actuator and provide consistent friction between +the actuator and the testing system. The force and length +data from the universal material testing system and pressure +data from the microcontroller were collected independently +and synchronized later in MATLAB. +Iso-velocity tests (Fig. 
4 (a)) were performed at five velocity magnitudes (2, 4, 6, 8, 10, all in mm/s) at 4 pressure levels (5, 10, 15, 20, all in psi). All five velocities were tested in a single session at a given pressure level. For a given pressure: the actuator was first held at its rest length in the testing system and pressurized to the desired level. After allowing the actuator force to reach steady state, the actuator was stretched between +4 and -4 mm at 0.01 mm/s for one cycle, returned to the unpressurized rest length, and again allowed to come to steady state. This was done to minimize preconditioning effects on the first ramp. For each velocity magnitude v: the actuator was stretched 2 mm at v mm/s and then held for 30 seconds. The actuator was then returned to the unpressurized rest length at 0.01 mm/s and held for 30 seconds. This same profile was then repeated at a velocity of −v mm/s. For 5 psi, only 1 mm of extension was applied, as shortening by more than 1 mm from the unpressurized rest length would have led to shortening below the pressurized rest length of the actuators. Five repetitions of this full protocol were conducted for each actuator at each pressure and velocity.
+C. Modeling
+To relate changes in the experimental force-velocity curves to design parameters of the actuator materials, a simplified, 1D model was developed where both the McKibben actuator and polymer sheaths were treated as standard linear solid elements (SLSE) (Fig. 2c). The resulting force-velocity expressions are parameterized by mechanical properties of the actuator constituents and can therefore be used as a design tool to inform future designs. In this model, elastic elements are assumed to have a force linearly proportional to strain (F = kε, where the normalized stiffness k has units [N]), and the viscous elements are assumed to have a force linearly proportional to the strain rate (F = ηε̇, where the damping coefficient η has units [N s]). For the case of a single SLSE, the system force is given by:
+F(t) = k_{1i}\,\varepsilon(t) + k_{2i}\,\varepsilon_{2i}(t)   (1)
+where k1i and k2i are the stiffness of the parallel and series elastic elements, and ε and ε2i are the strains of the parallel and series elastic elements of the ith SLSE. For the actuators presented here only two SLSEs are included: a control McKibben (c) and the sheath (s). For each SLSE,
+\dot{\varepsilon}_{2i} = \dot{\varepsilon} - \frac{k_{2i}}{\eta_i}\,\varepsilon_{2i}   (2)
+where ηi is the damping coefficient of the series damper. Starting from steady state (ε̇2i(t = 0−) = 0), a constant strain rate ramp (ε̇ = v̂, where v̂ has units [1/s]) yields a system force
+F_i(t) = k_{1i}(\varepsilon_0 + \hat{v}t) + \frac{\hat{v}\eta_i}{k_{2i}}\left(1 - e^{-\frac{k_{2i}}{\eta_i}t}\right)   (3)
+For a fixed final applied strain dε, the peak system force is a function of the velocity (t_peak = dε/v̂). Normalizing by the pre-extension, steady state force (Fi(t = 0−) = k1iε0), the force-velocity curve for the model can be written as:
+FV_i(\hat{v}) = 1 + \mathrm{sgn}(\hat{v})\,\frac{d\varepsilon}{\varepsilon_0} + \frac{\hat{v}\kappa_i}{\varepsilon_0\gamma_i}\left(1 - e^{-\mathrm{sgn}(\hat{v})\,\gamma_i\,d\varepsilon/\hat{v}}\right)   (4)
+where κi = k2i/k1i is the relative stiffness of the elastic elements and γi = k2i/ηi is the inverse of the time constant of the viscous arm (Fig. 2d). Here, sgn(x) is the sign function (1 : x > 0, −1 : x < 0). The height of the force-velocity curve above the v = 0 discontinuity then takes the form:
+\Delta FV_i(\hat{v}) = \frac{\hat{v}\kappa_i}{\varepsilon_0\gamma_i}\left(1 - e^{-\mathrm{sgn}(\hat{v})\,\gamma_i\,d\varepsilon/\hat{v}}\right)   (5)
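+As a quick numerical check of Eq. (5), the short Python sketch below evaluates the single-element force-velocity height for an illustrative parameter set; the values of κ, γ, ε0, and dε used here are readability placeholders, not fitted values from this work.
```python
# Minimal sketch: evaluate Eq. (5) for one SLSE.  All parameter values are
# illustrative placeholders, not fitted actuator properties.
import numpy as np

def delta_fv_single(v_hat, kappa, gamma, eps0, d_eps):
    """Height of the force-velocity curve above v = 0 for a single SLSE, Eq. (5)."""
    v_hat = np.asarray(v_hat, dtype=float)
    return (v_hat * kappa / (eps0 * gamma)) * (
        1.0 - np.exp(-np.sign(v_hat) * gamma * d_eps / v_hat))

# Strains are taken relative to the pressurized rest length, e.g. a 2 mm ramp
# on a roughly 70 mm long actuator.
v = np.array([-0.1, -0.01, 0.01, 0.1])   # extension strain rates [1/s]
print(delta_fv_single(v, kappa=10.0, gamma=50.0, eps0=0.05, d_eps=2.0 / 70.0))
```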
+Using this equation, the parameters κi and γi can be related to the shape of the force-velocity curve. Specifically, the horizontal asymptote is given by:
+\Delta FV_i(\hat{v}_\infty) = \frac{d\varepsilon}{\varepsilon_0}\,\kappa_i   (6)
+and the velocity v̂α at which the force-velocity curve reaches α∆FVi(v̂∞) can be approximated as:
+\hat{v}_\alpha \approx \frac{d\varepsilon}{2(1-\alpha)}\,\gamma_i   (7)
+This approximation is valid within 5% for α > 0.75. Thus the height of the force-velocity curve is governed by κi and the steepness of the force-velocity curve is governed by γi (Fig. 3a,b).
+For the two-SLSE case (McKibben actuator and the sheath), similar relationships can be found. The shape of the force-velocity curve takes the form:
+\Delta FV_{c+s}(\hat{v}) = \frac{\beta_c}{\beta_c+\beta_s}\,\Delta FV_c(\hat{v}) + \frac{\beta_s}{\beta_c+\beta_s}\,\Delta FV_s(\hat{v})   (8)
+where ∆FVs and ∆FVc both take the form of ∆FVi from the 1 SLSE case. The height and steepness are governed by weighted averages of κi and γi:
+\Delta FV_{c+s}(\hat{v}_\infty) = \frac{d\varepsilon}{\varepsilon_0}\,\frac{\beta_c\kappa_c+\beta_s\kappa_s}{\beta_c+\beta_s}   (9)
+and
+\hat{v}_\alpha \approx \frac{d\varepsilon}{2(1-\alpha)}\,\frac{\beta_c\kappa_c\gamma_c+\beta_s\kappa_s\gamma_s}{\beta_c\kappa_c+\beta_s\kappa_s}   (10)
+TABLE I
+GEOMETRIC AND MATERIAL PARAMETERS FOR THE VISCOELASTIC MCKIBBEN ACTUATORS
+Sample    | Mesh Diameter (mm) | IL* ± 1 STD (mm) | ML* ± 1 STD (mm) | Max Contraction Ratio (%) | Sheath Material     | Sheath Diameter (mm)
+Control1  | 6  | 88.5±0.9 | 68.3±0.6 | 22.8 | N/A                 | N/A
+Control2  | 10 | 94.3±0.5 | 70.3±0.3 | 25.4 | N/A                 | N/A
+Control3  | 12 | 91.5±0.4 | 67.2±0.1 | 26.6 | N/A                 | N/A
+Ecoflex1  | 6  | 91.7±0.2 | 75.2±0.3 | 17.9 | Ecoflex-30          | 9
+Ecoflex2  | 10 | 89.7±0.4 | 70.2±0.5 | 21.8 | Ecoflex-30          | 9
+Ecoflex3  | 12 | 90.9±0.6 | 69.6±0.5 | 23.4 | Ecoflex-30          | 12
+Urethane1 | 6  | 91.2±1.0 | 70.6±0.5 | 21.7 | Poly-urethane       | 9
+Urethane2 | 10 | 90.8±0.7 | 70.2±0.6 | 22.6 | Poly-urethane       | 12
+Urethane3 | 12 | 89.7±0.4 | 68.3±0.8 | 23.9 | Poly-urethane       | 12
+Carbopol1 | 6  | 92.5±0.3 | 72.7±0.3 | 21.4 | Carbopol+Ecoflex-30 | 12
+Carbopol2 | 10 | 93.8±0.4 | 70.2±0.5 | 21.9 | Carbopol+Ecoflex-30 | 12
+Carbopol3 | 12 | 86.4±0.3 | 64.8±0.4 | 24.9 | Carbopol+Ecoflex-30 | 12
+IL*: Initial length. The length of sample actuator measured at 0 psi.
+ML*: Minimum length. The length of sample actuators measured at 20 psi.
+STD: Standard Deviation.
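+To make the two-element expressions above easier to reuse, the following minimal sketch evaluates the combined curve of Eq. (8) and its closed-form summaries, Eqs. (9) and (10). The β, κ, and γ values are illustrative assumptions, not identified properties of the McKibben cores or sheaths characterized below.
```python
# Sketch of the two-SLSE combination, Eqs. (8)-(10).  Parameter values are
# illustrative placeholders, not identified properties of the VMA materials.
import numpy as np

def delta_fv(v_hat, kappa, gamma, eps0, d_eps):
    """Single-element height above v = 0, Eq. (5)."""
    v_hat = np.asarray(v_hat, dtype=float)
    return (v_hat * kappa / (eps0 * gamma)) * (
        1.0 - np.exp(-np.sign(v_hat) * gamma * d_eps / v_hat))

def combined(v_hat, core, sheath, eps0, d_eps):
    """Stiffness-weighted combination of core (c) and sheath (s), Eq. (8).
    Each element is a dict with keys 'beta', 'kappa', 'gamma'."""
    w_c = core["beta"] / (core["beta"] + sheath["beta"])
    w_s = sheath["beta"] / (core["beta"] + sheath["beta"])
    return (w_c * delta_fv(v_hat, core["kappa"], core["gamma"], eps0, d_eps)
            + w_s * delta_fv(v_hat, sheath["kappa"], sheath["gamma"], eps0, d_eps))

eps0, d_eps = 0.05, 2.0 / 70.0
core = {"beta": 1.0, "kappa": 10.0, "gamma": 50.0}    # beta_c = 1 by definition
sheath = {"beta": 0.5, "kappa": 30.0, "gamma": 5.0}   # example high-kappa, low-gamma sheath
v = np.array([0.05, 0.1, 0.2, 0.4])                   # extension strain rates [1/s]
print(combined(v, core, sheath, eps0, d_eps))

# Closed-form summaries of the combined curve:
plateau = (d_eps / eps0
           * (core["beta"] * core["kappa"] + sheath["beta"] * sheath["kappa"])
           / (core["beta"] + sheath["beta"]))                       # Eq. (9)
alpha = 0.9
num = (core["beta"] * core["kappa"] * core["gamma"]
       + sheath["beta"] * sheath["kappa"] * sheath["gamma"])
den = core["beta"] * core["kappa"] + sheath["beta"] * sheath["kappa"]
v_alpha = d_eps / (2.0 * (1.0 - alpha)) * num / den                 # Eq. (10)
print(plateau, v_alpha)
```
+In this illustrative case, raising the sheath's κ relative to the core raises the plateau of the combined curve, while its lower γ slows how quickly that plateau is reached, mirroring the weighted-average structure of Eqs. (9) and (10).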
+where βi = k1i/k1c is the stiffness ratio of parallel elastic +element to the McKibben parallel stiffness (βc = 1) in the +different elements. With different combinations of βs, κs, +and γs, the force-velocity response of the McKibben actuator +can be tuned (Fig. 3 (c),(d)). These expressions can also be +extended to any number of parallel SLSEs following this +weighted average scheme. +D. Analysis +Experimental force-velocity curves were compiled for +each actuator and each pressure level using data mea- +sured during characterization experiments. Individual ve- +locity ramps were identified and extracted from the larger +experiment (Fig. 4a). The average velocity was found by +fitting a piece-wise-linear ramp function to the extension +Fig. 4. +Characterization and Modeling of the Force-Velocity Curve. (a) +An example iso-velocity experiment (6mm control McKibben actuator at 10 +PSI) with the individual velocity ramps overlayed. Data from these force +responses are used to construct an experimental force-velocity curve. (b) +To avoid confounding effects from various amounts of overshoot, the peak +force that is normalized by the initial force and the velocity is normalized +by the pressurized rest length for the force-velocity curve is taken at the +point when the ramp first reaches its target point. This occurs just prior to +the extension overshoot. +data, the slope of which corresponds to the average velocity +(Fig. 4b). The average velocity is then normalized by the +pressurized rest length of the actuator to obtained the strain +rate. The starting force (F0) was calculated as the mean +force during the two seconds prior to the start of the ramp. +The peak force was taken as the force value when the +extension first reached its target point. This was to avoid +artifacts introduced by extension overshoot, which occurred +at higher velocities. The peak force was then normalized by +the starting force. +These experimental force-velocity curves were then used +to obtain model parameters for the McKibben actuators +and viscoelastic sheaths as functions of pressure. For all +experiments, the values of dε, ε0, ε, and ˆv in the model +were calculated relative to the pressurized rest length of +the actuator. First, for each control McKibben actuator at +each pressure, κc and γc were fitted using a nonlinear least +squares method (code generated by using MATLAB Curve +Fitting Toolbox). Parameter initialization was chosen based +on the Equations 6 and 7 for the horizontal asymptote and ˆvα. +Specifically, the normalized force from the ±10 mm/s tests +were used as FVc(ˆv∞) to approximate κc, and the data from +the ±4 mm/s was used as the α point to estimate γc. To fit +κs, γs, and βs for each material, diameter, and pressure, the +corresponding McKibben parameters were set as κc and γc +and not optimized. The parameter βs was initialized to 1, and +κs and γs were initialized following the same procedure as +in the plain McKibben case. The same optimization process +was then carried out for these three parameters. +III. RESULTS AND DISCUSSION +A. Characterization +The force-velocity response of the twelve actuators was +measured as a function of the pressure, sheath material, and +mesh diameter (Fig. 5). In these figures, the normalized +shortening velocity (negative of the extension strain rate) + +0.5 +0.8 +0 +10 +104 +101 +102 +100 +100 +10-2 +1 Element +-%=1000 +Peak +-=100 + -=10 +--=1 +-%=0.1 +0.8 +0 +ShorteningVelocity[1/slShorteningVelocityF1/slShorteningVelocity1/s +90.0. 
+1 Element - - - k,=25 +-h,=37.5 +-k=12.5 +-K_=50 +ShorteningVelocity[1/s] +ShorteningVelocity[1/s]ShorteningVelocity[1/s +High K, High Y +High K, Low Y +Low K, High Y +LoW K, LoW YTensile Force [N] +ww +xtensionFig. 5. +Experimental Characterization of Viscoelastic McKibben Actuators. Each column shows the data for a different actuator, and each row shows a +different actuator diameter. Along the dashed line, the experimental data is reported as mean ± 1 standard deviation. The solid line shows the corresponding +model fit for that actuator and that pressure. For the control actuators, a 1-SLSE model is used, and for each of the VMA, a 2-SLSE model is used, with +the control element parameters set by the corresponding control actuator model. Inset: The 5PSI curve for the 12mm Urethane actuator is inset to allow a +smaller axis range for the rest of the 12mm actuators. +is reported in alignment with standard muscle physiology +experiments. The shortening velocity is normalized by the +pressurized rest length of the actuator (units of shortening +velocity here are [1/s]). Common force-velocity features +were found across all actuators. Unlike what is predicted in +the model, all actuators showed an asymmetric force-velocity +response, with a larger magnitude asymptote for extensions +(negative shortening velocities) than for shortening. This +reflects the nonlinear stiffness properties of the McKibben +actuators that have been previously reported, with stiffness +increasing with increased length [3]. Additionally, an in- +crease in pressure led to a decrease in the height of force- +velocity curve at all velocities and diameters, suggesting a +more elastically dominant behavior at high pressures (Fig. +3b,e). However, the difference in height for a given pressure +increase diminished with increasing pressure. This is most +pronounced in the 10 and 12 mm diameter actuators at the 5 +psi level. This could be related to changes in the contact +state of the inner bladder. In these larger actuators, the +bladder is not in full contact with mesh at lower pressures, +but at higher pressures has made full contact. This low- +pressure discrepancy being related to the contact state is also +supported by the observation this discrepancy is not seen in +the 6 mm actuators, where the mesh is in full contact with +the actuator even at low pressures. However, the mechanism +that causes this contact state to result in a more viscous- +dominated response would require additional investigation. +The addition of the viscoelastic polymer sheaths was +successful in altering the force-velocity response of the +McKibben actuators. In the case of the 6 and 12 mm diameter +urethane actuators, a more viscous-dominating response was +achieved, with the height of the force-velocity curve being +higher than the control actuator response at all pressures and +velocities. The 10 mm urethane actuator showed a different +response, with the height in much closer agreement with +the control. This could be due to the 10 mm urethane +actuator requiring a larger diameter sheath than the other 10 +mm actuators. The larger diameter sheath was used because +the smaller sheath diameter consistently ruptured at higher +pressures. However, this meant that the sheath was less in +contact with the underlying McKibben than in the 6 and +12 mm cases and was thus less engaged. 
+Conversely, the Ecoflex sheath led to a decrease in the height of the
+force-velocity curve in extension for all diameters and pressures,
+showing a more elastic-dominant response. In shortening, the Ecoflex
+actuators showed closer agreement with the control actuators.
+Finally, the effect of the Carbopol actuators varied. For the 6 mm
+actuator, almost no change was seen relative to the control actuator.
+In the 10 mm actuator, the response was much closer to that of the
+Ecoflex, showing a more elastic-dominant response. In the 12 mm case,
+the effect varied with pressure and direction of motion, with an
+increased height seen at 10 and 15 psi in extension, but no
+difference seen at 20 psi or in shortening at 10, 15, or 20 psi.
+This characterization is limited in a number of ways. The
+force-velocity curve, while relevant to the actuator in terms of its
+role as a model of skeletal muscle, is only one metric by which to
+determine these actuators' dynamic properties or the ability of these
+material sheaths to tune them. A more complete characterization will
+require cyclic testing at various speeds to determine hysteresis as a
+function of velocity. Additionally, the minimum extension rate of 2
+mm/s was near the horizontal asymptote for many actuators, resulting
+in poor characterization of the high-slope region of the
+force-velocity curve near ˆv = 0. A more complete investigation of
+the force-velocity curve will require lower velocities to be
+incorporated. These higher test rates also resulted in extension and
+shortening overshoot in the tests, which made the calculation of the
+peak force and the subsequent force decay more challenging. These
+overshoots would be minimized with lower-velocity tests.
+B. Modeling
+The presented model was able to successfully capture the major trends
+in the experimental force-velocity curves (R² = 0.94 ± 0.05 across
+all actuators and pressures), and the changes in the VMA curves
+relative to the control curves can be explained through the model
+parameters. For example, in all actuators, an increase in pressure
+leads to a decrease in the height of the force-velocity curve. This
+is expected under this model, as increasing the pressure of the
+McKibben actuator increases its stiffness [3], and this increased
+stiffness results in a lower κc and thus a lower FV(ˆv∞). This model
+can also be used to explain changes in the force-velocity curves
+associated with the material sheaths (Fig. 6). Based on preliminary
+materials testing, the urethane sheath falls into the
+viscous-dominant, long-time-constant class, and Ecoflex falls into
+the elastic-dominant, short-time-constant class (relative to the
+McKibben actuator). Therefore, we would expect the urethane to cause
+an increase in the height of the force-velocity curve (Fig. 3e).
+However, with increased pressure, the relative stiffness of the
+McKibben to the urethane sheath increases (decreasing βs), so we
+would expect this difference to shrink with increased pressure as the
+weighted average begins to favor the McKibben actuator (Fig. 6a). For
+the Ecoflex sheath, the relatively shorter time constant would lead
+to a high slope of the force-velocity curve, which is seen at low
+pressures (Fig. 6b). However, as with the urethane sheath, increased
+pressure leads to the McKibben properties dominating once again.
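+To make the preceding argument concrete, the short Python sketch
+below evaluates the height of the force-velocity curve for a plain
+McKibben (Eq. 5) and for a McKibben plus sheath (Eq. 8) as βs is
+reduced; all parameter values are illustrative assumptions, not
+fitted results.
+```python
+# Sketch of the 2-SLSE weighted average (Eqs. 5 and 8): a viscous-dominant,
+# long-time-constant sheath raises the curve height, and shrinking beta_s
+# (the McKibben stiffening with pressure) hands the response back to the
+# McKibben. All parameter values below are illustrative assumptions.
+import numpy as np
+
+d_eps, eps0 = 2.0 / 70.0, 20.0 / 70.0  # strains relative to pressurized rest length
+
+def delta_fv(v_hat, kappa, gamma):
+    """Height of the force-velocity curve above the v = 0 discontinuity (Eq. 5)."""
+    s = np.sign(v_hat)
+    return v_hat * kappa / (eps0 * gamma) * (1.0 - np.exp(-s * gamma * d_eps / v_hat))
+
+def delta_fv_2slse(v_hat, kc, gc, ks, gs, beta_s, beta_c=1.0):
+    """Weighted average of the McKibben (c) and sheath (s) elements (Eq. 8)."""
+    w = beta_c + beta_s
+    return (beta_c * delta_fv(v_hat, kc, gc) + beta_s * delta_fv(v_hat, ks, gs)) / w
+
+kc, gc = 10.0, 50.0   # plain McKibben, values echoing Fig. 3c,d
+ks, gs = 30.0, 5.0    # assumed viscous-dominant, long-time-constant sheath
+
+print(f"McKibben alone, extension side: {delta_fv(0.14, kc, gc):+.2f}")
+for beta_s in (1.0, 0.5, 0.1):        # beta_s drops as pressure stiffens the McKibben
+    up = delta_fv_2slse(0.14, kc, gc, ks, gs, beta_s)
+    dn = delta_fv_2slse(-0.14, kc, gc, ks, gs, beta_s)
+    print(f"beta_s = {beta_s:.1f}: extension {up:+.2f}, shortening {dn:+.2f}")
+```
+As βs shrinks, the curve height returns toward the plain-McKibben
+value, mirroring the pressure trend described above.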
+While this model can capture many of the trends in the data, there
+are some limitations in its accuracy and predictive power. Both the
+McKibben actuators and the sheath materials are nonlinearly elastic,
+with their stiffness increasing with increased strain. This results
+in an asymmetric force-velocity curve, with a larger response in
+extension (negative shortening velocity) than in shortening at the
+same rate. This cannot be captured by the linear springs in the
+proposed model. As a consequence, the model fits tend to
+under-predict extension responses and to over-predict shortening
+responses.
+Fig. 6. Comparison of 1-SLSE and 2-SLSE Models. For (a) and (b), the
+black dashed line shows the model fit of the corresponding plain
+McKibben actuator (1-SLSE model), and the solid colored line shows
+the adjusted 2-SLSE model. As with Fig. 5, the experimental data are
+shown as mean ± 1 STD. (a) 6 mm Urethane VMA. For all pressures, the
+viscous nature of the urethane led to an increased height of the
+force-velocity curve, captured by the 2-SLSE model having a higher
+asymptote. As pressure increases and the McKibben stiffens, this
+asymptote difference decreases as the McKibben begins to dominate
+(δ∆FV(ˆv∞) = FV2SLSE(ˆv∞) − FV1SLSE(ˆv∞)). (b) 10 mm Ecoflex VMA. At
+low pressure, the low viscous effects (large γE) of the Ecoflex
+sheath are able to change the slope of the force-velocity curve, but
+at higher pressures the relative stiffness of the McKibben actuator
+again dominates, bringing the VMA response into alignment with that
+of the standard McKibben actuator.
+Furthermore, the asymmetry also results in high parameter
+uncertainty. Improvements can be made through the inclusion of
+nonlinear spring elements and more appropriate models of the McKibben
+actuator, at the cost of decreased interpretability of the model
+parameters. Additionally, the optimized sheath parameters tend to
+vary with pressure, whereas they would be expected to be
+pressure-independent for linear materials. However, a pressure
+dependence would be expected for nonlinear materials, as the McKibben
+actuator's pressure will determine the deformation state of the
+sheath material. In the future, this pressure dependence could be
+incorporated into the model as well, but it would require 3D
+geometric information about the actuator. Both of these issues could
+be addressed by incorporating a more complete quasi-static McKibben
+model [3], [9], [17] to capture the strain stiffening of the McKibben
+actuators and provide the geometry needed to estimate the pressure
+dependence of the sheath stiffness.
+Finally, this model only includes damping from standard dash-pot
+elements. However, previous work has shown that the velocity
+dependence of McKibben actuators can actually come from nonlinear
+friction interactions in the mesh material [14] and Coulomb friction
+between the bladder material and the sheath [4], [21]. Future model
+development should incorporate such friction into a more complete
+model of the McKibben to replace one of the SLSEs in this model.
+IV. CONCLUSION AND FUTURE WORKS
+This work presents the characterization and modeling of the
+force-velocity relationships of viscoelastic McKibben actuators
+(VMA). These VMAs consist of a standard McKibben actuator surrounded
+by a viscoelastic polymer sheath.
+Iso-velocity experiments were performed to measure the force-velocity
+response of these actuators, and a simplified 1D model was developed
+to relate the shape of these experimental force-velocity curves to
+the material properties of the actuators. Using these polymer
+sheaths, we were able to successfully augment the force-velocity
+response of a standard McKibben, changing either its asymptotic
+height or its slope. The 1D model performed well in capturing the
+trends in these force-velocity curves, but missed key features,
+including the asymmetry between extension and shortening and the
+pressure dependence of the sheath properties.
+Future work on these actuators will include iso-velocity tests at
+slower speeds to further investigate the steep portion of the
+force-velocity curve near the ˆv = 0 discontinuity. Additionally, to
+increase the predictive power of the model, more accurate quasistatic
+models of the McKibben's length-pressure-force properties will be
+implemented to replace the linear spring element. This will also
+require the measurement of the actuator geometry during quasi-static
+testing. Geometric information from these models will be used to
+capture the deformation-dependent properties of the sheath materials
+as well. With better predictive power, these models can be used as a
+design tool for creating actuators with a desired force-velocity
+response. Future characterization will also include cyclic testing of
+the actuators at various speeds to more robustly investigate their
+dynamic properties. The work presented here lays the foundation for
+the fabrication and design of pneumatic actuators with tunable
+force-velocity dynamics for broad applications in bioinspired and
+biomimetic robotics.
+ACKNOWLEDGEMENTS
+This work was supported in part by the National Science Foundation
+(NSF) through grant no. FRR-2138873, and in part by NSF DBI-2015317
+as part of the NSF/CIHR/DFG/FRQ/UKRI-MRC Next Generation Networks for
+Neuroscience Program. Any opinions, findings, and conclusions
+expressed in this material are those of the authors and do not
+necessarily reflect the views of the NSF.
+REFERENCES
+[1] F. Daerden, D. Lefeber et al., “Pneumatic artificial muscles:
+actuators for robotics and automation,” European Journal of
+Mechanical and Environmental Engineering, vol. 47, no. 1, pp. 11–21,
+2002.
+[2] E. Hawkes, C. Majidi, and M. Tolley, “Hard questions for soft
+robotics,” Science Robotics, vol. 6, no. 53, 2021.
+[3] B. Tondu, “Modelling of the McKibben artificial muscle: A review,”
+Journal of Intelligent Material Systems and Structures, vol. 23,
+no. 3, pp. 225–253, 2012.
+[4] C.-P. Chou and B. Hannaford, “Measurement and modeling of McKibben
+pneumatic artificial muscles,” IEEE Transactions on Robotics and
+Automation, pp. 90–102, 1996.
+[5] N. Delson, T. Hanak, K. Loewke, and D. N. Miller, “Modeling and
+implementation of McKibben actuators for a hopping robot,” 2005
+International Conference on Advanced Robotics, ICAR '05, Proceedings,
+vol. 2005, pp. 833–840, 2005.
+[6] S. Kurumaya, K. Suzumori, H. Nabae, and S. Wakimoto,
+“Musculoskeletal lower-limb robot driven by multifilament muscles,”
+ROBOMECH Journal, vol. 3, no. 1, pp. 1–15, 2016.
+[7] K. Dai, R. Sukhnandan, M. Bennington, K. Whirley, R. Bao, L. Li,
+J. P. Gill, H. J. Chiel, and V. A. Webster-Wood, “Slugbot, an
+Aplysia-inspired robotic grasper for studying control,” Living
+Machines, 2022.
+[8] A. A. Faudzi, N. I. Azmi, M. Sayahkarajy, W. L. Xuan, and
+K. Suzumori,
+“Soft manipulator using thin McKibben actuator,” IEEE/ASME
+International Conference on Advanced Intelligent Mechatronics, AIM,
+vol. 2018-July, pp. 334–339, 2018.
+[9] F. Connolly, C. Walsh, and K. Bertoldi, “Automatic design of
+fiber-reinforced soft actuators for trajectory matching,” Proceedings
+of the National Academy of Sciences, vol. 114, no. 1, pp. 51–56,
+2016.
+[10] M. Tschiersky, E. E. Hekman, D. M. Brouwer, J. L. Herder, and
+K. Suzumori, “A Compact McKibben Muscle Based Bending Actuator for
+Close-to-Body Application in Assistive Wearable Robots,” IEEE
+Robotics and Automation Letters, vol. 5, no. 2, pp. 3042–3049, 2020.
+[11] S. Koizumi, T. H. Chang, H. Nabae, G. Endo, K. Suzumori, M. Mita,
+K. Saitoh, K. Hatakeyama, S. Chida, and Y. Shimada, “Soft Robotic
+Gloves with Thin McKibben Muscles for Hand Assist and
+Rehabilitation,” Proceedings of the 2020 IEEE/SICE International
+Symposium on System Integration, SII 2020, pp. 93–98, 2020.
+[12] L. Rosalia, C. Ozturk, J. Coll-Font, Y. Fan, Y. Nagata, M. Singh,
+D. Goswami, A. Mauskapf, S. Chen, R. A. Eder, E. M. Goffer,
+J. H. Kim, S. Yurista, B. P. Bonner, A. N. Foster, R. A. Levine,
+E. R. Edelman, M. Panagia, J. L. Guerrero, E. T. Roche, and
+C. T. Nguyen, “A soft robotic sleeve mimicking the haemodynamics and
+biomechanics of left ventricular pressure overload and aortic
+stenosis,” Nature Biomedical Engineering, vol. 6, pp. 1134–1147,
+2022.
+[13] Y.-L. Park, B.-r. Chen, C. Majidi, R. J. Wood, R. Nagpal, and
+E. Goldfield, “Active modular elastomer sleeve for soft wearable
+assistance robots,” in 2012 IEEE/RSJ International Conference on
+Intelligent Robots and Systems, 2012, pp. 1595–1602.
+[14] B. Tondu and S. D. Zagal, “McKibben artificial muscle can be
+adapted to be in accordance with the Hill skeletal muscle model,”
+Proceedings of the First IEEE/RAS-EMBS International Conference on
+Biomedical Robotics and Biomechatronics, BioRob 2006, vol. 2006,
+no. 3, pp. 714–720, 2006.
+[15] G. K. Klute, J. M. Czerniecki, and B. Hannaford, “McKibben
+artificial muscles: Pneumatic actuators with biomechanical
+intelligence,” IEEE/ASME International Conference on Advanced
+Intelligent Mechatronics, AIM, pp. 221–226, 1999.
+[16] S. Gollob, J. Poss, G. Memoli, and E. Roche, “A multi-material,
+anthropomorphic metacarpophalangeal joint with abduction and
+adduction actuated by soft artificial muscles,” IEEE Robotics and
+Automation Letters, vol. 7, no. 3, pp. 5882–5887, 2022.
+[17] A. Al-Ibadi, S. Nefti-Meziani, and S. Davis, “Efficient
+structure-based models for the McKibben contraction pneumatic muscle
+actuator: The full description of the behaviour of the contraction
+PMA,” Actuators, vol. 6, no. 4, 2017.
+[18] G. Olsen, H. Manjarrez, J. Adams, and Y. Mengüç, “Experimentally
+identified models of McKibben soft actuators as primary movers and
+passive structures,” Journal of Mechanisms and Robotics, vol. 14,
+no. JMR-20-1425, pp. 011006-1–011006-15, 2022.
+[19] C. S. Kothera, M. Jangid, J. Sirohi, and N. M. Wereley,
+“Experimental characterization and static modeling of McKibben
+actuators,” Journal of Mechanical Design, Transactions of the ASME,
+vol. 131, no. 9, pp. 0910101–09101010, 2009.
+[20] T. Hassan, M. Cianchetti, M. Moatamedi, B. Mazzolai, C. Laschi,
+and P. Dario, “Finite-element modeling and design of a pneumatic
+braided muscle actuator with multifunctional capabilities,”
+IEEE/ASME Transactions on Mechatronics, vol. 24, no. 1, pp. 109–119,
+2019.
+[21] C. P. Chou and B. Hannaford, “Static and dynamic characteristics
+of McKibben pneumatic artificial muscles,” Proceedings - IEEE
+International Conference on Robotics and Automation, no. pt 1,
+pp. 281–286, 1994.
+[22] S. N. Yu, P. E. Crago, and H. J. Chiel, “Biomechanical properties
+and a kinetic simulation model of the smooth muscle I2 in the buccal
+mass of Aplysia,” Biological Cybernetics, vol. 81, no. 5-6,
+pp. 505–513, 1999.
+[23] T. J. Hinton, A. Hudson, K. Pusch, A. Lee, and A. W. Fienberg,
+“3D printing PDMS elastomer in a hydrophilic support bath via
+freeform reversible embedding,” ACS Biomaterials Science &
+Engineering, 2016.
+
diff --git a/INE3T4oBgHgl3EQfugsS/content/tmp_files/load_file.txt b/INE3T4oBgHgl3EQfugsS/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..9643416a3fe25460df36225a1ec56959b291cddc
--- /dev/null
+++ b/INE3T4oBgHgl3EQfugsS/content/tmp_files/load_file.txt
@@ -0,0 +1,574 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf,len=573
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Using these viscoelastic materials, we were able to modulate the shape and magnitude of the actuators’ force-velocity curves, and using the developed model, these changes were connected back to the material properties of the sheaths.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' INTRODUCTION Originally introduced in the 1930s-1940s [1], and popu- larized by Joseph McKibben in the 1950s [2]–[4], pneumatic artificial muscles are a commonly studied soft robotic actu- ator and have been used in traditional rigid robotics [1], [5], [6], soft robotic platforms [7], [8], and wearable and assistive devices [9]–[13].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Consisting of an inner rubber bladder and an outer constraining mesh, the McKibben actuator is able to achieve high actuator strains and large force relative to its light weight [14].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' McKibben actuators are of particular interest in bioinspired robotics and prosthetics because of their functional similarity to biological muscle in terms of contracting in response to activation and introducing com- pliance into the system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' As a consequence, they can serve as first-order hardware models of skeletal muscle [4], [14]– [16].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Current experimental characterizations and models of these actuators tend to focus on their quasistatic properties, relating their inflation pressure, length, and axial force [3], [17]–[20], but less attention has been given to their dynamics properties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' These properties are important both to the design and modeling of the dynamics of a robotic system composed : The authors contributed equally.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' +: Corresponding author vwebster@andrew.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='cmu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Departments of [1] Mechanical Engineering, [2] Materials Science and Engineering, [3] Biomedical Engineering.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' [4] McGowan Institute for Regenerative Medicine.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' All departments and institutes are part of Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA, 15213, USA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Viscoelastic McKibben Actuator (VMA): (a) Plain McKibben Actuator (control), (b) Ecoflex-30 sheath, (c) Urethane sheath, (d) Ecoflex- 30 and Carbopol composite sheath (10mm diameter shown for all).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Each VMA contains a plain McKibben actuator at its core, fabricated in the same method as the control.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' by these actuators and to the use of McKibben muscles as biomimetic actuators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' While few studies have been reported on the dynamic properties of McKibben actuators, those that have done so have often focused on the force-velocity relationship.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' For example, Tondu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' performed isotonic quick-release experiments on McKibben actuators and showed that, for a particular combination of rubber bladder and mesh ma- terials, the force-velocity relationship can resemble that of the Hill muscle model [14].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Other works have shown that the velocity-dependence of the McKibben actuator’s force is minimal compared to that of biological muscle [15], [21].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The authors instead augmented the muscle with parallel hydraulic damping elements to better mimic the biological tissue [15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' However, these solutions either rely on very particular woven mesh materials or large auxiliary equipment to tune the shape of the actuator’s force-velocity response.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' In this work, we begin to investigate the force-velocity relationship of McKibben actuators and the ability to tune these relationships using simple viscoelastic material sheaths.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Four different actuator architectures are investigated using actuators of three different diameters.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The force-velocity response of these viscoelastic McKibben actuators (VMA) is measured using iso-velocity tests adapted from the muscle physiology literature [22].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' To connect the measured force- velocity response to the material properties of the sheath and the mechanics of the underlying McKibben actuator, a simplified 1D model, consisting of parallel chains of standard This work has been submitted to the IEEE for possible publication.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Copyright may be transferred without notice, after which this version may no longer be accessible.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='04684v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='RO] 11 Jan 2023 (a) (b) (c) (d) 20mmFig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Fabrication, Characterization, and Modeling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' (a) Each viscoelastic muscle actuator consists of a standard McKibben actuator (fabricated following [7]) and a viscoelastic polymer sheath.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' (b) To characterize the dynamic properties of the actuators, iso-velocity experiments were performed on an Instron 5969 at various velocities and inflation pressures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' (c) The dynamics of the actuators were modeled using parallel chains of Standard Linear Solid elements (SLSE), with one arm capturing the dynamics of the McKibben actuator and the other the dynamics of the sheath material.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Using this model, an analytical expression for the force-velocity curves can be obtained (d), and the shape of the curve can be related to the material properties of the constituents.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The height of this curve above the v = 0 point, ∆FV (v), can be related to two material properties of the actuator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Here, shortening velocity (negative of the extension rate) is reported in alignment with standard muscle physiology experiments.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' linear solid elements (SLSEs), is formulated.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' II.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' MATERIALS AND METHODS A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Actuator Design and Fabrication Each viscoelastic muscle actuator consists of a traditional McKibben actuator, serving as the contractile element, and a viscoelastic sheath around the McKibben, serving as a passive damper (Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' 2a).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Four 90 mm long McKibben actuators each of three different diameters (6 mm, 10 mm, 12 mm nominal mesh diameter) were fabricated.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The design of the actuator was adapted from [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Briefly, a latex balloon inner bladder is connected to two barbed tube ends and is constrained by commercially available overexpanded cable meshes (PET Expandable Sleeving, Alex Tech).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Kevlar fibers and cyanoacrylate glue were used to seal and connect the bladder and mesh to the end caps of the actuator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Thin hollow sheaths of different viscoelastic and thixotropic materials were attached to the outside of the McKibben to act as the damping element of the actuator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' To create the outer viscoelastic sheaths, 2 single-layered, concentric-cylindrical molds were 3D printed (Object 30, Stratasys), with inner diameters of 9 mm and 12 mm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' For both diameters, the resulting sheath has a thickness of 2 mm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Polyurethane (Vytaflex, Smooth-On Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='), Ecoflex-30 (Ecoflex 00-30, Smooth-On Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=') and 5% Carbopol (Car- bomer 940, Sanare) gel were used to fabricate the McKibben sheaths.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' For both the Ecoflex-30 and polyurethane sheaths, the liquid elastomer was prepared by mixing the 2-part polymer in a 1:1 ratio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The mixed polymer was placed in a vacuum chamber for 5 minutes to remove air bubbles.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The 3D-printed molds were prepared by spraying a thin layer of mold release (Ease Release 200, Mann Release Technologies) on the inner surfaces of the mold.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The elas- tomer was then injected into the mold and cured at room temperature (25◦C) for 12 hours.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The Carbopol gel used in this project was adapted from [23].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' First, 10g of Carbopol 940 powder (Carbomer) was mixed with 190g of deionized water.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The mixture was then mechanically stirred for 4 hours.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' After stirring, 4g of 10M NaOH solution was added to the mixture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The new mixture was then mechanically stirred for 30 min.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Finally, the gel was injected in between an Ecoflex- 30 sheath and the McKibben actuator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The resulting sheaths were connected at the ends of the actuator using silicone epoxy (Sil-poxy, Smooth-On Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=') and Kevlar threads.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The geometric and material parameters of all 12 actuators fabricated for experimental characterization, with and with- out sheaths, are provided in Table I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The fabricated length of each actuator was measured by a digital caliper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The Max Contraction Ratio is defined as the ratio of the length of the actuator at 20 psi to the length of the actuator at 0 psi (initial length).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Experimental Characterization Inspired by biological muscle testing [22], iso-velocity tests were performed at different pressure levels for all sam- ple actuators on a universal material testing system (5969, In- stron, 1 kN load cell).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Inflation pressure was measured with a digital pressure sensor (ELVH-030G-HAND-C-PSA4, ALL SENSORS, maximum pressure 30 psi, resolution 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='1 psi) and recorded using a microcontroller (Teensy 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='6, PJRC).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Two pairs of 3D printed holders were designed to hold both ends of the actuator and provide consistent friction between the actuator and the testing system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The force and length data from the universal material testing system and pressure data from the microcontroller were collected independently and synchronized later in MATLAB.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Iso-velocity tests (Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' 4 (a)) were performed at five velocity magnitudes (2, 4, 6, 8, 10, all in mm/s) at 4 pressure levels (5, 10, 15, 20, all in psi).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' All five velocities were tested in a single session at a given pressure level.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' For a given pressure: the actuator was first held at its rest length in the testing system and pressurized to the desired level.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' After allowing the actuator force to reach steady state, the actuator was stretched between +4 and -4 mm at 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='01 mm/s for one cycle, returned to the unpressurized rest length, and again allowed to come to steady state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' This was done to minimize preconditioning effects on the first ramp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' For each velocity magnitude v: the actuator was stretched 2 mm at v mm/s and then held for 30 seconds.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The actuator was then returned to the unpressurized rest length at 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='01 mm/s and held for 30 seconds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' This same profile was then repeated at pressure FV Curve MINGIRON -FV(v=0) regulator - -FV(v=±8) Teorew taest 3 To VMA Sample Pressure ns VMA 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='9 Sensor ckib P Teensy 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='8 μ-controller 30 mm ShorteningVelocitya velocity of −v mm/s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' For 5 psi, only 1 mm of extension was applied, as shortening by more than 1mm from the unpressurized rest length would have led to shortening below the pressurized rest length of the actuators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Five repetitions of this full protocol were conducted for each actuator at each pressure and velocity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Modeling To relate changes in the experimental force-velocity curves to design parameters of the actuator materials, a simplified, 1D model was developed where both the McKibben actuator and polymer sheaths were treated as standard linear solid elements (SLSE) (Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' 2c).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The resulting force-velocity ex- pressions are parameterized by mechanical properties of the actuator constituents and can therefore be used as a design tool to inform future designs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' In this model, elastic elements are assumed to have a force linearly proportional to strain (F = kε where the normalized stiffness k has units [N]), and the viscous elements are assumed to have a force linearly proportional to the strain rate (F = η ˙ε where the damping coefficient η has units [Ns]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' For the case of a single SLSE, the system force is given by: F(t) = k1iε(t) + k2iε2i(t) (1) where k1i and k2i are the stiffness of the parallel and series elastic elements, and ε and ε2i are the strains of the parallel and series elastic elements of the ith SLSE.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' For the actuators presented here only two SLSEs are included: a control McKibben (c) and the sheath (s).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' For each SLSE, ˙ε2i = ˙ε − k2i ηi ε2i (2) where ηi is the damping coefficient of the series damper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Starting from steady state ( ˙ε2i(t = 0−) = 0), a constant strain rate ramp ( ˙ε = ˆv where ˆv has units [1/s]) yields a system force Fi(t) = k1i(ε0 + ˆvt) + ˆvηi k2i (1 − e− k2i ηi t) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' (3) For a fixed final applied strain dε, the peak system force is a function of the velocity (tpeak = dε/ˆv).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Normalizing by the pre-extension, steady state force (Fi(t = 0−) = k1iε0), the force-velocity curve for the model can be written as: FVi(ˆv) = 1 + sgn(ˆv)dε ε0 + ˆvκi ε0γi (1 − e−sgn(ˆv) γidε ˆv ) (4) where κi = k2i/k1i is the relative stiffness of the elastic elements and γi = k2i/ηi is the inverse of the time constant of the viscous arm (Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' 2d).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Here, sgn(x) is the sign function (1 : x > 0, −1 : x < 0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The height of the force-velocity curve above the v = 0 discontinuity then takes the form: ∆FVi(ˆv) = ˆvκi ε0γi (1 − e−sgn(ˆv) γidε ˆv ) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' (5) Using this equation, the parameters κi and γi can be related to the shape of force-velocity curve.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Specifically, the hori- zontal asymptote is given by: ∆FVi(ˆv∞) = dε ε0 κi (6) and the velocity ˆvα at which the force-velocity curve reaches α∆FVi(ˆv∞) can be approximated as: ˆvα ≈ dε 2(1 − α)γi .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' (7) This approximation is valid within 5% for α > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Thus the height of the force-velocity curve is governed by κi and the steepness of the force-velocity curve is governed by γi (Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='3a,b).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' For the two-SLSE case (McKibben actuator and the sheath), similar relationships can be found.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The shape of the force-velocity curve takes the form: ∆FVc+s(ˆv) = βc βc + βs ∆FVc(ˆv) + βs βc + βs ∆FVs(ˆv) (8) where ∆FVs and ∆FVc both take the form of ∆FVi from the 1 SLSE case.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The height and steepness are governed by weighted averages of κi and γi: ∆FVc+s(ˆv∞) = dε ε0 βcκc + βsκs βc + βs (9) and ˆvα ≈ dε 2(1 − α) βcκcγc + βsκsγs βcκc + βsκs (10) TABLE I GEOMETRIC AND MATERIAL PARAMETERS FOR THE VISCOELASTIC MCKIBBEN ACTUATORS Sample Mesh Diameter (mm) IL* ± 1 STD (mm) ML* ± 1 STD (mm) Max Contraction Ratio (%) Sheath Material Sheath Diameter (mm) Control1 6 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='5±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='9 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='3±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='6 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='8 N/A N/A Control2 10 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='3±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='5 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='3±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='3 25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='4 N/A N/A Control3 12 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='5±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='4 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='2±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='1 26.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='6 N/A N/A Ecoflex1 6 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='7±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='2 75.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='2±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='3 17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='9 Ecoflex-30 9 Ecoflex2 10 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='7±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='4 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='2±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='5 21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='8 Ecoflex-30 9 Ecoflex3 12 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='9±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='6 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='6±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='5 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='4 Ecoflex-30 12 Urethane1 6 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='2±1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='0 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='6±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='5 21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='7 Poly-urethane 9 Urethane2 10 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='8±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='7 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='2±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='6 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='6 Poly-urethane 12 Urethane3 12 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='7±0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='4 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='3±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='8 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='9 Poly-urethane 12 Carbopol1 6 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='5±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='3 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='7±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='3 21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='4 Carbopol+Ecoflex-30 12 Carbopol2 10 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='8±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='4 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='2±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='5 21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='9 Carbopol+Ecoflex-30 12 Carbopol3 12 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='4±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='3 64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='8±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='4 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content='9 Carbopol+Ecoflex-30 12 IL*: Initial length.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The length of sample actuator measured at 0 psi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' ML*: Minimum length.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' The length of sample actuators measured at 20 psi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' STD: Standard Deviation.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Investigation of model parameters for a 1-SLSE model ((a) and (b)) and for a 2-SLSE model ((c) and (d)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Here the normalized shortening velocity (negative of the extension strain rate, v) is reported in alignment with standard muscle physiology experiments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' (a) By varying the stiffness : damping ratio in the viscous arm of the SLSE, the slope of the force- velocity curve can be changed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' As γ decreases (increased damping time constant), the force-velocity curve approaches a step response, with no velocity dependence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Conversely, as γ increases, the curve approaches a linear response.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' (b) By varying the stiffness ratio between the two arms of the model, the height of the force-velocity curve is changed, with the height increasing with increasing κ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' For (c) and (d), one SLSE in the model was fixed with κ1 = 10 and γ1 = 50, and the parameters of the other arm were varied.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' (c) By varying γ2 in the second arm, the slope of the force- velocity can be tuned, and (d) by varying κ2, the height of the force-velocity curve can be adjusted.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' In both cases, increasing β (the stiffness ratio of the two parallel elastic elements), the effect of changing κ2 or γ2 is amplified.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' (e) These parameters can grouped into four different material classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' By combining materials of classes, the force-velocity curve can be tuned for a desired dynamic response.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' where βi = k1i/k1c is the stiffness ratio of parallel elastic element to the McKibben parallel stiffness (βc = 1) in the different elements.' 
With different combinations of βs, κs, and γs, the force-velocity response of the McKibben actuator can be tuned (Fig. 3 (c),(d)). These expressions can also be extended to any number of parallel SLSEs following this weighted average scheme.

Fig. 4. Characterization and Modeling of the Force-Velocity Curve. (a) An example iso-velocity experiment (6 mm control McKibben actuator at 10 PSI) with the individual velocity ramps overlaid. Data from these force responses are used to construct an experimental force-velocity curve. (b) To avoid confounding effects from various amounts of overshoot, the peak force for the force-velocity curve (normalized by the initial force, with the velocity normalized by the pressurized rest length) is taken at the point when the ramp first reaches its target point. This occurs just prior to the extension overshoot.

D. Analysis

Experimental force-velocity curves were compiled for each actuator and each pressure level using data measured during characterization experiments. Individual velocity ramps were identified and extracted from the larger experiment (Fig. 4a). The average velocity was found by fitting a piece-wise-linear ramp function to the extension data, the slope of which corresponds to the average velocity (Fig. 4b).
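A minimal sketch of this ramp-fitting step is shown below, using a synthetic extension trace and SciPy's curve_fit as a stand-in for whatever fitting routine was actually used; the signal shape, noise level, and units are assumptions made for illustration.

```python
# Minimal sketch (assumptions, not the paper's code): recover the average ramp
# velocity by least-squares fitting a piecewise-linear ramp
#   x(t) = x0                       for t <  t_on
#        = x0 + s * (t - t_on)      for t_on <= t <= t_off
#        = x0 + s * (t_off - t_on)  for t >  t_off
# to a measured extension trace; the fitted slope s is the average velocity.
import numpy as np
from scipy.optimize import curve_fit

def ramp(t, x0, s, t_on, t_off):
    t_off = max(t_off, t_on + 1e-6)                 # keep the breakpoints ordered
    return x0 + s * (np.clip(t, t_on, t_off) - t_on)

# Synthetic extension trace standing in for one extracted iso-velocity ramp
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)                    # time [s]
x = ramp(t, 2.0, 1.5, 3.0, 7.0)                     # 1.5 mm/s ramp starting at 2 mm
x = x + rng.normal(scale=0.05, size=t.size)         # measurement noise

# Initial guess: pre-ramp level, overall slope, breakpoints at 25% / 75% of the record
p0 = [x[:50].mean(), (x[-1] - x[0]) / (t[-1] - t[0]), 0.25 * t[-1], 0.75 * t[-1]]
(x0, s, t_on, t_off), _ = curve_fit(ramp, t, x, p0=p0)
print(f"average velocity ~ {s:.3f} mm/s over [{t_on:.2f}, {t_off:.2f}] s")
```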
The average velocity is then normalized by the pressurized rest length of the actuator to obtain the strain rate. The starting force (F0) was calculated as the mean force during the two seconds prior to the start of the ramp. The peak force was taken as the force value when the extension first reached its target point. This was to avoid artifacts introduced by extension overshoot, which occurred at higher velocities. The peak force was then normalized by the starting force. These experimental force-velocity curves were then used to obtain model parameters for the McKibben actuators and viscoelastic sheaths as functions of pressure. For all experiments, the values of dε, ε0, ε, and ˆv in the model were calculated relative to the pressurized rest length of the actuator. First, for each control McKibben actuator at each pressure, κc and γc were fitted using a nonlinear least squares method (code generated by using the MATLAB Curve Fitting Toolbox). Parameter initialization was chosen based on Equations 6 and 7 for the horizontal asymptote and ˆvα. Specifically, the normalized force from the ±10 mm/s tests was used as FVc(ˆv∞) to approximate κc, and the data from the ±4 mm/s tests was used as the α point to estimate γc. To fit κs, γs, and βs for each material, diameter, and pressure, the corresponding McKibben parameters were set as κc and γc and not optimized. The parameter βs was initialized to 1, and κs and γs were initialized following the same procedure as in the plain McKibben case. The same optimization process was then carried out for these three parameters.
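A hedged sketch of this fitting step is given below, using scipy.optimize.curve_fit in place of the MATLAB Curve Fitting Toolbox. The force-velocity model form (the same assumed single-SLSE form as in the earlier sketch), the ramp and pre-strain values, and the synthetic data points are illustrative assumptions; only the normalization recipe (mean force over the two seconds before the ramp as F0, peak force at the first sample reaching the target extension, velocities relative to the pressurized rest length) follows the procedure described above.

```python
# Hedged sketch of the fitting step: turn ramp records into (strain rate,
# normalized peak force) points and fit kappa_c, gamma_c by nonlinear least
# squares. All numerical values below are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit

RAMP_STRAIN = 0.1   # assumed ramp amplitude relative to the pressurized rest length
PRE_STRAIN = 0.2    # assumed pre-strain setting the starting force

def fv_model(v_hat, kappa, gamma):
    # Assumed SLSE force-velocity form (see the earlier sketch): normalized peak force.
    elastic = 1.0 + RAMP_STRAIN / PRE_STRAIN
    viscous = (kappa * v_hat / gamma) * (1.0 - np.exp(-gamma * RAMP_STRAIN / v_hat))
    return elastic + viscous / PRE_STRAIN

def fv_point(time, force, extension, ramp_start, target, rest_length, avg_velocity):
    """One experimental (strain rate, normalized peak force) point from a ramp record."""
    pre = (time > ramp_start - 2.0) & (time < ramp_start)
    f0 = force[pre].mean()                      # mean force over the 2 s before the ramp
    i_peak = np.argmax(extension >= target)     # first sample where the target is reached
    return avg_velocity / rest_length, force[i_peak] / f0

# Fit kappa_c and gamma_c to a set of (v_hat, FV) points; synthetic points here.
rng = np.random.default_rng(1)
v_hat = np.array([0.02, 0.04, 0.06, 0.08, 0.12, 0.16])     # strain rates [1/s]
fv = fv_model(v_hat, 10.0, 1.0) * (1.0 + 0.02 * rng.normal(size=v_hat.size))
p0 = [5.0, 2.0]   # crude seeds; the paper seeds kappa/gamma from the fast and slow tests
(kappa_c, gamma_c), _ = curve_fit(fv_model, v_hat, fv, p0=p0)
print(f"kappa_c ~ {kappa_c:.1f}, gamma_c ~ {gamma_c:.1f}")
```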
III. RESULTS AND DISCUSSION

A. Characterization

The force-velocity response of the twelve actuators was measured as a function of the pressure, sheath material, and mesh diameter (Fig. 5). In these figures, the normalized shortening velocity (negative of the extension strain rate) is reported in alignment with standard muscle physiology experiments. The shortening velocity is normalized by the pressurized rest length of the actuator (units of shortening velocity here are [1/s]).

Fig. 5. Experimental Characterization of Viscoelastic McKibben Actuators. Each column shows the data for a different actuator, and each row shows a different actuator diameter. Along the dashed line, the experimental data are reported as mean ± 1 standard deviation. The solid line shows the corresponding model fit for that actuator and that pressure. For the control actuators, a 1-SLSE model is used, and for each of the VMAs, a 2-SLSE model is used, with the control element parameters set by the corresponding control actuator model. Inset: The 5 PSI curve for the 12 mm Urethane actuator is inset to allow a smaller axis range for the rest of the 12 mm actuators.

Common force-velocity features were found across all actuators. Unlike what is predicted in the model, all actuators showed an asymmetric force-velocity response, with a larger magnitude asymptote for extensions (negative shortening velocities) than for shortening. This reflects the nonlinear stiffness properties of the McKibben actuators that have been previously reported, with stiffness increasing with increased length [3]. Additionally, an increase in pressure led to a decrease in the height of the force-velocity curve at all velocities and diameters, suggesting a more elastically dominant behavior at high pressures (Fig. 3b,e). However, the difference in height for a given pressure increase diminished with increasing pressure. This is most pronounced in the 10 and 12 mm diameter actuators at the 5 psi level. This could be related to changes in the contact state of the inner bladder. In these larger actuators, the bladder is not in full contact with the mesh at lower pressures, but at higher pressures it has made full contact.
This low-pressure discrepancy being related to the contact state is also supported by the observation that this discrepancy is not seen in the 6 mm actuators, where the mesh is in full contact with the actuator even at low pressures. However, the mechanism that causes this contact state to result in a more viscous-dominated response would require additional investigation. The addition of the viscoelastic polymer sheaths was successful in altering the force-velocity response of the McKibben actuators. In the case of the 6 and 12 mm diameter urethane actuators, a more viscous-dominant response was achieved, with the height of the force-velocity curve being higher than the control actuator response at all pressures and velocities. The 10 mm urethane actuator showed a different response, with the height in much closer agreement with the control. This could be due to the 10 mm urethane actuator requiring a larger diameter sheath than the other 10 mm actuators. The larger diameter sheath was used because the smaller sheath diameter consistently ruptured at higher pressures. However, this meant that the sheath was less in contact with the underlying McKibben than in the 6 and 12 mm cases and was thus less engaged. Conversely, the Ecoflex sheath led to a decrease in the height of the force-velocity curve in extension for all diameters and pressures, showing a more elastic-dominant response. In shortening, the Ecoflex actuators showed closer agreement with the control actuators. Finally, the effect of the Carbopol actuators varied. For the 6 mm actuator, almost no change was seen from the control actuator. In the 10 mm actuator, the response was much closer to that of the Ecoflex, showing a more elastic-dominant response.
In the 12 mm case, the effect varied with pressure and direction of motion, with an increased height seen at 10 and 15 psi in extension, but no difference seen at 20 psi or in shortening at 10, 15 or 20 psi. This characterization is limited in a number of ways. The force-velocity curve, while relevant to the actuator in terms of its role as a model of skeletal muscle, is only one metric by which to determine these actuators' dynamic properties or the ability of these material sheaths to tune them. More complete characterization will require cyclic testing at various speeds to determine hysteresis as a function of velocity. Additionally, the minimum extension rate of 2 mm/s was near the horizontal asymptote for many actuators, resulting in poor characterization of the high-slope region of the force-velocity curve near ˆv = 0. A more complete investigation of the force-velocity curve will require lower velocities to be incorporated. These higher test rates also resulted in extension and shortening overshoot in the tests, which made the calculation of the peak force and the following force decay more challenging. These overshoots would be minimized with lower velocity tests.

B. Modeling

The presented model was able to successfully capture major trends in the experimental force-velocity curves (R2 = 0.94 ± 0.05 for all actuators and pressures), and the changes in the VMA curves relative to the control curves can be explained through the model parameters. For example, in all actuators, an increase in pressure leads to a decrease in the height of the force-velocity curve.
This is expected under this model, as increasing the pressure of the McKibben actuator increases its stiffness [3], and this increased stiffness results in a lower κc and thus a lower FV(ˆv∞). This model can also be used to explain changes in the force-velocity curves associated with the material sheaths (Fig. 6). Based on preliminary materials testing, the urethane sheath would fall into the viscous-dominant, long time constant class, and Ecoflex would fall into the elastic-dominant, short time constant class (relative to the McKibben actuator). Therefore, we would expect that the urethane would cause an increase in the height of the force-velocity curve (Fig. 3e). However, with increased pressure, the relative stiffness of the McKibben to the urethane sheath increases (decreasing βs), so we would expect this difference to decrease with increased pressure as the weighted average begins to favor the McKibben actuator (Fig. 6a). For the Ecoflex sheath, the relatively shorter time constant would lead to a high slope of the force-velocity curve, which is seen at low pressures (Fig. 6b). However, as with the urethane sheath, an increased pressure leads to the McKibben properties dominating once again.
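The pressure argument above can be made concrete with a toy calculation. The snippet assumes a simple stiffness-weighted average of the control and sheath force-velocity heights, FV_vma = (FV_c + beta_s * FV_s) / (1 + beta_s); this weighting is an illustrative stand-in for the paper's actual expressions (given earlier in the text), and the numbers are hypothetical.

```python
# Toy illustration: as beta_s (sheath stiffness relative to the McKibben) falls
# with increasing pressure, a stiffness-weighted average collapses toward the
# control response. The weighting and all values are illustrative assumptions.
def fv_vma(fv_c, fv_s, beta_s):
    return (fv_c + beta_s * fv_s) / (1.0 + beta_s)

fv_c, fv_s = 1.3, 2.0                  # hypothetical asymptotic heights: control vs. sheath
for beta_s in (2.0, 1.0, 0.5, 0.1):    # beta_s decreases as pressure stiffens the McKibben
    print(f"beta_s={beta_s:4.1f}  FV_vma={fv_vma(fv_c, fv_s, beta_s):.2f}")
# As beta_s -> 0 the combined curve approaches the control value of 1.3.
```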
While this model can capture many of the trends in the data, there are some limitations in its accuracy and predictive power. Both the McKibben actuators and the sheath materials are non-linearly elastic, with their stiffness increasing with increased strain. This results in an asymmetrical force-velocity curve, with a larger response for extension (negative shortening velocity) relative to shortening at the same rate. This cannot be captured by the linear springs in the proposed model. As a consequence, the model fits tend to under-predict extension responses and to over-predict shortening responses. Furthermore, the asymmetry also results in high parameter uncertainty.

Fig. 6. Comparison of 1-SLSE and 2-SLSE Models. For (a) and (b), the black dashed line shows the model fit of the corresponding plain McKibben actuator (1-SLSE model), and the solid colored line shows the adjusted 2-SLSE model. As with Fig. 5, the experimental data are shown as mean ± 1 STD. (a) 6 mm Urethane VMA. For all pressures, the viscous nature of the urethane led to an increased height of the force-velocity curve, captured by the 2-SLSE model having a higher asymptote. As pressure increases and the McKibben stiffens, this asymptote difference decreases as the McKibben begins to dominate (∆FV(ˆv∞) = FV2SLSE(ˆv∞) − FV1SLSE(ˆv∞)). (b) 10 mm Ecoflex VMA. At low pressure, the low viscous effects (large γE) of the Ecoflex sheath are able to change the slope of the force-velocity curve, but at higher pressures, the relative stiffness of the McKibben actuator again dominates, bringing the VMA response into alignment with the standard McKibben actuator.
Improvements can be made through the inclusion of nonlinear spring elements and more appropriate models of the McKibben actuator, at the cost of decreased interpretability of the model parameters. Additionally, the optimized sheath parameters tend to vary with pressure, whereas it would be expected that they would be pressure-independent for linear materials. However, a pressure dependence would be expected for nonlinear materials, as the McKibben actuator's pressure will determine the deformation state of the sheath material. In the future, this pressure dependence could be incorporated into the model as well, but it would require 3D geometric information about the actuator. Both of these issues could be addressed by incorporating a more complete quasi-static McKibben model [3], [9], [17] to capture the strain stiffening of the McKibben actuators and provide the geometry needed to estimate the sheath stiffness pressure dependence. Finally, this model only includes damping from standard dash-pot elements. However, previous work has shown that a velocity dependence in McKibben actuators can actually come from non-linear friction interactions in the mesh material [14] and Coulomb friction between the bladder material and the sheath [4], [21]. Future model development should incorporate such friction into a more complete model of the McKibben to replace one of the SLSEs in this model.
IV. CONCLUSION AND FUTURE WORKS

This work presents the characterization and modeling of the force-velocity relationships of viscoelastic McKibben actuators (VMA). These VMAs consist of a standard McKibben actuator surrounded by a viscoelastic polymer sheath. Iso-velocity experiments were performed to measure the force-velocity response of these actuators, and a simplified 1D model was developed to relate the shape of these experimental force-velocity curves to the material properties of the actuators. Using these polymer sheaths, we were able to successfully augment the force-velocity response of a standard McKibben, changing either its asymptotic height or its slope. The 1D model performed well in capturing the trends in these force-velocity curves, but missed key features, including the asymmetry in extension/shortening and the pressure dependence of sheath properties. Future works on these actuators will include iso-velocity tests at slower speeds to further investigate the steep portion of the force-velocity curve near the ˆv = 0 discontinuity. Additionally, to increase the predictive power of the model, more accurate quasistatic models of the McKibben's length-pressure-force properties will be implemented to replace the linear spring element. This will also require the measurement of the actuator geometry during quasi-static testing. Geometric information from these models will be used to capture the deformation-dependent properties of the sheath materials as well.
With better predictive power, these models can be used as a design tool for creating actuators with a desired force-velocity response. Future characterization will also include cyclic testing of the actuators at various speeds to more robustly investigate their dynamic properties. The work presented here lays the foundation for the fabrication and design of pneumatic actuators with tunable force-velocity dynamics for broad applications in bioinspired and biomimetic robotics.

ACKNOWLEDGEMENTS

This work was supported in part by the National Science Foundation (NSF) through grant no. FRR-2138873, and in part by NSF DBI-2015317 as part of the NSF/CIHR/DFG/FRQ/UKRI-MRC Next Generation Networks for Neuroscience Program. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.

REFERENCES

[1] F. Daerden, D. Lefeber et al., "Pneumatic artificial muscles: actuators for robotics and automation," European Journal of Mechanical and Environmental Engineering, vol. 47, no. 1, pp. 11-21, 2002.
[2] E. Hawkes, C. Majidi, and M. Tolley, "Hard questions for soft robotics," Science Robotics, vol. 6, no. 53, 2021.
[3] B. Tondu, "Modelling of the McKibben artificial muscle: A review," Journal of Intelligent Material Systems and Structures, vol. 23, no. 3, pp. 225-253, 2012.
[4] C.-P. Chou and B. Hannaford, "Measurement and modeling of McKibben pneumatic artificial muscles," IEEE Transactions on Robotics and Automation, pp. 90-102, 1996.
[5] N. Delson, T. Hanak, K. Loewke, and D. N. Miller, "Modeling and implementation of McKibben actuators for a hopping robot," 2005 International Conference on Advanced Robotics, ICAR '05, Proceedings, vol. 2005, pp. 833-840, 2005.
[6] S. Kurumaya, K. Suzumori, H. Nabae, and S. Wakimoto, "Musculoskeletal lower-limb robot driven by multifilament muscles," ROBOMECH Journal, vol. 3, no. 1, pp. 1-15, 2016.
[7] K. Dai, R. Sukhnandan, M. Bennington, K. Whirley, R. Bao, L. Li, J. P. Gill, H. J. Chiel, and V. A. Webster-Wood, "SlugBot, an Aplysia-inspired robotic grasper for studying control," Living Machines, 2022.
[8] A. A. Faudzi, N. I. Azmi, M. Sayahkarajy, W. L. Xuan, and K. Suzumori, "Soft manipulator using thin McKibben actuator," IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM, vol. 2018-July, pp. 334-339, 2018.
[9] F. Connolly, C. Walsh, and K. Bertoldi, "Automatic design of fiber-reinforced soft actuators for trajectory matching," Proceedings of the National Academy of Sciences, vol. 114, no. 1, pp. 51-56, 2016.
[10] M. Tschiersky, E. E. Hekman, D. M. Brouwer, J. L. Herder, and K. Suzumori, "A Compact McKibben Muscle Based Bending Actuator for Close-to-Body Application in Assistive Wearable Robots," IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 3042-3049, 2020.
[11] S. Koizumi, T. H. Chang, H. Nabae, G. Endo, K. Suzumori, M. Mita, K. Saitoh, K. Hatakeyama, S. Chida, and Y. Shimada, "Soft Robotic Gloves with Thin McKibben Muscles for Hand Assist and Rehabilitation," Proceedings of the 2020 IEEE/SICE International Symposium on System Integration, SII 2020, pp. 93-98, 2020.
[12] L. Rosalia, C. Ozturk, J. Coll-Font, Y. Fan, Y. Nagata, M. Singh, D. Goswami, A. Mauskapf, S. Chen, R. A. Eder, E. M. Goffer, J. H. Kim, S. Yurista, B. P. Bonner, A. N. Foster, R. A. Levine, E. R. Edelman, M. Panagia, J. L. Guerrero, E. T. Roche, and C. T. Nguyen, "A soft robotic sleeve mimicking the haemodynamics and biomechanics of left ventricular pressure overload and aortic stenosis," Nature Biomedical Engineering, vol. 6, pp. 1134-1147, 2022.
[13] Y.-L. Park, B.-r. Chen, C. Majidi, R. J. Wood, R. Nagpal, and E. Goldfield, "Active modular elastomer sleeve for soft wearable assistance robots," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012, pp. 1595-1602.
[14] B. Tondu and S. D. Zagal, "McKibben artificial muscle can be adapted to be in accordance with the Hill skeletal muscle model," Proceedings of the First IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics, 2006, BioRob 2006, vol. 2006, no. 3, pp. 714-720, 2006.
[15] G. K. Klute, J. M. Czerniecki, and B. Hannaford, "McKibben artificial muscles: Pneumatic actuators with biomechanical intelligence," IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM, pp. 221-226, 1999.
[16] S. Gollob, J. Poss, G. Memoli, and E. Roche, "A multi-material, anthropomorphic metacarpophalangeal joint with abduction and adduction actuated by soft artificial muscles," IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 5882-5887, 2022.
[17] A. Al-Ibadi, S. Nefti-Meziani, and S. Davis, "Efficient structure-based models for the McKibben contraction pneumatic muscle actuator: The full description of the behaviour of the contraction PMA," Actuators, vol. 6, no. 4, 2017.
[18] G. Olsen, H. Manjarrez, J. Adams, and Y. Mengüç, "Experimentally identified models of McKibben soft actuators as primary movers and passive structures," Journal of Mechanisms and Robotics, vol. 14, no. JMR-20-1425, pp. 011006-1-011006-15, 2022.
[19] C. S. Kothera, M. Jangid, J. Sirohi, and N. M. Wereley, "Experimental characterization and static modeling of McKibben actuators," Journal of Mechanical Design, Transactions of the ASME, vol. 131, no. 9, pp. 091010-1-091010-10, 2009.
[20] T. Hassan, M.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Cianchetti, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Moatamedi, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Mazzolai, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Laschi, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Dario, “Finite-element modeling and design of a pneumatic braided muscle actuator with multifunctional capabilities,” IEEE/ASME Trans- actions on Mechatronics, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' 24, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' 109–119, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' [21] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Chou and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Hannaford, “Static and dynamic characteristics of McKibben pneumatic artificial muscles,” Proceedings - IEEE Interna- tional Conference on Robotics and Automation, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' pt 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' 281–286, 1994.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' [22] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Yu, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Crago, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INE3T4oBgHgl3EQfugsS/content/2301.04684v1.pdf'} +page_content=' Chiel, “Biomechanical properties and a kinetic simulation model of the smooth muscle I2 in the buccal mass of Aplysia,” Biological Cybernetics, vol.' 
diff --git a/ItFIT4oBgHgl3EQfYivh/content/2301.11249v1.pdf b/ItFIT4oBgHgl3EQfYivh/content/2301.11249v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..64bf4a4410db0b783112077b7df75634ff480c7d
--- /dev/null
+++ b/ItFIT4oBgHgl3EQfYivh/content/2301.11249v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ce014fa5ba7fa07ffb77e89a72b645a8a7dc71c0d58d02c014b7b0bbd758eba
+size 5196091
diff --git a/JNE2T4oBgHgl3EQf_wld/content/2301.04251v1.pdf b/JNE2T4oBgHgl3EQf_wld/content/2301.04251v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b3bc9c1faaec89d4af1e44f4383ac100b5c56ad4
--- /dev/null
+++ b/JNE2T4oBgHgl3EQf_wld/content/2301.04251v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b6cd681c940dd5a854af150fdccc3df1e38e126265e9e5ce48f597fd64ec064
+size 142971
diff --git a/JNE2T4oBgHgl3EQf_wld/vector_store/index.pkl b/JNE2T4oBgHgl3EQf_wld/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..bfda7e7434a6d0eda71824bffee56645535367d6
--- /dev/null
+++ b/JNE2T4oBgHgl3EQf_wld/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:91e3fa87850903a4de8725882be2b4c58823bd07ac616ead021bb5e766895e72
+size 102724
diff --git a/KNE0T4oBgHgl3EQfSQBx/content/tmp_files/2301.02219v1.pdf.txt b/KNE0T4oBgHgl3EQfSQBx/content/tmp_files/2301.02219v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..1e49bdd329d87f19fb64c61b172eef10d5d9d8e0
--- /dev/null
+++ b/KNE0T4oBgHgl3EQfSQBx/content/tmp_files/2301.02219v1.pdf.txt
@@ -0,0 +1,822 @@
+arXiv:2301.02219v1 [cond-mat.soft] 5 Jan 2023
+Transfer Learning Facilitates the
Prediction of +Polymer–Surface Adhesion Strength +Jiale Shi,† Fahed Albreiki,‡ Yamil J. Colón ,† Samanvaya Srivastava,‡,¶,§,∥ and +Jonathan K. Whitmer∗,†,⊥ +†Department of Chemical and Biomolecular Engineering, University of Notre Dame, Notre +Dame, Indiana 46556, United States +‡Department of Chemical and Biomolecular Engineering, University of California, Los +Angeles, Los Angeles, California 90095, United States +¶California NanoSystems Institute, Center for Biological Physic, University of California, +Los Angeles, Los Angeles, California 90095, United States +§Institute for Carbon Management, University of California, Los Angeles, Los Angeles, +California 90095, United States +∥Center for Biological Physics, University of California, Los Angeles, Los Angeles, +California 90095, United States +⊥Department of Chemistry and Biochemistry,University of Notre Dame, Notre Dame, +Indiana 46556, United States +E-mail: jwhitme1@nd.edu +Abstract +Machine learning (ML) accelerates the exploration of material properties and their +links to the structure of the underlying molecules. In previous work [J. Shi, M. J. +Quevillon, P. H. A. Valen¸ca, and J. K. Whitmer, ACS Appl. Mater. Interfaces., 2022, +1 + +14, 32, 37161–37169 ], ML models were applied to predict the adhesive free energy of +polymer–surface interactions with high accuracy from the knowledge of the sequence +data, demonstrating successes in inverse-design of polymer sequence for known surface +compositions. While the method was shown to be successful in designing polymers +for a known surface, extensive datasets were needed for each specific surface in order +to train the surrogate models. Ideally, one should be able to infer information about +similar surfaces without having to regenerate a full complement of adhesion data for +each new case. In the current work, we demonstrate a transfer learning (TL) technique +using a deep neural network to improve the accuracy of ML models trained on small +datasets by pre-training on a larger database from a related system and fine-tuning +the weights of all layers with a small amount of additional data. The shared knowledge +from the pre-trained model facilitates the prediction accuracy significantly on small +datasets. We also explore the limits of database size on accuracy and the optimal +tuning of network architecture and parameters for our learning tasks. While applied +to a relatively simple coarse-grained (CG) polymer model, the general lessons of this +study apply to detailed modeling studies and the broader problems of inverse materials +design. +Introduction +Numerous industrial applications and biological phenomena involve chemically specific polymer– +surface interactions, from ink absorption on paper,1,2 and semiconductor fabrication and +coating,3,4 to the design and synthesis of artificial tissues5 and viruses recognizing receptors +on a cell surface.6–10 The use of highly tuned sequence-defined polymers is attractive in con- +trolling phase behavior, stabilizing interfaces, and promoting adhesion. 
Sequence-dependent +adsorption of polymers to patterned surfaces has been studied through traditional theoret- +ical and computational approaches11–18 and machine learning methods,19 emphasizing the +importance of polymer sequence in determining the adsorption energies.17,19 +2 + +Machine learning (ML) and artificial intelligence (AI)20–32 have achieved dramatic success +in determining the behaviors and properties of polymer and biomacromolecule systems,33–41 +including predicting protein structure,33–35 polymer structures (such as radius of gyration in +solvent),36,42 and thermodynamic properties (such as polymer glass transition temperature, +Tg).40,43,44 However, the wide-ranging chemical sequence, topological space, and mass distri- +bution of the polymer are too extensive to explore.42,45 For example, even for linear binary +copolymers with twenty monomers, the number of possible sequences is approximately one +million. The chemical space becomes exponentially large if more monomer types, variable +degrees of polymerization, non-uniform topologies, and mass distributions enter the descrip- +tion. ML techniques can help, but often provide knowledge highly specific to the immediate +problem and require significant new datasets to incorporate information outside the original +scope. For example, our prior work (see Ref. 19) utilized ML models to predict the adhesive +free energy of polymer–surface interactions with high accuracy and aid the inverse-design of +polymer sequence for known surface compositions, but exploring adhesion of such a polymer +to a substrate requires about 8000 data points to train an accurate ML model for each deco- +rated surface. Often, ML models are inaccurate or overfit when trained on small datasets. At +the same time, in both industrial applications and biological settings, the surface patterns +vary substantially, both structurally and randomly. Collecting large datasets for every pat- +terned surface from thousands or millions of new experiments or simulations is, therefore, +prohibitively difficult and expensive. In realistic situations, it may only be feasible to collect +tens to hundreds of new data points. Data-driven ML modeling is easier to implement but +often necessitates large datasets that could be difficult to obtain.46–48 Therefore, our aim +here is to determine the minimum amount of additional computation necessary to obtain an +accurate binding model, building as much as possible on prior knowledge. +Transfer learning (TL) can be a valuable technique to overcome the dilemma of in- +sufficient data.46–48 In TL, an ML model initially pre-trained for a given task on a large +dataset of the source domain is utilized as the base to train a model for a new task by fine- +3 + +tuning a small dataset of the target domain.29,46–49 Typically, TL can improve the model’s +accuracy if the source and target domains are closely related.29,46–48,48,50 TL has achieved +considerable success in speech recognition,51,52 image recognition,53,54 and natural language +processing.55,56 In addition, TL has also been successfully utilized in materials informatics +studies57–59 such as structural prediction of gas adsorption in MOFs,60 phonon properties in +semiconductors,61 and thermal conductivity62 and electrochemical properties29 of polymers. +However, these studies typically do not explore the explicit inverse design problem involved +in materials design: what molecular structures, subject to reasonable constraints, are best +for a given application? 
+In this study, we demonstrate the ability of transfer learning to leverage the prediction +performance of adhesive free energies between polymer chains with a defined sequence and +patterned surfaces via fine-tuning a pre-trained model. The source domain and learning task +come from a large dataset of polymer-surface interactions with one patterned surface.19 The +target domain and learning task come from a small dataset of polymer-surface interactions +with a different patterned surface.19 We utilize a deep neural network architecture to perform +transfer learning and characterize the improvements on three example cases. We also explore +the limits of database size on accuracy and the optimal tuning of network architecture and +parameters for our learning tasks. +Methods +Data Set +The data sets used in this work are from our recent work, Shi, et al. (Ref. 19). As shown +in Figure 1 (a), every data point includes one sequence-defined polymer and its adhesive +free energy ∆F with a patterned surface. The ∆F were generated by LAMMPS molecular +dynamic simulations63 coupled with adaptive biasing force (ABF) method64 SSAGES.27,65–69 +The polymer chain and surface are both composed of two types of beads, denoted ”red” beads +4 + +and ”green” by their visualization in Figure 1. The polymer is modeled as a flexible 20-bead +linear chain. The surface is holonomically constrained, with a simple square lattice of beads +having dimension 20σ×20σ for a total of 400 beads. Each dataset contains 2×104 sequence- +defined polymers and their adhesive free energies with one patterned surface. There are four +different data sets, one for each pattern shown in Figure 1 (b): PS1, which is composed of +half red beads and half green beads in two stripes. Nred = 200 and Ngreen = 200; PS2, which +is composed of 16 alternate small size squares (5σ×5σ) of red and green beads with the same +overall composition as PS1; PS3, where each bead was randomly generated with a probability +of 0.5 for each site to be red or green resulting in Nred = 184 and Ngreen = 216; and PS4, +which is built upon PS2, but randomized within the interior of the 5σ ×5σ squares resulting +in a total of Nred = 206 and Ngreen = 194. PS3 and PS4 allow exploration of the role of +randomizing effects on our adhesive models, with PS4 including randomness within an overall +structure rather than only randomness. For simplicity, we use the name of the patterned +surface to represent each data set, called Data-PS1, Data-PS2, Data-PS3, and Data-PS4. +Detailed distributions and analysis of the adhesive free energy datasets are available in Ref. +19; reduced metrics corresponding to Gaussian fit paramters for each free energy distribution +Data-PS1, Data-PS2, Data-PS3, and Data-PS4 are shown in Table 1. Additional details for +generating the datasets are discussed in the previous work19. All datasets are available online +at https://github.com/shijiale0609/ML_PSI. +Table 1: Gaussian Fitting Details19 of Distributions of Adhesive Free Energies for Data-PS1, +Data-PS2, Data-PS3 and Data-PS4 +Dataset +µ(kBT) +σ(kBT) +Data-PS1 +15.66 +2.89 +Data-PS2 +13.84 +1.55 +Data-PS3 +8.96 +0.77 +Data-PS4 +8.20 +0.31 +5 + +∆F +(a) +Sequence +Adhesive Free Energy +(b) +PS1 +PS2 +PS3 +PS4 +Figure 1: A schematic of the data sets about adhesive free energies of sequence defined +polymers with patterned surfaces from the work of Shi, et al.19 (a) Every data point includes: +A sequence-defined polymer and its adhesive free energy with a patterned surface. 
Each dataset contains 2 × 10^4 sequence-defined polymers and their adhesive free energies with one patterned surface. Therefore, for simplicity, we use the name of the patterned surface to represent each data set. (b) There are four such datasets (Data-PS1, Data-PS2, Data-PS3, and Data-PS4) for four different patterned surfaces (PS1, PS2, PS3, and PS4).
Transfer Learning Architecture
In this work, a deep neural network (DNN) architecture29,60 with one input layer, three hidden layers, and one output layer was used to quantify the relationship between the polymer sequence information and the polymer–surface adhesive free energy, ∆F. The input was a one-hot encoding of the polymer sequence. The output was the adhesive free energy. The DNN architecture is shown in Figure 2.
Figure 2: A schematic of the procedure for testing the performance of transfer learning from the source domain (Data-PS1) to the target domain (Data-PS2, Data-PS3, or Data-PS4). A total of 2 × 10^4 polymer sequences and the corresponding ∆F with PS1 are used as the source data. We train a fully connected deep neural network whose architecture is (20,64,64,32,1), using all 2 × 10^4 source data points, and save its weights. When training the DNN for target data, as a transfer learning framework, we fine-tune a subset of the weights in the pretrained source DNN using 200 data points (TL) and compare with learning from a randomly initialized DNN in a direct learning (DL) way.
First, we trained a source DNN with the source data set. We used Data-PS1, 2 × 10^4 data points of polymer sequences and their ∆F with PS1, as the source data, as the ML model applied to this dataset achieved the highest accuracy among the four original datasets.19 Then we randomly separated the 2 × 10^4 data points into 1.6 × 10^4 as the training set and 4 × 10^3 as the validation set; a 4:1 ratio is commonly used in machine learning.20,21,70 The training set is the set of data that was used to train the model and make it learn the hidden features/patterns in the data. In each epoch, the same training data was fed to the neural network architecture repeatedly, and the model continued to learn the features of the data. The validation set is a set of data that was used to validate the model performance during training. This validation process provided information that helped tune the model's hyperparameters and configurations. A test set is not required for this initial task, as we are seeking a baseline trained on PS1 to extend to the other datasets. Without the need to leave data points for a test set, we were able to devote more data points to training and validation. The hyperparameters of the DNN were optimized on the source task PS1 by promoting the accuracy and robustness of the DNN. Utilizing an n-tuple description for the hidden layers of a fully connected DNN, our network is represented by (20,64,64,32,1). The learning rate, which serves as the step size for updating the DNN parameters, was set to 0.00002 to make the learning process stable. LeakyReLU71 with a negative slope of 0.1 was used as the activation function, and the Adam algorithm72 was used to optimize the weights.
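For concreteness, a minimal PyTorch sketch of the network and optimizer described above is given below. The sequence encoding shown (each of the 20 beads mapped to 1.0 for red and 0.0 for green), the mean-squared-error training loss, and the single full-batch update are illustrative assumptions rather than details taken from Ref. 19 or the released code.

import torch
import torch.nn as nn

def encode_sequence(seq):
    # Hypothetical encoding: a 20-character string such as "RGGR...RG", with
    # "R" (red) mapped to 1.0 and "G" (green) mapped to 0.0, giving the
    # 20-dimensional input vector described above.
    return torch.tensor([1.0 if bead == "R" else 0.0 for bead in seq])

def build_dnn():
    # Fully connected (20, 64, 64, 32, 1) network with LeakyReLU activations
    # (negative slope 0.1), as described in the text.
    return nn.Sequential(
        nn.Linear(20, 64), nn.LeakyReLU(0.1),
        nn.Linear(64, 64), nn.LeakyReLU(0.1),
        nn.Linear(64, 32), nn.LeakyReLU(0.1),
        nn.Linear(32, 1),
    )

model = build_dnn()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)  # learning rate 0.00002
loss_fn = nn.MSELoss()  # assumed regression loss for predicting the adhesive free energy

def train_step(x_batch, y_batch):
    # One update on source-domain data (Data-PS1); mini-batching is omitted for brevity.
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(x_batch).squeeze(-1), y_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

After pre-training, the weights can be saved (for example with torch.save) so that they can be restored as the pre-trained source DNN for the transfer learning step described next.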
The number of learning epochs was set to 10^4, and the training process could be ended early by a convergence check applied to the validation data, terminating training when appropriate. We trained a source DNN using the training set of the source domain and selected the epoch with the highest accuracy on the validation set as the base DNN for the subsequent TL tasks; this model is referred to as the pre-trained source DNN (depicted as the red DNN in Figure 2). An open-source machine learning framework, PyTorch73, was used to implement the DNN. All the parameters are stored on GitHub as described in the Code Availability section.
Next, we turned to the target data set and applied the DNN with the same hyperparameters. The small target data set was composed of 200 data points that were randomly drawn from existing data on the new domain (Data-PS2, Data-PS3, or Data-PS4). The data set was then divided into training, validation, and test sets in the ratio of 72:18:10, to be consistent with previous transfer learning studies.60 144 training data points were used for training the model, and 36 validation data points were used to determine when the training should be stopped and to avoid overfitting. The validation data set was used to select the training epoch. Since the validation data set was involved in the training process, the model's performance is biased toward it. Therefore, we additionally tested our model on the untouched test data set to provide unbiased final model performance metrics. Our use of this protocol enabled us to address the core question: "How well does the model perform on the small data set of Data-PS2, Data-PS3, or Data-PS4 without bias?" To illustrate the power of transfer learning, with the same 200 data points and the same separation into training, validation, and test sets, we performed direct learning (DL) (black DNN in Figure 2) and transfer learning (TL) (blue DNN in Figure 2). For direct learning, we trained the DNN model from randomly initialized weights. For transfer learning (blue DNN in Figure 2), we instead fine-tuned the weights of all layers in the pre-trained DNN from the source task. There are three reasons that we chose to fine-tune the weights of all layers. First, we sought to build an end-to-end model, which is friendlier to other users who are not familiar with deep learning. In an end-to-end model, users only need to focus on the input and output and do not need to worry about how to modify the internal architecture of the model. We want to show that starting from a pre-trained DNN without fixing any weights can yield improvements. Second, we tested other fine-tuning formats, such as fixing the weights of the first n layers and fine-tuning the weights of the remaining m layers.60 We found that those formats do not provide competitive improvements and sometimes behaved worse than fine-tuning all layers. Third, when the size of the training set increases, fixing the weights of some layers might lead to underfitting. Fine-tuning the weights of all layers is more robust to the size of the training data.
The performances of the DL and TL scenarios were compared through their respective coefficients of determination (R^2 values) on the same test sets (20 data points),
R^2 = 1 − Σ_i (y_i − ŷ_i)^2 / Σ_i (y_i − ȳ)^2,    (1)
where ŷ_i is the model prediction for data point i and ȳ is the mean of the true values y_i. The maximum performance score of R^2 = 1.0 occurs when every prediction is correct (y_i ≡ ŷ_i).
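Continuing the sketch above (reusing build_dnn and the pre-trained source model), the comparison protocol can be written compactly as follows. The helper names, the placeholder tensors, the early-stopping patience, and the mean-squared-error loss are illustrative assumptions; the 144/36/20 split, the fine-tuning of all layers, and the R^2 score of Eq. (1) follow the description above.

import copy
import torch

def r_squared(y_true, y_pred):
    # Coefficient of determination, Eq. (1).
    ss_res = torch.sum((y_true - y_pred) ** 2)
    ss_tot = torch.sum((y_true - y_true.mean()) ** 2)
    return (1.0 - ss_res / ss_tot).item()

def fit_target(net, x_tr, y_tr, x_val, y_val, max_epochs=10_000, patience=500):
    # Train (DL) or fine-tune (TL) all layers on the 144 target training points,
    # keeping the epoch with the best validation R^2 on the 36 validation points;
    # `patience` is an illustrative early-stopping choice.
    optimizer = torch.optim.Adam(net.parameters(), lr=2e-5)
    loss_fn = torch.nn.MSELoss()  # assumed regression loss
    best_val, best_state, stale = -float("inf"), copy.deepcopy(net.state_dict()), 0
    for _ in range(max_epochs):
        net.train()
        optimizer.zero_grad()
        loss = loss_fn(net(x_tr).squeeze(-1), y_tr)
        loss.backward()
        optimizer.step()
        net.eval()
        with torch.no_grad():
            val_r2 = r_squared(y_val, net(x_val).squeeze(-1))
        if val_r2 > best_val:
            best_val, best_state, stale = val_r2, copy.deepcopy(net.state_dict()), 0
        else:
            stale += 1
            if stale > patience:
                break
    net.load_state_dict(best_state)
    return net

# Placeholder tensors standing in for one random 200-point target draw
# (144 training / 36 validation / 20 test points).
x_tr, y_tr = torch.rand(144, 20), torch.rand(144)
x_val, y_val = torch.rand(36, 20), torch.rand(36)
x_test, y_test = torch.rand(20, 20), torch.rand(20)

dl_model = fit_target(build_dnn(), x_tr, y_tr, x_val, y_val)   # DL: random initialization
tl_model = build_dnn()
tl_model.load_state_dict(copy.deepcopy(model.state_dict()))    # TL: start from the pre-trained source DNN
tl_model = fit_target(tl_model, x_tr, y_tr, x_val, y_val)      # fine-tune all layers
with torch.no_grad():
    r2_dl = r_squared(y_test, dl_model(x_test).squeeze(-1))
    r2_tl = r_squared(y_test, tl_model(x_test).squeeze(-1))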
Note that R2 can be negative because the model can be arbitrarily poor. The choice +of R2 as an evaluation metric was reasonable, as R2 can provide a natural baseline for +judging the performance of models.60 For each small data set, we obtained two coefficients: +9 + +R2 +DL, which shows the performance of DL, and R2 +TL which characterizes the performance of +TL. The small data set resulted in highly variable accuracies in models due to the random +data drawing. Therefore, we did not limit testing to a single small target data case and +randomly drew sample data from the target space 1000 times for both DL and TL scenarios, +subsequently obtaining 1000 pairs of R2 +DL and R2 +TL. This enabled us to gain a statistically +robust understanding of the behavior of TL, mitigating the effects of outlier data sets on +training. +Results and Discussion +A summary of performance (captured via R2 values) of direct (DL) and transfer (TL) learning +for 1000 small target data sets from Data-PS2, Data-PS3, and Data-PS4 is given in Table 2. +We step through specific cases below. +Table 2: R2 characteristics from DL and TL for 1000 small target data sets for datasets Data- +PS2, Data-PS3 and Data-PS4; transfer learning proceeds using a neural network trained on +Data-PS1 applied to data on the target surface. +Dataset +R2 +DL +R2 +TL +Data-PS2 +−0.0089 ± 0.1956 +0.8303 ± 0.0747 +Data-PS3 +0.6338 ± 0.2079 +0.7998 ± 0.1173 +Data-PS4 +0.4502 ± 0.1849 +0.6578 ± 0.1341 +Knowledge Transfer from Data-PS1 to Data-PS2 +We first investigate the application of TL in transferring knowledge from Data-PS1 over +to Data-PS2, which links data acquired on from one regular patterned surface to another +regular patterned surface. PS1 and PS2 have the same overall composition (red:green = +200:200), and similar patterning when accounting for periodic boundaries, though PS2 uses +smaller squares of uniform chemistry rather than two large stripes. The results of 1000 trials +10 + +are plotted as pairs R2 +DL and R2 +TL for comparison in Figure 3. As shown in Figure 3(a), 624 +DL R2 values are negative, implying poor model performance for those cases. All R2 values +for TL cases are positively correlated, and many are close to one, meaning that the models’ +performances are excellent in those cases. Collectively, the average R2 on the 1000 test sets +through DL is −0.0089±0.1956, while the same metric for TL is 0.8303±0.0747. Therefore, +TL both improved the mean value of R2 and decreased its standard deviation (SD). The +diminishing SD shows that TL is less sensitive to the random selection of the small dataset +than the DL, which can be ascribed to the weights of the pre-trained DNN being close to the +optimized weights of the target DNN. From the dashed line depicting ∆R2 in Figure 3(c), +where all the ∆R2 are greater than zero, and Figure 3(b), where all points (R2 +TL vs. R2 +DL) +are above the line y = x, it can be inferred that in all the 1000 target cases, TL improved +the accuracy of the model prediction. In Figure 3(c), a strong negative linear relationship +between the improvement in ∆R2 and the model accuracy from DL demonstrated that +TL contributed improved knowledge in situations where DL yielded low accuracy. 
Since TL +transfers a pretrained network rather than initializing weights randomly, the additional small +data set acts to refine the weights rather than generate them wholesale — hence, even when +DL yields low accuracy the performance of TL remains stable, as is strongly evident in the +the improvement of ∆R2 for these surfaces. At the same time, when DL already achieved +very high model accuracy on the target tasks, the transfer knowledge from the source task +offered only a slight improvement. For these cases, the randomly initialized weights of the +DNN for DL happened to be close to the optimized weights. +Knowledge Transfer from Data-PS1 to Data-PS3 +Next, we investigated the application of TL from Data-PS1 to Data-PS3. While PS1 is +very regular, PS3 is a fully randomized surface with composition (red:green = 184:216), +generated using a random probability P(red) = P(green) = 0.5 for the beads in the square +lattice. The average R2 on 1000 test sets modeled using DL was 0.6338 ± 0.2079, while the +11 + +(a) +(b) +(c) +Figure 3: Transfer learning applied adhesive free energies of sequence defined polymers using +a DNN fit to Data-PS1 adapted to Data-PS2 using a small dataset. (a) 1000 pairs of R2 +DL for +direct learning (blue line) and R2 +TL for transfer learning (red line). The improvement from +transfer learning (∆R2 = R2 +TL − R2 +DL is shown in green line. The Case ID numbers on the +x axis are sorted by the value ∆R2 in descending order. (b) R2 +TL plotted against R2 +DL. (c) +Improvement ∆R2 as a function of R2 +DL. +same metric from TL was improved to 0.7998 ± 0.1173. We note that DL’s performance for +Data-PS3 was better than that for Data-PS2, attributable to the reason that the standard +variation (σ = 0.77kBT) of the whole 2 × 104 Data-PS3’s ∆F is smaller than that of Data- +PS2 (σ = 1.55kBT).19 We conclude that the improvement from TL is less robust on this +dataset than Data-PS2, likely because of the tighter distribution for adhesive energies (see +Ref. 19 for context). Still there remains a marked improvement. Another significant reason +for the differences in this case is that the source and data sets is the more dissimilar adhesion +properties related to the randomization of the surface pattern. Still, DL has 15 cases where +R2 +DL is not greater than zero, while TL only has 2 cases where R2 +TL is not greater than +zero. The green line in Figure 4(a) and data in Figure 4(b) show that in most examined +cases (910 out of 1000), TL gives positive improvement. In Figure 4(c), there is a generally +negative linear relationship between ∆R2 and model accuracy from DL, though the linear +relationship is not as strong as the prior dataset in Figure 3(c). Thus, we infer that when +12 + +the target tasks have very high model accuracy from DL already, the transfer of knowledge +from the source task does not always help further improve the model accuracy. +(a) +(b) +(c) +Figure 4: Transfer learning applied adhesive free energies of sequence defined polymers using +a DNN fit to Data-PS1 adapted to Data-PS3 using a small dataset. (a) R2 values for direct +learning (R2 +DL, blue line), transfer learning (R2 +TL, red line) and improvement from transfer +learning (∆R2 = R2 +TL − R2 +DL, green line) of the 1000 target cases. Case ID numbers on the +x axis are sorted by the value ∆R2 in descending order. (b) R2 +TL plotted against R2 +DL. (c) +Improvement ∆R2 as a function of R2 +DL. 
Knowledge Transfer from Data-PS1 to Data-PS4
Finally, we test the application of TL from Data-PS1 to Data-PS4; the surface PS4 is a randomized version of PS2 whose composition (red:green = 206:194) differs slightly from the 1:1 composition of PS2.19 The average R^2 on 1000 test sets through DL is 0.4502 ± 0.1849, while the same metric from TL is improved to 0.6578 ± 0.1341. DL's performance for Data-PS4 was better than that for Data-PS2, attributable to the relative tightness of the free energy distribution of Data-PS4 (σ = 0.31 kBT) compared to Data-PS2 (σ = 1.55 kBT).19 The improvement is not as evident as in the first case (Data-PS1 to Data-PS2), but is overall much better than the performance using TL on the completely randomized PS3 surface. The green line in Figure 5(a) and the data in Figure 5(b) show that most cases (939 out of 1000) are positively impacted by TL. In Figure 5(c), the negative correlation between the improvement ∆R^2 and the model accuracy from DL appears weaker than in the previous two cases, Figure 3(c) and Figure 4(c), though we note that a single testing point with good TL and poor DL performance skews the plots visually.
Figure 5: Transfer learning applied to adhesive free energies of sequence-defined polymers using a DNN fit to Data-PS1, adapted to Data-PS4 using a small dataset. (a) R^2 values for direct learning (R^2_DL, blue line), transfer learning (R^2_TL, red line), and improvement from transfer learning (∆R^2 = R^2_TL − R^2_DL, green line) for the 1000 target cases. Case ID numbers on the x axis are sorted by the value of ∆R^2 in descending order. (b) R^2_TL plotted against R^2_DL. (c) Improvement ∆R^2 as a function of R^2_DL.
Feature Importance Analysis
The structure of the one-hot encoding of our sequence-defined polymers permitted the interrogation of the feature importance of various sites on the polymer backbone. We utilize the entire data sets of Data-PS1, Data-PS2, Data-PS3, and Data-PS4. The details of the training process were identical to those stated in the Methods section. Permutation feature importance, which is defined to be the decrease in predictive accuracy (∆R^2) when a single feature value is randomly shuffled, was used to evaluate descriptor importance.60,74 The feature importance for feature i is computed by
FI_i = ∆R^2 = R^2 − R^2_i,
where R^2 is the predictive accuracy without random shuffling and R^2_i is the predictive accuracy after randomly shuffling the ith dimensional feature. We used the permutation feature importance implementation in the Python package ELI575 to perform this analysis. Since our input is the one-hot encoding of the polymer sequence, a 20-dimensional vector, we shuffled the feature value of each dimension in turn and calculated the descriptor importance for every input variable.
Figure 6: Permutation feature importance of the 20-dimensional input vector, one entry per bead in the sequence-defined polymer, for the four data sets Data-PS1 (red), Data-PS2 (blue), Data-PS3 (orange), and Data-PS4 (purple). Essentially, only the endpoints are significantly different, with the 18 interior beads having roughly similar importance to one another.
The results of the feature importance analysis are shown in Figure 6. Even though the absolute value of feature importance is different for each patterned surface, some common features exist for all four patterned surfaces.
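A minimal sketch of the permutation procedure behind Figure 6 is shown below; it shuffles one input dimension at a time and records the resulting drop in R^2, i.e., FI_i = R^2 − R^2_i. The plain-NumPy implementation and the predict helper are illustrative stand-ins for the ELI5 calls used in the analysis above.

import numpy as np

def permutation_importance(predict, X, y, rng=np.random.default_rng(0)):
    # predict: callable mapping an (N, 20) array of encoded sequences to
    # predicted adhesive free energies; X, y: the full dataset for one surface.
    def r2(y_true, y_pred):
        ss_res = np.sum((y_true - y_pred) ** 2)
        ss_tot = np.sum((y_true - y_true.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    base = r2(y, predict(X))                  # R^2 without shuffling
    importances = np.zeros(X.shape[1])
    for i in range(X.shape[1]):               # one pass per bead position
        X_shuffled = X.copy()
        rng.shuffle(X_shuffled[:, i])         # destroy the information in feature i only
        importances[i] = base - r2(y, predict(X_shuffled))   # FI_i = R^2 - R^2_i
    return importances

# Usage (illustrative): wrap the trained DNN in a NumPy-in, NumPy-out predict
# function and pass the encoded sequences and adhesive free energies for one surface.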
The head (first) and the tail (twentieth) beads +had relatively lower values of feature importance, and the other eighteen beads’ feature +importance were almost the same within the individual surface dataset. Statt et al.,37 also +found that the ends of an intrinsically disordered protein (IDP) have a distinct effect on +the phase behavior (critical temperature) compared with mutations in the middle of the +15 + +chain, though the ends are seen there to have a more pronounced effect on the proteins’ +phase behavior. The common features we found among Data-PS1, Data-PS2, Data-PS3, +and Data-PS4, represent the shareable knowledge from TL and can explain the successful +application of TL in these cases. Pre-trained models were able to obtain these features before +fine-tuning with the small datasets. +Size Effects for TL Improvements +From the above investigations, we illustrated that transfer learning can improve the accuracy +of the DNN models trained on a small target dataset (200 data points). It is of interest to +see how this scales with the amount of the available data, thus we also explored the effects +of the size of the target data set on the improvements from TL, increasing the “small” data +set to values between 200 and 4000 points. As previously, other training settings were kept +the same as for N = 200 datasets. +Figure 7(a) illustrates the prediction performance of DL and TL for Data-PS2 as a +function of the size of the target data set, as quantified by the R2 score. The mean value +increases and the SD decrease for the R2 score in both DL and TL as the size of the target +data set increases. These values are seen to change more rapidly in DL than TL, which is +perhaps to be expected, as more data should drastically improve the behavior of DL given the +initial datasets are so small; the pretrained TL model is able to approach an optimal fit more +easily. We note R2 +TL quickly converged with increasing size. After the target data size exceeded +800, accuracy (measured by R2 +TL) saturated. Thus, the improvement between TL and DL +decreased as the target data set size increases. Still, the performance of the TL models was +always better on average than DL models. This indicates that the transfer of knowledge from +the source task can offer significant improvement when scant target data is available. The +DNN model can learn sufficiently directly from the feed data in large target data sets, and +TL does not offer significant improvement. In our data sets, 2 × 104 independent points are +available, and TL’s efficacy saturates relative to DL when approximately 20% of the target +16 + +(a) +(c) +(b) +Data-PS2 +Data-PS3 +Data-PS4 +Figure 7: (a) R2 +DL (blue), R2 +TL (red) as a function of the size of the target dataset. Data- +PS2 was used for this comparison. The error bars reflect the SD of the R2 score from 1000 +random draws. As the size of the target data set increases, R2 +DL and R2 +TL both increase +and their SD values decrease. R2 +DL increases more dramatically relative to R2 +TL. Though the +improvement ∆R2 decreases with increasing size of the data set, R2 +TL is always larger than +R2 +DL. Similar improvements of DL and TL were seen in Data-PS3 (b) and Data-PS4 (c), +though the behavior of TL saturated at lower accuracy in each relative to Data-PS2. +17 + +data set is use. This suggests a threshold below which TL should always be used. Similar +improvements of DL and TL were seen in Data-PS3 and Data-PS4 [see Fig. 
7(b,c)], though +the behavior of TL saturated at lower accuracy in each relative to Data-PS2. Nonetheless, +improvements on the order of one standard deviation in R2 were seen up to the same 20% +threshold applied to the target data size. +Conclusion +In summary, a comprehensive TL study of the polymer–surface interaction between polymers +with defined sequences and different surfaces was conducted through pre-training a DNN +with a large dataset from the source domain (Data-PS1) and fine-tuning the pre-trained +DNN with the small dataset from the target domain (Data-PS2, Data-PS3, or Data-PS4). +Knowledge was transferable among the polymer interactions with different patterned sur- +faces. TL significantly upgraded the performance of the model trained on the small dataset. +In addition, our results showed that TL’s model is more stable than the DL and less likely to +be affected by the random selection of data. The study of permutation feature importance +revealed that the four patterned surfaces have some similar features, representing part of +the reasons the transferable knowledge can work. We also demonstrate that the increase +in target data size can diminish the improvement from TL, ascribable to the fact that DL +learns more knowledge from the feed data directly when the size of the target data increases. +However, the TL model with the full-fine-tuning architecture always performs better than +the DL model, even though the improvement diminishes at large sizes of datasets. +Our work highlights the importance of transfer learning in elevating the performance of +an ML model regarding the polymer–surface interaction under insufficient data dilemmas. +Usually, tens or hundreds of data points are insufficient to train an accurate ML model. +With transfer learning tools, the shareable knowledge from a pre-trained model can help +the ML model trained for polymer–surface interaction with a new surface to obtain higher +18 + +performance. Our test cases are all simulation datasets, and the knowledge is shared among +different surfaces in simulation. But the benefit of knowledge sharing is not only limited +to simulation data sets. For example, Briceno-Mena et al.29 have utilized transfer learning +techniques to increase the performance of an ML model trained with insufficient experi- +mental data by transferring knowledge from an ML model trained with a large simulation +dataset.29 Similarly, for the prediction and optimization of adhesive energies, transfer learn- +ing can be used to maximize our knowledge within a new chemical domain from a smaller +amount of simulations or experiments, perhaps allowing purely computational and coarse- +grained models to cheaply explore compositional space and predictive models to be refined +by directed experimentation. As transfer learning, especially few-shot learning, has achieved +dramatic successes in computer vision and language models by building a series of sizeable +pre-trained ML models, such as YOLO,76 BERT,56 and GPT-3,55 we anticipate knowledge +about network structure and problem complexity may be used to guide these algorithms and +their applications to materials problems. With more experimental and simulation data about +polymer–surface interactions being produced and collected in the future, it is expected to +obtain a sizeable pre-trained ML model for polymer surface interaction. +Data Availability +The adhesive free energy datasets that used in this article are available online at https://github.com/shijiale0609/ML_PSI. 
+Example scripts and information necessary to run the examples contained in this article are +posted at https://github.com/shijiale0609/TL_PSI. +Author Contributions +JS, FA, SS, and JKW conceived the study. JS and JKW designed the research plan. YJC +provided suggestions and guidance on the transfer learning framework. JS conducted transfer +learning model training and analysed the results. JS, FA, YJC, SS, and JKW interpreted +19 + +the results and wrote the paper. +Acknowledgement +JS, and JKW acknowledge the support of MICCoM, the Midwest Center for Computational +Materials, as part of the Computational Materials Sciences Program funded by the U.S. +Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and +Engineering Division, for the development of algorithms and codes used within this work. +JS, and JKW acknowledge computational resources at the Notre Dame Center for Research +Computing (CRC). YJC acknowledges support from the US National Science Foundation +Faculty Early Career Development Program (CAREER) through award CBET-2143346. +References +(1) Chakraborty, A. K.; Golumbfskie, A. J. Polymer Adsorption–Driven Self-Assembly of +Nanostructures. Annu. Rev. Phys. Chem. 2001, 52, 537–573, PMID: 11326074. +(2) Li, L.; Lin, Q.; Tang, M.; Duncan, A. J. E.; Ke, C. Advanced Polymer Designs for Direct- +Ink-Write 3D Printing. Chemistry – A European Journal 2019, 25, 10768–10781. +(3) Kim, S. O.; Solak, H. H.; Stoykovich, M. P.; Ferrier, N. J.; De Pablo, J. J.; Nealey, P. F. +Epitaxial self-assembly of block copolymers on lithographically defined nanopatterned +substrates. Nature 2003, 424, 411–414. +(4) Ikawa, M.; Yamada, T.; Matsui, H.; Minemawari, H.; Tsutsumi, J.; Horii, Y.; Chika- +matsu, M.; Azumi, R.; Kumai, R.; Hasegawa, T. Simple push coating of polymer thin- +film transistors. Nature Communications 2012, 3, 1–8. +(5) Annabi, N.; Tamayol, A.; Shin, S. R.; Ghaemmaghami, A. M.; Peppas, N. A.; +20 + +Khademhosseini, A. Surgical materials: Current challenges and nano-enabled solutions. +Nano Today 2014, 9, 574–589. +(6) Xiu, S.; Dick, A.; Ju, H.; Mirzaie, S.; Abdi, F.; Cocklin, S.; Zhan, P.; Liu, X. Inhibitors +of SARS-CoV-2 Entry: Current and Future Opportunities. J. Med. Chem 2020, 63, +12256–12274. +(7) Wong, J. P.; Damania, B. SARS-CoV-2 dependence on host pathways. Science 2021, +371, 884–885. +(8) Callaway, E. Making sense of coronavirus mutations. Nature 2020, 174–177. +(9) Plante, J. A.; Liu, Y.; Liu, J.; Xia, H.; Johnson, B. A.; Lokugamage, K. G.; Zhang, X.; +Muruato, A. E.; Zou, J.; Fontes-Garfias, C. R.; Mirchandani, D.; Scharton, D.; +Bilello, J. P.; Ku, Z.; An, Z.; Kalveram, B.; Freiberg, A. N.; Menachery, V. D.; Xie, X.; +Plante, K. S.; Weaver, S. C.; Shi, P.-Y. Spike mutation D614G alters SARS-CoV-2 +fitness. Nature 2021, 592, 116–121. +(10) Hie, B.; Zhong, E. D.; Berger, B.; Bryson, B. Learning the language of viral evolution +and escape. Science 2021, 371, 284–288. +(11) Ozboyaci, M.; Kokh, D. B.; Corni, S.; Wade, R. C. Modeling and simulation of protein– +surface interactions: achievements and challenges. Quarterly Reviews of Biophysics +2016, 49. +(12) Chakraborty, A. K.; Bratko, D. A simple theory and Monte Carlo simulations for recog- +nition between random heteropolymers and disordered surfaces. J. Chem. Phys. 1998, +108, 1676–1682. +(13) Muthukumar, M. Pattern recognition by polyelectrolytes. J. Chem. Phys. 1995, 103, +4723–4731. +21 + +(14) Chakraborty, A. K. 
Disordered heteropolymers: models for biomimetic polymers and +polymers with frustrating quenched disorder. Phys. Rep. 2001, 342, 1–61. +(15) Muthukumar, M. Pattern recognition in self-assembly. Curr. Opin. Colloid Interface +Sci. 1998, 3, 48–54. +(16) Chauhan, G.; Simpson, M. L.; Abel, S. M. Crowding-induced interactions of ring poly- +mers. Soft Matter 2021, 17, 16–23. +(17) Kriksin, Y. A.; Khalatur, P. G.; Khokhlov, A. R. Adsorption of multiblock copolymers +onto a chemically heterogeneous surface: A model of pattern recognition. J. Chem. +Phys. 2005, 122, 114703. +(18) Tereshkin, E. V.; Tereshkina, K. B.; Krupyanskii, Y. F. Predicting Binding Free Ener- +gies for DPS Protein-DNA Complexes and Crystals Using Molecular Dynamics. Super- +computing Frontiers and Innovations 2022, 9, 33–45. +(19) Shi, J.; Quevillon, M. J.; Amorim Valen¸ca, P. H.; Whitmer, J. K. Predicting Adhesive +Free Energies of Polymer–Surface Interactions with Machine Learning. ACS Applied +Materials & Interfaces 2022, 14, 37161–37169. +(20) Bishop, C. M. Pattern Recognition and Machine Learning (Information Science and +Statistics); Springer-Verlag: Berlin, Heidelberg, 2006. +(21) Murphy, K. P. Machine learning: a probabilistic perspective; MIT press, 2012. +(22) Murphy, K. P. Probabilistic Machine Learning: An introduction; MIT Press, 2022. +(23) Murphy, K. P. Probabilistic Machine Learning: Advanced Topics; MIT Press, 2023. +(24) Artrith, N.; Butler, K. T.; Coudert, F.-X.; Han, S.; Isayev, O.; Jain, A.; Walsh, A. Best +practices in machine learning for chemistry. Nat. Chem. 2021, 13, 505–508. +22 + +(25) de Pablo, J. J.; Jackson, N. E.; Webb, M. A.; Chen, L.-Q.; Moore, J. E.; Morgan, D.; +Jacobs, R.; Pollock, T.; Schlom, D. G.; Toberer, E. S.; Analytis, J.; Dabo, I.; De- +Longchamp, D. M.; Fiete, G. A.; Grason, G. M.; Hautier, G.; Mo, Y.; Rajan, K.; +Reed, E. J.; Rodriguez, E.; Stevanovic, V.; Suntivich, J.; Thornton, K.; Zhao, J.-C. +New frontiers for the materials genome initiative. npj Computational Materials 2019, +5, 41. +(26) Gormley, A. J.; Webb, M. A. Machine learning in combinatorial polymer chemistry. +Nature Reviews Materials 2021, 1–3. +(27) Huang, K.; Fu, T.; Glass, L. M.; Zitnik, M.; Xiao, C.; Sun, J. DeepPurpose: a deep +learning library for drug–target interaction prediction. Bioinformatics 2020, 36, 5545– +5547. +(28) Liang, Z.; Li, Z.; Zhou, S.; Sun, Y.; Yuan, J.; Zhang, C. Machine-learning exploration +of polymer compatibility. Cell Reports Physical Science 2022, 3, 100931. +(29) Briceno-Mena, L. A.; Romagnoli, J. A.; Arges, C. G. PemNet: A Transfer Learning- +Based Modeling Approach of High-Temperature Polymer Electrolyte Membrane Elec- +trochemical Systems. Industrial & Engineering Chemistry Research 2022, 61, 3350– +3357. +(30) Sattari, K.; Xie, Y.; Lin, J. Data-driven algorithms for inverse design of polymers. Soft +Matter 2021, 17, 7607–7622. +(31) Sevgen, E.; Guo, A. Z.; Sidky, H.; Whitmer, J. K.; de Pablo, J. J. Combined Force- +Frequency Sampling for Simulation of Systems Having Rugged Free Energy Landscapes. +Journal of Chemical Theory and Computation 2020, 16, 1448–1455, PMID: 31951703. +(32) Sidky, H.; Whitmer, J. K. Learning free energy landscapes using artificial neural net- +works. The Journal of Chemical Physics 2018, 148, 104111. +23 + +(33) Jumper, J.; Evans, R.; Pritzel, A.; Green, T.; Figurnov, M.; Ronneberger, O.; Tun- +yasuvunakool, K.; Bates, R.; ˇZ´idek, A.; Potapenko, A.; Bridgland, A.; Meyer, C.; +Kohl, S. A. A.; Ballard, A. 
J.; Cowie, A.; Romera-Paredes, B.; Nikolov, S.; Jain, R.; Adler, J.; Back, T.; Petersen, S.; Reiman, D.; Clancy, E.; Zielinski, M.; Steinegger, M.; Pacholska, M.; Berghammer, T.; Bodenstein, S.; Silver, D.; Vinyals, O.; Senior, A. W.; Kavukcuoglu, K.; Kohli, P.; Hassabis, D. Highly accurate protein structure prediction with AlphaFold. Nature 2021, 596, 583–589.
(34) Tunyasuvunakool, K.; Adler, J.; Wu, Z.; Green, T.; Zielinski, M.; Žídek, A.; Bridgland, A.; Cowie, A.; Meyer, C.; Laydon, A.; Velankar, S.; Kleywegt, G. J.; Bateman, A.; Evans, R.; Pritzel, A.; Figurnov, M.; Ronneberger, O.; Bates, R.; Kohl, S. A. A.; Potapenko, A.; Ballard, A. J.; Romera-Paredes, B.; Nikolov, S.; Jain, R.; Clancy, E.; Reiman, D.; Petersen, S.; Senior, A. W.; Kavukcuoglu, K.; Birney, E.; Kohli, P.; Jumper, J.; Hassabis, D. Highly accurate protein structure prediction for the human proteome. Nature 2021, 596, 590–596.
(35) Baek, M.; DiMaio, F.; Anishchenko, I.; Dauparas, J.; Ovchinnikov, S.; Lee, G. R.; Wang, J.; Cong, Q.; Kinch, L. N.; Schaeffer, R. D.; Millán, C.; Park, H.; Adams, C.; Glassman, C. R.; DeGiovanni, A.; Pereira, J. H.; Rodrigues, A. V.; van Dijk, A. A.; Ebrecht, A. C.; Opperman, D. J.; Sagmeister, T.; Buhlheller, C.; Pavkov-Keller, T.; Rathinaswamy, M. K.; Dalwadi, U.; Yip, C. K.; Burke, J. E.; Garcia, K. C.; Grishin, N. V.; Adams, P. D.; Read, R. J.; Baker, D. Accurate prediction of protein structures and interactions using a three-track neural network. Science 2021, 373, 871–876.
(36) Webb, M. A.; Jackson, N. E.; Gil, P. S.; de Pablo, J. J. Targeted sequence design within the coarse-grained polymer genome. Sci. Adv. 2020, 6.
(37) Statt, A.; Casademunt, H.; Brangwynne, C. P.; Panagiotopoulos, A. Z. Model for disordered proteins with strongly sequence-dependent liquid phase behavior. J. Chem. Phys. 2020, 152, 075101.
(38) Statt, A.; Kleeblatt, D. C.; Reinhart, W. F. Unsupervised learning of sequence-specific aggregation behavior for a model copolymer. Soft Matter 2021, 17, 7697–7707.
(39) Meenakshisundaram, V.; Hung, J.-H.; Patra, T. K.; Simmons, D. S. Designing Sequence-Specific Copolymer Compatibilizers Using a Molecular-Dynamics-Simulation-Based Genetic Algorithm. Macromolecules 2017, 50, 1155–1166.
(40) Ma, R.; Huang, D.; Zhang, T.; Luo, T. Determining influential descriptors for polymer chain conformation based on empirical force-fields and molecular dynamics simulations. Chem. Phys. Lett. 2018, 704, 49–54.
(41) Arora, A.; Lin, T.-S.; Rebello, N. J.; Av-Ron, S. H. M.; Mochigase, H.; Olsen, B. D. Random Forest Predictor for Diblock Copolymer Phase Behavior. ACS Macro Letters 2021, 10, 1339–1345.
(42) Patel, R. A.; Borca, C. H.; Webb, M. A. Featurization strategies for polymer sequence or composition design by machine learning. Mol. Syst. Des. Eng. 2022, 7, 661–676.
(43) Ma, R.; Zhang, H.; Luo, T. Exploring High Thermal Conductivity Amorphous Polymers Using Reinforcement Learning. ACS Applied Materials & Interfaces 2022, 14, 15587–15598.
(44) Ma, R.; Liu, Z.; Zhang, Q.; Liu, Z.; Luo, T. Evaluating Polymer Representations via Quantifying Structure–Property Relationships. Journal of Chemical Information and Modeling 2019, 59, 3110–3119, PMID: 31268306.
(45) Lin, T.-S.; Coley, C. W.; Mochigase, H.; Beech, H. K.; Wang, W.; Wang, Z.; Woods, E.; Craig, S. L.; Johnson, J. A.; Kalow, J. A.; Jensen, K. F.; Olsen, B. D. BigSMILES: A Structurally-Based Line Notation for Describing Macromolecules. ACS Central Science 2019, 5, 1523–1531.
(46) Jindong, W.; et al. Transfer Learning Tutorial. 2018; https://github.com/jindongwang/transferlearning-tutorial.
(47) Yang, Q.; Zhang, Y.; Dai, W.; Pan, S. J. Transfer Learning; Cambridge University Press, 2020.
(48) Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? Advances in Neural Information Processing Systems. 2014; pp 1–9.
(49) Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A comprehensive survey on transfer learning. Proceedings of the IEEE 2020, 109, 43–76.
(50) Pan, S. J.; Yang, Q. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering 2009, 22, 1345–1359.
(51) Wang, D.; Zheng, T. F. Transfer learning for speech and language processing. 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA). 2015; pp 1225–1237.
(52) Kunze, J.; Kirsch, L.; Kurenkov, I.; Krug, A.; Johannsmeier, J.; Stober, S. Transfer Learning for Speech Recognition on a Budget. CoRR 2017, abs/1706.00290.
(53) Ng, H.-W.; Nguyen, V. D.; Vonikakis, V.; Winkler, S. Deep Learning for Emotion Recognition on Small Datasets Using Transfer Learning. Proceedings of the 2015 ACM on International Conference on Multimodal Interaction. New York, NY, USA, 2015; pp 443–449.
(54) Yang, X.; Zhang, Y.; Lv, W.; Wang, D. Image recognition of wind turbine blade damage based on a deep learning model with transfer learning and an ensemble learning classifier. Renewable Energy 2021, 163, 386–397.
(55) Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; Amodei, D. Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems. 2020; pp 1877–1901.
(56) Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 2018.
(57) Tsubaki, M.; Mizoguchi, T. Quantum Deep Descriptor: Physically Informed Transfer Learning from Small Molecules to Polymers. Journal of Chemical Theory and Computation 2021, 17, 7814–7821, PMID: 34846893.
(58) Sultan, M. M.; Wayment-Steele, H. K.; Pande, V. S. Transferable Neural Networks for Enhanced Sampling of Protein Dynamics. Journal of Chemical Theory and Computation 2018, 14, 1887–1894, PMID: 29529369.
(59) Käser, S.; Boittier, E. D.; Upadhyay, M.; Meuwly, M. Transfer Learning to CCSD(T): Accurate Anharmonic Frequencies from Machine Learning Models. Journal of Chemical Theory and Computation 2021, 17, 3687–3699, PMID: 33960787.
(60) Ma, R.; Colón, Y. J.; Luo, T. Transfer Learning Study of Gas Adsorption in Metal–Organic Frameworks. ACS Appl. Mater. Interfaces 2020, 12, 34041–34048, PMID: 32613831.
(61) Liu, Z.; Jiang, M.; Luo, T. Leverage electron properties to predict phonon properties via transfer learning for semiconductors. Science Advances 2020, 6, eabd1356.
(62) Wu, S.; Kondo, Y.; Kakimoto, M.-a.; Yang, B.; Yamada, H.; Kuwajima, I.; Lambard, G.; Hongo, K.; Xu, Y.; Shiomi, J.; Schick, C.; Morikawa, J.; Yoshida, R. Machine-learning-assisted discovery of polymers with high thermal conductivity using a molecular design algorithm. npj Computational Materials 2019, 5, 66.
(63) Thompson, A. P.; Aktulga, H. M.; Berger, R.; Bolintineanu, D. S.; Brown, W. M.; Crozier, P. S.; in 't Veld, P. J.; Kohlmeyer, A.; Moore, S. G.; Nguyen, T. D.; Shan, R.; Stevens, M. J.; Tranchida, J.; Trott, C.; Plimpton, S. J. LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales. Comp. Phys. Comm. 2022, 271, 108171.
(64) Darve, E.; Rodríguez-Gómez, D.; Pohorille, A. Adaptive biasing force method for scalar and vector free energy calculations. J. Chem. Phys. 2008, 128, 144120.
(65) Sidky, H.; Colón, Y. J.; Helfferich, J.; Sikora, B. J.; Bezik, C.; Chu, W.; Giberti, F.; Guo, A. Z.; Jiang, X.; Lequieu, J.; Li, J.; Moller, J.; Quevillon, M. J.; Rahimi, M.; Ramezani-Dakhel, H.; Rathee, V. S.; Reid, D. R.; Sevgen, E.; Thapar, V.; Webb, M. A.; Whitmer, J. K.; de Pablo, J. J. SSAGES: Software Suite for Advanced General Ensemble Simulations. The Journal of Chemical Physics 2018, 148, 044104.
(66) Shi, J.; Huang, S.; Gygi, F.; Whitmer, J. K. Free-Energy Landscape and Isomerization Rates of Au4 Clusters at Finite Temperatures. The Journal of Physical Chemistry A 2022, 126, 3392–3400, PMID: 35584205.
(67) Shi, J.; Sidky, H.; Whitmer, J. K. Automated determination of n-cyanobiphenyl and n-cyanobiphenyl binary mixtures elastic constants in the nematic phase from molecular simulation. Mol. Syst. Des. Eng. 2020, 5, 1131–1136.
(68) Leonhard, A. C.; Whitmer, J. K. Accurate Determination of Cavitand Binding Free Energies via Unrestrained Advanced Sampling. Journal of Chemical Theory and Computation 2019, 15, 5761–5768, PMID: 31566977.
(69) Cortés-Morales, E. C.; Rathee, V. S.; Ghobadi, A.; Whitmer, J. K. A molecular view of plasticization of polyvinyl alcohol. The Journal of Chemical Physics 2021, 155, 174903.
(70) Zhang, A.; Lipton, Z. C.; Li, M.; Smola, A. J. Dive into Deep Learning. arXiv preprint arXiv:2106.11342 2021.
(71) Xu, B.; Wang, N.; Chen, T.; Li, M. Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853 2015.
(72) Kingma, D. P.; Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 2014.
(73) Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; Desmaison, A.; Kopf, A.; Yang, E.; DeVito, Z.; Raison, M.; Tejani, A.; Chilamkurthy, S.; Steiner, B.; Fang, L.; Bai, J.; Chintala, S. In Advances in Neural Information Processing Systems 32; Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., Garnett, R., Eds.; Curran Associates, Inc., 2019; pp 8024–8035.
(74) Altmann, A.; Toloşi, L.; Sander, O.; Lengauer, T. Permutation importance: a corrected feature importance measure. Bioinformatics 2010, 26, 1340–1347.
(75) TeamHG-Memex, ELI5. https://github.com/TeamHG-Memex/eli5, 2020.
(76) Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. Proceedings of the IEEE conference on computer vision and pattern recognition. 2016; pp 779–788.
diff --git a/KNE0T4oBgHgl3EQfSQBx/content/tmp_files/load_file.txt b/KNE0T4oBgHgl3EQfSQBx/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..0116873a71926d2c213e8f603127d1f2e5d7da5d
--- /dev/null
+++ b/KNE0T4oBgHgl3EQfSQBx/content/tmp_files/load_file.txt
@@ -0,0 +1,1617 @@
arXiv:2301.02219v1 [cond-mat.soft] 5 Jan 2023

Transfer Learning Facilitates the Prediction of Polymer–Surface Adhesion Strength

Jiale Shi,† Fahed Albreiki,‡ Yamil J. Colón,† Samanvaya Srivastava,‡,¶,§,∥ and Jonathan K. Whitmer∗,†,⊥

†Department of Chemical and Biomolecular Engineering, University of Notre Dame, Notre Dame, Indiana 46556, United States
‡Department of Chemical and Biomolecular Engineering, University of California, Los Angeles, Los Angeles, California 90095, United States
¶California NanoSystems Institute, Center for Biological Physics, University of California, Los Angeles, Los Angeles, California 90095, United States
§Institute for Carbon Management, University of California, Los Angeles, Los Angeles, California 90095, United States
∥Center for Biological Physics, University of California, Los Angeles, Los Angeles, California 90095, United States
⊥Department of Chemistry and Biochemistry, University of Notre Dame, Notre Dame, Indiana 46556, United States

E-mail: jwhitme1@nd.edu

Abstract

Machine learning (ML) accelerates the exploration of material properties and their links to the structure of the underlying molecules. In previous work [J. Shi, M. J. Quevillon, P. H. A. Valença, and J. K. Whitmer, ACS Appl. Mater. Interfaces, 2022, 14, 32, 37161–37169], ML models were applied to predict the adhesive free energy of polymer–surface interactions with high accuracy from knowledge of the sequence data, demonstrating successes in the inverse design of polymer sequences for known surface compositions. While the method was shown to be successful in designing polymers for a known surface, extensive datasets were needed for each specific surface in order to train the surrogate models. Ideally, one should be able to infer information about similar surfaces without having to regenerate a full complement of adhesion data for each new case. In the current work, we demonstrate a transfer learning (TL) technique using a deep neural network to improve the accuracy of ML models trained on small datasets by pre-training on a larger database from a related system and fine-tuning the weights of all layers with a small amount of additional data. The shared knowledge from the pre-trained model significantly improves the prediction accuracy on small datasets. We also explore the limits of database size on accuracy and the optimal tuning of network architecture and parameters for our learning tasks. While applied to a relatively simple coarse-grained (CG) polymer model, the general lessons of this study apply to detailed modeling studies and the broader problems of inverse materials design.
Introduction

Numerous industrial applications and biological phenomena involve chemically specific polymer–surface interactions, from ink absorption on paper,1,2 and semiconductor fabrication and coating,3,4 to the design and synthesis of artificial tissues5 and viruses recognizing receptors on a cell surface.6–10 The use of highly tuned sequence-defined polymers is attractive in controlling phase behavior, stabilizing interfaces, and promoting adhesion. Sequence-dependent adsorption of polymers to patterned surfaces has been studied through traditional theoretical and computational approaches11–18 and machine learning methods,19 emphasizing the importance of polymer sequence in determining the adsorption energies.17,19

Machine learning (ML) and artificial intelligence (AI)20–32 have achieved dramatic success in determining the behaviors and properties of polymer and biomacromolecule systems,33–41 including predicting protein structure,33–35 polymer structures (such as radius of gyration in solvent),36,42 and thermodynamic properties (such as polymer glass transition temperature, Tg).40,43,44 However, the wide-ranging chemical sequence, topological space, and mass distribution of the polymer are too extensive to explore.42,45 For example, even for linear binary copolymers with twenty monomers, the number of possible sequences is approximately one million. The chemical space becomes exponentially large if more monomer types, variable degrees of polymerization, non-uniform topologies, and mass distributions enter the description. ML techniques can help, but often provide knowledge highly specific to the immediate problem and require significant new datasets to incorporate information outside the original scope. For example, our prior work (see Ref. 19) utilized ML models to predict the adhesive free energy of polymer–surface interactions with high accuracy and aid the inverse design of polymer sequences for known surface compositions, but exploring adhesion of such a polymer to a substrate requires about 8000 data points to train an accurate ML model for each decorated surface.
Often, ML models are inaccurate or overfit when trained on small datasets. At the same time, in both industrial applications and biological settings, the surface patterns vary substantially, both structurally and randomly. Collecting large datasets for every patterned surface from thousands or millions of new experiments or simulations is, therefore, prohibitively difficult and expensive. In realistic situations, it may only be feasible to collect tens to hundreds of new data points. Data-driven ML modeling is easier to implement but often necessitates large datasets that could be difficult to obtain.46–48 Therefore, our aim here is to determine the minimum amount of additional computation necessary to obtain an accurate binding model, building as much as possible on prior knowledge.

Transfer learning (TL) can be a valuable technique to overcome the dilemma of insufficient data.46–48 In TL, an ML model initially pre-trained for a given task on a large dataset of the source domain is utilized as the base to train a model for a new task by fine-tuning on a small dataset of the target domain.29,46–49 Typically, TL can improve the model's accuracy if the source and target domains are closely related.29,46–48,50 TL has achieved considerable success in speech recognition,51,52 image recognition,53,54 and natural language processing.55,56 In addition, TL has also been successfully utilized in materials informatics studies,57–59 such as structural prediction of gas adsorption in MOFs,60 phonon properties in semiconductors,61 and thermal conductivity62 and electrochemical properties29 of polymers. However, these studies typically do not explore the explicit inverse design problem involved in materials design: what molecular structures, subject to reasonable constraints, are best for a given application?

In this study, we demonstrate the ability of transfer learning to improve the prediction performance for adhesive free energies between polymer chains with a defined sequence and patterned surfaces via fine-tuning a pre-trained model. The source domain and learning task come from a large dataset of polymer–surface interactions with one patterned surface.19 The target domain and learning task come from a small dataset of polymer–surface interactions with a different patterned surface.19 We utilize a deep neural network architecture to perform transfer learning and characterize the improvements on three example cases. We also explore the limits of database size on accuracy and the optimal tuning of network architecture and parameters for our learning tasks.

Methods

Data Set

The data sets used in this work are from our recent work, Shi, et al. (Ref. 19). As shown in Figure 1(a), every data point includes one sequence-defined polymer and its adhesive free energy ∆F with a patterned surface. The ∆F were generated by LAMMPS molecular dynamics simulations63 coupled with the adaptive biasing force (ABF) method64 as implemented in SSAGES.27,65–69 The polymer chain and surface are both composed of two types of beads, denoted "red" beads and "green" beads by their visualization in Figure 1. The polymer is modeled as a flexible 20-bead linear chain. The surface is holonomically constrained, with a simple square lattice of beads having dimension 20σ × 20σ for a total of 400 beads. Each dataset contains 2 × 10^4 sequence-defined polymers and their adhesive free energies with one patterned surface.
There are four different data sets, one for each pattern shown in Figure 1(b): PS1, which is composed of half red beads and half green beads in two stripes, with Nred = 200 and Ngreen = 200; PS2, which is composed of 16 alternating small squares (5σ × 5σ) of red and green beads with the same overall composition as PS1; PS3, where each bead was randomly generated with a probability of 0.5 for each site to be red or green, resulting in Nred = 184 and Ngreen = 216; and PS4, which is built upon PS2, but randomized within the interior of the 5σ × 5σ squares, resulting in a total of Nred = 206 and Ngreen = 194. PS3 and PS4 allow exploration of the role of randomizing effects on our adhesive models, with PS4 including randomness within an overall structure rather than only randomness. For simplicity, we use the name of the patterned surface to represent each data set, called Data-PS1, Data-PS2, Data-PS3, and Data-PS4. Detailed distributions and analysis of the adhesive free energy datasets are available in Ref. 19; reduced metrics corresponding to Gaussian fit parameters for each free energy distribution of Data-PS1, Data-PS2, Data-PS3, and Data-PS4 are shown in Table 1. Additional details for generating the datasets are discussed in the previous work.19 All datasets are available online at https://github.com/shijiale0609/ML_PSI.

Table 1: Gaussian Fitting Details19 of Distributions of Adhesive Free Energies for Data-PS1, Data-PS2, Data-PS3, and Data-PS4

Dataset     µ (kBT)    σ (kBT)
Data-PS1    15.66      2.89
Data-PS2    13.84      1.55
Data-PS3     8.96      0.77
Data-PS4     8.20      0.31

Figure 1: A schematic of the data sets of adhesive free energies of sequence-defined polymers with patterned surfaces from the work of Shi, et al.19 (a) Every data point includes a sequence-defined polymer and its adhesive free energy with a patterned surface. Each dataset contains 2 × 10^4 sequence-defined polymers and their adhesive free energies with one patterned surface. Therefore, for simplification, we use the name of the patterned surface to represent each data set. (b) There are four such datasets (Data-PS1, Data-PS2, Data-PS3, and Data-PS4) for four different patterned surfaces (PS1, PS2, PS3, and PS4).

Transfer Learning Architecture

In this work, a deep neural network (DNN) architecture29,60 with one input layer, three hidden layers, and one output layer was used to quantify the relationship between the polymer sequence information and the polymer–surface adhesive free energy, ∆F. The input was a one-hot encoding of the polymer sequence. The output was the adhesive free energy. The DNN architecture is shown in Figure 2. First, we trained a source DNN with the source data set.
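As a concrete illustration of the input/output pairing just described, the short sketch below encodes a 20-bead red/green sequence as a 20-dimensional numeric vector paired with its adhesive free energy. The bead-to-number mapping, the example sequence, and the placeholder ∆F value are assumptions for illustration only; the text states that the input is a one-hot encoding of the sequence but does not spell out the convention, and mapping each bead to a single 0/1 entry is the choice consistent with the 20-unit input layer described below.

import numpy as np

# Hypothetical bead-to-number mapping; for a two-letter (red/green) alphabet this is
# equivalent to keeping one of the two one-hot channels per bead position.
BEAD_CODE = {"R": 0.0, "G": 1.0}

def encode_sequence(sequence):
    """Turn a 20-character polymer sequence, e.g. 'RRGG...', into a 20-dimensional feature vector."""
    assert len(sequence) == 20, "the model polymer is a flexible 20-bead linear chain"
    return np.array([BEAD_CODE[bead] for bead in sequence], dtype=np.float32)

# One training example: an encoded sequence paired with its adhesive free energy (in kBT).
x = encode_sequence("RGGRRGRGGRRGGRRGGRRG")
y = 12.7  # illustrative placeholder value, not taken from the actual datasets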
Figure 2: A schematic of the procedure for testing the performance of transfer learning from the source domain (Data-PS1) to the target domain (Data-PS2, Data-PS3, or Data-PS4). A total of 2 × 10^4 polymer sequences and the corresponding ∆F with PS1 are used as the source data. We train a fully connected deep neural network whose architecture is (20,64,64,32,1), using all of the 2 × 10^4 source data points, and save its weights. When training the DNN for target data, as a transfer learning framework, we fine-tune a subset of the weights in the pretrained source DNN using 200 data points (TL) and compare with learning from a randomly initialized DNN in a direct learning (DL) way.

We used Data-PS1, the 2 × 10^4 data points of polymer sequences and their ∆F with PS1, as the source data, as the ML model applied to this dataset achieved the highest accuracy among the four original datasets.19 Then we randomly separated the 2 × 10^4 data points into 1.6 × 10^4 as the training set and 4 × 10^3 as the validation set; a 4:1 ratio is commonly used in machine learning.20,21,70 The training set is the set of data that was used to train the model and make it learn the hidden features/patterns in the data. In each epoch, the same training data was fed to the neural network architecture repeatedly, and the model continued to learn the features of the data. The validation set is a set of data that was used to validate our model performance during training. This validation process provided information that helped tune the model's hyperparameters and configurations.
A test set is not required for this initial task, as we are seeking a baseline trained on PS1 to extend to the other datasets. Without the need to leave data points for a test set, we were able to have more data points for training and validation. The hyperparameters of the DNN were optimized on the source task PS1 by promoting the accuracy and robustness of the DNN. Utilizing an n-tuple description for the layers of a fully connected DNN, our network is represented by (20,64,64,32,1). The learning rate, which serves as the step size for updating the DNN parameters, was set to 0.00002 to make the learning process stable. LeakyReLU71 with a negative slope of 0.1 was used as the activation function, and the Adam algorithm72 was used to optimize the weights. The number of learning epochs was set to 10^4, and the training process could be ended early by a convergence check applied to the validation data, if appropriate. We trained a source DNN using the training set of the source domain and selected the epoch with the highest accuracy on the validation set as the base DNN for the subsequent TL task; this is referred to as the pre-trained source DNN (depicted as the red DNN in Figure 2). An open-source machine learning framework, PyTorch,73 was used to implement the DNN. All the parameters are stored on GitHub, as described in the Code Availability section.

Next, we turned to the target data set and applied the DNN with the same hyperparameters. The small target data set was composed of 200 data points which were randomly drawn from existing data on the new domain (Data-PS2, Data-PS3, or Data-PS4).
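Putting the details of the preceding paragraphs together, the following is a minimal PyTorch sketch of how such a source network could be defined and pre-trained before turning to the 200-point target protocol. The (20,64,64,32,1) layout, LeakyReLU(0.1), Adam with a learning rate of 0.00002, the 10^4-epoch cap, and validation-based early stopping come from the text; the mean-squared-error loss, full-batch updates, the patience value, and the saved file name are assumptions, since those details are not given in this excerpt.

import torch
import torch.nn as nn

# Fully connected (20, 64, 64, 32, 1) network with LeakyReLU(0.1) activations,
# matching the architecture described in the text.
class AdhesionDNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(20, 64), nn.LeakyReLU(0.1),
            nn.Linear(64, 64), nn.LeakyReLU(0.1),
            nn.Linear(64, 32), nn.LeakyReLU(0.1),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_model(model, x_train, y_train, x_val, y_val, max_epochs=10_000, patience=500):
    """Full-batch training with Adam (lr = 2e-5); keeps the weights of the best validation epoch."""
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)
    loss_fn = nn.MSELoss()  # assumed regression loss
    best_val, best_state, stale = float("inf"), None, 0
    for _ in range(max_epochs):
        model.train()
        optimizer.zero_grad()
        loss = loss_fn(model(x_train).squeeze(-1), y_train)
        loss.backward()
        optimizer.step()
        model.eval()
        with torch.no_grad():
            val = loss_fn(model(x_val).squeeze(-1), y_val).item()
        if val < best_val:
            best_val, stale = val, 0
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
        else:
            stale += 1
            if stale > patience:  # assumed convergence / early-stopping criterion
                break
    model.load_state_dict(best_state)
    return model

# Pre-training on the Data-PS1 source set (1.6e4 training / 4e3 validation points),
# then saving the weights for later transfer:
# source_dnn = train_model(AdhesionDNN(), x_train_ps1, y_train_ps1, x_val_ps1, y_val_ps1)
# torch.save(source_dnn.state_dict(), "source_dnn_ps1.pt")  # hypothetical file name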
The data set was then divided into training, validation, and test sets in the ratio of 72:18:10, to be consistent with previous transfer learning studies.60 Of these, 144 training data points were used for training the model, and 36 validation data points were used to determine when the training should be stopped and to avoid overfitting. The validation data set was used to select the training epoch. Since the validation data set was involved in the training process, the model's performance is biased toward it. Therefore, we additionally tested our model on the untouched test data set to provide unbiased final model performance metrics. Our use of this protocol enabled us to address the core question: "How well does the model perform on the small data set of Data-PS2, Data-PS3, or Data-PS4 without bias?"

To illustrate the power of transfer learning, with the same 200 data points and the same separation into training, validation, and test sets, we performed direct learning (DL) (black DNN in Figure 2) and transfer learning (TL) (blue DNN in Figure 2). For direct learning, we trained the DNN model from randomly initialized weights. For transfer learning (blue DNN in Figure 2), we instead fine-tuned the weights of all layers in the pre-trained DNN from the source task. There are three reasons that we chose to fine-tune the weights of all layers. First, we sought to build an end-to-end model which is more friendly to other users who are not familiar with deep learning. In an end-to-end model, users only need to focus on the input and output and do not need to worry about how to modify the inside architecture of the model. We want to show that starting from a pre-trained DNN without fixing the weights can yield improvements.
Second, we tested other fine-tuning formats, such as fixing the weights of the first n layers and fine-tuning the weights of the remaining m layers.60 We found that those formats do not provide competitive improvements and sometimes behaved worse than fine-tuning all layers. Third, when the size of the training set increases, fixing the weights of some layers might lead to underfitting. Fine-tuning the weights of all layers is more robust to the size of the training data.

The comparison between the performances of the DL and TL scenarios was evaluated by comparing their respective coefficients of determination (R2 values) on the same test sets (20 data points),

R2 = 1 − [Σ_i (y_i − ŷ_i)²] / [Σ_i (y_i − ȳ)²]    (1)

The maximum performance score of R2 = 1.0 occurs when every prediction is correct (y_i ≡ ŷ_i). Note that R2 can be negative because the model can be arbitrarily poor. The choice of R2 as an evaluation metric is reasonable, as R2 provides a natural baseline for judging the performance of models.60 For each small data set, we obtained two coefficients: R2 DL, which shows the performance of DL, and R2 TL, which characterizes the performance of TL. The small data sets resulted in highly variable model accuracies due to the random data drawing. Therefore, we did not limit testing to a single small target data case and instead randomly drew sample data from the target space 1000 times for both the DL and TL scenarios, subsequently obtaining 1000 pairs of R2 DL and R2 TL. This enabled us to gain a statistically robust understanding of the behavior of TL, mitigating the effects of outlier data sets on training.
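To make the DL/TL comparison concrete, here is a minimal sketch of a single target-domain draw, reusing the AdhesionDNN class and train_model helper from the sketch above: direct learning starts from random weights, transfer learning loads the saved source weights and fine-tunes all layers, and both are scored with R2 on the 20 held-out test points. The split ordering, the saved-weights file name, and the helper names are illustrative assumptions rather than the authors' released code.

import torch

def r_squared(y_true, y_pred):
    """Coefficient of determination, Eq. (1); it can be negative for an arbitrarily poor model."""
    ss_res = torch.sum((y_true - y_pred) ** 2)
    ss_tot = torch.sum((y_true - torch.mean(y_true)) ** 2)
    return 1.0 - (ss_res / ss_tot).item()

def evaluate_one_draw(x_target, y_target, source_weights="source_dnn_ps1.pt"):
    """One random draw of 200 target points, split 144/36/20, scored for both DL and TL."""
    idx = torch.randperm(len(x_target))[:200]
    tr, va, te = idx[:144], idx[144:180], idx[180:]

    # Direct learning (DL): train from randomly initialized weights.
    dl_model = train_model(AdhesionDNN(), x_target[tr], y_target[tr], x_target[va], y_target[va])

    # Transfer learning (TL): load the pre-trained source weights, then fine-tune all layers.
    tl_model = AdhesionDNN()
    tl_model.load_state_dict(torch.load(source_weights))
    tl_model = train_model(tl_model, x_target[tr], y_target[tr], x_target[va], y_target[va])

    with torch.no_grad():
        r2_dl = r_squared(y_target[te], dl_model(x_target[te]).squeeze(-1))
        r2_tl = r_squared(y_target[te], tl_model(x_target[te]).squeeze(-1))
    return r2_dl, r2_tl

# Repeating evaluate_one_draw 1000 times with fresh random draws yields the 1000
# (R2 DL, R2 TL) pairs whose means and standard deviations are summarized in Table 2.
# The layer-freezing variants the authors also tested could be mimicked by setting
# requires_grad = False on the first layers before calling train_model.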
Results and Discussion

A summary of the performance (captured via R2 values) of direct (DL) and transfer (TL) learning for 1000 small target data sets from Data-PS2, Data-PS3, and Data-PS4 is given in Table 2. We step through specific cases below.

Table 2: R2 characteristics from DL and TL for 1000 small target data sets for datasets Data-PS2, Data-PS3, and Data-PS4; transfer learning proceeds using a neural network trained on Data-PS1 applied to data on the target surface.

Dataset     R2 DL                R2 TL
Data-PS2    −0.0089 ± 0.1956     0.8303 ± 0.0747
Data-PS3     0.6338 ± 0.2079     0.7998 ± 0.1173
Data-PS4     0.4502 ± 0.1849     0.6578 ± 0.1341

Knowledge Transfer from Data-PS1 to Data-PS2

We first investigate the application of TL in transferring knowledge from Data-PS1 over to Data-PS2, which links data acquired on one regular patterned surface to another regular patterned surface. PS1 and PS2 have the same overall composition (red:green = 200:200) and similar patterning when accounting for periodic boundaries, though PS2 uses smaller squares of uniform chemistry rather than two large stripes.
The results of the 1000 trials are plotted as pairs of R2 DL and R2 TL for comparison in Figure 3. As shown in Figure 3(a), 624 of the DL R2 values are negative, implying poor model performance for those cases. All R2 values for the TL cases are positive, and many are close to one, meaning that the models' performances are excellent in those cases. Collectively, the average R2 on the 1000 test sets through DL is −0.0089 ± 0.1956, while the same metric for TL is 0.8303 ± 0.0747. Therefore, TL both improved the mean value of R2 and decreased its standard deviation (SD). The diminishing SD shows that TL is less sensitive to the random selection of the small dataset than DL, which can be ascribed to the weights of the pre-trained DNN being close to the optimized weights of the target DNN. From the dashed line depicting ∆R2 in Figure 3(c), where all the ∆R2 are greater than zero, and from Figure 3(b), where all points (R2 TL vs. R2 DL) are above the line y = x, it can be inferred that in all 1000 target cases TL improved the accuracy of the model prediction. In Figure 3(c), a strong negative linear relationship between the improvement ∆R2 and the model accuracy from DL demonstrates that TL contributed improved knowledge in situations where DL yielded low accuracy. Since TL transfers a pretrained network rather than initializing weights randomly, the additional small data set acts to refine the weights rather than generate them wholesale; hence, even when DL yields low accuracy, the performance of TL remains stable, as is strongly evident in the improvement of ∆R2 for these surfaces.
At the same time, when DL already achieved very high model accuracy on the target tasks, the transferred knowledge from the source task offered only a slight improvement. For these cases, the randomly initialized weights of the DNN for DL happened to be close to the optimized weights.

Knowledge Transfer from Data-PS1 to Data-PS3

Next, we investigated the application of TL from Data-PS1 to Data-PS3. While PS1 is very regular, PS3 is a fully randomized surface with composition (red:green = 184:216), generated using a random probability P(red) = P(green) = 0.5 for the beads in the square lattice. The average R2 on the 1000 test sets modeled using DL was 0.6338 ± 0.2079, while the same metric from TL was improved to 0.7998 ± 0.1173.

Figure 3: Transfer learning applied to adhesive free energies of sequence-defined polymers, using a DNN fit to Data-PS1 adapted to Data-PS2 with a small dataset. (a) 1000 pairs of R2 DL for direct learning (blue line) and R2 TL for transfer learning (red line). The improvement from transfer learning (∆R2 = R2 TL − R2 DL) is shown as a green line. The Case ID numbers on the x axis are sorted by the value of ∆R2 in descending order. (b) R2 TL plotted against R2 DL. (c) Improvement ∆R2 as a function of R2 DL.
We note that DL's performance for Data-PS3 was better than that for Data-PS2, attributable to the fact that the standard deviation of ∆F over the whole 2 × 10⁴-point Data-PS3 set (σ = 0.77 kBT) is smaller than that of Data-PS2 (σ = 1.55 kBT) [19]. We conclude that the improvement from TL is less robust on this dataset than on Data-PS2, likely because of the tighter distribution of adhesive energies (see Ref. 19 for context). Still, there remains a marked improvement. Another significant reason for the differences in this case is that the source and target datasets have more dissimilar adhesion properties, related to the randomization of the surface pattern. Still, DL has 15 cases where R²_DL is not greater than zero, while TL has only 2 cases where R²_TL is not greater than zero. The green line in Figure 4(a) and the data in Figure 4(b) show that in most examined cases (910 out of 1000), TL gives a positive improvement. In Figure 4(c), there is a generally negative linear relationship between ∆R² and the model accuracy from DL, though the linear relationship is not as strong as for the prior dataset in Figure 3(c). Thus, we infer that when the target tasks already have very high model accuracy from DL, the transfer of knowledge from the source task does not always help further improve the model accuracy.

Figure 4: Transfer learning applied to the adhesive free energies of sequence-defined polymers, using a DNN fit to Data-PS1 and adapted to Data-PS3 with a small dataset. (a) R² values for direct learning (R²_DL, blue line), transfer learning (R²_TL, red line), and the improvement from transfer learning (∆R² = R²_TL − R²_DL, green line) for the 1000 target cases. Case ID numbers on the x axis are sorted by the value of ∆R² in descending order. (b) R²_TL plotted against R²_DL. (c) Improvement ∆R² as a function of R²_DL.

Knowledge Transfer from Data-PS1 to Data-PS4

Finally, we test the application of TL from Data-PS1 to Data-PS4; the surface PS4 is a randomized version of PS2 whose composition (red:green = 206:194) differs slightly from the 1:1 composition of PS2 [19]. The average R² on 1000 test sets through DL is 0.4502 ± 0.1849, while the same metric from TL is improved to 0.6578 ± 0.1341. DL's performance for Data-PS4 was better than that for Data-PS2, attributable to the relative tightness of the free-energy distribution of Data-PS4 (σ = 0.31 kBT) compared to Data-PS2 (σ = 1.55 kBT) [19]. The improvement is not as evident as in the first case (Data-PS1 to Data-PS2), but it is overall much better than the performance of TL on the completely randomized PS3 surface. The green line in Figure 5(a) and the data in Figure 5(b) show that most cases (939 out of 1000) are positively impacted by TL. In Figure 5(c), the negative relationship between the improvement ∆R² and the model accuracy from DL appears weaker than in the previous two cases, Figure 3(c) and Figure 4(c), though we note that a single testing point with good TL and poor DL performance skews the plots visually.
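The "negative relationship" statements above can be quantified directly from the per-case scores. The sketch below uses synthetic placeholder arrays in place of the actual 1000 (R²_DL, R²_TL) pairs and adds a rank correlation, which is less sensitive to the single outlying point just mentioned.

```python
# Quantifying the trend of dR2 against R2_DL seen in Figures 3-5(c).
# The arrays are synthetic placeholders for the 1000 per-case results.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(1)
r2_dl = rng.normal(0.45, 0.18, size=1000)                     # placeholder DL scores
r2_tl = np.clip(0.6 * r2_dl + 0.35 + 0.05 * rng.normal(size=1000), None, 1.0)
d_r2 = r2_tl - r2_dl                                          # improvement per case

slope, intercept = np.polyfit(r2_dl, d_r2, deg=1)             # least-squares trend line
r, _ = pearsonr(r2_dl, d_r2)
rho, _ = spearmanr(r2_dl, d_r2)                               # robust to a lone outlier
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}, slope = {slope:.3f}")
```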
Figure 5: Transfer learning applied to the adhesive free energies of sequence-defined polymers, using a DNN fit to Data-PS1 and adapted to Data-PS4 with a small dataset. (a) R² values for direct learning (R²_DL, blue line), transfer learning (R²_TL, red line), and the improvement from transfer learning (∆R² = R²_TL − R²_DL, green line) for the 1000 target cases. Case ID numbers on the x axis are sorted by the value of ∆R² in descending order. (b) R²_TL plotted against R²_DL. (c) Improvement ∆R² as a function of R²_DL.

Feature Importance Analysis

The structure of the one-hot encoding of our sequence-defined polymers permitted the interrogation of the feature importance of the various sites on the polymer backbone. We utilize the entire datasets of Data-PS1, Data-PS2, Data-PS3, and Data-PS4. The details of the training process were identical to those stated in the Methods section. Permutation feature importance, defined as the decrease in predictive accuracy (∆R²) when a single feature value is randomly shuffled, was used to evaluate descriptor importance [60, 74]. The feature importance of feature i is computed as FI_i = ∆R² = R² − R²_i, where R² is the predictive accuracy without shuffling and R²_i is the predictive accuracy after randomly shuffling the i-th feature. We used the permutation feature importance implementation in the Python package ELI5 [75] to perform this analysis. Since our input is the one-hot encoding of the polymer sequence, a 20-dimensional vector, we shuffled the feature values of each dimension in turn and calculated the descriptor importance for every input variable.
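The shuffle-and-rescore definition above is straightforward to reproduce without any particular library; the sketch below implements FI_i = R² − R²_i on a stand-in model and synthetic 20-dimensional sequence encodings (the paper itself used the ELI5 package [75] for this step, and the model and data here are assumptions for illustration only).

```python
# Minimal permutation feature importance, FI_i = R2 - R2_i, on a stand-in model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, 20)).astype(float)   # 20 entries, one per bead
y = X @ rng.normal(size=20) + 0.1 * rng.normal(size=5000)

model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0).fit(X, y)
r2_full = r2_score(y, model.predict(X))                 # accuracy without shuffling

def feature_importance(model, X, y, n_repeats=5):
    """Drop in R2 when each feature column is shuffled in turn."""
    fi = np.zeros(X.shape[1])
    for i in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, i] = rng.permutation(X_perm[:, i])    # destroy only feature i
            scores.append(r2_score(y, model.predict(X_perm)))
        fi[i] = r2_full - np.mean(scores)               # FI_i = R2 - R2_i
    return fi

print(np.round(feature_importance(model, X, y), 4))
```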
Figure 6: Permutation feature importance of the 20-dimensional input vector, one entry per bead of the sequence-defined polymer, for the four datasets Data-PS1 (red), Data-PS2 (blue), Data-PS3 (orange), and Data-PS4 (purple). Essentially, only the endpoints are significantly different, with the 18 interior beads having roughly similar importance to one another.

The results of the feature importance analysis are shown in Figure 6. Even though the absolute value of the feature importance differs for each patterned surface, some common features exist for all four patterned surfaces. The head (first) and tail (twentieth) beads had relatively lower feature importance, and the other eighteen beads' feature importances were almost the same within each individual surface dataset. Statt et al. [37] also found that the ends of an intrinsically disordered protein (IDP) have a distinct effect on the phase behavior (critical temperature) compared with mutations in the middle of the chain, though the ends are seen there to have a more pronounced effect on the proteins' phase behavior. The common features we found among Data-PS1, Data-PS2, Data-PS3, and Data-PS4 represent the shareable knowledge in TL and can explain the successful application of TL in these cases. Pre-trained models were able to acquire these features before fine-tuning with the small datasets.

Size Effects for TL Improvements

The above investigations illustrated that transfer learning can improve the accuracy of DNN models trained on a small target dataset (200 data points). It is of interest to see how this scales with the amount of available data, so we also explored the effect of the target dataset size on the improvement from TL, increasing the "small" dataset to values between 200 and 4000 points. As before, the other training settings were kept the same as for the N = 200 datasets.
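The size sweep can be sketched by wrapping the earlier single-trial comparison in a loop over the target-set size; as before, the MLP, the synthetic data, and the specific sizes are stand-ins rather than the authors' settings.

```python
# Sketch of the target-size sweep (200 to 4000 points) for DL versus TL.
import copy
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
d = 20
w_src = rng.normal(size=d)
w_tgt = w_src + 0.3 * rng.normal(size=d)

def dataset(n, w):
    X = rng.integers(0, 2, size=(n, d)).astype(float)
    return X, X @ w + 0.1 * rng.normal(size=n)

X_src, y_src = dataset(20_000, w_src)
X_test, y_test = dataset(2_000, w_tgt)

pre = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
pre.fit(X_src, y_src)                                    # pre-train once on the source set

for n in (200, 400, 800, 1600, 3200, 4000):
    X_tgt, y_tgt = dataset(n, w_tgt)

    dl = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
    r2_dl = r2_score(y_test, dl.fit(X_tgt, y_tgt).predict(X_test))

    tl = copy.deepcopy(pre)                              # start from pre-trained weights
    tl.set_params(warm_start=True, max_iter=300)
    r2_tl = r2_score(y_test, tl.fit(X_tgt, y_tgt).predict(X_test))

    print(f"N={n:4d}  R2_DL={r2_dl:.3f}  R2_TL={r2_tl:.3f}  dR2={r2_tl - r2_dl:.3f}")
```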
Figure 7(a) illustrates the prediction performance of DL and TL for Data-PS2 as a function of the size of the target dataset, as quantified by the R² score. The mean value of the R² score increases and its SD decreases for both DL and TL as the size of the target dataset increases. These values change more rapidly for DL than for TL, which is perhaps to be expected, as more data should drastically improve the behavior of DL given that the initial datasets are so small; the pretrained TL model is able to approach an optimal fit more easily. We note that R²_TL quickly converged with increasing size: after the target data size exceeded 800, the accuracy (measured by R²_TL) saturated. Thus, the improvement of TL over DL decreased as the target dataset size increased. Still, the performance of the TL models was always better on average than that of the DL models. This indicates that the transfer of knowledge from the source task can offer significant improvement when scant target data is available. With a large target dataset, the DNN model can learn sufficiently well from the data directly, and TL does not offer significant improvement. In our datasets, 2 × 10⁴ independent points are available, and TL's efficacy saturates relative to DL when approximately 20% of the target dataset is used. This suggests a threshold below which TL should always be used. Similar improvements of DL and TL were seen in Data-PS3 and Data-PS4 [see Fig. 7(b,c)], though the behavior of TL saturated at lower accuracy in each relative to Data-PS2. Nonetheless, improvements on the order of one standard deviation in R² were seen up to the same 20% threshold applied to the target data size.

Figure 7: (a) R²_DL (blue) and R²_TL (red) as a function of the size of the target dataset; Data-PS2 was used for this comparison. The error bars reflect the SD of the R² score from 1000 random draws. As the size of the target dataset increases, R²_DL and R²_TL both increase and their SD values decrease; R²_DL increases more dramatically relative to R²_TL. Though the improvement ∆R² decreases with increasing dataset size, R²_TL is always larger than R²_DL. Similar improvements of DL and TL were seen in Data-PS3 (b) and Data-PS4 (c), though the behavior of TL saturated at lower accuracy in each relative to Data-PS2.

Conclusion

In summary, a comprehensive TL study of the polymer–surface interaction between polymers with defined sequences and different surfaces was conducted by pre-training a DNN with a large dataset from the source domain (Data-PS1) and fine-tuning the pre-trained DNN with a small dataset from the target domain (Data-PS2, Data-PS3, or Data-PS4). Knowledge was transferable among the polymer interactions with the different patterned surfaces. TL significantly upgraded the performance of the model trained on the small dataset. In addition, our results showed that the TL model is more stable than the DL model and less likely to be affected by the random selection of data. The study of permutation feature importance revealed that the four patterned surfaces share some similar features, which is part of the reason the transferred knowledge works.
We also demonstrate that increasing the target data size diminishes the improvement from TL, which can be ascribed to the fact that DL learns more directly from the supplied data as the target dataset grows. However, the TL model with the full fine-tuning architecture always performs better than the DL model, even though the improvement diminishes for large datasets. Our work highlights the importance of transfer learning in elevating the performance of an ML model for polymer–surface interactions when data are insufficient. Usually, tens or hundreds of data points are insufficient to train an accurate ML model. With transfer learning tools, the shareable knowledge from a pre-trained model can help an ML model trained for polymer–surface interactions with a new surface achieve higher performance. Our test cases are all simulation datasets, and the knowledge is shared among different surfaces in simulation. But the benefit of knowledge sharing is not limited to simulation datasets. For example, Briceno-Mena et al. [29] have utilized transfer learning techniques to increase the performance of an ML model trained with insufficient experimental data by transferring knowledge from an ML model trained with a large simulation dataset. Similarly, for the prediction and optimization of adhesive energies, transfer learning can be used to maximize our knowledge within a new chemical domain from a smaller number of simulations or experiments, perhaps allowing purely computational and coarse-grained models to cheaply explore compositional space and predictive models to be refined by directed experimentation. As transfer learning, especially few-shot learning, has achieved dramatic successes in computer vision and language models by building a series of sizeable pre-trained ML models, such as YOLO [76], BERT [56], and GPT-3 [55], we anticipate that knowledge about network structure and problem complexity may be used to guide these algorithms and their applications to materials problems.
With more experimental and simulation data on polymer–surface interactions being produced and collected in the future, we expect that a sizeable pre-trained ML model for polymer–surface interactions can be obtained.

Data Availability

The adhesive free-energy datasets used in this article are available online at https://github.com/shijiale0609/ML_PSI. Example scripts and the information necessary to run the examples contained in this article are posted at https://github.com/shijiale0609/TL_PSI.

Author Contributions

JS, FA, SS, and JKW conceived the study. JS and JKW designed the research plan. YJC provided suggestions and guidance on the transfer learning framework. JS conducted the transfer learning model training and analysed the results. JS, FA, YJC, SS, and JKW interpreted the results and wrote the paper.

Acknowledgement

JS and JKW acknowledge the support of MICCoM, the Midwest Center for Computational Materials, as part of the Computational Materials Sciences Program funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, for the development of algorithms and codes used within this work. JS and JKW acknowledge computational resources at the Notre Dame Center for Research Computing (CRC). YJC acknowledges support from the US National Science Foundation Faculty Early Career Development Program (CAREER) through award CBET-2143346.
References

(1) Chakraborty, A. K.; Golumbfskie, A. J. Polymer Adsorption–Driven Self-Assembly of Nanostructures. Annu. Rev. Phys. Chem. 2001, 52, 537–573, PMID: 11326074.
(2) Li, L.; Lin, Q.; Tang, M.; Duncan, A. J. E.; Ke, C. Advanced Polymer Designs for Direct-Ink-Write 3D Printing. Chemistry – A European Journal 2019, 25, 10768–10781.
(3) Kim, S. O.; Solak, H. H.; Stoykovich, M. P.; Ferrier, N. J.; De Pablo, J. J.; Nealey, P. F. Epitaxial self-assembly of block copolymers on lithographically defined nanopatterned substrates. Nature 2003, 424, 411–414.
(4) Ikawa, M.; Yamada, T.; Matsui, H.; Minemawari, H.; Tsutsumi, J.; Horii, Y.; Chikamatsu, M.; Azumi, R.; Kumai, R.; Hasegawa, T. Simple push coating of polymer thin-film transistors. Nature Communications 2012, 3, 1–8.
(5) Annabi, N.; Tamayol, A.; Shin, S. R.; Ghaemmaghami, A. M.; Peppas, N. A.; Khademhosseini, A. Surgical materials: Current challenges and nano-enabled solutions. Nano Today 2014, 9, 574–589.
(6) Xiu, S.; Dick, A.; Ju, H.; Mirzaie, S.; Abdi, F.; Cocklin, S.; Zhan, P.; Liu, X. Inhibitors of SARS-CoV-2 Entry: Current and Future Opportunities. J. Med. Chem. 2020, 63, 12256–12274.
(7) Wong, J. P.; Damania, B. SARS-CoV-2 dependence on host pathways. Science 2021, 371, 884–885.
(8) Callaway, E. Making sense of coronavirus mutations. Nature 2020, 174–177.
(9) Plante, J. A.; Liu, Y.; Liu, J.; Xia, H.; Johnson, B. A.; Lokugamage, K. G.; Zhang, X.; Muruato, A. E.; Zou, J.; Fontes-Garfias, C. R.; Mirchandani, D.; Scharton, D.; Bilello, J. P.; Ku, Z.; An, Z.; Kalveram, B.; Freiberg, A. N.; Menachery, V. D.; Xie, X.; Plante, K. S.; Weaver, S. C.; Shi, P.-Y. Spike mutation D614G alters SARS-CoV-2 fitness. Nature 2021, 592, 116–121.
(10) Hie, B.; Zhong, E. D.; Berger, B.; Bryson, B. Learning the language of viral evolution and escape. Science 2021, 371, 284–288.
(11) Ozboyaci, M.; Kokh, D. B.; Corni, S.; Wade, R. C. Modeling and simulation of protein–surface interactions: achievements and challenges. Quarterly Reviews of Biophysics 2016, 49.
(12) Chakraborty, A. K.; Bratko, D. A simple theory and Monte Carlo simulations for recognition between random heteropolymers and disordered surfaces. J. Chem. Phys. 1998, 108, 1676–1682.
(13) Muthukumar, M. Pattern recognition by polyelectrolytes. J. Chem. Phys. 1995, 103, 4723–4731.
(14) Chakraborty, A. K. Disordered heteropolymers: models for biomimetic polymers and polymers with frustrating quenched disorder. Phys. Rep. 2001, 342, 1–61.
(15) Muthukumar, M. Pattern recognition in self-assembly. Curr. Opin. Colloid Interface Sci. 1998, 3, 48–54.
(16) Chauhan, G.; Simpson, M. L.; Abel, S. M. Crowding-induced interactions of ring polymers. Soft Matter 2021, 17, 16–23.
(17) Kriksin, Y. A.; Khalatur, P. G.; Khokhlov, A. R. Adsorption of multiblock copolymers onto a chemically heterogeneous surface: A model of pattern recognition. J. Chem. Phys. 2005, 122, 114703.
(18) Tereshkin, E. V.; Tereshkina, K. B.; Krupyanskii, Y. F. Predicting Binding Free Energies for DPS Protein-DNA Complexes and Crystals Using Molecular Dynamics. Supercomputing Frontiers and Innovations 2022, 9, 33–45.
(19) Shi, J.; Quevillon, M. J.; Amorim Valença, P. H.; Whitmer, J. K. Predicting Adhesive Free Energies of Polymer–Surface Interactions with Machine Learning. ACS Applied Materials & Interfaces 2022, 14, 37161–37169.
(20) Bishop, C.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Pattern Recognition and Machine Learning (Information Science and Statistics);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Springer-Verlag: Berlin, Heidelberg, 2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (21) Murphy, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Machine learning: a probabilistic perspective;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' MIT press, 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (22) Murphy, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Probabilistic Machine Learning: An introduction;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' MIT Press, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (23) Murphy, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Probabilistic Machine Learning: Advanced Topics;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' MIT Press, 2023.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (24) Artrith, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Butler, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Coudert, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content='-X.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Han, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Isayev, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Jain, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Walsh, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Best practices in machine learning for chemistry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Chem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' 2021, 13, 505–508.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' 22 (25) de Pablo, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Jackson, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Webb, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Chen, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content='-Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Moore, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Morgan, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Jacobs, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Pollock, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Schlom, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Toberer, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Analytis, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Dabo, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' De- Longchamp, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Fiete, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Grason, G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Hautier, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Mo, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Rajan, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Reed, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Rodriguez, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Stevanovic, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Suntivich, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Thornton, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Zhao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' New frontiers for the materials genome initiative.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' npj Computational Materials 2019, 5, 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (26) Gormley, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Webb, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Machine learning in combinatorial polymer chemistry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Nature Reviews Materials 2021, 1–3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (27) Huang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Fu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Glass, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Zitnik, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Xiao, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Sun, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' DeepPurpose: a deep learning library for drug–target interaction prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Bioinformatics 2020, 36, 5545– 5547.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (28) Liang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Li, Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Zhou, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Sun, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Yuan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Zhang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Machine-learning exploration of polymer compatibility.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Cell Reports Physical Science 2022, 3, 100931.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (29) Briceno-Mena, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Romagnoli, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Arges, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' PemNet: A Transfer Learning- Based Modeling Approach of High-Temperature Polymer Electrolyte Membrane Elec- trochemical Systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Industrial & Engineering Chemistry Research 2022, 61, 3350– 3357.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (30) Sattari, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Xie, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Lin, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Data-driven algorithms for inverse design of polymers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Soft Matter 2021, 17, 7607–7622.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (31) Sevgen, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Guo, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Sidky, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Whitmer, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' de Pablo, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Combined Force- Frequency Sampling for Simulation of Systems Having Rugged Free Energy Landscapes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Journal of Chemical Theory and Computation 2020, 16, 1448–1455, PMID: 31951703.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (32) Sidky, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Whitmer, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Learning free energy landscapes using artificial neural net- works.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' The Journal of Chemical Physics 2018, 148, 104111.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' 23 (33) Jumper, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Evans, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Pritzel, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Green, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Figurnov, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Ronneberger, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Tun- yasuvunakool, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Bates, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' ˇZ´idek, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Potapenko, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Bridgland, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Meyer, C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Kohl, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Ballard, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Cowie, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Romera-Paredes, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Nikolov, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Jain, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Adler, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Back, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Petersen, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Reiman, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Clancy, E.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Zielinski, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Steinegger, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Pacholska, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Berghammer, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Bodenstein, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Silver, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Vinyals, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Senior, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Kavukcuoglu, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Kohli, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Hassabis, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Highly accurate protein structure prediction with AlphaFold.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Nature 2021, 596, 583–589.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (34) Tunyasuvunakool, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Adler, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Wu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Green, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Zielinski, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' ˇZ´idek, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Bridg- land, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Cowie, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Meyer, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Laydon, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Velankar, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Kleywegt, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Bateman, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Evans, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Pritzel, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Figurnov, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Ronneberger, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Bates, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Kohl, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Potapenko, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Ballard, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Romera-Paredes, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Nikolov, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Jain, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Clancy, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Reiman, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Petersen, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Senior, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Kavukcuoglu, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Birney, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Kohli, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Jumper, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Hassabis, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Highly accurate protein structure prediction for the human proteome.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Nature 2021, 596, 590–596.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (35) Baek, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Shan, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Stevens, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Tranchida, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Trott, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Plimpton, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Comp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Comm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' 2022, 271, 108171.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (64) Darve, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Rodr´iguez-Gómez, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Pohorille, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Adaptive biasing force method for scalar and vector free energy calculations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Chem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' 2008, 128, 144120.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (65) Sidky, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Colón, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Helfferich, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Sikora, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Bezik, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Chu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Giberti, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Guo, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Jiang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Lequieu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Li, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Moller, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Quevillon, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Rahimi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Ramezani-Dakhel, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Rathee, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Reid, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Sevgen, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Thapar, V.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Webb, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Whitmer, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' de Pablo, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' SSAGES: Software Suite for Advanced General Ensemble Simulations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' The Journal of Chemical Physics 2018, 148, 044104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (66) Shi, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Huang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Gygi, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Whitmer, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Free-Energy Landscape and Isomerization Rates of Au4 Clusters at Finite Temperatures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' The Journal of Physical Chemistry A 2022, 126, 3392–3400, PMID: 35584205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (67) Shi, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Sidky, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Whitmer, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Automated determination of n-cyanobiphenyl and n-cyanobiphenyl binary mixtures elastic constants in the nematic phase from molecular simulation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Mol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Syst.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Des.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Eng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' 2020, 5, 1131–1136.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (68) Leonhard, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Whitmer, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Accurate Determination of Cavitand Binding Free Energies via Unrestrained Advanced Sampling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Journal of Chemical Theory and Com- putation 2019, 15, 5761–5768, PMID: 31566977.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (69) Cort´es-Morales, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Rathee, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Ghobadi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Whitmer, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' A molecular view of plasticization of polyvinyl alcohol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' The Journal of Chemical Physics 2021, 155, 174903.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' 28 (70) Zhang, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Lipton, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Li, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Smola, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Dive into Deep Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' arXiv preprint arXiv:2106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content='11342 2021, (71) Xu, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Wang, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Chen, T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Li, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Empirical evaluation of rectified activations in convolutional network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' arXiv preprint arXiv:1505.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content='00853 2015, (72) Kingma, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Ba, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Adam: A method for stochastic optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' arXiv preprint arXiv:1412.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content='6980 2014, (73) Paszke, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Gross, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Massa, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Lerer, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Bradbury, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Chanan, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Killeen, T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Lin, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Gimelshein, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Antiga, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Desmaison, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Kopf, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Yang, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' DeVito, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Raison, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Tejani, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Chilamkurthy, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Steiner, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Fang, L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Bai, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Chintala, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' In Advances in Neural Information Processing Systems 32;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Wallach, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=', Larochelle, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=', Beygelzimer, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=', d’Alch´e Buc, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=', Fox, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=', Garnett, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=', Eds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Curran Associates, Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=', 2019;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' pp 8024–8035.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' (74) Altmann, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Tolo¸si, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Sander, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Lengauer, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNE0T4oBgHgl3EQfSQBx/content/2301.02219v1.pdf'} +page_content=' Permutation importance: a corrected feature importance measure.' 
diff --git a/KdAzT4oBgHgl3EQfVfys/content/tmp_files/2301.01286v1.pdf.txt b/KdAzT4oBgHgl3EQfVfys/content/tmp_files/2301.01286v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..9928629edb18072ab3c51dc73003655c3cb6586e
--- /dev/null
+++ b/KdAzT4oBgHgl3EQfVfys/content/tmp_files/2301.01286v1.pdf.txt
@@ -0,0 +1,625 @@
PSEUDO-INVERTED BOTTLENECK CONVOLUTION FOR DARTS SEARCH SPACE
Arash Ahmadian (1), Yue Fei (1), Louis S.P. Liu (1), Konstantinos N. Plataniotis (1), Mahdi S. Hosseini (2)
(1) The Edward S. Rogers Sr. Department of Electrical & Computer Engineering, University of Toronto, Canada
(2) Computer Science and Software Engineering (CSSE), Concordia University, Canada

ABSTRACT
Differentiable Architecture Search (DARTS) has attracted considerable attention as a gradient-based Neural Architecture Search (NAS) method.
Since the introduction of DARTS, there has been little work done on adapting its search space based on state-of-the-art architecture design principles for CNNs. In this work, we aim to address this gap by incrementally augmenting the DARTS search space with micro-design changes inspired by ConvNeXt and studying the trade-off between accuracy, evaluation layer count, and computational cost. To this end, we introduce the Pseudo-Inverted Bottleneck conv block, intended to reduce the computational footprint of the inverted bottleneck block proposed in ConvNeXt. Our proposed architecture is much less sensitive to evaluation layer count and significantly outperforms a DARTS network of similar size at layer counts as small as 2. Furthermore, with fewer layers, not only does it achieve higher accuracy with lower GMACs and parameter count; GradCAM comparisons also show that our network is able to better detect distinctive features of target objects compared to DARTS.

1. INTRODUCTION
Since the introduction of Vision Transformers (ViTs) by Dosovitskiy et al. [1], a new class of research has emerged, pushing the boundaries of Transformer-based architectures on a variety of computer vision tasks [2, 3, 4, 5]. These advances make it seem inevitable that ViTs would overtake conventional Convolutional Neural Networks (CNNs). Recently, Liu et al.'s ConvNeXt [6] has sparked a resurgence in further exploring the architectural design of CNNs for image recognition. Specifically, they argued that by adapting components from Transformers into the standard ResNet backbone [7], the trained models can match or outperform state-of-the-art ViTs in image classification, object detection, and segmentation. If CNNs can still be improved by design elements that were previously overlooked, this begs the question: can we apply the same Transformer principles to a Neural Architecture Search (NAS) framework to improve its performance?

NAS has historically seen immense success on large-scale image classification prior to ViTs [8, 9, 10], as it alleviates the task of manually designing the optimal neural network architecture. Early works of NAS employed Reinforcement Learning [11], Evolutionary Search [12], and Bayesian Optimization [13], while more recent works have shifted to the One-Shot NAS paradigm [14], which leverages weight-sharing of models within a supernet to reduce computation time.

One popular branch of NAS is Differentiable Architecture Search (DARTS) [15]. This method relaxes the search space from discrete to continuous by attributing weights to the operations sampled from a candidate set and using a Softmax function to choose the best candidate. This enables end-to-end training using common optimizers such as Stochastic Gradient Descent. Many works have investigated ways of improving the NAS operation space using methods such as: increasing the granularity of operations by breaking down search units across input convolution channels [16], grouping similar operations to combat the effects of multi-collinearity [17], creating more expressive operations by replacing the DFT matrices in convolution's diagonalization with K-matrices [18], and reducing the operation set [19]. In this work, we investigate optimizations to the search space through a different lens by drawing inspiration from ConvNeXt.
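To make the continuous relaxation described above concrete, the sketch below shows a single DARTS-style mixed operation in PyTorch. This is an illustrative reconstruction, not the authors' code: the candidate operations shown are placeholders, and in the actual DARTS framework the architecture weights are held outside the cells and optimized on validation data in a bilevel scheme.

```python
import torch
import torch.nn as nn

# Placeholder candidates; DARTS uses a fixed set such as separable convolutions,
# dilated convolutions, pooling, and skip connections.
CANDIDATE_OPS = {
    "skip_connect": lambda C: nn.Identity(),
    "avg_pool_3x3": lambda C: nn.AvgPool2d(3, stride=1, padding=1),
    "conv_3x3":     lambda C: nn.Conv2d(C, C, 3, padding=1, bias=False),
}

class MixedOp(nn.Module):
    """One edge of a DARTS cell: a softmax-weighted sum over all candidate ops."""

    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([make(channels) for make in CANDIDATE_OPS.values()])
        # One architecture weight (alpha) per candidate operation on this edge.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# After search, only the highest-weighted operation on each edge is retained,
# which yields the discrete genotype used for evaluation.
```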
We start with the second-order DARTSV2 cell (vanilla) structure and incrementally augment the search operations by adapting design elements from ConvNeXt. For each stage, we conduct search and evaluation phases on CIFAR-10 [20] using the same training setup and hyper-parameters as DARTS [15]. In our experiments, we encountered a large increase in parameter count when directly adopting the ConvNeXt convolution block, which hindered performance. To combat this, we also propose a new Pseudo-Inverted Bottleneck structure to incorporate an inverted bottleneck while minimizing model size. Our proposed architecture is much less sensitive to evaluation layer count and achieves better test error than the original DARTSV2 with comparable parameter count and computations. We further demonstrate its effectiveness by performing a GradCAM [21] analysis, showing that it is able to capture prominent image features at 10 layers vs. a 20-layer DARTSV2.

Our contributions are summarized as follows:
[C1.] We present an incremental experiment procedure to evaluate how design components from ConvNeXt impact the performance of DARTS by redesigning its search space.
[C2.] We introduce a Pseudo-Inverted Bottleneck block to implement an inverted bottleneck structure while minimizing model footprint and computations. This outperforms vanilla DARTSV2 with a lower number of layers, parameter count, and GMACs.

arXiv:2301.01286v1 [cs.LG] 31 Dec 2022

2. METHODOLOGY
Our approach to modernizing the DARTS operation set involves incrementally making micro-changes to the design of the separable convolution block used within DARTS. However, not all changes proposed in ConvNeXt can be transferred to DARTS. (1) Changing the stage compute ratio to match that of the Swin Transformer [3] is not applicable, as it would require major restructuring of the DARTS framework (i.e., changing the placement of reduction cells), which is beyond our scope of updating the operation set. (2) Modifying the stem cell to mimic the "patchify" operation in Swin is not applicable, since a 4× downsampling is too aggressive for the 32 × 32 images in CIFAR-10. With every change, we search for a cell structure (or genotype) under the hyper-parameter settings described in Section 3 and evaluate it at different layer counts. We compare the highest achieved accuracies and corresponding GMACs. Below we present this exploration step by step.

Fig. 1: Roadmap of the incremental augmentations described in Section 2, along with their corresponding accuracies and methodologies.

Fig. 2: Convolution Blocks: (a) DARTS Separable Convolution Block; (b) Inverted Bottleneck ConvNeXt Convolution Block (C_inv = C × 4); (c) Pseudo-Inverted Bottleneck Cell (C_inv = C × 2).

Replacing ReLU with GELU. We replace the widely used ReLU [22] activation with GELU [23], which provides an approximation of the former with the key distinction that a small portion of negative signals is let through to the next layer. This boosts the accuracy by 0.12%, and from now on we use GELU instead of ReLU.

Replacing BatchNorm with LayerNorm. There have been multiple attempts to develop an alternative to normalization; however, it remains a key ingredient in modern NN design [24]. In ConvNeXt, replacing BN with LN slightly improves the accuracy of the network. We replace BatchNorm [25] with LayerNorm [26] in our separable convolution operation. Initially, this results in a minor degradation in accuracy.
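For reference, the operation being modified in these two steps is the stacked depthwise-separable convolution of Fig. 2(a). The sketch below is an illustrative PyTorch reconstruction (kernel size, padding, and normalization settings are assumptions, not the authors' exact code) showing where the ReLU-to-GELU swap lands; the BN-to-LN variant discussed here would replace the BatchNorm2d layers with a channels-last LayerNorm.

```python
import torch.nn as nn

class SepConvGELU(nn.Module):
    """DARTS-style stacked separable convolution (Fig. 2a) with ReLU replaced by GELU."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2

        def unit() -> nn.Sequential:
            return nn.Sequential(
                nn.GELU(),                                    # was nn.ReLU() in vanilla DARTS
                nn.Conv2d(channels, channels, kernel_size,
                          padding=pad, groups=channels, bias=False),  # depthwise k x k
                nn.Conv2d(channels, channels, 1, bias=False),          # pointwise 1 x 1
                nn.BatchNorm2d(channels),                     # the BN -> LN variant swaps this layer
            )

        # The separable unit is stacked twice per operation, as in DARTS.
        self.block = nn.Sequential(unit(), unit())

    def forward(self, x):
        return self.block(x)
```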
We also experiment with retaining LN and adding the various micro-changes proposed in this section. We did not achieve a performance boost from LN in any setting. We will use BN instead of LN.

Adapting the ConvNeXt Block. Vanilla DARTS uses depthwise separable convolution as popularized by Xception [27]. The stacked topology used in DARTS is depicted in Fig. 2a. However, the inverted bottleneck popularized by MobileNetV2 [28] has made its way into multiple modern networks [8, 29] and thus warrants exploration in the DARTS framework. We implement the ConvNeXt block structure in Fig. 2b. It consists of three key changes: (1) reducing the number of activation and normalization functions, (2) adapting to an inverted bottleneck structure, and (3) moving up the depthwise separable conv layer to facilitate training with large kernel sizes. However, directly adapting the ConvNeXt block significantly increases the number of parameters and GMACs while sharply decreasing accuracy.

To manage the number of learnable parameters, we introduce the Pseudo-Inverted Bottleneck block as depicted in Fig. 2c. We add a depthwise convolution after the intermediate pointwise conv layer, which reduces the number of channels. We keep the positions of the activation and normalization the same relative to the next layer, based on the ConvNeXt block. This structure exhibits both the stacked architecture, which has been shown to increase accuracy by 1−2% when introduced to separable convolution-based operations in the DARTS framework [15] (and which the vanilla inverted bottleneck does not have), and an inverted bottleneck structure.

We compare the number of weights per block to estimate the parameter size and computational complexity of both networks. Define C to be the input and output channel size, C_inv to be the inverted bottleneck channel size, and K to be the kernel size of the depthwise convolution. Similarly, define F = C_inv / C to be the inverted bottleneck ratio for the first pointwise convolution. The total number of weights in the ConvNeXt block (1) and in our Pseudo-Inverted Bottleneck block (2) is compared below:

2FC² + K²C    (1)

(F + 1)C² + 2K²C    (2)

[Graphics for Fig. 1 and Fig. 2 omitted. Fig. 1 reports, per augmentation: DARTSV2 baseline 97.24% at 0.547 GMAC; ReLU→GELU 97.36% at 0.547 GMAC; BN→LN 97.28% at 0.54 GMAC; ConvNeXt block 95.97% at 2.38 GMAC; Pseudo-Inverted Bottleneck block 97.76% at 0.969 GMAC. Fig. 2 layer sequences: (a) at 36 channels, GELU, d3×3 36→36, 1×1 36→36, BN, repeated twice; (b) at 72 channels, d5×5 72→72, 1×1 72→288, 1×1 288→72; (c) at 36 channels, d5×5 36→36, 1×1 36→72, d5×5 72→36, 1×1 36→36.]

In practice, the dominant variable in both equations is the channel size C, which is initialized to 16 and doubled at each reduction cell. Additionally, the conv operation dominates both DARTSV2 and our searched genotypes. Thus, comparing the coefficients of the quadratic term C² provides an estimate of the difference in parameter size and computational complexity of these networks. Our Pseudo-Inverted Bottleneck block has approximately 0.63 times the number of weights of the ConvNeXt block. We further choose F = 2 in the final block topology after experimenting with various values between 1.5 and 4.5, since it achieved the best accuracy-GMAC trade-off. The use of the Pseudo-Inverted Bottleneck block boosts the accuracy by 0.4%.
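The Pseudo-Inverted Bottleneck block described above can be sketched as follows. This is an illustrative PyTorch reconstruction of Fig. 2(c) based on the textual description, not the authors' implementation: the exact placement of GELU and BatchNorm, and the use of a grouped convolution for the channel-reducing depthwise-style layer, are assumptions.

```python
import torch
import torch.nn as nn

class PseudoInvertedBottleneck(nn.Module):
    """Sketch of the Pseudo-Inverted Bottleneck block (Fig. 2c) with F = C_inv / C = 2."""

    def __init__(self, channels: int, kernel_size: int = 5, expansion: int = 2):
        super().__init__()
        hidden = channels * expansion                 # C_inv = F * C
        pad = kernel_size // 2
        self.dw1 = nn.Conv2d(channels, channels, kernel_size, padding=pad,
                             groups=channels, bias=False)       # d k x k, C -> C
        self.pw_expand = nn.Conv2d(channels, hidden, 1, bias=False)  # 1 x 1, C -> F*C
        self.act = nn.GELU()
        # Channel-reducing grouped k x k conv added after the expansion,
        # bringing the width back down to C before the final pointwise layer.
        self.dw2 = nn.Conv2d(hidden, channels, kernel_size, padding=pad,
                             groups=channels, bias=False)       # k x k, F*C -> C
        self.pw_project = nn.Conv2d(channels, channels, 1, bias=False)  # 1 x 1, C -> C
        self.norm = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.dw1(x)
        x = self.act(self.pw_expand(x))
        x = self.dw2(x)
        x = self.pw_project(x)
        return self.norm(x)

# Example: a 36-channel input keeps its spatial size and channel count.
out = PseudoInvertedBottleneck(36)(torch.randn(1, 36, 32, 32))
assert out.shape == (1, 36, 32, 32)
```

In this sketch the two pointwise layers contribute (F + 1)C² weights, which is where the C² coefficient in Eq. (2) comes from.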
3. EXPERIMENTS
Experimental Setup. We present our hyperparameter settings and experimental setup next. Following the DARTS framework, we search with an initial channel size of 16, 4 nodes, 8 layers, 50 epochs, and a batch size of 64. We use the SGD optimizer coupled with a cosine-annealing learning rate scheduler (no restarts) [30], a 0.0025 initial learning rate, 3e−4 weight decay, and 0.9 momentum. As for the evaluation phase, we train for 600 epochs with a batch size of 96, cutout augmentation [31], path dropout with probability 0.2, and auxiliary towers with 0.4 weight. Other hyper-parameter settings remain the same as in the search phase. Both our search and evaluation phases are performed on CIFAR-10.

Fig. 3: Evolution of the architecture weights if searched for 115 epochs.

Search Phase. Our final operation set after the incremental changes described previously is comprised of the following 10 operations: none, skip_connect, pseudo_inv_bn_3x3, pseudo_inv_bn_5x5, pseudo_inv_bn_7x7, dialated_conv_3x3, dialated_conv_5x5, conv_7x1_1x7, max_pool_3x3, avg_pool_3x3. We argue that our genotype is trained to convergence with 50 epochs and avoids a common pitfall of falling back on skip-connections in later stages of training [32]. As depicted by Fig. 3, the decision boundary between the favored operation (in this case, pseudo_inv_bn_5x5) and the skip-connection is not crossed even very late into training. After searching with the mentioned hyperparameters and final operation set, we arrive at the genotype in Fig. 4.

Fig. 4: Proposed Genotype: (a) Normal cell; (b) Reduction cell.

Fig. 5: Searched genotypes in comparison with DARTSV2: (a) Accuracy vs. Parameter count; (b) Accuracy vs. Number of Evaluation Layers.

Evaluation Phase. We evaluate our final genotype at multiple evaluation layer counts to observe the effect of layer count on test accuracy and report the results in Table 1. We observe that the evaluation accuracy of our proposed genotype is significantly less affected by the evaluation layer count compared to DARTSV2. Specifically, at 10 layers, we achieve a higher test accuracy compared to a 20-layer DARTSV2 network. Furthermore, at 2 layers, our architecture exceeds the DARTSV2 genotype at 3 layers by over 20%, while at the same time maintaining similar GMACs. At 4 layers, we outperform the DARTSV2 genotype at 7 layers (to match the model size for a fair comparison) by 0.24%, while still maintaining lower GFLOPs. Fig. 6 presents a comparison between the GradCAM [21] visualizations produced from the last cell of each network for DARTSV2 at 20 layers and our genotype at 10 and 20 layers. Our proposed genotype, in a 10-cell network, can effectively capture the prominent features of the target object. The increase in the number of cascaded cells leads to the gradual collapse of the heat-map boundaries onto the outline of the object, outperforming DARTS. We argue that this supports our claim that the proposed genotype is inherently superior to that of DARTS.

[Graphics for Fig. 3 (architecture weight of each candidate operation vs. search epoch) and Fig. 4 (the searched normal and reduction cells, dominated by pseudo_inv_bn_3x3/5x5 operations in the normal cell, with pooling, skip, and dilated-convolution operations appearing in the reduction cell) omitted.]
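The GradCAM comparison above can be reproduced in outline with a few lines of PyTorch. The sketch below is a generic Grad-CAM [21] implementation via hooks, not the authors' exact visualization pipeline; it assumes the model returns a single logits tensor (the DARTS evaluation network also returns an auxiliary-head output, which would need to be unpacked) and that target_layer is the output module of the network's last cell.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """Return a (1, 1, H, W) heat map in [0, 1] for one input image."""
    acts, grads = [], []
    fwd = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    bwd = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

    logits = model(image)                        # image: (1, 3, 32, 32) for CIFAR-10
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, class_idx].backward()
    fwd.remove()
    bwd.remove()

    a, g = acts[0].detach(), grads[0]            # both (1, C, h, w) from the last cell
    weights = g.mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = F.relu((weights * a).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```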
Table 1: Performance comparison of different genotypes on the CIFAR-10 dataset. Rows for our genotype evaluated with 10 and 5 layers are highlighted for comparison with the DARTSV2 genotype evaluated with 20 layers.

Genotype | Eval. Layers | Test Acc. (%) | Params (M) | GMAC
DARTSV2 | 20 | 97.24 | 3.30 | 0.547
DARTSV2 | 15 | 96.93 | 2.28 | 0.408
DARTSV2 | 10 | 96.72 | 1.6 | 0.265
DARTSV2 | 8 | 96.32 | 1.15 | 0.207
DARTSV2 | 7 | 96.05 | 1.05 | 0.180
DARTSV2 | 6 | 95.73 | 0.635 | 0.153
DARTSV2 | 5 | 94.56 | 0.605 | 0.121
DARTSV2 | 4 | 93.74 | 0.487 | 0.090
DARTSV2 | 3 | 71.68 | 0.116 | 0.067
DARTSV2 | 2 | 54.52 | 0.082 | 0.035
Our Genotype | 20 | 97.76 | 6.06 | 0.969
Our Genotype | 15 | 97.40 | 4.21 | 0.724
Our Genotype | 10 | 97.29 | 3.02 | 0.470
Our Genotype | 8 | 97.15 | 2.26 | 0.369
Our Genotype | 7 | 97.03 | 2.07 | 0.320
Our Genotype | 6 | 96.86 | 1.36 | 0.275
Our Genotype | 5 | 96.65 | 1.30 | 0.218
Our Genotype | 4 | 96.24 | 1.10 | 0.166
Our Genotype | 3 | 94.63 | 0.443 | 0.123
Our Genotype | 2 | 92.15 | 0.385 | 0.067

Fig. 6: GradCAM: the first row shows the 32 × 32 input images with labels: dog, automobile, airplane, ship; the second row shows DARTSV2 evaluated on 20 layers; the third and fourth rows show our genotype evaluated on 10 and 20 layers, respectively. (Note: all of the images are up-sampled to 224 × 224 for better readability.)

Fig. 7: Search phase of our proposed genotype: Top-1 test error vs. epochs.

4. CONCLUSION
In this work, we attempt to revise the DARTS search space. We incrementally augment the convolution operation with micro-changes inspired by ConvNeXt and propose the Pseudo-Inverted Bottleneck block to reduce the number of parameters used in the vanilla Inverted Bottleneck. Our proposed genotype's performance is much less sensitive to the evaluation layer count compared to that of DARTSV2. It achieves a higher accuracy at a lower GMAC and parameter count with 10 evaluation layers compared to DARTSV2 evaluated at 20 layers. Furthermore, we perform a GradCAM visualization on our genotype and compare it with that of DARTSV2.

Our network's high performance at lower layer counts, correspondingly with low GMACs and parameter count, makes it an attractive choice for image processing applications such as sharpening and blurring, as shallow networks suit these applications best. Consequently, a potential avenue for future work would be to explore the application of our genotype and Pseudo-Inverted Bottleneck block to image processing tasks.

It is worth noting that our aim in this paper was not to compete with the SOTA methods related to DARTS, but to shed light on the granularity of the search space which is commonly shared across many DARTS variants in the literature. We hope our work initiates new ideas to investigate optimum search space designs in the DARTS framework to build more robust and generalized models for representational learning problems.

References
[1] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, “An image is worth 16x16 words: Transformers for image recognition at scale,” in 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021, OpenReview.net, 2021.
[2] H. Touvron, M. Cord, M. Douze, F. Massa, A. Sablayrolles, and H.
Jégou, "Training data-efficient image transformers & distillation through attention," CoRR, vol. abs/2012.12877, 2020.
[3] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, "Swin transformer: Hierarchical vision transformer using shifted windows," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 10012-10022, October 2021.
[4] W. Wang, E. Xie, X. Li, D.-P. Fan, K. Song, D. Liang, T. Lu, P. Luo, and L. Shao, "Pyramid vision transformer: A versatile backbone for dense prediction without convolutions," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 568-578, 2021.
[5] L. Yuan, Y. Chen, T. Wang, W. Yu, Y. Shi, Z.-H. Jiang, F. E. Tay, J. Feng, and S. Yan, "Tokens-to-token vit: Training vision transformers from scratch on imagenet," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 558-567, 2021.
[6] Z. Liu, H. Mao, C.-Y. Wu, C. Feichtenhofer, T. Darrell, and S. Xie, "A convnet for the 2020s," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11976-11986, 2022.
[7] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.
[8] M. Tan and Q. Le, "Efficientnet: Rethinking model scaling for convolutional neural networks," in International Conference on Machine Learning, pp. 6105-6114, PMLR, 2019.
[9] M. Tan and Q. V. Le, "Efficientnetv2: Smaller models and faster training," CoRR, vol. abs/2104.00298, 2021.
[10] A. Howard, M. Sandler, G. Chu, L. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan, Q. V. Le, and H. Adam, "Searching for mobilenetv3," CoRR, vol. abs/1905.02244, 2019.
[11] B. Zoph and Q. V. Le, "Neural architecture search with reinforcement learning," CoRR, vol. abs/1611.01578, 2016.
[12] H. Liu, K. Simonyan, O. Vinyals, C. Fernando, and K. Kavukcuoglu, "Hierarchical representations for efficient architecture search," ArXiv, vol. abs/1711.00436, 2018.
[13] J. Liu, M. Zhang, Y. Sun, B. Liu, G. Song, Y. Liu, and H. Li, "Fnas: Uncertainty-aware fast neural architecture search," 2021.
[14] Z. Guo, X. Zhang, H. Mu, W. Heng, Z. Liu, Y. Wei, and J. Sun, "Single path one-shot neural architecture search with uniform sampling," CoRR, vol. abs/1904.00420, 2019.
[15] H. Liu, K. Simonyan, and Y. Yang, "DARTS: differentiable architecture search," CoRR, vol. abs/1806.09055, 2018.
[16] J. Mei, Y. Li, X. Lian, X. Jin, L. Yang, A. L. Yuille, and J. Yang, "Atomnas: Fine-grained end-to-end neural architecture search," CoRR, vol. abs/1912.09640, 2019.
[17] G. Li, X. Zhang, Z. Wang, Z. Li, and T. Zhang, "Stacnas: Towards stable and consistent optimization for differentiable neural architecture search," ArXiv, vol. abs/1909.11926, 2019.
[18] N. Roberts, M. Khodak, T. Dao, L. Li, C. Ré, and A. S. Talwalkar, "Rethinking neural operations for diverse tasks," in NeurIPS, 2021.
[19] X. Dong and Y. Yang, "Nas-bench-201: Extending the scope of reproducible neural architecture search," CoRR, vol. abs/2001.00326, 2020.
[20] A. Krizhevsky, "Learning multiple layers of features from tiny images," 2009.
[21] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-cam: Visual explanations from deep networks via gradient-based localization," in Proceedings of the IEEE International Conference on Computer Vision, pp.
618-626, 2017.
[22] A. F. Agarap, "Deep learning using rectified linear units (relu)," CoRR, vol. abs/1803.08375, 2018.
[23] D. Hendrycks and K. Gimpel, "Bridging nonlinearities and stochastic regularizers with gaussian error linear units," CoRR, vol. abs/1606.08415, 2016.
[24] T. Salimans and D. P. Kingma, "Weight normalization: A simple reparameterization to accelerate training of deep neural networks," CoRR, vol. abs/1602.07868, 2016.
[25] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," CoRR, vol. abs/1502.03167, 2015.
[26] J. L. Ba, J. R. Kiros, and G. E. Hinton, "Layer normalization," 2016.
[27] F. Chollet, "Xception: Deep learning with depthwise separable convolutions," CoRR, vol. abs/1610.02357, 2016.
[28] M. Sandler, A. G. Howard, M. Zhu, A. Zhmoginov, and L. Chen, "Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation," CoRR, vol. abs/1801.04381, 2018.
[29] M. Tan, B. Chen, R. Pang, V. Vasudevan, M. Sandler, A. Howard, and Q. V. Le, "Mnasnet: Platform-aware neural architecture search for mobile," 2018.
[30] I. Loshchilov and F. Hutter, "SGDR: stochastic gradient descent with restarts," CoRR, vol. abs/1608.03983, 2016.
[31] T. Devries and G. W. Taylor, "Improved regularization of convolutional neural networks with cutout," CoRR, vol. abs/1708.04552, 2017.
[32] H. Liang, S. Zhang, J. Sun, X. He, W. Huang, K. Zhuang, and Z. Li, "DARTS+: improved differentiable architecture search with early stopping," CoRR, vol. abs/1909.06035, 2019.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Since the introduction of DARTS, there has been little work done on adapting the action space based on state-of-art architecture design principles for CNNs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' In this work, we aim to address this gap by incrementally aug- menting the DARTS search space with micro-design changes inspired by ConvNeXt and studying the trade-off between ac- curacy, evaluation layer count, and computational cost.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' To this end, we introduce the Pseudo-Inverted Bottleneck conv block intending to reduce the computational footprint of the inverted bottleneck block proposed in ConvNeXt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Our proposed archi- tecture is much less sensitive to evaluation layer count and outperforms a DARTS network with similar size significantly, at layer counts as small as 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Furthermore, with less layers, not only does it achieve higher accuracy with lower GMACs and parameter count, GradCAM comparisons show that our network is able to better detect distinctive features of target objects compared to DARTS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' INTRODUCTION Since the introduction of Vision Transformers (ViTs) by Doso- vitskiy et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' [1], a new class of research has emerged, pushing the boundaries of Transformer-based architectures on a va- riety of computer vision tasks [2, 3, 4, 5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' These advances make it seem inevitable that ViTs would overtake conven- tional Convolutional Neural Networks (CNNs).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Recently, Liu et al.’s ConvNeXt [6] has sparked a resurgence in further exploring the architectural designs of CNNs in image recog- nition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Specifically, they argued that by adapting components from Transformers into the standard ResNet backbone [7], the trained models can match or outperform state-of-the-art ViTs in image classification, objection detection, and segmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' If CNNs can still be improved by design elements that were previously overlooked, this begs the question: Can we ap- ply the same Transformer principles to a Neural Architecture Search (NAS) framework to improve its performance?' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' NAS has historically seen immense success on large-scale image classification prior to ViTs [8, 9, 10] as it alleviates the task of manually designing for the optimal neural network architecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Early works of NAS employed Reinforcement Learning [11], Evolutionary Search [12], and Bayesian Op- timization [13] while more recent works have shifted to the One-Shot NAS paradigm [14], which leverages weight-sharing of models within a supernet to reduce computation time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' One popular branch stream of NAS is Differentiable Archi- tecture Search (DARTS) [15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' This method relaxes the search space from discrete to continuous by attributing weights to operations sampled from set and using a Softmax function to choose the best candidate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' This enables end-to-end training using common optimizers such as Stochastic Gradient Descent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Many works have investigated ways of improving the NAS operation space using methods such as: increasing the gran- ularity of operations by breaking down search units across input convolution channels [16], grouping similar operations to combat the effects of multi-collinearity [17], creating more expressive operations by replacing the DFT matrices in convo- lution’s diagonalization with K-matrices [18], and reducing the operation set [19].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' In this work, we investigate optimizations to the search space through a different set of lens by drawing inspiration from ConvNeXt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We start with the second-order DARTSV2 cell (vanilla) structure and incrementally augment the search operations by adapting design elements from ConvNeXt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' For each stage, we conduct search and evaluation phases on CIFAR-10 [20] using the same training setup and hyper-parameters as DARTS [15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' In our experiments, we encountered a large increase in param- eter count when directly adopting the ConvNeXt convolution block with hindering performances.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' To combat this, we also propose a new Pseudo-Inverted Bottleneck structure to incorpo- rate an inverted bottleneck while minimizing model size.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Our proposed architecture is much less sensitive to evaluation layer count and achieves better test error than the original DARTSV2 with comparable parameter count and computations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We fur- ther demonstrate its effectiveness by performing a GradCAM [21] analysis, showing that it is able to capture prominent image features at 10 layers vs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' a 20-layer DARTSV2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Our contributions are summarized as follows: [C1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='] We present an incremental experiment procedure to evaluate how design components from ConvNeXt impact the performance of DARTS by redesigning its search space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' [C2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='] We introduce a Pseudo-Inverted Bottleneck block to implement an inverted bottleneck structure while minimizing model footprint and computations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' This outperforms vanilla DARTSV2 with lower number of layers, parameter count, and GMACs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='01286v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='LG] 31 Dec 2022 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' METHODOLOGY Our approach to modernizing the DARTS operation set in- volves incrementally making micro-changes to the design of the separable convolution block used within DARTS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' However, not all changes proposed in ConvNeXt can be transferred to DARTS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' (1) Changing the stage compute ratio to match that of the Swin Transformer [3] is not applicable as it would require major restructuring of the DARTS framework (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' changing the placement of reduction cells) which is beyond our scope of updating the operation set.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' (2) Modifying the stem cell to mimic the “patchify” operation in Swin is not applicable since a 4× downsampling is too aggressive for the 32 × 32 images in CIFAR-10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' With every change, we search for a cell struc- ture (or genotype), under hyper-parameter settings described in Section 4 and evaluate on different layer counts (1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We compare the highest achieved accuracies and corresponding GMACs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Below we present this exploration step by step.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 1: Roadmap of the incremental augmentations described in Section 3, along with their corresponding accuracies and methodologies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 2: Convolution Blocks : (a) DARTS Separable Convolu- tion Block;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' (b) Inverted Bottleneck ConvNeXt Convolution Block (Cinv = C × 4);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' (c) Pseudo-Inverted Bottleneck Cell (Cinv = C × 2) Replacing ReLU with GELU We replace the widely used ReLu [22] activation with GELU [23] which provides an ap- proximation of the former with the key distinction that a small portion of negative signals are let through to the next layer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' This boosts the accuracy by 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='12% and from now on we use GELU instead of ReLU.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Replacing BatchNorm with LayerNorm There have been multiple attempts to develop an alternative to normal- ization however it remains a key ingredient in modern NN design [24].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' In ConvNeXt, replacing BN with LN slightly improves the accuracy of the network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We replace BatchNorm [25] with LayerNorm [26] in our separable convolution oper- ation.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Initially, this results in minor degradation in accuracy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We also experiment with retaining LN and adding the various micro-changes proposed in this section.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We did not achieve a performance boost from LN in any setting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We will use BN instead of LN.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Adapting the ConvNeXt Block Vanilla DARTS uses depthwise separable convolution as popularized by Xception [27].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' The stacked topology used in DARTS is depicted in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 2a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' However, the inverted bottleneck popularized by MobileNetV2 [28] has made its way to multiple modern net- works [8, 29] and thus warrants exploration in the DARTS framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We implement the ConvNeXt block structure in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 2b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' It consists of three key changes: (1) Reducing the number of activation and normalization functions, (2) Adapt- ing to an inverted bottleneck structure, and (3) Moving up the depthwise separable conv layer to facilitate training with large kernel sizes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' However, directly adapting the ConvNeXt block significantly increases the number of parameters and GMACs while sharply decreasing accuracy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' To manage the number of learnable parameters, we intro- duce the Pseudo-Inverted Bottleneck block as depicted in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We add a depthwise convolution after the intermediate point- wise conv layer which reduces the number of channels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We keep the positions of the activation and normalization the same relative to the next layer based on the ConvNeXt block.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' This structure also inhibits the stacked architecture which has been shown to increase accuracy by 1−2% when introduced to sep- arable convolution-based operations in the DARTS framework [15] (which the vanilla inverted bottleneck does not have), as well as an inverted bottleneck structure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We compare the number of weights per block to estimate the parameter size and computational complexity of both net- works.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Define C to be the input and output channel size, Cinv to be the inverted bottleneck channel size, and K to be the kernel size of the depthwise convolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Similarly, define F = Cinv/C to be the inverted bottleneck ratio for the first pointwise convolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' The total number of weights between the ConvNeXt block (1) and our Pseudo-Inverted Bottleneck block (2) are compared below: 2FC2 + K2C (1) (F + 1)C2 + 2K2C (2) GMAC 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='547 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='24% DARTSV2 (Baseline) 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='36% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='547 Replace RELU with GELU 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='54 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='28% Replace BN with LN 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='38 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='97% Adapt ConvNeXt Block 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='969 Adapt Pseudo-Inverted 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='76% Bottleneck Block 95% 96% 97% 98%GELU d3 x 3, 36 →> 36 1 × 1, 36 →> 36 BN GELU d3 x 3, 36 → 36 1 x 1, 36 → 36 BNd5 x 5, 72 → 72 1 x 1, 72 288 1 x 1, 288→ 72d5 x 5, 36 → 36 1 × 1, 36→ 72 d5 x 5, 72 →> 36 1 x 1, 36 → 36In practice, the dominant variable in both equations is the channel size C, which is initialized to 16 and doubled at each reduction cell.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Additionally, the conv operation dominates both DARTSV2 and our searched genotypes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Thus, comparing the coefficients of the quadratic term C2 provides an estimate for the difference in parameter size and computational com- plexity of these networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Our Pseudo-Inverted Bottleneck block has approximately 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='63 times the number of weights as the ConvNeXt block.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We further choose F = 2 in the final block topology after experimentation with various values in {1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='5, 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='5} since it achieved the best accuracy-GMAC trade-off.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' The use of the Pseudo-Inverted Bottleneck block boosts the accuracy by 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='4%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' EXPERIMENTS Experimental Setup We present our hyperparameter settings and experimental setup next.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Following the DARTS frame- work, we search with an initial channel size of 16, 4 nodes, 8 layers, 50 epochs, and a batch size of 64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We use the SGD opti- mizer coupled with a cosine-annealing learning rate scheduler (no restarts) [30], 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='0025 initial learning rate, 3e−4 weight de- cay, and 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='9 momentum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' As for the evaluation phase, we train for 600 epochs with a batch size of 96, cutout augmentation [31], path dropout with probability 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='2 and auxiliary towers with 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='4 weight.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Other hyper-parameter settings remain the same as the search phase.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Both our search and evaluation phases are performed on CIFAR-10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 3: Evolution of the architecture weights if searched for 115 epochs Search Phase Our final operation set after the in- cremental changes described previously is comprised of the following 10 operations: none, skip_connect, pseudo_inv_bn_3x3, pseudo_inv_bn_5x5, pseudo_inv_bn_7x7, dialated_conv_3x3, dialated_conv_5x5, conv_7x1_1x7, max_pool_3x3, avg_pool_3x3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We argue that our genotype is trained to convergence with 50 epochs and avoids a common pitfall of falling back on skip-connections in later stages of training [32].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' As depicted by Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 7, the decision boundary be- tween the favored operation (in this case, pseudo_inv_bn_5x5) and skip-connection, is not crossed even very late into training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' After searching with the mentioned hyperparameters and final operation set, we arrive at the genotype in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 4: Proposed Genotype: (a) Normal cell;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' (b) Reduction cell Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 5: Searched genotypes in comparison with DARTSV2: (a) Accuracy vs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Parameter count (b) Accuracy vs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Number of Evaluation Layers Evaluation Phase We evaluate our final genotype at mul- tiple evaluation layers to observe the effect of layer count on test accuracy and report the results in Table 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We observe that the evaluation accuracy of our proposed genotype is signifi- cantly less affected by the evaluation layer count compared to DARTSV2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Specifically, at 10 layers, we achieve a higher test accuracy compared to a 20 layer DARTSV2 network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Fur- thermore, at 2 layers, our architectures exceed the DARTSV2 genotype at 3 layers by over 20%, while at the same time maintaining similar GMACs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' At 4 layers, we outperform the DARTSV2 genotype at 7 layers (to match the model size for a fair comparison) by 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='24%, while still maintaining lower GFLOPs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 6 presents a comparison between the Grad- CAM [21] visualizations produced from the last cell of each network for DARTSV2 at 20 layer, Our genotype at 10 and 20 layers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Our proposed genotype, in a 10 cell network, can effectively capture the prominent features of the classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' The increase in the number of cascaded cells leads to the grad- ual collapse of the heat-map boundaries, onto the outline of the object outperforming DARTS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We argue that this supports our claim that the proposed genotype, is inherently superior to that of DARTS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Arch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='WeightsforEdge#10 max pool3x3 avg_pool_3x3 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='25 skip_connect PseudolnvBn 3x3 PseudolnvBn 5x5 Architecture Weight 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='20 PseudolnvB_7x7 dilated conv 3x3 dilated conv 5x5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='15 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='10 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='05 0 20 40 60 80 100 EpochPseudolnvBn_3x3 c_{k-2] PseudolnvBn_3x3 PseudolnvBn_3x3 PseudolnvBn 3x3 3 PseudolnvBn_3x3 PseudolnvBn_5x5 c_{k-1) 0 PseudolnvBn_3x3 PseudolnvBn_3x3 2PseudolnvBn 5x5 dil_conv_5x5 c_{k-1] 0 avg_pool_3x3 skip_connect avg_pool_3x3 PseudolnvBn_ 3x3 PseudolnvBn_3x3 C_(k-2) 3 avg_pool_3x3 2Accuracyv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='LearnableParams.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 100 95 90 85 Accuracy 80 75 70 65 60 DARTSV2 Pseudo-Inv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Bottl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 55 0 1 2 3 4 5 6 ParamsAccuracy v.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Layer 100 95 90 85 Accuracy 80 75 70 65 60 DARTSV2 55 Pseudo-lnv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Bottl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 4 6 8 10 12 14 16 18 20 Eval.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='LayerTable 1: Performance comparison of different genotypes on CIFAR-10 dataset: Our genotype evaluated on 10 and 5 layers are highlighted to be compared with DARTSV2 genotype evaluated with 20 layers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Genotype Eval.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Layers Test Acc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' (%) Params (M) GMAC DARTSV2 20 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='24 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='30 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='547 15 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='93 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='28 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='408 10 96.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='72 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='265 8 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='32 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='15 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='207 7 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='05 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='05 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='180 6 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='73 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='635 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='153 5 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='56 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='605 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='121 4 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='74 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='487 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='090 3 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='68 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='116 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='067 2 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='52 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='082 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='035 Our Geno- type 20 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='76 6.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='06 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='969 15 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='40 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='21 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='724 10 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='29 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='02 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='470 8 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='15 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='26 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='369 7 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='03 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='07 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='320 6 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='86 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='36 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='275 5 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='65 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='30 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='218 4 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='24 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='166 3 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='63 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='443 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='123 2 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='15 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='385 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content='067 Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 6: GradCAM: The first row shows the 32 × 32 input im- ages with labels: dog, automobile, airplane, ship;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' The second row shows DARTSV2 evaluated on 20 layers;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Then third and fourth rows show our genotype evaluated on {10, 20} lay- ers, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' (Note: All of the images are up-sampled to 224 × 224 for better readability) Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' 7: Search phase of our proposed genotype: Top 1% Test Error vs Epochs 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' CONCLUSION In this work, we attempt to revise the DARTS search space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We incrementally augment the convolution operation with micro-changes inspired by ConvNeXt and propose the Pseudo- Inverted Bottleneck block to reduce the number of parameters used in the vanilla Inverted Bottleneck.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Our proposed geno- type’s performance is much less sensitive to evaluation layer count compared to that of DARTSV2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' It achieves a higher accuracy at a lower GMAC/ parameter count with 10 evalu- ation layers compared to DARTSV2 evaluated at 20 layers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Furthermore, we perform a GradCAM visualization on our genotype and compare it with that of DARTSV2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Our network’s high performance at lower layer counts, cor- respondingly with low GMACs and parameter count, makes it an attractive choice image processing applications as sharpen- ing and blurring, as shallow networks suit these applications best.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Consequently, a potential avenue for future work would be to explore the applications of our genotype/ Pseudo-Inverted Bottleneck block, to image processing tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' It is worth noting that our aim in this paper was not to combat the SOTA methods related to DARTS;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' but shedding light on the granularity of search space which is commonly shared across many DARTS variants in the literature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' We hope our work initiates new ideas to investigate optimum search space designs in DARTS framework to build more robust and generalized models for representational learning problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' References [1] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Dosovitskiy, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Beyer, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Kolesnikov, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Weissenborn, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Zhai, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Unterthiner, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Dehghani, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Minderer, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Heigold, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Gelly, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Uszkoreit, and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Houlsby, “An image is worth 16x16 words: Transformers for image recognition at scale,” in 9th International Conference Search Test Err.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KdAzT4oBgHgl3EQfVfys/content/2301.01286v1.pdf'} +page_content=' Test Err.' 
diff --git a/MNFPT4oBgHgl3EQfkjUi/content/tmp_files/2301.13118v1.pdf.txt b/MNFPT4oBgHgl3EQfkjUi/content/tmp_files/2301.13118v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ce3e11ec15f8b26ccb01f0fd7757bcc9497077ac
--- /dev/null
+++ b/MNFPT4oBgHgl3EQfkjUi/content/tmp_files/2301.13118v1.pdf.txt
@@ -0,0 +1,1034 @@
A Fully-Automated Framework Integrating Gaussian Process Regression and Bayesian Optimization to Design Pin-Fins
Susheel Dharmadhikari
Department of Mechanical Engineering, Pennsylvania State University, State College, PA 16802
sud85@psu.edu
Reid A. Berdanier
Department of Mechanical Engineering, Pennsylvania State University, State College, PA 16802
rberdanier@psu.edu
Karen A. Thole
Department of Mechanical Engineering, Pennsylvania State University, State College, PA 16802
kat18@psu.edu
Amrita Basak∗
Department of Mechanical Engineering, Pennsylvania State University, State College, PA 16802
aub1526@psu.edu
January 31, 2023
arXiv:2301.13118v1 [physics.flu-dyn] 30 Jan 2023
Abstract
Pin fins are imperative in the cooling of turbine blades, and their designs have therefore seen significant research in the past. With the developments in metal additive manufacturing, novel design approaches toward complex geometries are now feasible. To that end, this article presents a Bayesian optimization approach for designing inline pins that can achieve low pressure loss. The pin-fin shape is defined using featurized (parametrized) piecewise cubic splines in 2D. The complexity of the shape depends on the number of splines used for the analysis. From a method development perspective, the study is performed using three splines. Owing to this piecewise modeling, a unique pin-fin design is defined using five features. After specifying the design, a computational fluid dynamics-based model is developed that computes the pressure drop during the flow. Bayesian optimization is carried out on a Gaussian process-based surrogate to obtain an optimal combination of pin-fin features that minimizes the pressure drop. The results show that the optimization tends to approach an aerodynamic design leading to a low pressure drop, corroborating existing knowledge. Furthermore, multiple iterations of the optimization are conducted with varying degrees of input data. The results reveal that convergence to a similar optimal design is achieved with a minimum of just twenty-five initial design-of-experiments data points for the surrogate. Sensitivity analysis shows that the distance between the rows of the pin fins is the most dominant feature influencing the pressure drop. In summary, the newly developed automated framework demonstrates remarkable capabilities in designing pin fins with superior performance characteristics.
1 Introduction
The hot-section components of gas turbines, e.g., turbine rotor blades, operate at upwards of 1500 K, creating a harsh environment. For this reason, innovative technologies are used to facilitate cooling of these components. A rotor blade typically has a complex cooling configuration. The blade can be divided into three primary sections, each of which has a dedicated coolant supply plenum. These sections include: (i) the leading edge region, (ii) the mid-chord region, and (iii) the trailing edge region. The pressure side of a turbine blade has a relatively higher temperature than the suction side and is, therefore, provided with additional cooling. This is accomplished by internal and external cooling that favors the pressure side of the blade, particularly in the trailing edge region [1].
Internal cooling of the trailing edge has been investigated extensively. Heat transfer enhancement designs have been developed to fully utilize the cooling capacity of the coolant. Unique to this blade region is the pressure drop between the internal plenum and the external mainstream conditions. This region allows for greater heat transfer enhancement at the cost of increased frictional loss.
Fully-bridged pin-fin arrays +result in large pressure losses but also excel at heat transfer enhancement. As a result, pin-fins represent +an ideal candidate for trailing edge cooling. As an additional benefit, the fully-bridged pin-fin design also +increases the structural rigidity of the blade. For these reasons, a vast amount of literature supports the +implementation of impingement arrays to enhance heat transfer and structurally support the trailing edge +section of airfoils [2]. +One of the primary objectives of the pin-fin design is, therefore, to decrease the pressure drop while +increasing the heat transfer. Both experimental and computational investigations have been carried out in the +past to achieve this objective. Otto et al. [3] performed a particle image velocimetry study to understand the +developing flow characteristics of a staggered pin-fin array. Horseshoe type vortices and Karman instabilities +were identified as the key contributors to turbulent mixing. Chyu et al. [4] investigated the dependence of +pin-fin cross-sections on the thermal performance. Square, circular, and diamond-shaped pins were studied +and circular pin-fins were found to have the best trade-off between pressure drop and heat transfer. +With the advancements in metal additive manufacturing, the feasibility of making complex pin fin +designs that are not limited to the traditional manufacturing constraints has increased multi-fold. This is +evident from the recent research efforts towards the testing of such unique designs [5], and their corresponding +parameters [6]. The introduction of this manufacturing technology has led to the initiation of research towards +more innovative design strategies, particularly with the use of data-driven tools [7]. These strategies are built +upon the parametrization of the pin fin designs followed by an optimization that leads to the desired pin fin +shape. +Accordingly, there are several parameters pertaining to the pin shape that could be optimized to +minimize pressure drop and maximize heat transfer. Existing literature has shown experimental results with +unique geometries for pins such as a star or a dimpled sphere [5]. However, experimental optimization of +pin-fins is expensive and time-consuming. For this reason, computational investigations have played a crucial +role in the development of novel designs and creative solutions. Eyi et al. [8, 9] used parameterized Bezier +curves to define and, then, optimize the leading edge of a fin in a parametric form. Wileke et al. [10] used +adjoint optimization for a U-shaped channel to reduce the total pressure loss. On a similar note, Ghosh et +al. and Dilgen et. al. implemented a topology optimization technique to explore manufacturing constraints +[11, 12]. Fabbri [13] applied a genetic algorithm to optimize pin fin designs. Hamadneh et al. used particle +swarm optimization (PSO) to evaluate several pin fin geometries for enhanced thermal performance [14]. +Recently, Ghosh et al. used Gaussian process (GP) surrogates with constrained Bayesian Optimization +(BO) for optimizing the thermal performance of the pin-fin arrays [7]. Due to the black-box nature of CFD +simulations for complex geometries, the use of GP and BO has shown promising results, particularly while +working with limited data. However, due to the relatively nascent percolation of such techniques for pin fin +optimization, more complex formulations in the design space are not yet fully explored. 
In addition, such studies have been hampered by the lack of automated simulations using established computational tools such as ANSYS Fluent, thereby restricting the optimization to a few iterations. This paper addresses these existing shortcomings by applying a novel spline-based pin-fin definition that offers the potential to explore complex geometries. In addition, the simulations conducted in this analysis are completely automated, leading to a relatively high number of design iterations.
The optimization process in the present study is performed to minimize the pressure drop. The results reveal that the framework can learn the design principles with limited training data and converge to a desired solution with fewer than 50 iterations. A study on the data requirement of the algorithm is also conducted to quantify the amount of initial data needed for the algorithm to perform adequately. Furthermore, a sensitivity analysis is presented to understand the impact of the features on the pressure drop. Although the analysis presented in this article targets the minimization of pressure drop, it can be easily modified to address other desired objectives.
2 Methodology
2.1 Featurization of Pin Fins
2.1.1 Features of Pin Fin Shape
A general closed shape in 2D is a locus of points, x = f(θ) and y = g(θ), parameterized over θ. The most common form of this parametrization is visualized with a circle of radius r, an outcome of x = r cos θ and y = r sin θ. By employing more complex functional representations in the construction of f(θ) and g(θ), a range of variations in the shapes can be generated. Among several such strategies, this paper uses piecewise-cubic splines to generate parametric shapes in 2D. The complexity of these shapes, owing to their construction, further depends on the number of splines used in the process.
Figure 1: (a) Two curves (x = f(θ) and y = g(θ)) constructed with three piecewise cubic splines. (b) The pin-fin shape resulting from three splines with the defining parameters. (c) A pin-fin array constructed with two rows and two columns.
With this in mind, the analysis in this paper is performed using three splines. As an example, Fig. 1(a) depicts the two curves (x = f(θ) and y = g(θ)), both constructed using three cubic splines. The resultant shape, indicating the contribution of the individual splines, is shown in Fig. 1(b). The shape indicates three radial distances (r1, r2, and r3) that provide the necessary coordinates for spline interpolation.
To be able to optimize this shape, features that impact the geometry need to be chosen. Three such features, viz., r2, r3, and θ∗, are identified as the defining elements for any shape generated using the aforementioned procedure. The feature r1 is maintained at a constant magnitude of 1 mm to provide a reference dimension that prevents the optimization algorithm from choosing extreme (either too small or too large) geometries. The angle of r1 from the X-axis is denoted by θ∗, as shown in Figs. 1(a) and (b), and it controls the orientation of the pin fin. Fig. 1(b) also depicts Dx and Dy, which denote the projected lengths of the pin on the X and Y axes, respectively.
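As a concrete illustration of the shape construction described above, the short Python sketch below builds a fin outline from the three shape features (r2, r3, θ∗) with r1 fixed at 1 mm and returns the projected lengths Dx and Dy used later for the array spacing. It is a minimal sketch rather than the authors' implementation: it assumes the three radial control points are spaced 120° apart starting at θ∗, and it uses SciPy's periodic cubic-spline interpolation as a stand-in for the paper's piecewise-cubic construction.

import numpy as np
from scipy.interpolate import CubicSpline

def fin_outline(r2, r3, theta_star, r1=1.0, n_pts=200):
    """Closed 2D fin outline from the shape features (r2, r3, theta*); r1 = 1 mm."""
    # Three radial control points, assumed 120 degrees apart, starting at theta*.
    ang = theta_star + np.array([0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0])
    rad = np.array([r1, r2, r3])
    px, py = rad * np.cos(ang), rad * np.sin(ang)
    # Repeat the first point so the loop closes exactly, as required for periodic splines.
    knots = np.append(ang, ang[0] + 2.0 * np.pi)
    fx = CubicSpline(knots, np.append(px, px[0]), bc_type="periodic")  # x = f(theta)
    fy = CubicSpline(knots, np.append(py, py[0]), bc_type="periodic")  # y = g(theta)
    t = np.linspace(knots[0], knots[-1], n_pts)
    x, y = fx(t), fy(t)
    Dx, Dy = x.max() - x.min(), y.max() - y.min()  # projected lengths (Fig. 1(b))
    return x, y, Dx, Dy

# Example: outline and projections for one candidate shape
x, y, Dx, Dy = fin_outline(r2=0.6, r3=0.4, theta_star=np.pi / 6.0)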
2.1.2 Features of Pin-Fin Arrays
In addition to the three features of a single fin, the setup of the array is controlled with two additional features that account for the distance between the rows and columns of the fins. In the literature, the distance between the rows is denoted by S, whereas for the columns it is denoted by X, as shown in Fig. 1(c). Accordingly, S/D and X/D are the two ratios commonly discussed in the pin-fin literature, where D is the projected length of the fin perpendicular to the flow. In this formulation, a minor variation of these ratios is used to avoid numerical inconsistencies in design. Instead of using S/D and X/D, the authors define S/Dy and X/Dx as the two additional features based on the projected lengths from Fig. 1(b). This modification avoids the intersection of two fin shapes on the grid, which is observed in cases where the optimization converges toward fins with a high Dx/Dy ratio.
Therefore, in total, the design (Ω) of a pin-fin array can be uniquely defined by five features, viz., r2, r3, θ∗, S/Dy, and X/Dx:
\Omega = [r_2, r_3, \theta^*, S/D_y, X/D_x]    (1)
Figure 2: Feature space variation for (a) θ∗, (b) r2, (c) r3, (d) X/Dx, and (e) S/Dy. The (red) arrows indicate the extent and the direction of variation of the parameter.
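To make the five-feature definition in Eq. (1) concrete, the sketch below builds on the hypothetical fin_outline helper shown earlier and lays out a small inline array from a design vector Ω = [r2, r3, θ∗, S/Dy, X/Dx], recovering the row and column pitches as S = (S/Dy)·Dy and X = (X/Dx)·Dx. The row/column orientation is assumed from Fig. 1(c); this is an illustrative sketch, not the authors' code.

import numpy as np

def array_layout(design, n_rows=2, n_cols=2):
    """Pin-center locations for an inline array given the design vector
    Omega = [r2, r3, theta*, S/Dy, X/Dx] of Eq. (1)."""
    r2, r3, theta_star, s_by_dy, x_by_dx = design
    x, y, Dx, Dy = fin_outline(r2, r3, theta_star)  # helper sketched in Sec. 2.1.1
    S, X = s_by_dy * Dy, x_by_dx * Dx               # row pitch (transverse), column pitch (streamwise)
    centers = [(j * X, i * S) for i in range(n_rows) for j in range(n_cols)]
    return np.asarray(centers), (x, y)

# Example: a 2x2 array, as in Fig. 1(c)
centers, outline = array_layout([0.6, 0.4, np.pi / 6.0, 2.5, 2.5])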
The domain outlet is defined at atmospheric pressure (Pgauge = 0), with a temperature of 300 K, +and the outlet passage is also extended to 5Dx to capture the flow behavior in the wake region. The top and +bottom regions have a symmetry boundary condition. This ensures that any variation in the design space +(for ex. due to S/Dy) would not have any impact on the simulations. The region of interest encapsulating +the pins, shown in the inset in Fig. 3, will be shown in the results going further. +Figure 3: Simulation domain and boundary conditions implemented in the CFD model. +2.2.2 +Flow-Thermal Governing Equations +The dynamics of the fluid flow is governed by the conservation of mass and the Cauchy momentum +equation. In addition, the following assumptions about the physics of the fluid flow are made: +1. The fluid (i.e., air) is considered incompressible (with the constant density), thus the mass conservation +translates into the volume conservation. +2. The material properties of air are modeled with a linear constitutive behavior, i.e. a Newtonian fluid, +where the internal shear stress is proportional to the shear rate. +3. The fluid adheres to the surfaces of the fins and the walls; there is a no slip condition. +4. The turbulence is specified using a 5% turbulent intensity and a viscosity ratio of 10. +5. In specifying the wall properties for the pin fin, a standard roughness model with a roughness constant +of 0.5. +5 + +Tfin = 350K +gauge = OPa +Symmetry +T = 300K +Vin = 100 m/s +T = 300K +10Dx +5Dx +SymmetryA preprint - January 31, 2023 +Using the above assumptions, the time-averaged Reynolds-averaged Navier–Stokes (RANS) equations +are used to describe the flow through pin fin arrays. The equations in Einstein notation for an incompressible +Newtonian fluid and a stationary flow are written as: +ρ¯uj +∂¯ui +∂xj += ρ ¯fi + +∂ +∂xj +� +−¯pδij + µ +� ∂¯ui +∂xj ++ ∂¯uj +∂xi +� +− ρu′ +iu′ +j +� +(3) +Here, ρ is the density of the fluid, ¯uj is the mean velocity in jth direction, and ∂¯ui +∂xj is the velocity +gradient with respect to the jth direction. Therefore, ρ¯uj ∂¯ui +∂xj is the change in the mean momentum of a fluid +element owing to the convection in the mean flow. This change is balanced by the mean body force ρ ¯fi, the +isotropic stress due to the mean pressure field −¯pδij, the viscous stresses µ +� +∂¯ui +∂xj + ∂¯uj +∂xi +� +, and apparent stress +ρu′ +iu′ +j owing to the fluctuating velocity field, generally referred to as the Reynolds stress. Here fi represents +the external force, ¯p is the average pressure, ¯u′ +i denotes the mean of the fluctuating component of the velocity. +For modelling turbulence, the SST k − ω formulation is used [15]. +One of the future objectives of this research is to perform optimization studies that simultaneously +minimize pressure drop and maximize heat transfer in a pin-fin array. Hence, in the present research the +energy conservation equations are also solved. The transport of thermal energy is computed using the +following equation: +∂(ρE) +∂t ++ ∇ · [V(ρE + p)] = ∇ · [keff∇T + τeff · V] +(4) +Here, E is the energy per unit mass, V is the velocity vector, p is the pressure, keff is the effective +thermal conductivity, and τeff is the effective shear stress. +2.2.3 +Model Implementation +The fluid flow and thermal evolution are simulated with the software ANSYS® Fluent 2020 R2, which +is based on the finite-volume method. 
The governing equations are integrated over a finite set of quadrilateral +control volumes that meshes the simulation domain. Details of the mesh are represented in Fig. 4. Following +a cell-centered discretization, the numerical solver computes discrete values of the continuous velocity and +pressure fields at the center of the control volumes. The values at any other locations are interpolated from +the discrete values, whenever required. The numerical scheme evaluates the advection and diffusion fluxes +of momentum through all the faces of the control volumes. Then, the accumulated quantity of momentum +inside each control volume is updated according to its net fluxes. As the fluid is incompressible, the pressure +field is a result of the continuity constraint. A pressure equation is derived from the law of mass conservation. +Figure 4: Mesh distribution near the fin boundaries. +The evolution of the system is solved incrementally with an automatic time stepping method using +pseudo transient settings. The simulation is run for 200 iterations which is observed to be a sufficient duration +for model convergence. In the simulations, the maximum element size is maintained at 0.05 mm. The +convergence conditions are set at 1e − 6 for all residuals. The discretized momentum equations and pressure +equations relative to the set of control volumes are solved with an implicit solver that ensured stability of the +numerical scheme. At each incremental time step, the numerical algorithm computes the new values of the +primary discrete variables that are the local velocities, the pressure and temperature. Secondary results, such +6 + +A preprint - January 31, 2023 +as the streamline of the flow, the shear rate, the viscous stress, or fluxes, are computed from the primary +variables. The pressure drop (∆P), which represents the critical objective for optimization in this study, is +calculated by simply recording the inlet pressure at the end of the simulation (since the outlet is maintained +at Pgauge = 0 Pa). +2.3 +Optimization Framework +2.3.1 +Gaussian Process-Based Surrogate +The surrogate development strategy is based on a class of stochastic processes called Gaussian Processes +(GPs) that assume any finite collection of random variables to follow a multivariate jointly Gaussian +distribution. For a finite collection of n designs, Ω, the corresponding function outputs, φ are assumed to +have a multivariate jointly Gaussian distribution, +φ ∼ N(m(Ω) , K(Ω, Ω′)) +(5) +Here, N implies a Gaussian distribution. The underlying GP is completely characterized by a mean +function: m(Ω) = E[φ], and a covariance function: K(Ω, Ω′) = E[φ − m(Ω))(φ′ − m(Ω′))] [16]. Here, E[⋆] +denotes the expectation of ⋆. Ω′ and φ′ denote a set of finite designs other than Ω and the corresponding +functional output of it, respectively. +In order to understand the application of surrogate modeling, consider the situation where n designs +denoted by the Ω are being computationally evaluated to generate the outputs φ. Using this data, a surrogate +model can be established with the multivariate Gaussian formulation. The surrogate model, can now be used +to estimate the output of a new design Ωn+1 using the following formulation for a conditional distribution: +φn+1|φ, φn+1, Ω ∼ N(mn+1, Kn+1) +(6) +Here, +mn+1 = K(Ωn+1, Ω)K(Ω, Ω)−1φ +(7) +Kn+1 = K(Ωn+1, Ωn+1) − K(Ωn+1, Ω)K(Ω, Ω)−1K(Ω, Ωn+1) +(8) +Here, K is the covariance matrix. 
Thus, the predicted posterior distribution of the outputs at every +test data point is also a Gaussian distribution, characterized by the mean, mn+1 and covariance, Kn+1. A +detailed mathematical account of GPs can be found in [16]. +2.3.2 +Bayesian Optimization +The term optimization is used to denote minimization of an objective function. A maximization problem +can be posed similarly by taking the negative of the objective function, φ. To minimize φ over its domain, +the solver needs to find: +ˆΩ = argmin +Ω∈Ω∗ φ(Ω) +(9) +Here, ‘argmin’ finds the argument that gives the minimum value of φ. The functional form of φ is +typically unknown and, hence, a gradient-free or black-box optimization is often utilized. BO is one such +black-box optimization technique [17] that leverages the predictions through a surrogate for sequential active +learning to find the global optima of the objective function. The active learning strategies find a trade-off +between exploration and exploitation in possibly noisy settings [17], which facilitates a balance between the +global search and local optimization through acquisition functions. One commonly used acquisition function +in BO is Expected Improvement (EI ). +The objective function, φ, expressed as a GP, yields a posterior predictive Gaussian distribution +characterized by the mean m(Ω) and standard deviation K(Ω) for Ω ∈ Ω∗, where Ω∗ is the search space of the +optimization challenge. The optimization algorithm proceeds sequentially by sampling ˆΩ = argmaxΩEI(Ω) +at every step of the iteration process to add on to the dataset, after which the GP surrogate is retrained +with the new data set to predict the acquisition potential for the next iterative step. This process continues +7 + +A preprint - January 31, 2023 +until an optimum is reached, or the computational budget is extinguished. Since the acquisition potential +is predicted over the entire search space by the surrogate, BO can achieve fast predictions without a lot of +function calls in the search space (i.e., without having to run the simulations to obtain the objectives at all +the search locations). This process otherwise, might be computationally infeasible when the search space is +high-dimensional and the simulations are expensive. +2.4 +Implementation of Iterative and Automated GP and BO +The implementation of the proposed optimization framework is hinged on training and updating a +surrogate model. Fig. 5 depicts the workflow of the algorithm during the training and updating phase. In +the training phase, a surrogate model is trained using an initial design population that is generated through +a Latin hypercube sampling (LHS)-based design-of-experiment (DOE). For every design in the population +a CAD model is generated in MATLAB, followed by an ANSYS simulation. Based on these outputs, a +GP-based surrogate is trained. This surrogate forms the basis of the BO framework that again consists of +three main computation aspects that need to be operated in sync iteratively. These aspects are (i) numerical +pin fin shape generation and translation to a CAD geometry, (ii) CFD model setup and simulation, and (iii) +Iterative BO using steps (i)-(ii). To achieve these steps, multiple softwares are operated through a master +script in Python. The pin fin shapes are generated in MATLAB using the spline toolbox. The shape is then +converted to an AutoCad file (.dxf) from MATLAB using the open source library DXFLib. 
2.4 Implementation of Iterative and Automated GP and BO

The implementation of the proposed optimization framework hinges on training and updating a surrogate model. Fig. 5 depicts the workflow of the algorithm during the training and updating phases. In the training phase, a surrogate model is trained using an initial design population generated through a Latin hypercube sampling (LHS)-based design-of-experiment (DOE). For every design in the population, a CAD model is generated in MATLAB, followed by an ANSYS simulation. Based on these outputs, a GP-based surrogate is trained. This surrogate forms the basis of the BO framework, which consists of three main computational aspects that need to operate in sync iteratively: (i) numerical pin fin shape generation and translation to a CAD geometry, (ii) CFD model setup and simulation, and (iii) iterative BO using steps (i)-(ii). To achieve these steps, multiple software packages are operated through a master script in Python. The pin fin shapes are generated in MATLAB using the spline toolbox. The shape is then converted to an AutoCAD file (.dxf) from MATLAB using the open-source library DXFLib. The geometry is imported to ANSYS and the simulation is set up using pre-recorded journals. The output of the simulations is read from the simulation backup files by the Python master script and is used to update the Bayesian optimizer until convergence or until the maximum specified number of iterations is reached. The BO algorithm is employed using the GPyOpt toolbox. The entire framework is run without any manual intervention.

Figure 5: Flow chart of the optimization framework.

3 Results and Discussion

3.1 Validation of the CFD Model

Grid convergence is a necessary test in CFD simulations. In this study, the variation in the performance parameters, e.g., ∆P, is studied by altering the element size from 0.5 mm to 0.03 mm. This leads to a variation of 3,471 to 0.8 million nodes. All parameters, except the element size, are kept identical for all simulations. The variation in pressure drop is not significant (2%) beyond the 0.05 mm element size. Therefore, an element size of 0.05 mm is chosen for all analyses. To verify the setup of the Fluent module, a study is performed to compare the wake length and the point of separation to similar published research [18]. The comparison is shown in Fig. 6, indicating that the Fluent module is adequately set up.

Figure 6: Comparison of the (a) point of separation and (b) wake length with published literature [18].

3.2 Performance of the Gaussian Process (GP) Surrogate

Before moving to optimization, an LHS-based DOE is conducted with 100 designs to develop a surrogate. This surrogate forms the basis of the Bayesian optimization framework, and an efficient search of the optimal design depends on the construction of this model. Therefore, before diving into optimization, it is advisable to check the accuracy of the surrogate using some regression metrics. In this study, to test the surrogate, the available data is randomly split using a 75%-25% ratio into a training and a testing set. The training set is used to build the surrogate and the testing set is used to assess it. The result of the predictions against the actual data is shown in Fig. 7. The model predicts 92% of the testing data within the 95% confidence interval, indicating that the surrogate is capable of emulating the actual physics. The Pearson correlation (R-squared), however, is low at 0.67. There is one conspicuous outlier in the data that shows a ∆P of 6 kPa. By removing that outlier, the model is capable of achieving an almost perfect accuracy. However, since the reasons for the outlier are enmeshed in the physics of the system, it is not removed during the optimization computations. It is also important for the GP to have some data for the worst designs, which helps in avoiding those design combinations later during optimization. The choice of 100 initial designs to build the surrogate is arbitrary. For optimization problems that are computationally expensive, 100 initial simulations already pose a challenge, and it is therefore essential to address the data requirements for the proposed algorithm to be successful. This issue is discussed in Section 3.4.

Figure 7: Comparison of ∆P between the prediction of the GP-based model against actual data.
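The 75%-25% assessment described above can be reproduced with a few lines of Python. The sketch below assumes a scikit-learn Gaussian-process regressor as the surrogate and uses synthetic placeholder DOE data in place of the actual simulation results; the kernel and noise settings are illustrative.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Placeholder DOE data: 100 designs [r2, r3, theta*, S/Dy, X/Dx] with synthetic dP values (kPa).
rng = np.random.default_rng(1)
X = rng.uniform([0.1, 0.1, 0.0, 2.0, 2.0], [1.0, 1.0, np.pi, 3.0, 3.0], size=(100, 5))
y = 2.0 + 0.8 * X[:, 0] + 0.6 * X[:, 1] - 0.5 * (X[:, 3] - 2.0) + 0.1 * rng.standard_normal(100)

# 75%-25% split into training and testing sets, as described in Section 3.2.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=np.ones(5)),
                              normalize_y=True, alpha=1e-6).fit(X_tr, y_tr)

# Fraction of test points falling inside the 95% confidence interval of the prediction.
mean, std = gp.predict(X_te, return_std=True)
inside = np.abs(y_te - mean) <= 1.96 * std
print(f"{100 * inside.mean():.1f}% of test points within the 95% CI")
print("R-squared on the test set:", gp.score(X_te, y_te))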
3.3 Performance of Bayesian Optimization (BO)

The convergence of the BO algorithm against the iterations and the initial DOE information is shown in Fig. 8(a). The algorithm, which uses a surrogate built with 100 initial designs, finds an optimum within a few iterations, followed by occasional (unsuccessful) exploitation of the design space indicated by the peaks in the convergence curve. The best design (Fig. 8(b)), with the corresponding pressure and velocity fields (Figs. 8(c) and (d)), provided by the initial DOE exhibits a ∆P of 1.46 kPa. The optimization algorithm is able to reduce it further to 1.3 kPa with the design (and pressure and velocity fields) shown in Figs. 8(e), (f), and (g). A comparison between the two designs in Figs. 8(b) and (e) reveals the impact of BO in producing a more aerodynamic design, resulting in an improved ∆P.

Figure 8: (a) Convergence of the BO algorithm. (b), (c), and (d) Best design, the corresponding pressure and velocity field, respectively, provided through the DOE. (e), (f), and (g) Optimized design, the corresponding pressure and velocity field, respectively.

3.4 Evaluation of the Optimization Algorithm

The optimization result in the preceding section could reduce the pressure drop by 0.1 kPa using the information from 100 initial designs. In some practical scenarios, evaluating 100 simulations may not be feasible. Therefore, the capability of the BO algorithm to work with less information needs to be studied. In order to do that, the optimization is now carried out using four instances of less initial data by testing the functionality of the BO algorithm with 75, 50, 25, and 0 initial designs. The omission of the designs in each instance is such that the best designs are removed, thereby providing incrementally less information about the optimal solution to the BO algorithm. The best designs obtained through these simulations are shown in Fig. 9 and the convergence rates are shown in Fig. 10. The features of all the optimized designs are tabulated in Table 1.

Figure 9: Optimized designs for cases with (a) 75, (b) 50, (c) 25, and (d) 0 initial designs prior to optimization, with their respective pressure fields in (e), (f), (g), and (h), and velocity fields in (i), (j), (k), and (l).

Figure 10: Convergence plot for BO with (a) 75, (b) 50, (c) 25, and (d) 0 initial designs.

Table 1: Features of the optimized designs for different combinations of initial designs and BO steps

# DOE Designs (BO Steps)   r2 (mm)   r3 (mm)   θ∗ (rad.)   S/Dy   X/Dx   ∆P (kPa)
100 (50)                   0.1       0.74      2.8         3      2.3    1.3
75 (50)                    0.1       1.0       2.6         3      2.26   1.3
50 (50)                    0.17      0.75      2.8         3      2.04   1.3
25 (50)                    0.1       1.0       2.7         3      2.24   1.27
0 (75)                     1.0       0.1       0.49        3      3      1.25
0 (50)                     1.0       0.1       0.49        3      3      1.25
0 (25)                     0.62      1.0       2.86        3      2      1.4
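The reduced-information study summarized in Table 1 amounts to re-running the optimizer with progressively smaller initial DOEs. A minimal sketch of such a loop, reusing the hypothetical pressure_drop() wrapper and domain definition from the sketch in Section 2.3.2, is given below; the handling of the zero-DOE case is an illustrative workaround rather than the exact procedure used here.

import numpy as np
import GPyOpt

results = {}
for n_init in (100, 75, 50, 25, 0):
    # GPyOpt needs at least one starting point, so the 0-DOE case is emulated here
    # with a single random seed design (an assumption of this sketch).
    bo = GPyOpt.methods.BayesianOptimization(
        f=pressure_drop, domain=domain, acquisition_type='EI',
        initial_design_numdata=max(n_init, 1), initial_design_type='latin')
    bo.run_optimization(max_iter=50)
    results[n_init] = (bo.x_opt, float(np.ravel(bo.fx_opt)[0]))

for n_init, (x_opt, dp_opt) in sorted(results.items(), reverse=True):
    print(f"DOE = {n_init:3d}: best predicted dP = {dp_opt:.2f} kPa at {x_opt}")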
The optimized designs (Figs. 9(a)-(d)) tend to approach a similar shape in all the instances, indicating the presence of a global optimum for this particular problem. The similarity in the performance of these designs can be further compared with the pressure and velocity fields in Figs. 9(e)-(l). The subtle differences between them can be studied through the numerical values in Table 1. The designs, however, do not reveal the intricacies of how the algorithm approached the optimum. That behavior is better exemplified by the convergence rates in Fig. 10. With 75 initial designs (Fig. 10(a)), the behavior of BO is almost identical to the previous case with all the information (Fig. 8(a)). With 50 designs (Fig. 10(b)), more understanding of BO can be inferred. By comparing Figs. 10(a) and (b), one can notice that the first prediction of the BO algorithm is almost the same in both cases, even after removing the 25 best designs. This implies that the underlying GP learnt by BO with 75 and 50 designs is similar in its functional form.

This interpretation is further emphasized by Fig. 10(c), where BO gradually moves towards an optimal design until 40 iterations, indicating that the GP needed more than 25 designs to make a better informed decision. With the final case of 0 initial designs (Fig. 10(d)), the convergence is not as steady as in the previous cases. Since there is no initial data for this case, multiple instances with different limits on the maximum allowable number of iterations are conducted. The results for BO (25), BO (50), and BO (75) in Fig. 10(d) exemplify the random nature of convergence for these simulations. Moreover, the intermittent peaks in ∆P that correspond to the exploitation phase of the optimization have a larger variance than in the previous cases due to the unavailability of data. The predicted optimum, however, is still close to the previous cases, indicating the intelligent sampling procedure of BO. However, the predictions from such optimizations have a high probability of exploring local optima and are therefore unreliable. On average, the BO algorithm is able to improve ∆P by more than 1 kPa compared to the best design provided by the DOEs in all the cases.

3.5 Sensitivity Analysis

From a design and manufacturing point of view, it is essential to understand the relative impact of the features on the performance. Moreover, exploring the functional forms learnt by the GP can further help in understanding the system behavior. Hence, a global and a local sensitivity analysis are now performed. The global analysis is essential to understand the impact of the features, whereas the functional forms from the GP can only be understood in a local context due to the multi-parametric nature of the problem. A SHAP (SHapley Additive exPlanations) analysis is performed to understand the global sensitivity of the features. SHAP is a method from coalitional game theory, developed to understand the individual impact of all the features on a prediction [19]. Visually, the interpretation from this analysis can be presented in two forms: (i) as a bar chart, shown in Fig. 11(a), and (ii) as a summary plot, shown in Fig. 11(b).
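A minimal sketch of this global SHAP computation is given below. It assumes the scikit-learn surrogate gp and the train/test arrays from the sketch in Section 3.2, together with the shap package; the background-sample size is an illustrative choice.

import shap

feature_names = ['r2', 'r3', 'theta*', 'S/Dy', 'X/Dx']
background = shap.sample(X_tr, 50)                 # background designs for the kernel explainer
explainer = shap.KernelExplainer(gp.predict, background)
shap_values = explainer.shap_values(X_te)

# Fig. 11(a)-style mean-absolute-impact bars and Fig. 11(b)-style summary plot.
shap.summary_plot(shap_values, X_te, feature_names=feature_names, plot_type='bar')
shap.summary_plot(shap_values, X_te, feature_names=feature_names)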
Both figures reveal important information about the feature behavior. The bar graph indicates the relative impact of the features on ∆P. The X-axis of the graph shows the mean SHAP values, which denote the average contribution of the features towards ∆P. For example, for S/Dy, the mean SHAP value of 0.48 indicates that S/Dy contributes 0.48 kPa to the total ∆P predicted by the GP surrogate. Fig. 11(a) shows S/Dy to be the most dominant factor influencing ∆P, whereas X/Dx has the least impact. Among r2, r3, and θ∗ (the three features that create a shape), θ∗ has the largest influence on ∆P. Although this information is useful, it is impossible to interpret from the bars in Fig. 11(a) how these features impact the outcome. For example, the bar chart does not tell whether increasing or decreasing S/Dy is beneficial. This shortcoming is addressed through the summary plot in Fig. 11(b). The summary plot shows a scatter of color-coded violins across the X-axis for the different features. The colors represent the relative magnitude of the features and the X-axis is the SHAP value. The length of the scatter indicates the relative influence. For example, for S/Dy, the scatter is the largest, indicating that it has the largest influence on the output. Furthermore, the higher magnitudes of S/Dy (red color) are towards the left end of the spectrum, indicating that a higher S/Dy would reduce ∆P. This interpretation is also aligned with the optimized features (Table 1), where all designs have converged to the maximum possible S/Dy to reduce ∆P.

Figure 11: (a) A bar plot showing the relative absolute impact of the features. (b) A summary plot revealing the impact of the features on the output with respect to the changes in feature magnitudes.

To understand the local sensitivity, the behavior of the surrogate model is studied for the optimized design obtained with 100 initial points. Fig. 12 shows the variation of each feature with ∆P as learned by the GP. To compute the variation for each feature, all other features are held constant at the optimized values indicated by BO. Therefore, the functional forms are heavily influenced by the constant feature values and the analysis is thereby termed local. Even so, the variations are useful in understanding the impact on the optimized design. All the features, except θ∗, show a monotonic variation with ∆P. The periodic variation in θ∗ alludes to a symmetry that may be embedded in the CFD model. Among all the features, the relative total variation in ∆P indicates the impact of the feature on the outcome. As identified from the SHAP analysis, S/Dy again causes the maximum variation in ∆P, indicating its dominant impact. The star indicates the feature value in the optimized design. The sensitivity analysis therefore provides a comprehensive relationship between the objective and the features, which ultimately aids in the design and manufacturing phases. The SHAP analysis provides a toolkit for varying features to satisfy the objective, i.e., setting a high value of S/Dy in this case. Once an optimal design is selected, the local sensitivity analysis helps in identifying the features that need to be monitored (or controlled) more strictly than others depending on their impact on the outcome. A sketch of such a local sweep over the surrogate is given after Fig. 12.

Figure 12: Local feature sensitivity on ∆P for (a) r2, (b) r3, (c) θ∗, (d) X/Dx, and (e) S/Dy.
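The sketch below illustrates the local sweep. It reuses the hypothetical scikit-learn surrogate gp from the earlier sketches and takes the optimized feature values of the 100-design case in Table 1 as the base point; both are stand-ins for the actual surrogate and optimum used in this work.

import numpy as np

# Feature bounds from the design space and the optimized base point (100-design case, Table 1).
bounds = {'r2': (0.1, 1.0), 'r3': (0.1, 1.0), 'theta*': (0.0, np.pi),
          'S/Dy': (2.0, 3.0), 'X/Dx': (2.0, 3.0)}
x_opt = np.array([0.1, 0.74, 2.8, 3.0, 2.3])

# Vary one feature at a time over its range while holding the others at the optimized values.
for i, (name, (lo, hi)) in enumerate(bounds.items()):
    sweep = np.linspace(lo, hi, 25)
    designs = np.tile(x_opt, (sweep.size, 1))
    designs[:, i] = sweep
    dP = gp.predict(designs)
    print(f"{name}: predicted dP spans {dP.min():.2f} to {dP.max():.2f} kPa over its range")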
4 Conclusion and Future Work

The article presents a unique piece-wise cubic spline based framework for featurizing pin fins. An optimization problem for computing the pin fin arrays with minimum pressure drop is set up using a CFD model coupled with a surrogate-based Bayesian optimization approach. The optimized designs are observed to follow an aerodynamic shape, leading to a reduction in the pressure drop. The capability of the BO framework is further tested with low initial information. The optimization is observed to efficiently find an optimum design with 25-50 initial data points. Furthermore, a sensitivity analysis is performed that reveals S/Dy to be the most dominant feature influencing the pressure drop. Knowledge of the minimum number of designs needed for optimization, coupled with the sensitivity analysis, provides valuable information to design engineers.

The convergence to an aerodynamic shape with piece-wise cubic splines shows promise and will be explored further to test the capabilities of the method. With a higher number of splines, more complex shapes emulating some of the tested prototypes [5] can be generated. The mathematical setup of the pin fin designs also provides opportunities to include the shape distortion that has been observed in additively manufactured specimens [6]. Studies on the modelling and impact of such shape distortions will also be conducted for optimization. Geometrical constraints to compensate for these effects will make this approach more impactful and application-oriented. In addition, an extension of the method to three dimensions will be pursued in the future. An imperative part of the current approach is the symmetry condition in the CFD model, which in theory implies infinite arrays of pins and an unbounded domain. To improve the predictions further, a bounded simulation emulating the actual testing environment will be conducted after finding the optimal pin fin shape. Moreover, the current method only tackles the pressure drop minimization problem. In the future, studies will also be conducted to perform a multi-objective optimization targeted towards enhancing heat transfer while reducing the pressure drop. Experimental investigations will be performed to validate the efficacy of the newly developed framework, and multi-fidelity modeling [20] will be pursued to intelligently blend experimental data with numerical data.
Credit Authorship

Conceptualization, A.B., K.A.T., R.A.B., and S.D.; methodology, S.D.; software, S.D.; validation, S.D.; formal analysis, S.D.; investigation, S.D., A.B., K.A.T., and R.A.B.; resources, A.B.; data curation, S.D.; writing—original draft preparation, S.D. and A.B.; writing—review and editing, A.B., K.A.T., R.A.B., and S.D.; visualization, S.D.; supervision, A.B., K.A.T., and R.A.B.; project administration, K.A.T. and A.B.; funding acquisition, K.A.T., A.B., and R.A.B. All authors have read and agreed to the published version of the manuscript.

Acknowledgement

The authors would like to thank Ritam Pal and Nandana Menon, PhD students in Mechanical Engineering at Penn State, for their help with the CFD modelling and the Bayesian optimization, respectively, and Evan Mihalko, PhD student in Mechanical Engineering at Penn State, for proof-reading the manuscript.

Funding Information

The research is funded by the NASA University Leadership Initiative program through grant number 80NSSC21M0068. Any opinions, findings, and conclusions in this paper are those of the authors and do not necessarily reflect the views of the supporting institution.

Data Availability Statement

The data are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

[1] Jason Town, Douglas Straub, James Black, Karen A Thole, and Tom IP Shih. State-of-the-art cooling technology for a turbine rotor blade. Journal of Turbomachinery, 140(7):071007, 2018.
[2] ME Taslim, L Setayeshgar, and SD Spring. An experimental evaluation of advanced leading edge impingement cooling concepts. Journal of Turbomachinery, 123(1):147–153, 2001.
[3] Marcel Otto, Justin Hodges, Gaurav Gupta, and Jayanta S Kapat. Vortical structures in pin fin arrays for turbine cooling applications. In Turbo Expo: Power for Land, Sea, and Air, volume 58646, page V05AT16A003. American Society of Mechanical Engineers, 2019.
[4] MK Chyu, CH Yen, and S Siw. Comparison of heat transfer from staggered pin fin arrays with circular, cubic and diamond shaped elements. In Turbo Expo: Power for Land, Sea, and Air, volume 47934, pages 991–999, 2007.
[5] Katharine K Ferster, Kathryn L Kirsch, and Karen A Thole. Effects of geometry, spacing, and number of pin fins in additively manufactured microchannel pin fin arrays. Journal of Turbomachinery, 140(1), 2018.
[6] Thomas M Corbett, Karen A Thole, and Sudhakar Bollapragada. Impacts of pin fin shape and spacing on heat transfer and pressure losses. Journal of Turbomachinery, 145(5):051014, 2023.
[7] Shinjan Ghosh, Sudeepta Mondal, Jayanta S Kapat, and Asok Ray. Shape optimization of pin fin arrays using Gaussian process surrogate models under design constraints. In Turbo Expo: Power for Land, Sea, and Air, volume 84164, page V07AT15A021. American Society of Mechanical Engineers, 2020.
[8] Sinan Eyi, Kyle M Hanquist, and Iain D Boyd. Aerothermodynamic design optimization of hypersonic vehicles. Journal of Thermophysics and Heat Transfer, 33(2):392–406, 2019.
[9] Sinan Eyi, Kyle M Hanquist, and Iain D Boyd. Shape optimization of reentry vehicles to minimize heat loading. Journal of Thermophysics and Heat Transfer, 33(3):785–796, 2019.
[10] Sebastian Willeke and Tom Verstraete. Adjoint optimization of an internal cooling channel U-bend. In Turbo Expo: Power for Land, Sea, and Air, volume 56710, page V05AT11A029. American Society of Mechanical Engineers, 2015.
[11] Shinjan Ghosh and Jayanta S Kapat. Topology optimization of serpentine channels for minimization of pressure loss and maximization of heat transfer performance as applied for additive manufacturing. In Turbo Expo: Power for Land, Sea, and Air, volume 58653, page V05BT21A006. American Society of Mechanical Engineers, 2019.
[12] Sumer B Dilgen, Cetin B Dilgen, David R Fuhrman, Ole Sigmund, and Boyan S Lazarov. Density based topology optimization of turbulent flow heat transfer systems. Structural and Multidisciplinary Optimization, 57(5):1905–1918, 2018.
[13] Giampietro Fabbri. A genetic algorithm for fin profile optimization. International Journal of Heat and Mass Transfer, 40(9):2165–2172, 1997.
[14] Nawaf Hamadneh, Waqar A Khan, Saratha Sathasivam, and Hong Choon Ong. Design optimization of pin fin geometry using particle swarm optimization algorithm. PLoS ONE, 8(5):e66080, 2013.
[15] Stephen P Lynch, Karen A Thole, Atul Kohli, and Christopher Lehane. Computational predictions of heat transfer and film-cooling for a turbine blade with nonaxisymmetric endwall contouring. Journal of Turbomachinery, 133(4), 2011.
[16] Carl Edward Rasmussen. Gaussian processes in machine learning. In Summer School on Machine Learning, pages 63–71. Springer, 2003.
[17] Peter I Frazier. A tutorial on Bayesian optimization. arXiv preprint arXiv:1807.02811, 2018.
[18] Sintu Singha and KP Sinhamahapatra. Flow past a circular cylinder between parallel walls at low Reynolds numbers. Ocean Engineering, 37(8-9):757–769, 2010.
[19] Christoph Molnar. Interpretable Machine Learning. Lulu.com, 2020.
[20] Nandana Menon, Sudeepta Mondal, and Amrita Basak. Multi-fidelity surrogate-based process mapping with uncertainty quantification in laser directed energy deposition. Materials, 15(8):2902, 2022.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='edu January 31, 2023 Abstract Pin fins are imperative in the cooling of turbine blades.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The designs of pin fins, therefore, have seen significant research in the past.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' With the developments in metal additive manufacturing, novel design approaches toward complex geometries are now feasible.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' To that end, this article presents a Bayesian optimization approach for designing inline pins that can achieve low pressure loss.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The pin-fin shape is defined using featurized (parametrized) piecewise cubic splines in 2D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The complexity of the shape is dependent on the number of splines used for the analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' From a method development perspective, the study is performed using three splines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Owing to this piece-wise modeling, a unique pin fin design is defined using five features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' After specifying the design, a computational fluid dynamics-based model is developed that computes the pressure drop during the flow.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Bayesian optimization is carried out on a Gaussian processes-based surrogate to obtain an optimal combination of pin-fin features to minimize the pressure drop.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The results show that the optimization tends to approach an aerodynamic design leading to low pressure drop corroborating with the existing knowledge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Furthermore, multiple iterations of optimizations are conducted with varying degree of input data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The results reveal that a convergence to similar optimal design is achieved with a minimum of just twenty five initial design-of-experiments data points for the surrogate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Sensitivity analysis shows that the distance between the rows of the pin fins is the most dominant feature influencing the pressure drop.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' In summary, the newly developed automated framework demonstrates remarkable capabilities in designing pin fins with superior performance characteristics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' 1 Introduction The hot-section components, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=', turbine rotor blades of gas turbines operate at upwards of 1500 K creating a harsh environment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' For this reason, innovative technologies are used to facilitate cooling of these components.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' A rotor blade typically has a complex cooling configuration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The blade can be divided into three primary sections, each of which has a dedicated coolant supply plenum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' These sections include: (i) the leading edge region, (ii) the mid-chord region, and (iii) the trailing edge region.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The pressure side of a turbine blade has a relatively higher temperature than the suction side and is, therefore, provided with arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='13118v1 [physics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='flu-dyn] 30 Jan 2023 A preprint - January 31, 2023 additional cooling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' This is accomplished by internal and external cooling that favors the pressure side of the blade, particularly in the trailing edge region [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Internal cooling of the trailing edge has been investigated extensively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Heat transfer enhancement designs have been developed to fully utilize the cooling capacity of the coolant.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Unique to this blade region is the pressure drop between the internal plenum and the external mainstream conditions.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' This region allows for greater heat transfer enhancement at the cost of increased frictional loss.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Fully-bridged pin-fin arrays result in large pressure losses but also excel at heat transfer enhancement.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' As a result, pin-fins represent an ideal candidate for trailing edge cooling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' As an additional benefit, the fully-bridged pin-fin design also increases the structural rigidity of the blade.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' For these reasons, a vast amount of literature supports the implementation of impingement arrays to enhance heat transfer and structurally support the trailing edge section of airfoils [2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' One of the primary objectives of the pin-fin design is, therefore, to decrease the pressure drop while increasing the heat transfer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Both experimental and computational investigations have been carried out in the past to achieve this objective.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Otto et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' [3] performed a particle image velocimetry study to understand the developing flow characteristics of a staggered pin-fin array.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Horseshoe type vortices and Karman instabilities were identified as the key contributors to turbulent mixing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Chyu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' [4] investigated the dependence of pin-fin cross-sections on the thermal performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Square, circular, and diamond-shaped pins were studied and circular pin-fins were found to have the best trade-off between pressure drop and heat transfer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' With the advancements in metal additive manufacturing, the feasibility of making complex pin fin designs that are not limited to the traditional manufacturing constraints has increased multi-fold.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' This is evident from the recent research efforts towards the testing of such unique designs [5], and their corresponding parameters [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The introduction of this manufacturing technology has led to the initiation of research towards more innovative design strategies, particularly with the use of data-driven tools [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' These strategies are built upon the parametrization of the pin fin designs followed by an optimization that leads to the desired pin fin shape.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Accordingly, there are several parameters pertaining to the pin shape that could be optimized to minimize pressure drop and maximize heat transfer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Existing literature has shown experimental results with unique geometries for pins such as a star or a dimpled sphere [5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' However, experimental optimization of pin-fins is expensive and time-consuming.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' For this reason, computational investigations have played a crucial role in the development of novel designs and creative solutions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Eyi et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' [8, 9] used parameterized Bezier curves to define and, then, optimize the leading edge of a fin in a parametric form.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Wileke et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' [10] used adjoint optimization for a U-shaped channel to reduce the total pressure loss.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' On a similar note, Ghosh et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' and Dilgen et.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' implemented a topology optimization technique to explore manufacturing constraints [11, 12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Fabbri [13] applied a genetic algorithm to optimize pin fin designs.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Hamadneh et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' used particle swarm optimization (PSO) to evaluate several pin fin geometries for enhanced thermal performance [14].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Recently, Ghosh et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' used Gaussian process (GP) surrogates with constrained Bayesian Optimization (BO) for optimizing the thermal performance of the pin-fin arrays [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Due to the black-box nature of CFD simulations for complex geometries, the use of GP and BO has shown promising results, particularly while working with limited data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' However, due to the relatively nascent percolation of such techniques for pin fin optimization, more complex formulations in the design space are not yet fully explored.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' In addition to that, such studies have been hampered by the lack of automated simulations using established computational tools such as ANSYS Fluent, thereby, restricting the optimization to a few iterations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' This paper addresses these existing shortcomings by applying a novel spline-based pin-fin definition that offers a potential to explore complex geometries.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' In addition to that, the simulations conducted in this analysis are completely automated leading to a relatively high number of design iterations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The optimization process in the present study is performed to minimize the pressure drop.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The results reveal that the framework can learn the design principles with limited training data and converge to a desired solution with < 50 iterations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' A study on the data requirement of the algorithm is also conducted to quantify the need of initial data for the algorithm to perform adequately.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Furthermore, a sensitivity analysis is presented to understand the impact of the features on the pressure drop.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Although the analysis presented in the article is studied for minimizing pressure drop, it can be easily modified to address other desired objectives.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' 2 A preprint - January 31, 2023 2 Methodology 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='1 Featurization of Pin Fins 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='1 Features of Pin Fin Shape A general closed shape in 2D is a locus of points, x = f(θ), and y = g(θ), parameterized over θ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The most common form of this parametrization is visualized with a circle of radius r, an outcome of x = r cos θ and y = r sin θ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' By employing more complex functional representations in the construction of f(θ) and g(θ), a range of variations in the shapes can be generated.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Among several such strategies, this paper uses piecewise-cubic splines to generate parametric shapes in 2D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The complexity of these shapes, owing to their construction, further depends on the number of splines used in the process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Figure 1: (a) Two curves (x = f(θ) and y = g(θ)) constructed with three piece-wise cubic splines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' (b) The pin-fin shape resulting from three splines with the defining parameters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' (c) A pin fin array constructed with two rows and two columns.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' With this in mind, the analysis in this paper is performed by using three splines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' As an example, Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' 1(a) depicts the two curves (x = f(θ) and y = g(θ)), both constructed using three cubic splines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The resultant shape, indicating the contribution of the individual splines, is shown in Fig.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' 1(b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The shape indicates three radial distances (r1, r2, and r3) that provide the necessary coordinates for spline interpolation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' To be able to optimize this shape, features that impact the geometry need to be chosen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Three such features, viz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' r2, r3, and θ∗ are identified to be the defining elements for any shape generated using the aforementioned procedure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The feature, r1, is maintained at a constant magnitude of 1 mm to ensure a reference dimension to prevent the optimization algorithm from choosing extreme (either too small or too large) geometries.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The angle of r1 from the X-axis is denoted by θ∗, as shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' 1(a) and (b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' It controls the orientation of the pin fin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' 1(b) also depicts Dx and Dy which denote the projection length of the pin on X and Y axis, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='2 Features of Pin-Fin Arrays In addition to the three features of a single fin, the setup of the array is controlled with two additional features that account for the distance between the rows and columns of the fins.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' In literature, the distance between the rows is denoted by S, whereas, for columns, it is denoted by X as shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' 1(c).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Accordingly, S/D and X/D are the two ratios that are commonly discussed in pin fin literature where D is the projected length of the fin perpendicular to the flow.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' In this formulation, a minor variation of these ratios is used 3 1 (0)6 = Dx 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 Coordinates r1 x = f(0) 0 0 r2 r3 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 D 1 0 2 4 6 8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 1 (radians) (a) (b) 4 3 2 s 1 X 0 1 0 1 2 3 4 (c)A preprint - January 31, 2023 to avoid numerical inconsistencies in design.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Instead of using S/D and X/D, the authors define S/Dy and X/Dx as the two additional features based on their projection lengths from Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' 1(b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The inclusion of this modification avoids intersection of two fin shapes on the grid, which is observed to be happening in cases where the optimization converges toward fins with a high Dx/Dy ratio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Therefore, in total, the design (Ω) of a pin-fin array can be uniquely defined by five features, viz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' r2, r3, θ∗, S/Dy, and X/Dx: Ω = [r2, r3, θ∗, S/Dy, X/Dx] (1) Figure 2: Feature space variation for (a) θ∗, (b) r2, (c) r3, (d) X/Dx, and (e) S/Dy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The (red) arrows indicate the extent and the direction of variation of the parameter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='3 Design Space To proceed with the optimization, some constraints on the range of variation for all the five features are required.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Because a reference radial distance (r1) is maintained at 1 mm, the other two radii r2, and r3 are varied from a lower bound of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='1 mm to an upper bound of 1 mm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The lower bound is chosen to avoid numerical complications in spline computation which are observed for radii approaching zero.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The impact of variation of these parameters on the fin shape is shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' As depicted in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' 2, the orientation parameter (θ∗) is varied from 0 to π radians.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Due to the nature of the flow, any pin fin design with θ∗ has the same characteristics as 2π − θ∗.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Therefore, the range 0 to π ensures all distinct orientations are taken into account.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Every single variation between these three parameters 4 Variation in r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Variation in " 2 1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 0 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 1 1 1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 1 1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 1 X X (a) (q) Variation in r Variation in X/D 3 X 3 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 2 y 0 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 1 1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='5 1 0 2 4 6 x X (c) (d) Variation in S/D V 6 4 0 1 0 1 2 3 4 x (e)A preprint - January 31, 2023 θ∗, r2, and r3 leads to a unique shape and thereby provides a multitude of design combinations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The variation in array parameters is relatively straightforward.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' Based on literature [5], the range of these parameters is chosen to vary from 2 to 3 and its impact on the array is shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' 2(d) and (e).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The design space is represented through a vector Ω∗ as follows: Ω∗ =[r2 ∈ [0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='1, 1], r3 ∈ [0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='1, 1], θ∗ ∈ [0, π], S/Dy ∈ [2, 3], X/Dx ∈ [2, 3]] (2) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='2 Development of the Computational Fluid Dynamics (CFD) Model 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content='1 Simulation Domain and Boundary Conditions The geometry of the 2D pin-fin array and the corresponding flow domain is shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The inlet passage of the domain is extended to 10Dx upstream to allow the flow to fully develop before interacting with the fins.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' An inlet velocity (vin) of 100 m/s is maintained with a temperature of 300 K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNFPT4oBgHgl3EQfkjUi/content/2301.13118v1.pdf'} +page_content=' The fins are modelled as a stationary wall with a no slip condition and are maintained at a constant temperature (Tfin) of 350 K.' 
2.2 Development of the Computational Fluid Dynamics (CFD) Model

2.2.1 Simulation Domain and Boundary Conditions

The geometry of the 2D pin-fin array and the corresponding flow domain is shown in Fig. 3. The inlet passage of the domain is extended to 10Dx upstream to allow the flow to fully develop before interacting with the fins. An inlet velocity (vin) of 100 m/s is maintained with a temperature of 300 K. The fins are modelled as a stationary wall with a no-slip condition and are maintained at a constant temperature (Tfin) of 350 K. The domain outlet is defined at atmospheric pressure (Pgauge = 0), with a temperature of 300 K, and the outlet passage is also extended to 5Dx to capture the flow behavior in the wake region. The top and bottom regions have a symmetry boundary condition. This ensures that any variation in the design space (for example, due to S/Dy) would not have any impact on the simulations. The region of interest encapsulating the pins, shown in the inset in Fig. 3, will be shown in the results going further.

Figure 3: Simulation domain and boundary conditions implemented in the CFD model.

2.2.2 Flow-Thermal Governing Equations

The dynamics of the fluid flow is governed by the conservation of mass and the Cauchy momentum equation. In addition, the following assumptions about the physics of the fluid flow are made:

1. The fluid (i.e., air) is considered incompressible (with constant density); thus, the mass conservation translates into volume conservation.

2. The material properties of air are modeled with a linear constitutive behavior, i.e., a Newtonian fluid, where the internal shear stress is proportional to the shear rate.
3. The fluid adheres to the surfaces of the fins and the walls; there is a no-slip condition.

4. The turbulence is specified using a 5% turbulent intensity and a viscosity ratio of 10.

5. In specifying the wall properties for the pin fin, a standard roughness model with a roughness constant of 0.5 is used.

Using the above assumptions, the time-averaged Reynolds-averaged Navier–Stokes (RANS) equations are used to describe the flow through the pin fin arrays. The equations in Einstein notation for an incompressible Newtonian fluid and a stationary flow are written as:

\rho \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j} = \rho \bar{f}_i + \frac{\partial}{\partial x_j}\left[-\bar{p}\,\delta_{ij} + \mu\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right) - \rho\,\overline{u'_i u'_j}\right]    (3)

Here, ρ is the density of the fluid, ūj is the mean velocity in the jth direction, and ∂ūi/∂xj is the velocity gradient with respect to the jth direction. Therefore, ρ ūj ∂ūi/∂xj is the change in the mean momentum of a fluid element owing to the convection in the mean flow. This change is balanced by the mean body force ρ f̄i, the isotropic stress due to the mean pressure field −p̄ δij, the viscous stresses µ(∂ūi/∂xj + ∂ūj/∂xi), and the apparent stress ρ u′i u′j owing to the fluctuating velocity field, generally referred to as the Reynolds stress. Here fi represents the external force, p̄ is the average pressure, and ū′i denotes the mean of the fluctuating component of the velocity. For modelling the turbulence, the SST k–ω formulation is used [15].
One of the future objectives of this research is to perform optimization studies that simultaneously minimize the pressure drop and maximize the heat transfer in a pin-fin array. Hence, in the present research the energy conservation equations are also solved. The transport of thermal energy is computed using the following equation:

\frac{\partial(\rho E)}{\partial t} + \nabla \cdot \left[\mathbf{V}(\rho E + p)\right] = \nabla \cdot \left[k_{\mathrm{eff}} \nabla T + \tau_{\mathrm{eff}} \cdot \mathbf{V}\right]    (4)

Here, E is the energy per unit mass, V is the velocity vector, p is the pressure, keff is the effective thermal conductivity, and τeff is the effective shear stress.

2.2.3 Model Implementation

The fluid flow and thermal evolution are simulated with the software ANSYS® Fluent 2020 R2, which is based on the finite-volume method. The governing equations are integrated over a finite set of quadrilateral control volumes that meshes the simulation domain. Details of the mesh are represented in Fig. 4. Following a cell-centered discretization, the numerical solver computes discrete values of the continuous velocity and pressure fields at the center of the control volumes. The values at any other locations are interpolated from the discrete values, whenever required. The numerical scheme evaluates the advection and diffusion fluxes of momentum through all the faces of the control volumes. Then, the accumulated quantity of momentum inside each control volume is updated according to its net fluxes. As the fluid is incompressible, the pressure field is a result of the continuity constraint.
A pressure equation is derived from the law of mass conservation.

Figure 4: Mesh distribution near the fin boundaries.

The evolution of the system is solved incrementally with an automatic time-stepping method using pseudo-transient settings. The simulation is run for 200 iterations, which is observed to be a sufficient duration for model convergence. In the simulations, the maximum element size is maintained at 0.05 mm. The convergence conditions are set at 1e-6 for all residuals. The discretized momentum equations and pressure equations relative to the set of control volumes are solved with an implicit solver that ensures the stability of the numerical scheme. At each incremental time step, the numerical algorithm computes the new values of the primary discrete variables, that is, the local velocities, the pressure, and the temperature. Secondary results, such as the streamlines of the flow, the shear rate, the viscous stress, or fluxes, are computed from the primary variables. The pressure drop (∆P), which represents the critical objective for optimization in this study, is calculated by simply recording the inlet pressure at the end of the simulation (since the outlet is maintained at Pgauge = 0 Pa).

2.3 Optimization Framework

2.3.1 Gaussian Process-Based Surrogate

The surrogate development strategy is based on a class of stochastic processes called Gaussian Processes (GPs) that assume any finite collection of random variables to follow a multivariate jointly Gaussian distribution.
For a finite collection of n designs, Ω, the corresponding function outputs, φ, are assumed to have a multivariate jointly Gaussian distribution,

\phi \sim \mathcal{N}\left(m(\Omega),\ K(\Omega, \Omega')\right)    (5)

Here, N implies a Gaussian distribution. The underlying GP is completely characterized by a mean function, m(Ω) = E[φ], and a covariance function, K(Ω, Ω′) = E[(φ − m(Ω))(φ′ − m(Ω′))] [16]. Here, E[⋆] denotes the expectation of ⋆. Ω′ and φ′ denote a set of finite designs other than Ω and the corresponding functional output of it, respectively. In order to understand the application of surrogate modeling, consider the situation where n designs denoted by Ω are computationally evaluated to generate the outputs φ. Using this data, a surrogate model can be established with the multivariate Gaussian formulation. The surrogate model can now be used to estimate the output of a new design Ω_{n+1} using the following formulation for a conditional distribution:

\phi_{n+1} \mid \phi, \Omega_{n+1}, \Omega \sim \mathcal{N}(m_{n+1}, K_{n+1})    (6)

Here,

m_{n+1} = K(\Omega_{n+1}, \Omega)\, K(\Omega, \Omega)^{-1} \phi    (7)

K_{n+1} = K(\Omega_{n+1}, \Omega_{n+1}) - K(\Omega_{n+1}, \Omega)\, K(\Omega, \Omega)^{-1} K(\Omega, \Omega_{n+1})    (8)

Here, K is the covariance matrix. Thus, the predicted posterior distribution of the outputs at every test data point is also a Gaussian distribution, characterized by the mean m_{n+1} and covariance K_{n+1}. A detailed mathematical account of GPs can be found in [16].
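For illustration, the conditional mean and covariance of Eqs. (7) and (8) reduce to a few matrix operations once a covariance function is chosen. The sketch below assumes a squared-exponential kernel and a zero prior mean purely for demonstration; it is not the specific kernel or implementation used in the framework described later.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance K(A, B) between two sets of designs."""
    sq_dist = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * sq_dist / length_scale**2)

def gp_posterior(Omega, phi, Omega_new, noise=1e-8):
    """Posterior mean (Eq. 7) and covariance (Eq. 8) at the new designs Omega_new."""
    K = rbf_kernel(Omega, Omega) + noise * np.eye(len(Omega))
    K_s = rbf_kernel(Omega_new, Omega)        # K(Omega_{n+1}, Omega)
    K_ss = rbf_kernel(Omega_new, Omega_new)   # K(Omega_{n+1}, Omega_{n+1})
    K_inv = np.linalg.inv(K)
    mean = K_s @ K_inv @ phi                  # m_{n+1}
    cov = K_ss - K_s @ K_inv @ K_s.T          # K_{n+1}
    return mean, cov
```

In practice the explicit inverse is replaced by a Cholesky solve for numerical stability, and the kernel hyperparameters are fitted to the training designs.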
2.3.2 Bayesian Optimization

The term optimization is used here to denote the minimization of an objective function. A maximization problem can be posed similarly by taking the negative of the objective function, φ. To minimize φ over its domain, the solver needs to find:

\hat{\Omega} = \operatorname{argmin}_{\Omega \in \Omega^*} \phi(\Omega)    (9)

Here, 'argmin' finds the argument that gives the minimum value of φ. The functional form of φ is typically unknown and, hence, a gradient-free or black-box optimization is often utilized. BO is one such black-box optimization technique [17] that leverages the predictions through a surrogate for sequential active learning to find the global optima of the objective function. The active learning strategies find a trade-off between exploration and exploitation in possibly noisy settings [17], which facilitates a balance between the global search and local optimization through acquisition functions. One commonly used acquisition function in BO is Expected Improvement (EI). The objective function, φ, expressed as a GP, yields a posterior predictive Gaussian distribution characterized by the mean m(Ω) and standard deviation K(Ω) for Ω ∈ Ω∗, where Ω∗ is the search space of the optimization challenge. The optimization algorithm proceeds sequentially by sampling \hat{\Omega} = \operatorname{argmax}_{\Omega} EI(\Omega) at every step of the iteration process to add to the dataset, after which the GP surrogate is retrained with the new data set to predict the acquisition potential for the next iterative step. This process continues until an optimum is reached or the computational budget is extinguished. Since the acquisition potential is predicted over the entire search space by the surrogate, BO can achieve fast predictions without a large number of function calls in the search space (i.e., without having to run the simulations to obtain the objectives at all the search locations). This process might otherwise be computationally infeasible when the search space is high-dimensional and the simulations are expensive.
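For a Gaussian posterior, the Expected Improvement acquisition has a closed form. The following sketch shows one common formulation for a minimization objective such as ∆P, written against a generic posterior mean and standard deviation; it illustrates the acquisition step conceptually and is not a reproduction of GPyOpt's internal implementation.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mean, std, best_observed, xi=0.0):
    """EI for minimization: expected reduction of the best pressure drop seen so far,
    given the GP posterior mean and standard deviation at candidate designs."""
    std = np.maximum(std, 1e-12)             # avoid division by zero
    improvement = best_observed - mean - xi
    z = improvement / std
    return improvement * norm.cdf(z) + std * norm.pdf(z)

# The next design to simulate maximizes EI over the search space Omega*:
# next_design = candidates[np.argmax(expected_improvement(mu, sigma, phi_best))]
```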
2.4 Implementation of Iterative and Automated GP and BO

The implementation of the proposed optimization framework is hinged on training and updating a surrogate model. Fig. 5 depicts the workflow of the algorithm during the training and updating phases. In the training phase, a surrogate model is trained using an initial design population that is generated through a Latin hypercube sampling (LHS)-based design-of-experiment (DOE). For every design in the population, a CAD model is generated in MATLAB, followed by an ANSYS simulation. Based on these outputs, a GP-based surrogate is trained. This surrogate forms the basis of the BO framework, which again consists of three main computational aspects that need to be operated in sync iteratively. These aspects are (i) numerical pin fin shape generation and translation to a CAD geometry, (ii) CFD model setup and simulation, and (iii) iterative BO using steps (i)-(ii). To achieve these steps, multiple software packages are operated through a master script in Python. The pin fin shapes are generated in MATLAB using the spline toolbox. The shape is then converted to an AutoCAD file (.dxf) from MATLAB using the open-source library DXFLib. The geometry is imported to ANSYS and the simulation is set up using pre-recorded journals. The output of the simulations is read from the simulation backup files by the Python master script and is used to update the Bayesian optimizer until convergence or the maximum specified number of iterations. The BO algorithm is employed using the GPyOpt toolbox. The entire framework is run without any manual intervention.

Figure 5: Flow chart of the optimization framework.
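A condensed view of how such a master script might wire these pieces together is sketched below. The helpers generate_cad_and_mesh and run_fluent_case are hypothetical stand-ins for the MATLAB/DXFLib export and the ANSYS Fluent journal run, and doe_designs / doe_pressure_drops stand for the initial LHS DOE data; the GPyOpt call follows the library's documented interface, but all argument values are illustrative rather than the settings used in this study.

```python
import numpy as np
import GPyOpt

def evaluate_design(x):
    """One BO evaluation: design vector -> CAD -> CFD -> pressure drop (kPa)."""
    x = np.atleast_2d(x)[0]
    # Hypothetical wrappers around the MATLAB/DXFLib export and the Fluent journal run.
    case_dir = generate_cad_and_mesh(r2=x[0], r3=x[1], theta=x[2],
                                     s_by_dy=x[3], x_by_dx=x[4])
    delta_p = run_fluent_case(case_dir)  # reads the inlet pressure from the backup files
    return np.array([[delta_p]])

# Search space of Eq. (2) in GPyOpt's domain format.
domain = [
    {'name': 'r2',      'type': 'continuous', 'domain': (0.1, 1.0)},
    {'name': 'r3',      'type': 'continuous', 'domain': (0.1, 1.0)},
    {'name': 'theta',   'type': 'continuous', 'domain': (0.0, np.pi)},
    {'name': 'S_by_Dy', 'type': 'continuous', 'domain': (2.0, 3.0)},
    {'name': 'X_by_Dx', 'type': 'continuous', 'domain': (2.0, 3.0)},
]

optimizer = GPyOpt.methods.BayesianOptimization(
    f=evaluate_design,
    domain=domain,
    X=doe_designs,           # initial LHS designs (placeholder name)
    Y=doe_pressure_drops,    # their simulated pressure drops (placeholder name)
    acquisition_type='EI',   # Expected Improvement, as in Section 2.3.2
)
optimizer.run_optimization(max_iter=50)
print(optimizer.x_opt, optimizer.fx_opt)
```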
3 Results and Discussion

3.1 Validation of the CFD Model

Grid convergence is a necessary test in CFD simulations. In this study, the variation in the performance parameters, e.g., ∆P, is studied by altering the element size from 0.5 mm to 0.03 mm. This leads to a variation from 3,471 to 0.8 million nodes. All parameters, except the element size, are kept identical for all simulations. The variation in the pressure drop is not significant (2%) beyond an element size of 0.05 mm. Therefore, the element size of 0.05 mm is chosen for all analyses. To verify the setup of the Fluent module, a study is performed to compare the wake length and the point of separation to similar published research [18]. The comparison is shown in Fig. 6, indicating that the Fluent module is adequately set.
Figure 6: Comparison of the (a) point of separation and (b) wake length with published literature [18].

3.2 Performance of the Gaussian Process (GP) Surrogate

Before moving to optimization, an LHS-based DOE is conducted with 100 designs to develop a surrogate. This surrogate forms the basis of the Bayesian optimization framework, and an efficient search of the optimal design depends on the construction of this model. Therefore, before diving into optimization, it is often advisable to check the accuracy of the surrogate using some regression metrics. In this study, to test the surrogate, the available data is randomly split using a 75%-25% ratio into a training and a testing set. The training set is used to build the surrogate and the testing set is used to assess it. The result of the predictions against the actual data is shown in Fig. 7. The model predicts 92% of the testing data within the 95% confidence interval, indicating that the surrogate is capable of emulating the actual physics. The Pearson correlation (R-squared), however, is low at 0.67. There is one conspicuous outlier in the data that shows a ∆P of 6 kPa. By removing that outlier, the model is capable of achieving an almost perfect accuracy.
However, since the reasons for the outlier are enmeshed in the physics of the system, it is not removed during the optimization computations. It is also important for the GP to have some data for the worst designs, which helps in avoiding those design combinations later during optimization. The choice of 100 for the initial designs to build the surrogate is random. With optimization problems that are computationally expensive, 100 initial simulations already pose a challenge, and therefore, it is also essential to address these data requirements for the proposed algorithm to be successful. To address this issue, a discussion follows in Section 3.4.

Figure 7: Comparison of the ∆P predicted by the GP-based model against the actual data.
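The accuracy check described above can be reproduced in outline with any standard GP regressor. The snippet below is a schematic of the 75%-25% split and the 95% confidence-interval coverage metric, written with scikit-learn for illustration; X and y stand for the DOE design matrix and the simulated pressure drops, and this is not necessarily the surrogate implementation used in the actual framework.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# X: (n_designs, 5) feature matrix from the DOE, y: simulated pressure drops (kPa).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

mean, std = gp.predict(X_test, return_std=True)
inside_ci = np.abs(y_test - mean) <= 1.96 * std   # points within the 95% CI
coverage = inside_ci.mean()
print(f"{100 * coverage:.0f}% of the test designs fall within the 95% CI")
```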
3.3 Performance of Bayesian Optimization (BO)

The convergence of the BO algorithm against the iterations and the initial DOE information is shown in Fig. 8(a). The algorithm, which uses a surrogate built with 100 initial designs, finds an optimum within a few iterations, followed by an occasional (unsuccessful) exploitation of the design space indicated by the peaks in the convergence curve. The best design (Fig. 8(b)), with the corresponding pressure and velocity fields (Figs. 8(c) and (d)), provided by the initial DOE exhibits a ∆P of 1.46 kPa. The optimization algorithm is able to reduce it further to 1.3 kPa with a design (and pressure and velocity fields) shown in Figs. 8(e), (f), and (g). A comparison between the two designs in Figs. 8(b) and (e) reveals the impact of BO on producing a more aerodynamic design, resulting in an improved ∆P.

Figure 8: (a) Convergence of the BO algorithm. (b), (c), and (d) Best design, the corresponding pressure and velocity field, respectively, provided through the DOE. (e), (f), and (g) Optimized design, the corresponding pressure and velocity field, respectively.

3.4 Evaluation of the Optimization Algorithm
The optimization result in the preceding section could reduce the pressure drop by 0.1 kPa using the information from 100 initial designs. In some practical scenarios, evaluating 100 simulations may not be feasible. Therefore, the capability of the BO algorithm to work with less information needs to be studied. In order to do that, the optimization is now carried out using four instances of less initial data by testing the functionality of the BO algorithm with 75, 50, 25, and 0 initial designs. The omission of the designs in each instance is such that the best designs are removed, thereby providing incrementally low information about the optimal solution to the BO algorithm. The best designs obtained through these simulations are shown in Fig. 9 and the convergence rates are shown in Fig. 10. The features for all the optimized designs are tabulated in Table 1.

Figure 9: Optimized designs for cases with (a) 75, (b) 50, (c) 25, and (d) 0 initial designs prior to optimization, with their respective pressure fields in (e), (f), (g), and (h), and velocity fields in (i), (j), (k), and (l).

Figure 10: Convergence plot for BO with (a) 75, (b) 50, (c) 25, and (d) 0 initial designs.
Table 1: Features of the optimized designs for different combinations of initial designs and BO steps

# DOE Designs (BO Steps) | r2 (mm) | r3 (mm) | θ∗ (rad.) | S/Dy | X/Dx | ∆P (kPa)
100 (50)                 | 0.1     | 0.74    | 2.8       | 3    | 2.3  | 1.3
75 (50)                  | 0.1     | 1.0     | 2.6       | 3    | 2.26 | 1.3
50 (50)                  | 0.17    | 0.75    | 2.8       | 3    | 2.04 | 1.3
25 (50)                  | 0.1     | 1.0     | 2.7       | 3    | 2.24 | 1.27
0 (75)                   | 1.0     | 0.1     | 0.49      | 3    | 3    | 1.25
0 (50)                   | 1.0     | 0.1     | 0.49      | 3    | 3    | 1.25
0 (25)                   | 0.62    | 1.0     | 2.86      | 3    | 2    | 1.4

The optimized designs (Figs. 9(a)-(d)) tend to approach a similar shape for all the instances, indicating the presence of a global optimum for this particular problem. The similarity in the performance of these designs can be further compared with the pressure and velocity fields in Figs. 9(e)-(l). The subtle differences in them can be studied through the numerical values in Table 1.
The designs, however, do not reveal the intricacies of how the algorithm approached the optima. That behavior is better exemplified by the convergence rates in Fig. 10. With 75 initial designs (Fig. 10(a)), the behavior of BO is almost similar to the previous case with all the information (Fig. 8(a)). With 50 designs (Fig. 10(b)), more understanding of BO can be inferred. By comparing Figs. 10(a) and (b), one can notice that the first prediction of the BO algorithm is almost similar in both cases, even after removing the 25 best designs. This implies that the underlying GP learnt by BO with 75 and 50 designs is similar in its functional form. This interpretation is further emphasized by comparing with Fig. 10(c), where BO gradually moves towards an optimal design until 40 iterations, thereby indicating that the GP needed more than 25 designs to make a better informed decision. With the final case of 0 initial designs (Fig. 10(d)), the convergence is not as steady as in the previous cases. Since there is no initial data for this case, multiple instances with different limitations on the maximum allowable iterations are conducted.
The results for BO (25), BO (50), and BO (75) in Fig. 10(d) exemplify the random nature of convergence for these simulations. Moreover, the intermittent peaks in ∆P that correspond to the exploitation phase in the optimization have a larger variance than in the previous cases due to the unavailability of data. The predicted optima, however, are still close to the previous cases, indicating the intelligent sampling procedure of BO. However, the predictions from such optimizations have a high probability of exploring local optima and are therefore unreliable. On average, the BO algorithm is able to improve ∆P by more than 1 kPa as compared to the best design provided by the DOEs in all the cases.

3.5 Sensitivity Analysis

From a design and manufacturing point of view, it is essential to understand the relative impact of the features on the performance. Moreover, exploring the functional forms learnt by the GP can further help in understanding the system behavior. Hence, a global and a local sensitivity analysis are now performed. The global analysis is essential to understand the impact of the features, whereas the functional forms from the GP can only be understood in a local context due to the multi-parametric nature of the problem. A SHAP (SHapley Additive exPlanations) analysis is performed to understand the global sensitivity of the features. SHAP is a method from coalitional game theory, developed to understand the individual impact of all the features in a prediction [19].
Visually, the interpretation from this analysis can be presented in two forms, viz. (i) using a bar chart, as shown in Fig. 11(a), and (ii) using a summary plot, as shown in Fig. 11(b). Both figures reveal important information about the feature behavior. The bar graph indicates the relative impact of the features on ∆P. The X-axis of the graph shows the mean SHAP values that denote the average contribution of the features towards ∆P. For example, for S/Dy, the mean SHAP value of 0.48 indicates that S/Dy contributes 0.48 kPa to the total ∆P predicted by the GP surrogate. Fig. 11(a) shows S/Dy to be the most dominant factor influencing ∆P, whereas X/Dx has the least impact. Among r2, r3, and θ∗ (the three features that create a shape), θ∗ has the largest influence on ∆P. Although this information is useful, it is impossible from the bars in Fig. 11(a) to interpret how these features impact the outcome. For example, the bar chart does not tell whether increasing or decreasing S/Dy is beneficial. This shortcoming is addressed through the summary plot in Fig. 11(b). The summary plot shows a scatter of color-coded violins across the X-axis for the different features.
The colors represent the relative magnitude of the features, and the X-axis is the SHAP value. The length of the scatter indicates the relative influence. For example, for S/Dy, the scatter is the largest, indicating that it has the largest influence on the output. The higher magnitudes of S/Dy (red color) are towards the left end of the spectrum, indicating that a higher S/Dy would reduce ∆P. This interpretation is also aligned with the optimized features (Table 1), where all designs have converged to the maximum possible S/Dy to reduce ∆P.

Figure 11: (a) A bar plot showing the relative absolute impact of the features. (b) A summary plot revealing the impact of the features on the output with respect to the changes in feature magnitudes.

To understand the local sensitivity, the behavior of the surrogate models is studied for the optimized design obtained with 100 initial points. Fig. 12 shows the variation of each feature with ∆P as learned by the GP. To compute the variation for each feature, all other features are held constant at the optimized value indicated by BO. Therefore, the functional forms are heavily influenced by the constant feature values, and the analysis is thereby termed local. Even so, the variations are useful in understanding the impact on the optimized design. All the features except θ∗ show a monotonic variation with ∆P. The periodic variation in θ∗ alludes to a symmetry that may be embedded in the CFD model.
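As a rough illustration of the two analyses described above, the sketch below computes SHAP values for a generic GP surrogate of ∆P and then performs the one-dimensional local sweeps around an "optimized" design. This is only a minimal sketch, not the authors' implementation: the feature bounds, the synthetic DOE data, and the chosen optimum are placeholders standing in for the CFD-trained surrogate and the BO result.

import numpy as np
import shap
from sklearn.gaussian_process import GaussianProcessRegressor

# Toy stand-ins for the real study: bounds for (r2, r3, theta*, X/Dx, S/Dy),
# a small synthetic DOE, and a synthetic pressure-drop response.
feature_names = ["r2 (mm)", "r3 (mm)", "theta* (rad)", "X/Dx", "S/Dy"]
bounds = np.array([[0.2, 1.0], [0.2, 1.0], [0.0, 3.1], [2.0, 3.0], [2.0, 3.0]])
rng = np.random.default_rng(0)
X_train = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 5))
dp_train = 3.0 - 0.6 * X_train[:, 4] + 0.2 * np.cos(2 * X_train[:, 2]) + 0.1 * X_train[:, 0]

# GP surrogate of the pressure drop (placeholder for the surrogate trained on CFD data).
gp = GaussianProcessRegressor(normalize_y=True).fit(X_train, dp_train)

# Global sensitivity: SHAP values of the surrogate prediction (Fig. 11-style plots).
explainer = shap.KernelExplainer(lambda X: gp.predict(X), X_train)
shap_values = explainer.shap_values(X_train)
shap.summary_plot(shap_values, X_train, feature_names=feature_names, plot_type="bar")  # bar chart
shap.summary_plot(shap_values, X_train, feature_names=feature_names)                   # summary plot

# Local sensitivity: sweep one feature at a time, the others held at an "optimized" design
# (here simply the best training point; in the paper it comes from BO).
x_opt = X_train[np.argmin(dp_train)]
local_curves = {}
for j, name in enumerate(feature_names):
    sweep = np.linspace(bounds[j, 0], bounds[j, 1], 50)
    X_sweep = np.tile(x_opt, (50, 1))
    X_sweep[:, j] = sweep
    local_curves[name] = (sweep, gp.predict(X_sweep))  # Fig. 12-style curve for this feature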
Among all the features, the relative total variation in ∆P indicates the impact of the feature on the outcome. As identified from the SHAP analysis, S/Dy again causes the maximum variation in ∆P, indicating its dominant impact. The star indicates the feature value in the optimized design. The sensitivity analysis therefore provides a comprehensive relationship between the objective and the features, which ultimately aids in the design and manufacturing phases. The SHAP analysis provides a toolkit for varying features to satisfy the objective, i.e., setting a high value of S/Dy in this case.

[Figure 11 graphic: (a) bar chart of mean(SHAP value) — S/Dy 0.48, θ∗ (rad.) 0.23, r2 (mm) 0.16, r3 (mm) 0.14, X/Dx 0.06; (b) SHAP summary plot, X-axis "SHAP value (impact on model output)", color scale High/Low feature value.]

Once an optimal design is selected, the local sensitivity analysis helps in identifying the features that need to be monitored (or controlled) more strictly than others, depending on their impact on the outcome.

Figure 12: Local feature sensitivity on ∆P for (a) r2, (b) r3, (c) θ∗, (d) X/Dx, and (e) S/Dy.

4 Conclusion and Future Work

The article presents a unique piece-wise cubic spline based framework for featurizing pin fins. An optimization problem for computing the pin fin arrays with minimum pressure drop is set up using a CFD model coupled with a surrogate-based Bayesian optimization approach. The optimized designs are observed to follow an aerodynamic shape, leading to a reduction in the pressure drop. The capability of the BO framework is further tested with low initial information. The optimization is observed to efficiently find an optimum design with 25-50 initial data points. Furthermore, a sensitivity analysis is performed to reveal S/Dy to be the most dominant feature influencing the pressure drop. Knowledge of the minimum number of designs needed for optimization, coupled with the sensitivity analysis, provides valuable information to design engineers. The convergence to an aerodynamic shape with piece-wise cubic splines shows promise and will be explored further to test the capabilities of the method.
With a higher number of splines, more complex shapes emulating some of the tested prototypes [5] can be generated. The mathematical setup of the pin fin designs also provides opportunities to include the shape distortion that has been observed in additively manufactured specimens [6]. Studies on the modelling and impact of such shape distortions will also be conducted for optimization. Geometrical constraints to compensate for these effects will make this approach more impactful and application-oriented.

[Figure 12 graphic: ∆P (kPa), roughly 1.0–3.5, plotted against (a) r2 (mm), (b) r3 (mm), (c) θ∗ (radians), (d) X/Dx, and (e) S/Dy.]

In addition to that, an extension of the method to three dimensions will also be pursued in the future. An imperative part of the current approach is the symmetry condition in the CFD model, which in theory implies infinite arrays of pins and an unbounded domain. To improve the predictions further, a bounded simulation emulating the actual testing environment will be conducted after finding the optimal pin fin shape. Moreover, the current method only tackles the pressure drop minimization problem. In the future, studies will also be conducted to perform a multi-objective optimization targeted towards enhancing heat transfer while reducing pressure drop. Experimental investigations will be performed to validate the efficacy of the newly developed framework, and multi-fidelity modeling [20] will be pursued to intelligently blend experimental data with numerical data.
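For concreteness, the sketch below shows one simple way the multi-objective extension mentioned above could be set up: a weighted-sum scalarization over two independent GP surrogates, one for pressure drop and one for a heat-transfer metric. This is purely an illustrative sketch under assumed data and weights, not the approach chosen by the authors.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Two independent GP surrogates on the same (normalized) design features:
# one for the pressure drop and one for a heat-transfer metric. All data here
# are synthetic placeholders; the weight and rescaling are illustrative choices.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(30, 5))
dp = 3.0 - 1.5 * X[:, 4] + 0.2 * rng.standard_normal(30)    # toy pressure drop (kPa)
nu = 40.0 + 10.0 * X[:, 4] + 1.0 * rng.standard_normal(30)  # toy heat-transfer metric

gp_dp = GaussianProcessRegressor(normalize_y=True).fit(X, dp)
gp_nu = GaussianProcessRegressor(normalize_y=True).fit(X, nu)

def scalarized(x, w=0.5):
    """Weighted-sum trade-off: low pressure drop and high heat transfer."""
    x = np.atleast_2d(x)
    # The 1/10 factor is an ad hoc rescaling so both terms have comparable magnitude.
    return w * gp_dp.predict(x) - (1.0 - w) * gp_nu.predict(x) / 10.0

# Rank a random pool of candidate designs by the scalarized objective (lower is better);
# a real BO loop would instead optimize an acquisition function built on both surrogates.
candidates = rng.uniform(0.0, 1.0, size=(500, 5))
best_design = candidates[np.argmin(scalarized(candidates))]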
Credit Authorship
Conceptualization, A.B., K.A.T., R.A.B., and S.D.; methodology, S.D.; software, S.D.; validation, S.D.; formal analysis, S.D.; investigation, S.D., A.B., K.A.T., and R.A.B.; resources, A.B.; data curation, S.D.; writing—original draft preparation, S.D. and A.B.; writing—review and editing, A.B., K.A.T., R.A.B., and S.D.; visualization, S.D.; supervision, A.B., K.A.T., and R.A.B.; project administration, K.A.T. and A.B.; funding acquisition, K.A.T., A.B., and R.A.B. All authors have read and agreed to the published version of the manuscript.

Acknowledgement
The authors would like to thank Ritam Pal and Nandana Menon, PhD Students, Mechanical Engineering, Penn State, for their help with CFD modelling and Bayesian Optimization, respectively, and Evan Mihalko, PhD Student, Mechanical Engineering, Penn State, for proof-reading the manuscript.

Funding Information
The research is funded by the NASA University Leadership Initiative program through grant number 80NSSC21M0068. Any opinions, findings, and conclusions in this paper are those of the authors and do not necessarily reflect the views of the supporting institution.

Data Availability Statement
The data are available from the communicating author on reasonable request.

Conflicts of Interest
The authors declare no conflict of interest.

References
[1] Jason Town, Douglas Straub, James Black, Karen A Thole, and Tom IP Shih. State-of-the-art cooling technology for a turbine rotor blade. Journal of Turbomachinery, 140(7):071007, 2018.
[2] ME Taslim, L Setayeshgar, and SD Spring. An experimental evaluation of advanced leading edge impingement cooling concepts. J. Turbomach., 123(1):147–153, 2001.
[3] Marcel Otto, Justin Hodges, Gaurav Gupta, and Jayanta S Kapat.
Vortical structures in pin fin arrays for turbine cooling applications. In Turbo Expo: Power for Land, Sea, and Air, volume 58646, page V05AT16A003. American Society of Mechanical Engineers, 2019.
[4] MK Chyu, CH Yen, and S Siw. Comparison of heat transfer from staggered pin fin arrays with circular, cubic and diamond shaped elements. In Turbo Expo: Power for Land, Sea, and Air, volume 47934, pages 991–999, 2007.
[5] Katharine K Ferster, Kathryn L Kirsch, and Karen A Thole. Effects of geometry, spacing, and number of pin fins in additively manufactured microchannel pin fin arrays. Journal of Turbomachinery, 140(1), 2018.
[6] Thomas M Corbett, Karen A Thole, and Sudhakar Bollapragada. Impacts of pin fin shape and spacing on heat transfer and pressure losses. Journal of Turbomachinery, 145(5):051014, 2023.
[7] Shinjan Ghosh, Sudeepta Mondal, Jayanta S Kapat, and Asok Ray. Shape optimization of pin fin arrays using Gaussian process surrogate models under design constraints. In Turbo Expo: Power for Land, Sea, and Air, volume 84164, page V07AT15A021. American Society of Mechanical Engineers, 2020.
[8] Sinan Eyi, Kyle M Hanquist, and Iain D Boyd. Aerothermodynamic design optimization of hypersonic vehicles. Journal of Thermophysics and Heat Transfer, 33(2):392–406, 2019.
[9] Sinan Eyi, Kyle M Hanquist, and Iain D Boyd. Shape optimization of reentry vehicles to minimize heat loading. Journal of Thermophysics and Heat Transfer, 33(3):785–796, 2019.
[10] Sebastian Willeke and Tom Verstraete. Adjoint optimization of an internal cooling channel U-bend. In Turbo Expo: Power for Land, Sea, and Air, volume 56710, page V05AT11A029. American Society of Mechanical Engineers, 2015.
[11] Shinjan Ghosh and Jayanta S Kapat. Topology optimization of serpentine channels for minimization of pressure loss and maximization of heat transfer performance as applied for additive manufacturing. In Turbo Expo: Power for Land, Sea, and Air, volume 58653, page V05BT21A006. American Society of Mechanical Engineers, 2019.
[12] Sumer B Dilgen, Cetin B Dilgen, David R Fuhrman, Ole Sigmund, and Boyan S Lazarov. Density based topology optimization of turbulent flow heat transfer systems. Structural and Multidisciplinary Optimization, 57(5):1905–1918, 2018.
[13] Giampietro Fabbri. A genetic algorithm for fin profile optimization. International Journal of Heat and Mass Transfer, 40(9):2165–2172, 1997.
[14] Nawaf Hamadneh, Waqar A Khan, Saratha Sathasivam, and Hong Choon Ong. Design optimization of pin fin geometry using particle swarm optimization algorithm. PLoS ONE, 8(5):e66080, 2013.
[15] Stephen P Lynch, Karen A Thole, Atul Kohli, and Christopher Lehane. Computational predictions of heat transfer and film-cooling for a turbine blade with nonaxisymmetric endwall contouring. Journal of Turbomachinery, 133(4), 2011.
[16] Carl Edward Rasmussen. Gaussian processes in machine learning. In Summer School on Machine Learning, pages 63–71. Springer, 2003.
[17] Peter I Frazier. A tutorial on Bayesian optimization. arXiv preprint arXiv:1807.02811, 2018.
[18] Sintu Singha and KP Sinhamahapatra. Flow past a circular cylinder between parallel walls at low Reynolds numbers. Ocean Engineering, 37(8-9):757–769, 2010.
[19] Christoph Molnar. Interpretable machine learning. Lulu.com, 2020.
[20] Nandana Menon, Sudeepta Mondal, and Amrita Basak. Multi-fidelity surrogate-based process mapping with uncertainty quantification in laser directed energy deposition. Materials, 15(8):2902, 2022.
diff --git a/O9E2T4oBgHgl3EQfVQdj/content/tmp_files/2301.03821v1.pdf.txt b/O9E2T4oBgHgl3EQfVQdj/content/tmp_files/2301.03821v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..b4240ee677e176c000fb46d4d4b7636cdeab68e1
--- /dev/null
+++ b/O9E2T4oBgHgl3EQfVQdj/content/tmp_files/2301.03821v1.pdf.txt
@@ -0,0 +1,4447 @@
+arXiv:2301.03821v1 [math.AG] 10 Jan 2023
+POINCARÉ DUALITY REVISITED
+BOGDAN ZAVYALOV
+Abstract. We revisit Poincaré Duality in the context of an abstract 6-functor formalism. In particular, we provide a small list of assumptions that implies Poincaré Duality. As an application, we give new uniform (and essentially formal) proofs of some previously established Poincaré Duality results.
+Contents
+1. Introduction 1
+2. Abstract six functor formalisms 12
+3. Abstract Poincaré Duality 23
+4. Dualizing object 29
+5. First Chern classes 39
+6. Poincaré Duality in examples 59
+References 68
+1. Introduction
+1.1. Historical overview.
+1.1.1. Six functor formalisms. Historically, the first 6-functor formalism was introduced by A. Grothendieck in [SGA IV] in the context of étale cohomology of Spec Z[1/n]-schemes. To explain what this means, we note that étale cohomology comes with the assignment X ↦ D(X) = D(X_ét; Z/nZ) that sends a Spec Z[1/n]-scheme X to the derived category of étale sheaves of Z/nZ-modules on X. This recovers the absolute étale cohomology via the formula
+RΓ(X_ét; Z/nZ) ≃ RHom_{D(X_ét; Z/nZ)}(Z/nZ_X, Z/nZ_X).
+It turns out that this assignment comes equipped with 6-operations (f^*, Rf_*, ⊗^L, RHom, Rf_!, Rf^!) that satisfy the following list of "axioms":
+Axioms 1.1.1.
+(1) the tensor product ⊗^L defines the structure of a symmetric monoidal category on D(X);
+(2) every second functor is right adjoint to the previous one;
+(3) the pullback functor f^* is symmetric monoidal;
+(4) Rf_! commutes with base change;
+(5) Rf_! satisfies the projection formula.
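Since Axioms (4) and (5) are stated only in words here, it may help to record, as a non-authoritative gloss in the notation of the list above, how they are usually written as natural isomorphisms. For a Cartesian square with f: X → Y, g: Y' → Y and base changes f': X' → Y', g': X' → X, and for F in D(X), G in D(Y):

\[
  g^{*} Rf_{!}\,\mathcal{F} \;\simeq\; Rf'_{!}\, g'^{*}\mathcal{F},
  \qquad
  Rf_{!}\bigl(\mathcal{F} \otimes^{L} f^{*}\mathcal{G}\bigr) \;\simeq\; Rf_{!}\mathcal{F} \otimes^{L} \mathcal{G}.
\]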
+1 + +2 +BOGDAN ZAVYALOV +Since then, it turned out that many other cohomology theories come equipped with the cor- +responding 6-functor formalisms (e.g. D-modules, mixed Hodge modules, etc.). More precisely, it +often happens that interesting cohomology theories admit “coefficient” theories X �→ D(X) accom- +panied by 6-operations1 � +f ∗, f∗, ⊗, Hom, f!, f !� +satisfying the same set of axioms and recovering the +corresponding cohomology complexes via the formula +RΓ(X) = HomD(X)(1X, 1X), +where 1X is the unit object of D(X). +However, it is somewhat difficult to make the definition of a 6-functor formalism precise. To +point out the main difficulty, we stick our attention to the projection formula. For a morphism +f : X → Y , there is no canonical morphism between f! (F ⊗ f ∗G) and f! F ⊗ G, so Axiom 1.1.1(5) +should really specify, for every f : X → Y , an isomorphism +f!(− ⊗ f ∗−) ≃ f!(−) ⊗ − +of functors D(X) × D(Y ) → D(Y ). +Then the natural question is how functorial this isomorphism is, how well it interacts with +composition of morphisms or base change, and etc, etc. Answering these questions would involve +further choices of equivalences between equivalences that we would like to also be functorial in some +precise way. But these higher coherences are pretty difficult to spell out explicitly making it hard +to give a precise definition of a 6-functor formalism. +This problem has been recently beautifully resolved by defining2 a 6-functor formalism to be an +∞-functor D: Corr → Cat∞ from the appropriate category of correspondences to the ∞-category of +∞-categories. This idea originally goes back to J. Lurie, and was first spelled out by D. Gaitsgory +and N. Rozenblyum in [GR17]. +Unfortunately, some of their claims still seem to be unproven, +so we instead use a recent (weaker) version of the formalization of a 6-functor formalism due to +L. Mann [Man22b] (based on the work of Y. Liu and W. Zheng, see [LZ17]). We review this theory +in Section 2. +1.1.2. Recent examples of six functors. Recently, there has been a huge rise of interest in construct- +ing new 6-functor formalisms (see [LZ17], [Sch17], [CS19], [CS22], [Man22b], [Man22a]). What +unites all these examples (and all interesting previous examples) is that they all satisfy a version of +Poincar´e Duality. Namely, in each of these 6-functor formalisms, any smooth morphism f : X → Y +admits an invertible object ωf ∈ D(X) and an equivalence +f !(−) ≃ f ∗(−) ⊗ ωf +of functors D(Y ) → D(X). Furthermore, in most of these examples, it is possible to give an easy +formula for the dualizing object ωf. +Despite this similarity, the proofs of Poincar´e Duality in each particular context are pretty hard +and require a lot of work specific to each situation. As far as we are aware, there is no uniform +approach. +The main goal of this paper is to provide a uniform approach to the question of proving Poincar´e +Duality, also simplifying previously existing proofs. However, before we discuss our results, we wish +to discuss two examples of the proofs of Poincar´e Dualities in more detail to show how each of these +6-functor formalisms depends on the specifics of the situation. +1From now on, we will follow the notation that supresses R’s except for the RΓ notation. +2We refer to Definition 2.3.10 for the actual definition of a 6-functor formalism used in this paper. + +POINCAR´E DUALITY REVISITED +3 +First Example (ℓ-adic ´etale sheaves in analytic geometry) In [Sch17], P. 
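To keep the later statements easier to follow, here is the shape of the Poincaré Duality equivalence referred to above, written out explicitly; this is only a restatement of the displayed formulas for a smooth morphism f: X → Y with invertible dualizing object ω_f, not an addition to them:

\[
  f^{!}(\mathcal{G}) \;\simeq\; f^{*}(\mathcal{G}) \otimes \omega_{f},
  \qquad\text{equivalently}\qquad
  \operatorname{Hom}_{D(Y)}\bigl(f_{!}\mathcal{F}, \mathcal{G}\bigr)
  \;\simeq\;
  \operatorname{Hom}_{D(X)}\bigl(\mathcal{F}, f^{*}\mathcal{G} \otimes \omega_{f}\bigr).
\]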
Scholze proves a (weaker) +version of Poincar´e Duality for (ℓ-adic) ´etale sheaves on diamonds (see [Sch17, Prop. 24.4]). Using +standard reductions, it suffices to consider the case of the relative unit ball D1 +X → X over a diamond +X. In this case, approximation arguments and the comparison with Huber’s theory to reduce the +question to the usual ´etale Poincar´e Duality for D1 +X → X over a strongly noetherian analytic adic +space X that has been established before by R. Huber in [Hub96, Thm. 7.5.3]. Therefore, the crux +of the argument lies in the proof of [Hub96, Thm. 7.5.3] that we discuss in more detail now. +Huber’s proof of Poincar´e Duality follows the strategy of proving Poincar´e Duality in ´etale +cohomology of schemes: one first constructs the trace map by reducing to the case of curves, and +then one proves Deligne’s fundamental lemma. We note that both steps are specific to ´etale sheaves +and use almost all prior results established in the book. Furthermore, the proof of the adic version +of Deligne’s fundamental lemma uses non-trivial results from the theory of ´etale cohomology of +schemes, making the proof not intrinsic to adic spaces. One extra difficulty in Huber’s proof is the +need to work with fibers over points of higher ranks: these fibers do not admit any structure of an +adic space and so these fibers can be treated only in a somewhat artifical way. +Remark 1.1.2. We note [Sch17] is logically independent of [Hub96] except for the two facts: +quasi-compact base schange (see [Hub96, Thm. 4.1.1(c)]) and Poincar´e Duality. Therefore, it seems +desirable to give proofs of these facts entirely in the realm of diamonds making [Sch17] independent +of [Hub96]. We do not have anything to say about the first question, but Theorem 1.3.2 provides +a new soft proof of Poincar´e Duality that is essentially independent of the results in [Hub96]. +Second Example (solid almost O+/p-ϕ-modules) Another example that we want to consider in +more detail is the 6-functor formalism of “solid almost O+/p-ϕ-modules” +X �→ Da +□(X; O+ +X/p)ϕ +developed by L. Mann in [Man22b]. +This 6-functor formalism satisfies Poincar´e Duality (see +[Man22b, Thm. 3.10.20]). In order to prove this, Mann reduces the general question to the case of +the torus T1 +X → X over a strictly totally disconnected X. In this situation, he proves a strong ver- +sion of v-descent for Da +□(A+/p), and then argues by choosing a formal model of T1 and performing +explicit computations related to the Faltings trace map to reduce the question to (solid almost) +Grothendieck duality on the mod-p fiber of the formal model. +This argument is also specific to this particular 6-functor formalism: the formal model consid- +erations are not available in most other geometric situations and the reduction to Grothendieck +duality is very specific to the p-adic situation. +In this paper, we give a soft proof of Poincar´e duality for Da +□(X; O+ +X/p)ϕ that (essentially) only +uses the computation of the cohomology groups of the projective line. +1.2. Our results. +1.2.1. Formulation of the questions. We fix a base scheme (resp. locally noetherian analytic adic +space) S, C the category of locally finitely presented S-schemes (resp. +locally finite type adic +S-spaces), and 6-functor formalism D: Corr(C) → Cat∞ (see Definition 2.3.10). +As mentioned in Section 1.1.2, all interesting examples of 6-functor formalisms satisfy Poincar´e +Duality. 
In order to make this precise, we follow [Sch17] and introduce the following terminology: +Definition 1.2.1. (Definition 2.3.6 and Definition 2.3.7) A morphism f : X → Y is called weakly +cohomologically smooth (with respect to D) if + +4 +BOGDAN ZAVYALOV +(1) the co-projection morphism f ! (1Y ) ⊗ f ∗(−) → f !(−) from Notation 2.1.5(2) is an equiva- +lence; +(2) the dualizing object ωf := f ! (1Y ) is an invertible object of D(X), and it commutes with an +arbitrary base change Y ′ → Y , i.e., for any Cartesian diagram in C +X′ +X +Y ′ +Y, +g′ +f′ +f +g +the natural morphism (g′)∗f ! (1Y ) → f ′! (1Y ′) from Notation 2.1.5(3) is an isomorphism. +A morphism f : X → Y is called cohomologically smooth (with respect to D) if, for any morphism +g: Y ′ → Y in C, the base change f ′ : X′ → Y ′ is weakly cohomologically smooth. +Then the question of proving Poincar´e Duality reduces to the following two (essentially indepen- +dent) questions: +Question 1.2.2. What is a minimalistic set of conditions on D that would ensure that any smooth +morphism f : X → Y is cohomologically smooth (with respect to D)? +Question 1.2.3. If every smooth morphism is cohomologically smooth, is there a reasonable for- +mula for the dualizing object ωf? Is there a minimalistic set of conditions on D that would ensure +that ωf is equal to the Tate twist (appropriately defined)? +The main goal of this paper is to give positive answers to both questions. +Our answer to +Question 1.2.2 is optimal: it gives a characterization of all such D. For Question 1.2.3, it seems +harder to get an optimal answer; however, we give some results that cover all interesting examples +of 6-functors established up until the present moment. +Remark 1.2.4. Somewhat surprisingly, our answers are uniform for schemes and adic spaces. +Furthermore, the same results can be achieved in any “geometry” satisfying the property that, +for any f : X → Y , the diagonal morphism X → X ×Y X is “locally closed” and admitting a +reasonable notion of vector bundles and blow-ups (e.g. complex-analytic spaces, formal geometry, +derived schemes, etc.). However, it seems hard to make precise what the word “geometry” should +mean, so we stick to the examples of schemes and adic spaces in this paper. +Before we discuss the main results of this paper, we want to point out the main problem in +answering these questions, especially in the situation of an abstract 6-functor formalism. +Suppose that we have somehow guessed the correct formula for the dualizing object ωf. +So +the question of proving Poincar´e Duality essentially boils down to the question of constructing an +isomorphism +HomD(Y ) (F, f ∗G ⊗ ωf) ≃ HomD(X) (f! F, G) , +functorial in F ∈ D(X) and G ∈ D(Y ). Now the problem is that we do not have almost any control +over the categories D(X) and D(Y ) for a general 6-functor formalism D. This is probably not a big +issue in the classical 6-functor formalisms, but this becomes a serious issue in the recent 6-functor +formalisms (for example, [CS22] or [Man22b]), where the categories D(X) are defined abstractly +via descent so one does not have good control over D(X) for a general X. +Therefore, the main problem is to prove adjunction without really understanding the involved +categories. Miraculously, it turns out to be possible, as we explain in the next section. + +POINCAR´E DUALITY REVISITED +5 +1.2.2. Our Answers. Now we are ready to discuss the answers to Questions 1.2.2 and 1.2.3 that we +obtain in this paper. 
To answer Questions 1.2.2, we separate the exact conditions needed to prove +Poincar´e duality for one particular morphism f. We do this via the concept of a trace-cycle theory. +For this, we fix a morphism f : X → Y with the diagonal morphism +∆: X → X ×Y X +and the projections p1, p2 : X ×Y X → X. +Definition 1.2.5. (Definition 3.2.4) A trace-cycle theory on f is a triple (ωf, trf, cl∆) of +(1) an invertible object ωf ∈ D(X), +(2) a trace morphism +trf : f! ωf → 1Y +in the homotopy category D(Y ), +(3) a cycle map +cl∆ : ∆!1X −→ p∗ +2 ωf +in the homotopy category D(X ×S X) +such that +1X +p1,! (∆!1X) +1X +p1,! (p∗ +2 ωf) , +∼ +id +p1,!(cl∆) +trp1 +(1) +ωf +p2,! (p∗ +1ωf ⊗ ∆!1X) +p2,!(p∗ +1ωf ⊗ p∗ +2ωf) +ωf +1X ⊗ ωf +p2,!p∗ +1ωf ⊗ ωf, +∼ +id +p2,!(id⊗cl∆) +≀ +∼ +trp2 ⊗id +(2) +commute3 in D(X) (with the right vertical arrow in the second diagram being the projection formula +isomorphism). +Theorem 1.2.6. (Theorem 3.3.1, Remark 3.3.2) Let f : X → Y be a morphism in C. Then f is +cohomologically smooth if and only if f admits a trace-cycle theory (ωf, trf, cl∆). +Remark 1.2.7. The main point of Theorem 1.2.6 is that it allows us to “decategorify” the question +of Poincar´e Duality and reduce it to the question of constructing two morphisms and verifying +commutativity of two diagrams. In particular, one does not need to understand the categories +D(X) and D(Y ) itself (only maps between very specific objects). +Theorem 1.2.6 is sufficiently strong to answer Question 1.2.2 in full generality: +Theorem 1.2.8. (Theorem 3.3.3) The relative projective line g: P1 +S → S admits a trace-cycle +theory (ωg, trg, cl∆) if and only if every smooth morphism f : X → Y is cohomologically smooth +(with respect to D). +3See Construction 3.2.2 for the precise definition of trpi. Roughly, it is just the corresponding base change of trf. + +6 +BOGDAN ZAVYALOV +Theorem 1.2.8 implies that, in the presense of a trace-cycle theory on the relative projective line, +the question of proving the full version of Poincar´e Duality boils down to the question of computing +the dualizing object ωf = f !1Y for any smooth morphism f : X → Y . +In general, this is a pretty hard question. To see that there could not be any “trivial” formula +for the dualizing object, one could think about the case of the (solid) quasi-coherent 6-functor +formalism D□(−; O) on locally finite type (derived) Z-schemes (see [CS19]). In this situation, for +a smooth morphism f : X → Y of pure dimension d, the dualizing object is given by Ωd +X/Y [d]. In +particular, this object remembers the geometry of f in a non-trivial way. +Nevertheless, we are able to give a formula for the dualizing object for any smooth morphism +f : X → Y under some extra assumptions on the 6-functor formalism D. For the next construction, +we assume that all smooth morphisms are cohomologically smooth with respect to D. +Construction 1.2.9. (Variant 4.1.3) Let f : VX(E) → X be the total space of a vector bundle E +on X with the zero section s: X → VX(E). Then we define CX(E) ∈ D(X) as +CX(E) = s∗f !1X ∈ D(X). +Theorem 1.2.10. (Theorem 4.2.8 and Theorem 4.2.12) Suppose the 6-functor formalism D is +motivic or geometric (see Definition 4.2.1 and Definition 4.2.9). +Let f : X → Y be a smooth +morphism. Then there is a canonical isomorphism +f !1Y ≃ CX(Tf) ∈ D(X), +where Tf is the relative tangent bundle of f. +Remark 1.2.11. 
Theorem 1.2.8 implies that any A1-invariant 6-functor formalism (see Defini- +tion 2.1.10) with a trace-cycle theory on the relative projective line P1 +S → S is motivic in the sense +of Definition 4.2.1. In particular, Theorem 1.2.10 applies in this case. +Theorem 1.2.10 answers the first part of Question 1.2.3, at least under some further assumptions +on D. Now we discuss the second part of Question 1.2.3. The main tool in answering this question +will be the notion of first Chern classes. To introduce an abstract notion of first Chern classes, we +need to introduce some notation. +Notation 1.2.12. For the rest of this section, we fix an invertible object 1S⟨1⟩ ∈ D(S). For each +f : X → S, we define +1X⟨1⟩ := f ∗1S⟨1⟩ ∈ D(X). +For each integer d ≥ 0, we define +1X⟨d⟩ := 1X⟨1⟩⊗d ∈ D(X). +For d ≤ 0, we define 1X⟨d⟩ := 1X⟨−d⟩∨ ∈ D(X). +Definition 1.2.13. (Definition 5.2.4, Definition 5.2.8) A weak theory of first Chern classes on a +6-functor formalism D is a morphism4 of Sp-valued sheaves5 +c1 : RΓan(−, O×)[1] → RΓ(−, 1⟨1⟩): Cop → Sp. +A theory of first Chern classes is a weak theory of first Chern classes c1 such that, for the relative +projective line f : P1 +S → S, the morphism +c1 + f ∗⟨1⟩: 1S ⊕ 1S⟨1⟩ → f∗1P1 +S⟨1⟩. +4See Notation 5.2.3 for the definition of RΓ(−, 1⟨1⟩). +5The definition below is written in the context of adic spaces. +In the case of schemes, one has to replace +RΓan(−, O×)[1] with RΓZar(−, O×)[1]. + +POINCAR´E DUALITY REVISITED +7 +is an isomorphism6. +A strong theory of first Chern classes is a weak theory of first Chern classes c1 such that, for any +integer d ≥ 1 and the relative projective space f : Pd +S → S, the morphism +d +� +k=0 +ck +1⟨d − k⟩: +d +� +k=0 +1S⟨d − k⟩ → f∗1Pd +S⟨d⟩. +is an isomorphism. +Remark 1.2.14. Definition 5.2.8 implies that, if c1 is a theory of first Chern classes, then +1S⟨−1⟩ ≃ Cone +� +1S → f∗1P1 +S +� +. +So the invertible object 1S⟨1⟩ is unique up to an isomorphism, and axiomitizes the “Tate twist”. +Remark 1.2.15. A weak theory of first Chern classes is roughly just a sufficiently functorial +additive way to assign first Chern classes +c1(L) ∈ H0(X, 1X⟨1⟩) +for any line bundle L on a space X. A theory of first Chern classes is a weak theory satisfying the +projective bundle formula for P1 +S → S. A strong theory of first Chern classes is a weak theory of +first Chern classes satisfying the projective bundle formula Pd +S → S for all d ≥ 1. +With that definition at hand, we give an answer to the second part of Question 1.2.3 in the +following two theorems: +Theorem 1.2.16. (Theorem 5.7.7) Let D be a 6-functor formalism satisfying the excision axiom +(see Definition 2.1.8) and admitting a theory of first Chern classes c1. Suppose that f : X → Y is a +smooth morphism of pure relative dimension d. Then the right adjoint to the functor f! : D(X) → +D(Y ) is given by the formula +f !(−) = f ∗(−) ⊗ 1X⟨d⟩: D(Y ) → D(X). +Remark 1.2.17. Theorem 1.2.16 is essentially the best possible answer to Question 1.2.16 in the +presence of the excision axiom. It reduces the question of proving Poincar´e Duality to constructing +a (weak) theory of first Chern classes and computing the cohomology of the projective line. +We also prove a version of Theorem 1.2.16 without assuming that D satisfies the excision axiom. +Unfortunately, this result is not as strong though it seems to be sufficiently strong to apply to the +potential crystalline and prismatic 6-functor formalisms: +Theorem 1.2.18. 
(Theorem 5.7.6) Suppose that a 6-functor formalism D is either A^1-invariant or pre-geometric (see Definition 2.1.10 and Definition 4.2.9). Let c_1 be a strong theory of first Chern classes on D underlying a theory of cycle maps cl_• (see Definition 5.3.3), and let f: X → Y be a smooth morphism of pure relative dimension d. Then the right adjoint to the functor
f_!: D(X) → D(Y)
is given by the formula
f^!(−) = f^*(−) ⊗ 1_X⟨d⟩: D(Y) → D(X).
Remark 1.2.19. The condition that D is pre-geometric is satisfied if, for example, for every space Y and every invertible object L ∈ D(P^1_Y) on the relative projective line f: P^1_Y → Y, there is an invertible object N ∈ D(Y) with an isomorphism f^*N ≅ L.
[Footnote 6] See Construction 5.2.7 for the precise meaning of the morphisms c_1 and c_1^k in the formulas of Definition 1.2.13.
1.3. Applications.
1.3.1. Simplification of the previous proofs. Using Theorem 1.2.16, we can give simpler proofs of previously established Poincaré Dualities.
Firstly, we can give new, easier proofs of étale Poincaré Duality in different settings:
Theorem 1.3.1. ([SGA IV, Exp. XVIII, Thm. 3.2.5], Remark 6.1.9) Let Y be a scheme, f: X → Y a smooth morphism of pure dimension d, and n an integer invertible in O_Y. Then the functor
Rf_!: D(X_ét; Z/nZ) → D(Y_ét; Z/nZ)
admits a right adjoint given by the formula
f^*(−)(d)[2d]: D(Y_ét; Z/nZ) → D(X_ét; Z/nZ).
Theorem 1.3.2. ([Hub96, Thm. 7.5.3], Theorem 6.1.8) Let Y be a locally noetherian analytic adic space, f: X → Y a smooth morphism of pure dimension d, and n an integer invertible in O^+_Y. Then the functor
Rf_!: D(X_ét; Z/nZ) → D(Y_ét; Z/nZ)
admits a right adjoint given by the formula
f^*(−)(d)[2d]: D(Y_ét; Z/nZ) → D(X_ét; Z/nZ).
Remark 1.3.3. Our results are slightly stronger than the classical versions appearing in [SGA IV, Exp. XVIII, Thm. 3.2.5] and [Hub96, Thm. 7.5.3] respectively. Namely, we do not assume that f is separated, and we do not make any boundedness assumptions on the derived categories D(X_ét; Z/nZ) and D(Y_ét; Z/nZ).
Remark 1.3.4. As mentioned in Subsection 1.1.2, this gives a new proof of Poincaré Duality making [Sch17] almost independent of [Hub96].
Before we go into the proofs of Theorem 1.3.1 and Theorem 1.3.2, we mention that these results formally imply a big part of the standard foundational results in the theory of étale cohomology.
Application 1.3.5. (Cohomological purity) If i: X → Y is a (Zariski-)closed immersion of smooth S-schemes (resp. adic spaces) of pure dimensions d_X and d_Y respectively, then
Ri^! Z/nZ ≃ Z/nZ(−c)[−2c],
where c = d_Y − d_X. This follows directly from Poincaré Duality and the isomorphism Ri^! ◦ Rf_Y^! ≃ Rf_X^!, where f_X and f_Y are the structure morphisms; a worked version of this computation is sketched just before Application 1.3.7 below.
Application 1.3.6. (Smooth base change) Theorem 1.3.1 (resp. Theorem 1.3.2) and Proposition 2.3.9 imply smooth base change in étale cohomology. [Footnote 7: We are not aware of any other proof of smooth base change simpler than the original proof in [SGA IV, Exp. XVI, Cor. 1.2]. The classical proof of Poincaré Duality uses smooth base change as an input. Therefore, one cannot deduce smooth base change from the classical proof of Poincaré Duality and Proposition 2.3.9.]
For the next application, we recall that [Zav23, Lemma 10.2] provides a categorical description of the category of constructible sheaves. Namely, it identifies D^{b,≥0}_cons(X_ét; Z/nZ) with the subcategory of compact objects in D^{b,≥0}(X_ét; Z/nZ) for any qcqs scheme (resp. qcqs locally noetherian adic space) X.
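Here is the computation promised in Application 1.3.5, spelled out as a short LaTeX sketch. It takes Theorem 1.3.1 (resp. Theorem 1.3.2) as input, writes f_X: X → S and f_Y: Y → S for the structure morphisms, and assumes the amsmath package; it is included only as an illustration.
% Cohomological purity from Poincaré Duality.
% Input: Rf_Y^!(Z/nZ) = Z/nZ(d_Y)[2d_Y], Rf_X^!(Z/nZ) = Z/nZ(d_X)[2d_X]
% (Theorem 1.3.1 resp. 1.3.2), together with Ri^! \circ Rf_Y^! \simeq Rf_X^!.
\[
  Ri^{!}\bigl(\mathbf{Z}/n\mathbf{Z}\bigr)(d_Y)[2d_Y]
  \;\simeq\;
  Ri^{!}\bigl(\mathbf{Z}/n\mathbf{Z}(d_Y)[2d_Y]\bigr)
  \;\simeq\;
  Ri^{!}\,Rf_Y^{!}\,\mathbf{Z}/n\mathbf{Z}
  \;\simeq\;
  Rf_X^{!}\,\mathbf{Z}/n\mathbf{Z}
  \;\simeq\;
  \mathbf{Z}/n\mathbf{Z}(d_X)[2d_X].
\]
% The first isomorphism moves the (invertible) Tate twist across Ri^!, as in
% Lemma 2.1.6; untwisting by (d_Y)[2d_Y] then gives
\[
  Ri^{!}\,\mathbf{Z}/n\mathbf{Z}
  \;\simeq\;
  \mathbf{Z}/n\mathbf{Z}(d_X - d_Y)[2d_X - 2d_Y]
  \;=\;
  \mathbf{Z}/n\mathbf{Z}(-c)[-2c],
  \qquad c = d_Y - d_X.
\]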
+ +POINCAR´E DUALITY REVISITED +9 +Application 1.3.7. (Preservation of constructible sheaves) If f : X → Y is a smooth qcqs mor- +phism, then Rf! restricts to the functor +Rf!: D(b) +cons(X´et; Z/nZ) → D(b) +cons(Y´et; Z/nZ). +For this, we can assume that Y is qcqs, then the discussion above implies that we only need to +show that (the restriction) Rf!: D≥−N(X´et; Z/nZ) → D≥−N(Y´et; Z/nZ) preserves compact objects +for any integer N. This can be easily seen from the fact that the right adjoint Rf ! = f ∗(d)[2d] +commutes with infinite direct sums and is of finite cohomological dimension. +For the next application, we recall that [Zav23, Lemma 10.1] identifies D(b) +lisse(X´et; Z/nZ) with +the category of dualizable objects in D(X´et; Z/nZ). +Application 1.3.8. (Preservation of lisse sheaves) If f : X → Y is proper and smooth, then Rf∗ +restricts to the functor +Rf∗ : D(b) +lisse(X´et; Z/nZ) → D(b) +lisse(Y´et; Z/nZ). +By the discussion above, it suffices to show that Rf∗ preserves dualizable objects. +Now using +Poincar´e Duality, it is formal to see that, for a dualizable object L, Rf∗L is also dualizable with +the dual Rf∗(L∨(d)[2d]). +Now we briefly discuss the proofs of Theorem 1.3.1 and Theorem 1.3.2. Our strategy is to use +Theorem 1.2.16 to reduce the question to constructing first Chern classes (in a sufficiently functorial +manner) and verifying the projective bundle formula for the relative projective line. +The construction of the first Chern classes comes from the Kummer short exact sequence (see +Definition 6.1.2), so the question of proving Poincar´e Duality essentially boils down to the question +of computing cohomology of the relative projective line. +For this, one can reduce to the case +of S = Spec C or S = Spa(C, OC) for an algebraically closed (non-archimedean) field C. Then +this computation is standard in both theories. Apart from the computation of cohomology of the +projective line, the proofs in the analytic and algebraic situations are uniform. +Another concrete example of Poincar´e Daulity that we consider in this paper is the version of +Poincar´e Duality for the 6-functor formalism of “solid almost O+/p-ϕ-modules” Da +□(X; O+ +X/p)ϕ +developed by L. Mann in [Man22b]. In this context, we can give a new proof of the following result: +Theorem 1.3.9. ([Man22b, Thm. 3.10.20], Theorem 6.2.9) Let Y be a locally noetherian analytic +adic space over Spa(Qp, Zp), and f : X → Y a smooth morphism of pure dimension d. Then the +functor +f! : Da +□(X; O+ +X/p)ϕ → Da +□(Y ; O+ +Y /p)ϕ +admits a right adjoint given by the formula +f ∗ ⊗ O+,a +X /p(d)[2d]: Da +□(Y ; O+ +Y /p)ϕ → Da +□(X; O+ +X/p)ϕ. +The proof of Theorem 1.3.9 follows the same strategy as the one of Theorem 1.3.2: we define +first Chern classes and then compute cohomology of the relative P1 +Y → Y . +The two main complications come from the fact that it is not, a priori, clear that this 6-functor +formalism satisfies the excision axiom, and the definition of this 6-functor formalism is so abstract +that it seems difficult to compute even cohomology of the projective line from first principles. +However, it turns out that the verification of the excision axiom is not that hard, and we resolve +the second issue via the Primitive Comparison Theorem that reduces the computation to the +computation in ´etale cohomology. Besides these relatively minor points, the proof of Theorem 1.3.9 +is essentially identical to that of Theorem 1.3.2. + +10 +BOGDAN ZAVYALOV +1.3.2. Potential new examples of Poincar´e Duality. 
Recently, V. Drinfeld [Dri22] and B. Bhatt– +J. Lurie [BL22b] gave a new (stacky) perspective on prismatic cohomology. Namely, for a bounded +prism (A, I) and a bounded p-adic formal scheme X over A/I, they construct its (relative derived) +prizmatization stack WCartX/A. For an lci X, this comes equipped with an isomorphism +Dqc(WCartX/A) ≃ �Dcrys((X/A)∆, O∆) +between the ∞-categories of quasi-coherent sheaves on WCartX/A and prismatic O∆-crystals on X. +Therefore, it is reasonable to expect that Dqc(WCartX/A) provide a reasonable coefficient theory +for (relative) prismatic cohomology. Unfortunately, this assignment can not be promoted to a 6- +functor formalism because this is already impossible for Dqc(−) (even on schemes); the problem +being that the open immersion pullback j∗ does not admit a left adjoint. +In the case of (derived) schemes, P. Scholze and D. Clausen [CS19] were able to enlarge the +category Dqc(−) to the category of all solid modules D□(−) to get a 6-functor formalism on +(derived) schemes. Therefore, it is reasonable to expect that appropriately defined ∞-category +D□(WCartX/A) of solid sheaves on the stack WCartX/A should give the correct coefficient theory +for the prismatic cohomology and admit a 6-functor formalism. +Furthermore, L. Tang has recently proven Poincar´e Duality for prismatic cohomology of smooth +and proper p-adic formal A/I-schemes (see [Tan22, Theorem 1.2]). This makes it reasonable to +expect that this potential 6-functor formalism should satisfy the full version of Poincar´e Duality +with all solid coefficients. Once this 6-functor formalism D is constructed, Theorem 1.2.18 reduces +Poincar´e Duality to the question of constructing (strong) first Chern classes, cycle class maps for +divisors, and showing that D is pre-geometric (see Definition 4.2.9). We expect that, under the +correct formalization of D□(WCartX/A), all these questions should follow from the already existing +results: +(1) (First Chern classes) A strong theory of prismatic first Chern classes has already been +constructed in [BL22a, Notation 7.5.3 and Variant 9.1.6]; +(2) (Cycle maps for divisors) we expect that a theory of cycle maps should follow from [Tan22, +Construction 5.32]; +(3) (D is pre-geometric) By Remark 1.2.19, it suffices to show that every invertible object on +P1 +Y comes from Y . At least for an lci Y , we expect that, there should be an equivalence of +the ∞-categories of invertible objects +Pic +� +D□(WCartY/A) +� +≃ Pic +� +Dqc(WCartY/A) +� +. +This would reduce the question to showing that any prismatic line bundle on P1 +B/J comes +from a line bundle Spec B/J for any morphism of bounded prisms (A, I) → (B, J). This +can be explicitly seen by showing that the pullback along the natural morphism +P1 +B → WCartP1 +B/J /B +is fully faithful on line bundles and first Chern class considerations to trivialize the pullback. +We do not spell out the precise argument as it is beyond the scope of this paper. +We expect that similar considerations should apply to the absolute prismatizations X∆, XN, +and Xsyn introduced in [Dri22] and [Bha22]. +1.4. Strategy of the proof. Now we discuss the strategy of our proof of Theorem 1.2.16: + +POINCAR´E DUALITY REVISITED +11 +(1) (Section 3.2) We start by proving Theorem 1.2.6. The main step in the proof is to “de- +categorify” the question. +The key idea is to use the 2-category of cohomological corre- +spondences originally introduced in [LZ22] and reviewed in Section 2.2. 
After we establish +Theorem 1.2.6, we show that it implies Theorem 1.2.8 implying that any smooth morphism +is cohomologically smooth if P1 +S → S admits a trace-cycle theory. +(2) (Section 4) The next goal is to deduce a formula for the dualizing object f !1Y for a smooth +morphism f : X → Y . This is done via a version of Verdier’s diagonal trick and deformation +to the normal cone, we show8 that +f !1Y ≃ CX(Tf) ∈ D(X), +where CX(Tf) is defined in Construction 1.2.9. +Now the question of proving Poincar´e Duality boils down to the question of constructing +a trace-cycle theory of P1 +S → S and then computing CX(Tf) for every smooth morphism +f : X → Y . +(3) (Sections 5.2-5.5) We introduce the notion of a theory of first Chern classes. Then we show +that, in the presence of the excision axiom, existence of a theory of first Chern classes +automatically implies A1-invariance of D, existence of cycle maps for divisors, and the +projective bundle formula. +(4) (Section 5.6) Then we construct the trace morphism for projective bundles in the presence +of a theory of first Chern classes. Then we show that, for the projective line f : P1 +S → S, the +triple (1P1 +S⟨1⟩, trf, cl∆) forms a trace-cycle theory; this is essentially just a formal diagram +chase. +(5) (Section 5.7) Finally, the question of proving Theorem 1.2.6 boils down to the question of +computing +f !1Y ≃ CX(Tf) +for every smooth morphism f : X → Y of relative pure dimension d. For this, we compactify +the morphism g: VX(Tf) → X to the morphism g : PX(T∨ +f ⊕ O) → X with the “zero” +section s: X → PX(T∨ +f ⊕ O). Then the question reduces to constructing an isomorphism +s∗g!1X ≃ 1X⟨d⟩. +Roughly, the morphism comes from the trace map constructed in the previous step. In +order to show that this is an isomorphism, we can work locally on X. Thus we can assume +that Tf is a trivial vector bundle, so PX(T∨ +f ⊕ O) ≃ Pd +X. Then the cycle map of a point +gives an inverse to this map. +1.5. Terminology. We say that an analytic adic space X is locally noetherian if there is an open +covering by affinoids X = � +i∈I Spa(Ai, A+ +i ) with strongly noetherian Tate Ai. Sometimes, such +spaces are called locally strongly noetherian. +We follow [Hub96, Def. 1.3.3] for the definition of a locally finite type, locally weakly finite type, +and locally +-weakly finite type morphisms of locally noetherian adic spaces. +For a Grothendieck abelian category A, we denote by D(A) its triangulated derived category and +by D(A) its ∞-enhancement. +8At least under the assumptions of Theorem 1.2.10 that we will prove in later steps. + +12 +BOGDAN ZAVYALOV +For a symmetric monoidal ∞-category C⊗, we denote by Pic(C⊗) the full ∞-subcategory of C +consisting of invertible objects. We also denote by Pic(C⊗) the group of isomorphism classes of +invertible objects in C⊗. +1.6. Acknowledgements. We heartfully thank Ofer Gabber and Peter Scholze for their ques- +tions after author’s presentation of his previous work on p-adic Poincar´e Duality [Zav21b] at the +RAMpAGe seminar and the Oberwolfach workshop respectively; this was the starting point of this +paper. We are also very grateful to Peter Scholze for numerous illuminating conversations, which +have greatly influenced the development of this paper. The paper owes a huge intellectual debt to +these conversations. 
+We thank Marc Hoyois for suggesting the argument of Proposition 2.2.6, Ko Aoki and Peter +Haine for explaining some necessary ∞-categorical background to the author, and Adeel Khan for +patiently answering author’s questions on his paper [Kha22]. +We also thank Toni Annala, Bhargav Bhatt, Dustin Clausen, Dmitry Kaledin, Dmitry Kubrak, +Shizhang Li, Lucas Mann, and Emanuel Reinecke for many interesting conversations. We thank +the Max Planck Institute and the Institute for Advanced Study for funding and excellent working +conditions during author’s stay at these institutes. +2. Abstract six functor formalisms +In this section, we remind the reader the notion of a 6-functor formalism and give some construc- +tions that will be important for the rest of the paper. In particular, we fix the notation that will +be freely used in the rest of the paper. After that, we construct the 2-category of cohomological +correspondences that will play a crucial role in the proof of Poincar´e Duality. +For the rest of the section, we fix C a category of locally finite type adic S-spaces (resp. +a +category of locally finitely presented S-schemes). +2.1. 6-functor formalisms I. In this section, we discuss the general notion of a 6-functor formal- +ism. Since this is the main object of study of this paper, we have decided to spent this section +to explicitly set-up all the notation that we will use later. We also wish to convey the idea that +almost all familiar structures on the classical 6-functor formalisms can be defined in this abstract +situation in a similar manner. +We start by recalling that Y. Liu and W. Zheng have defined a symmetric monoidal9 ∞-category +Corr(C) := Corr(C)all,all of correspondences in C. We do not explain the full construction here and +instead refer to [LZ17, Prop. 6.1.3] (and to [Man22b, Def. A.5.4] for a nice exposition). However, +we specify some lower dimensional data that will be useful for us later: +Remark 2.1.1. +(1) objects of Corr(C) coincide with objects of C, i.e. locally finite type adic +S-spaces; +(2) 1-edges between X and Y are given by correspondences of the form +Z +X +Y ; +9See [HA, Def. 2.0.0.7] for the precise definition of this notion. + +POINCAR´E DUALITY REVISITED +13 +(3) in the homotopy category hCorr(C), the composition of morphisms X ← T → Y and +Y ← S → Z is given by the following outer correspondence (in red): +T ×Y S +T +S +X +Y +Z; +(4) the tensor product X ⊗ Y of two objects X and Y is their cartesian product X ×S Y . +In the next definition, we consider the Cartesian symmetric monoidal structure on Cat∞ the +∞-category of (small) ∞-categories. +Definition 2.1.2. ([Man22b, Def. A.5.7]) A weak 6-functor formalism is a lax symmetric-monoidal10 +functor +D: Corr(C) → Cat∞ +such that +(1) for each morphism f : X → Y in C, the functors D([X +id +←− X +f−→ Y ]): D → D(Y ) and +D([Y +f←− X +id +−→ X]): D(Y ) → D(X) admit right adjoints; +(2) for each X ∈ C, the symmetric monoidal ∞-category D(X) is closed (in the sense of [HA, +Def. 4.1.1.15]). The associated homotopy 1-category hD(X) is denoted by D(X). +Remark 2.1.3. One can compose D with the functor h: Cat∞ → Cat≃ +1 to the (2, 1)-category of +categories that sends an ∞-category X to its homotopy category hX. By the universal property +of homotopy 2-categories, this functor (essentially) uniquely descends to the functor +D := hD: h2 Corr(C) → Cat≃ +1 +such that D(X) = hD(X). +Remark 2.1.4. The data of a weak 6-functor formalism is a very dense piece of data. 
Below, +we mention some consequences of this definition, and refer to [Man22b, Def. A.5.6, Def. A.5.7, +Prop. A.5.8] for the discussion on how to derive these consequences from Definition 2.1.2. +(1) for each X ∈ C, a closed symmetric monoidal ∞-category D(X). We denote the tensor +product functor and the inner Hom functor by +− ⊗ −: D(X) × D(X) → D(X), and +HomX(−, −): D(X)op × D(X) → D(X); +(2) for each morphism f : X → Y in C, we have a symmetric monoidal functor f ∗ : D(Y ) → +D(X), and a functor f! : D(X) → D(Y ); +(3) for each f : X → Y , f ∗ and f! admit right adjoints that we denote by f∗ : D(X) → D(Y ) +and f !: D(Y ) → D(X); +(4) the functor f! satisfies the projection formula, i.e., there is an isomorphism +f!(−) ⊗ (−) ≃ f!(− ⊗ f ∗(−)) +of functors D(X) × D(Y ) → D(Y ); +10By a lax symmetric-monoidal functor, we mean a functor of the associated ∞-operads, see [HA, Def. 2.1.2.7] + +14 +BOGDAN ZAVYALOV +(5) the functor f! satisfies proper base-change, i.e., for any Cartesian diagram +X′ +X +Y ′ +Y, +g′ +f′ +f +g +there is a specified isomoprhism of functors g∗ ◦ f! ≃ f ′ +! ◦ (g′)∗; +(6) a lot of higher coherences... +Notation 2.1.5. +(1) (Unit object) In what follows, we fix a unit object 1S ∈ D(S). For each +f : X → S in C, we denote by 1X := f ∗ (1S) the pullback of 1S to X. It is a unit object in +D because f ∗ is a (symmetric) monoidal functor; +(2) (Co-projection morphism) for any f : X → Y in C, there is a natural morphism of functors +w(−),(−) : f !(−) ⊗ f ∗(−) → f !(− ⊗ −) +from D(Y ) × D(Y ) to D(X) that is defined to be adjoint to the morphism +f!(f !(−) ⊗ f ∗(−)) ≃ f!(f !(−)) ⊗ (−) +adj⊗id +−−−−→ − ⊗ −; +(3) (Shriek base-change) If +X′ +X +Y ′ +Y +f′ +g′ +f +g +is a Cartesian diagram in C, there is a natural morphism (g′)∗ ◦ f ! → (f ′)! ◦ g∗ defined as +an adjoint to +f ′ +! ◦ (g′)∗ ◦ f ! ≃ g∗ ◦ f! ◦ f ! g∗(adj) +−−−−→ g∗, +where the first morphism is the proper base-change morphism. +For the later use, we prove the following very general (but easy) lemma: +Lemma 2.1.6. Let f : X → Y a morphism in C, and F, E objects of D(Y ). Suppose that E is +invertible. Then the co-projection morphism +wF,E: f !F ⊗ f ∗E → f !(F ⊗ E) +is an isomorphism. +Proof. Consider the morphism wF⊗E,E−1 : f !(F⊗E)⊗E−1 → f !F. It induces a morphism w′ : f !(F⊗ +E) → f ! (F) ⊗ E. Using that projection morphisms compose well, one easily checks that w′ is the +inverse to w up to a homotopy. +□ +Remark 2.1.7. We put the word “weak” in Definition 2.1.2 for the following reasons: +(1) in practice, ∞-categories D(X) are stable (in the sense of [HA, Def. 1.1.1.9]), aka additive. +It seems reasonable to put this into the definition of a 6-functor formalism; +(2) also, in practice, the functor f! is equal to f∗ for a proper morphism f and it is left adjoint +to f ∗ for an ´etale f. This also seems reasonable to put into the definition; + +POINCAR´E DUALITY REVISITED +15 +We fix these issues in Section 2.3. But before we do this, we discuss some further axioms that +one can put on a weak 6-functor formalism D. +We first discuss excision. Let i: Z ֒→ X be a Zariski-closed immersion and j : U ֒→ X its open +complement. In this case, proper base-change specifies a homotopy i∗j! ≃ 0. Data of such homotopy +defines a commutative diagram +j!j∗ +idD(X) +0 +i∗i∗ +(3) +in the ∞-category Fun(D(X), D(X)). In particular, it makes sense to ask if this diagram is Carte- +sian. +Definition 2.1.8. 
A weak 6-functor formalism D satisfies the excision axiom if Diagram (3) is +Cartesian for any Zariski-closed S-immersion Z ⊂ X. +An equivalent way to say this is that +Diagram (3) defines an exact triangle of functors +j!j∗ → id → i∗i∗. +(4) +Remark 2.1.9. If D satisfies the excision axiom, we can pass to right adjoints in (4) to get an +exact triangle of functors +i∗i! → id → j∗j∗. +Now we discuss the A1-invariance of an abstract 6-functor formalism. +Definition 2.1.10. A weak 6-functor formalism D on C is A1-invariant if, for every X ∈ C and +the morphism f : A1 +X → X, the natural morphism +1X → f∗1A1 +X +is an isomorphism. +In the next lemma, we denote by Pic(D(X)) the ∞-subcategory of D(X) consisting of invertible +objects. +Lemma 2.1.11. Let D be an A1-invariant weak 6-functor formalism, X ∈ C, and f : A1 +X → X +the natural morphism. Then the pullback functor +f ∗ : Pic +� +D(X) +� +→ Pic +� +D(A1 +X) +� +is fully faithful. +Proof. We fix two invertible objects L, L′ ∈ Pic +� +D(X) +� +. Then the claim follows from the following +sequence of isomorphisms: +Hom(f ∗L, f ∗L′) ≃ Hom(L, f∗f ∗L′) +≃ Hom(L, f∗1A1 +X ⊗ L′) +≃ Hom(L, L′). +The first isomorphism follows from the (f ∗, f∗)-adjunction, the second isomorphism follows from +the projection formula for invertible objects (argue as in the proof of Lemma 2.1.6). +The last +isomorphism follows from the A1-invariance. +□ + +16 +BOGDAN ZAVYALOV +2.2. (∞, 2)-category of cohomological correspondences. The main goal of this section is to +construct the (∞, 2)-category of cohomological correspondences, a 2-categorical variant of which +was first introduced in [FS21, IV.2.3.3] (based on [LZ22]). We learnt11 the arguments of this section +from Marc Hoyois. +In the rest of the paper, we will never need the (∞, 2)-version of this category; the 2-categorical +version will be sufficient for all our applications. However, it seems that a rigorous explicit con- +struction even of the associated 2-category is an extremely tedious exercise. Even though it is +probably possible to do by hand, we are not aware of any place in the literature where this has +been done in full detail. +For instance, to verify the pentagon axiom in the context of ´etale cohomology, one needs to +check that the pentagon diagram of 5 associativity constraints is commutative. Each associativity +constraint includes 2 proper base-change morphisms and 2 projection formula morphisms (and +a lot of implicit identifications). Each proper base change and projection formula morphism is, +in turn, constructed by decomposing a morphism into a composition of an ´etale and a proper +morphism. Therefore, the pentagon axiom effectively has at least 40 arrows involved. Even though +it is probably formal that it commutes, it seems really tedious to prove it without some other +machinery. +Because of this reason, we take another approach (explained to us by Marc Hoyois) that actually +produces an (∞, 2)-categorical version of this category. Since, in this approach, it is essentially +the same amount of pain to construct it as an (∞, 2)-category as to construct it simply as a 2- +category, and the (∞, 2)-categorical version may be useful for other purposes, we write the proof +in this generality. We then sketch how the same argument could be run entirely in the realm of +2-categories. +For the rest of the section, we fix a weak 6-functor formalism D: Corr(C) → Cat∞ in the sense +of Definition 2.1.2. 
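Compositions of correspondences will be used constantly in this section. Since the composition diagram of Remark 2.1.1(3) is rendered graphically in the original, we record a tikz-cd sketch of how we read it; this re-drawing is included only for convenience and assumes the tikz-cd package.
% Composition of the correspondences X <- T -> Y and Y <- S -> Z in Corr(C):
% the composite is the outer correspondence X <- T x_Y S -> Z (Remark 2.1.1(3)).
\[
\begin{tikzcd}[column sep=small]
 & & T \times_Y S \arrow[dl] \arrow[dr] & & \\
 & T \arrow[dl] \arrow[dr] & & S \arrow[dl] \arrow[dr] & \\
X & & Y & & Z
\end{tikzcd}
\]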
+We start the section by giving an informal definition of the 2-categorical vesion of the category +of correpondences. For this, we need to fix some notation: +Definition 2.2.1. Let X1, X2, X3 be objects of C, and F ∈ D(X1 ×S X2) and G ∈ D(X2 ×S X3). +Then the composition G ◦ F ∈ D(X1 ×S X3) is equal to +p1,3,! +� +p∗ +1,2F ⊗ p∗ +2,3G +� +∈ D(X1 ×S X3), +where pi,j : X1 ×S X2 ×S X3 → Xi ×S Xj are the natural projections. +Lemma 2.2.2. Let X, Y, Z, W be objects of C, and F ∈ D(X ×S Y ), G ∈ D(Y ×S Z), H ∈ +D(Z ×S W). Then +(1) there is a canonical isomorphism ∆!1X ≃ ∆!1X ◦ ∆!1X, where ∆: X → X ×S X is the +diagonal morphism; +(2) there is a canonical isomorphism +H ◦ (G ◦ F) ≃ (H ◦ G) ◦ F. +Proof. We claim that both results are formal consequences of proper base-change and the projection +formula. We show the first part, and refer to [Sta23, Tag 0G0F] for the proof of the second part. +11Ko Aoki has informed the author that a similar construction has also been known to Adam Dauser. + +POINCAR´E DUALITY REVISITED +17 +We first consider the Cartesian square +X ×S X +X ×S X ×S X +X +X ×S X. +∆×Sid +p1 +p1,2 +∆ +Then proper base-change implies that +p∗ +1,2∆! (1X) ≃ (∆ ×S id)! (1X×SX) , +and similarly p∗ +2,3∆! (1X) ≃ (id ×S ∆)! (1X×SX). Now we use the Cartesian square +X +X ×S X +X ×S X +X ×S X ×S X, +∆ +∆ +id×S∆ +∆×Sid +the proper base change theorem, and the projection formula to get a sequence of isomorphisms +∆! 1X ◦ ∆! 1X ≃ p1,3,! +� +p∗ +1,2∆! 1X ⊗ p∗ +2,3∆! 1X +� +≃ p1,3,! ((∆ ×S id)! 1X×SX ⊗ (id ×S ∆)! 1X×SX) +≃ p1,3,! (∆ ×S id)! ((∆ ×S id)∗ (id ×S ∆)! 1X×SX)) +≃ p1,3,!(∆ ×S id)!∆! (1X) +≃ ∆! (1X) . +□ +Now we are ready to define the 2-category of cohomological correspondences. +Definition 2.2.3. ([FS21, IV.2.3.3]) The 2-category of cohomological correspondences CS is the +following 2-category: +(1) the objects of CS are objects of C; +(2) for every two objects X, Y ∈ Ob(CS), the Hom-category is defined as +HomCS(X, Y ) = D(X ×S Y ); +(3) for every triple X1, X2, X3 ∈ Ob(CS), the composition functor +HomCS(X2, X3) × HomCS(X1, X2) → HomCS(X1, X3) +is defined as +(A, B) �→ π13,! (π∗ +12B ⊗ π∗ +23A) , +where pi,j : X1 ×S X2 ×S X3 is the projection on Xi ×S Xj; +(4) for every X ∈ Ob(CS), the identity 1-morphism is idX = ∆! (1X), where ∆: X → X ×S X +is the diagonal morphism; +(5) the unit and associativity constraints come from Lemma 2.2.2. +In the rest of the section, we show that Definition 2.2.3 actually defines a 2-category. As explained +at the beginning of this section, the hard part is to verify axiom (P) from [Lur22, Tag 007Q]. + +18 +BOGDAN ZAVYALOV +Lemma 2.2.4. Let D be a symmetric monoidal ∞-category such that each object X ∈ D is +dualizable (in the sense of [HA, Def. 4.6.1.7 and Rem. 4.6.1.12]). Then D is a closed symmetric +monoidal ∞-category. +Proof. Since D is symmetric monoidal, it suffices to show that D is right closed. In other words, +we have to show that, for every object X ∈ D, the functor − ⊗ X : D → D admits a right adjoint. +Since X is dualizable, there is a dual object X∨ with the coevaluation and evaluation morphisms +c: 1D → X ⊗ X∨, +e: X ⊗ X∨ → 1D. +We claim that the functor − ⊗ X∨ : D → D is right adjoint to − ⊗ X. Indeed, we define the unit +and counit transformations explicitly as +η: id id⊗c +−−−→ id ⊗ X ⊗ X∨, +ǫ: id ⊗ X ⊗ X∨ ≃ X ⊗ X∨ ⊗ id e⊗id +−−−→ id. +One easily checks that this defines the desired adjunction. +□ +Lemma 2.2.5. Any object of the symmetric monoidal ∞-category Corr(C) is self-dual. 
In partic- +ular, the symmetric monoidal ∞-categorical structure on Corr(C) is closed. +Proof. Let X ∈ Corr(C) be an adic S-space with the structure morphism f : X → S. We wish to +show that X is self-dual. For this, we define the co-evaluation morphism c: S → X ⊗ X to be +represented by the correspondence +S +f←− X +∆ +−→ X ×S X, +where ∆ is the diagonal morphism. Likewise, we define the evaluation morphism e: X ⊗ X → S +to be represented by the correspondence +X ×S X +∆ +←− X +f−→ S. +Then it is easy to check that this morphisms define a self-duality on X (see Remark 2.1.1). +□ +Now we are ready to rigorously construct the category CS, and even its (∞, 2)-enhancement. +A crucial technical tool that we will use is the formalism of ∞-categories enhanced in monoidal +∞-categories. We refer to [GH15] for a detailed discussion of this notion, and especially to [GH15, +Def. 2.4.5]. +Proposition 2.2.6. There is an (∞, 2)-category C(∞,2) +S +such that its 2-homotopy category h2C(∞,2) +S +is equivalent to CS from Definition 2.2.3. In particular, CS is indeed a 2-category. +Proof. Lemma 2.2.5 implies that every object in Corr(C) is self-dual. +Therefore, Lemma 2.2.4 +ensures that Corr(C) is a closed symmetric monoidal ∞-category with the inner Hom given by +HomCorr(C)(X, Y ) = X × Y. +Therefore, [GH15, Cor. 7.4.10] implies that Corr(C) is enriched over itself. Now we use the lax- +monoidal functor D: Corr(C) → Cat∞ to transfer12 the constructed above Corr(C)-enrichment on +Corr(C) to a Cat∞-enrichment on Corr(C). This defines the desired (∞, 2)-category13 C(∞,2) +S +by +12For this, look at [GH15, Def. 2.4.5, Def. 2.4.2, and Def. 2.2.14]. +13We refer to [Hau15] for the relation with other models for the theory of (∞, 2)-categories. + +POINCAR´E DUALITY REVISITED +19 +[GH15, Def. 6.1.5 and Th. 5.4.6]14. Essentially by construction, the associated 2-homotopy category +h2C(∞,2) +S +is equivalent to CS. +□ +Remark 2.2.7. One can run the proof of Proposition 2.2.6 entirely in the realm of 2-categories. +In this approach, one constructs a 2-category weakly enriched over Cat≃ +1 that is tautologically +equivalent to CS. +More precisely, we mention the main changes that one needs to make in the proof of Proposi- +tion 2.2.6 to avoid any mention of (∞, 2)-categories. Firstly, one should use the notion of a monoidal +2-category15 in place of the notion of a monoidal ∞-category. Secondly, one should replace enrich- +ments in the sense of [GH15] with weak enrichments in the sense of [GS16, §3]. Thirdly, one should +use the 2-categorical version of the category of correspondences. Lastly, and one should replace the +∞-functor D with its 2-categorical version D from Remark 2.1.3. +Then the same argument works in the world of 2-categories with the only16 caveat that we do +not know a reference for the fact that a closed monoidal 2-category is enriched over itself. +2.3. 6-functor formalisms II. In this section, we follow [Sch22, Lecture VI] and define the notions +of cohomologically ´etale and proper morphisms. In this paper we take a minimalistic approach +that is sufficient for all our purposes; [Sch22, Lecture VI] contains a more thorough consideration +of cohomologically proper and ´etale morphisms. These notions are needed to spell out the full +definition of a 6-functor formalism that is used in this paper. For the latter reference, we also +discuss the notion of cohomologically smooth morphisms in this section. +2.3.1. Cohomologically proper and ´etale morphisms. 
In this section, we fix a weak 6-functor for- +malism D: Corr(C) → Cat∞ in the sense of Definition 2.1.2. +We wish to axiomitize the conditions f! = f∗ and f ! = f ∗; this will be done via the notions +of cohomologically ´etale and cohomologically proper morphisms. +We start with the case of a +monomorphism f : X → Y in C (i.e., the diagonal morphism ∆: X → X ×Y X is an isomorphism). +In this case we have the following cartesian diagram: +X +X +X +Y. +id +id +f +f +(5) +Construction 2.3.1. Suppose that f : X → Y is a monomorphism in C. Then +(1) there is the natural transformation of functors D(X) → D(Y ) +αf : f! → f∗ +defined as the adjoint to the proper base change equivalence f ∗f! ≃ idD(X) coming from +Diagram (5); +(2) there is the natural transformation of functors D(Y ) → D(X) +βf : f ! → f ∗ +defined as the shrieck base morphism (see Notation 2.1.5(3)) applied to Diagram (5). +14See also [GH15, Rem. 5.7.13] for the meaning of a somewhat confusing notation Cat(−) +(∞,k) +15See [JY21, Definition 12.1.3 and Explanation 12.1.4]) +16Use [GS16, §13.2] to transfer a weak enrichment along a lax-monoidal functor. + +20 +BOGDAN ZAVYALOV +Definition 2.3.2. A monomorphism f : X → Y is cohomologically proper (resp. cohomologically +´etale) if the natural tranformation αf : f! → f∗ (resp. βf : f ! → f ∗) is an equivalence . +Now we move to the case of a general morphism f : X → Y in C and consider the commutative +diagram +X +X ×Y X +X +X +Y. +∆ +id +id +q +p +f +f +(6) +Note that ∆ is always a monomorphism, so it makes sense to ask if ∆ is cohomologically proper +(resp. cohomologically smooth). +Construction 2.3.3. Let f : X → Y be a morphism in C with the diagonal morphism ∆: X → +X ×Y X. Then +(1) if ∆ is cohomologically proper, there is a natural transformation of functors D(X) → D(Y ) +αf : f! → f∗ +defined as the adjoint to the composition +f ∗f! ≃ p!q∗ p!(adj∆◦q∗) +−−−−−−−→ p!∆∗∆∗q∗ ≃ p!∆!∆∗q∗ ≃ id, +where the first isomorphism comes from proper base-change, the second morphism is induced +by the (∆∗, ∆∗)-adjunction, the third isomorphism comes from cohomological properness +of ∆, and the last isomorphism comes from the fact that p ◦ ∆ = idX and q ◦ ∆ = idX; +(2) if ∆ is cohomologically ´etale, there is a natural transformation of functors D(Y ) → D(X) +βf : f ! → f ∗ +defined as the composition +f ! ≃ ∆∗q∗f ! → ∆∗p!f ∗ ≃ ∆!p!f ∗ ≃ f ∗, +where the first isomorphism comes from the fact that q◦∆ = idX, the second isomorphism is +induced from the shriek base-change (see Notation 2.1.5(3)), the third isomorphism comes +from cohomological ´etaleness of ∆, and the last isomorphism comes from the fact that +p ◦ ∆ ≃ idX. +Definition 2.3.4. A morphism f : X → Y in C is cohomologically proper (resp. cohomologically +´etale) if the diagonal morphism ∆: X → X ×Y X is cohomologically proper (resp. cohomologically +´etale) in the sense of Definition 2.3.2, and the natural transformation αf : f! → f∗ (resp. βf : f ! → +f ∗) is an equivalence. +Lemma 2.3.5. Let g: Y ′ → Y a cohomologically ´etale morphism. Then +(1) the co-projection morphism g!(−) ⊗ g∗(−) → g!(− ⊗ −) is an equivalence of functors (see +Notation 2.1.5); + +POINCAR´E DUALITY REVISITED +21 +(2) for any Cartesian diagram +X′ +X +Y ′ +Y +f′ +g′ +f +g +in C, the natural transformation +(g′)∗ ◦ f ! → (f ′)! ◦ g∗ +is an isomorphism (see Notation 2.1.5(3)). +Proof. The first claim follows from the equality g∗ = g!. The second claim follows from proper +base-change by passing to right adjoints. +□ +2.3.2. Cohomologically smooth morphisms. 
We follow [Sch17] and introduce the notion of a coho- +mologically smooth morphism; the idea is to require the morphism f : X → Y to satisfy Poincar´e +Duality “up to a trivialization of the dualizing object f !1Y ”. +In this section, we fix a weak 6-functor formalism D: Corr(C) → Cat∞ in the sense of Defini- +tion 2.1.2. +Definition 2.3.6. A morphism f : X → Y in C is called weakly cohomologically smooth (with +respect to D) if +(1) the co-projection morphism f ! (1Y ) ⊗ f ∗(−) → f !(−) from Notation 2.1.5(2) is an equiva- +lence; +(2) the dualizing object ωf := f ! (1Y ) is an invertible object of D(X), and it commutes with an +arbitrary base change Y ′ → Y , i.e., for any Cartesian diagram in C +X′ +X +Y ′ +Y, +g′ +f′ +f +g +the natural morphism (g′)∗f ! (1Y ) → f ′! (1Y ′) from Notation 2.1.5(3) is an isomorphism. +Definition 2.3.7. A morphism f : X → Y in C is called cohomologically smooth (with respect to +D) if, for any morphism g: Y ′ → Y in C, the base change f ′ : X′ → Y ′ is weakly cohomologically +smooth. +Remark 2.3.8. Definition 2.3.7 formally implies that cohomologically smooth morphisms are +closed under composition and (arbitrary) base change. +We first mention some formal properties of this definition: +Lemma 2.3.9. Let +X′ +X +Y ′ +Y +g′ +f′ +f +g +be a cartesian square in C. Then +(1) the natural morphism f ′ +∗ ◦ (g′)! → g! ◦ f∗ is an isomorphism; + +22 +BOGDAN ZAVYALOV +(2) (Cohomologically smooth base change) the natural morphism g∗ ◦ f∗ → (f ′)∗ ◦ (g′)∗ is an +isomorphism if g is cohomologically smooth; +(3) the natural morphism (g′)∗ ◦ f ! → (f ′)! ◦ g∗ is an isomorphism if either f or g is cohomo- +logically smooth. +All these claims are well-known; we spell out the proof only for the reader’s convenience. +Proof. The proof of (1) is formal: it follows from proper base-change by passing to right adjoints. +The proof of (2) is also essentially formal (and well-known). The assumption that g cohomolog- +ically smooth implies that there is an invertible object ωg ∈ D(Y ′) such that g!(−) ≃ g∗(−) ⊗ ωg +and (g′)!(−) ≃ (g′)∗(−) ⊗ (f ′)∗ωg. Then it is clear that (1) implies an equivalence +g∗ ◦ f∗ ≃ (f ′)∗ ◦ (g′)∗. +The main subtlety is to check that this isomorphism is the inverse of the natural morphism. For +this, one uses (the first) commutative diagram from the proof of [LZ17, Lemma 4.1.13]. +Now we show (3). If f is cohomologically smooth, the statement follows from the definition of +cohomological smoothness. If g is cohomologically smooth, one can argue similarly to (2): using +the notion of cohomological smoothness, it is easy to construct an equivalence +(g′)∗ ◦ f ! ≃ (f ′)! ◦ g∗. +To see that this equivalence coincides with the natural morphism, one should use (the second) +commutative diagram from the proof of [LZ17, Lemma 4.1.13]. +□ +2.3.3. 6-functor formalisms. Now we are ready to give the definition of a 6-functor formalism that +will be used in this paper: +Definition 2.3.10. A 6-functor formalism is a weak 6-functor formalism D: Corr(C) → Cat∞ +such that +(1) for each X ∈ C, the ∞-category D(X) is stable and presentable; +(2) D∗|Cop : Cop → Cat∞ satisfies analytic (resp. Zariski in case of schemes) descent, i.e., for +any analytic open covering U = {Ui → X}i∈I, the natural morphism +D(X) → lim +n∈∆ +� +i1,...,in∈I +D(Ui1 ×X · · · ×X Uin) +is an equivalence. +(3) every proper morphism f is cohomologically proper17. In particular, for any proper mor- +phism f : X → Y , there is a canonical identification f! 
= f∗; +(4) every ´etale morphism f is cohomologically ´etale. +In particular, for any ´etale morphism +f : X → Y , there is a canonical identification f ! = f ∗. +Remark 2.3.11. The same definition makes sense if we everywhere replace the category C with +the category C′ of +-weakly finite type adic S-spaces. In the adic world, this version is actually +useful for constructing 6-functor formalisms in the sense of Definition 2.3.10 because it is easier to +construct compactifications in the category C′ (see [Hub96, §5.1]). +17Strictly speaking, we should first require that any Zariski-closed immersion is cohomologically proper in the +sense of Definition 2.3.2. And then it makes sense to require that any proper morphism is cohomologically proper in +the sense of Definition 2.3.4. + +POINCAR´E DUALITY REVISITED +23 +Remark 2.3.12. If D is a 6-functor formalism, all the functors f ∗, f∗, f !, f!, ⊗, Hom are exact in +the sense [HA, Prop. 1.1.4.1] (i.e., commute with finite limits and colimits). Indeed, all of them are +either left or right adjoints, so they commute with all colimit or limits respectively. But then [HA, +Prop. 1.1.4.1] implies they must be exact. +Remark 2.3.13. For the most part of the paper, we do not need to assume that D(X) are stable +∞-categories. However, we lack any examples of non-stable 6-functor formalisms, so we prefer to +put stability of D(X) into the definition. In the unstable case, the upper shriek functor i! usually +does not exist even for a Zariski-closed immersion i. +Remark 2.3.14. We recall that any stable ∞-category is canonically enriched over Sp the ∞- +category of spectra (see [GH15, Ex. 7.4.14 and Prop. 4.8.2.18]). In particular, for a 6-functor for- +malism D, D(X) is naturally enriched over Sp for every X ∈ C. +Notation 2.3.15. (Different Homs) For any two objects F, G ∈ D(X), we denote their inner Hom +by HomX(F, G) ∈ D(X), their Hom-spectrum by HomX(F, G) ∈ Sp, and the Hom-group in the +associated triangulated category D(X) by HomD(X)(F, G). The relation between these objects is +the following: +HomX (1X, HomX (F, G)) ≃ HomX (F, G) +H0 (HomX (F, G)) = HomD(X)(F, G). +We first show that, for a 6-functor formalism, the notion of a cohomologically smooth morphism +(see Definition 2.3.7) is sufficiently local: +Lemma 2.3.16. Let D be a 6-functor formalism. Then +(1) the notion of cohomologically smooth morphism is analytically (resp. Zariski) local on X +and Y ; +(2) ´etale morphisms are cohomologically smooth. +Proof. The first claim is formal from analytic (resp. Zariski) descent and Lemma 2.3.5(2). For +the second claim, it suffices to show that ´etale morphisms are weakly cohomologically smooth +since ´etale morphisms are closed under pullbacks. Now weak cohomological smoothness follows the +assumption from Lemma 2.3.5(1). +□ +3. Abstract Poincar´e Duality +The main goal of this section is to give a “formal” proof of (a weak version of) Poincar´e Duality +in any 6-functor formalism. +We recall that the usual proof of Poincar´e Duality in ´etale cohomology is inductive and does +not really tell the exact input one has to check to get Poincar´e Duality for one particular smooth +morphism f. We abstract out this condition. Surprisingly, it turns out that one needs a very +limited amount of extra data. We give such a characterization in terms of the trace-cycle theories +(see Definition 3.2.4). 
It roughly says that, in order to prove Poincar´e Duality, one only needs +to construct a trace morphism for f and a cycle map of the relative diagonal with some natural +compatibilities. +After that, we give a minimalistic set of hypothesis that ensures that any smooth morphism is +cohomologically smooth. This step reduces the question of proving Poincar´e Duality to the question +of computing the dualizing object. This question is studied in more detail in the next two sections. + +24 +BOGDAN ZAVYALOV +For the rest of the section, we fix a locally noetherian analytic adic space S (resp. a scheme S). +We denote by C the category of locally finite type (resp. locally finitely presented) adic S-spaces +(resp. S-schemes), and fix a weak 6-functor formalism D: Corr(C) → Cat∞ (see Definition 2.3.10). +In what follows, we will freely use the terminology of Section 2. In particular, for each X ∈ C, +we denote the associated stable ∞-category by D(X) and its triangulated homotopy category by +D(X). +3.1. Formal Poincar´e Duality. In this section, we use the 2-category of cohomological correspon- +dences CS to reduce the question of proving Poincar´e Duality to the question of constructing an +adjoint to 1-morphism in the 2-category of cohomological correspondences CS (see Definition 2.2.3). +We start by considering the (co-)representable 2-functor +hS = HomCS(S, −): CS → Cat1 +that is a 2-functor from the 2-category of cohomological correspondences to the 2-category of +categories (see [JY21, §8.2] for the (dual) theory of representable functors in the 2-categorical +context). +It turns out that hS is quite easy to describe explicitly. For this, it will be convenient to introduce +the notion of a Fourier-Mukai functor: +Definition 3.1.1. Let X1, X2 be objects in C, and F ∈ D(X1 ×S X2). Then the Fourier-Mukai +functor +FMF : D(X1) −→ D(X2) +is defined by the rule +G �→ p2,! (p∗ +1F ⊗ G) , +where pi : X1 ×S X2 → Xi is the natural projection. +Remark 3.1.2. Explicitly, the functor hS is quite easy to describe: +(1) to every object X ∈ CS, it associates the category +hS(X) = D(X); +(2) to every pair of objects X, Y ∈ CS, it associates the functor +FM(−) : D (X ×S Y ) → FunCat1 (D(X), D(Y )) +F �→ FMF. +It is also possible to describe the identity and composition constraints in terms of the projection +formula and proper base-change. We do not do this here because we will never explicitly need it. +We also recall the definition of adjoint morphisms in a 2-category. For this, we fix a 2-category +C′, objects C and D of C′, and a pair f : C → D, g: D → C of 1-morphisms in C′. +Definition 3.1.3. ([Lur22, Tag 02CG]) An adjunction between f and g is a pair of 2-morphisms +(η, ǫ), where η: idC → g ◦ f is a morphism in the category HomC′(C, C) and ǫ: f ◦ g → idD is a +morphism in the category HomC′(D, D), which satisfy the following compatibility conditions: +(Z1) The composition +f +ρ−1 +f +−−→ +∼ +f ◦ idC +idf ◦η +−−−→ f ◦ (g ◦ f) +αf,g,f +−−−→ +∼ +(f ◦ g) ◦ f +ǫ◦idg +−−−→ idD ◦ f +λf +−→ +∼ f +is the identity 2-morphism from f to f. Here λf and ρf are the left and right unit constraints +of the 2-category C′ (see [Lur22, Tag 00EW]) and αf,g,f is the associativity constraint for +the 2-category C′. + +POINCAR´E DUALITY REVISITED +25 +(Z2) The composition +g +λ−1 +g +−−→ +∼ +idC ◦ g +η◦idg +−−−→ (g ◦ f) ◦ g +α−1 +g,f,g +−−−→ +∼ +g ◦ (f ◦ g) +idg◦ǫ +−−−→ g ◦ idD +ρg +−→ +∼ g +is the identity 2-morphism from g to g. +Remark 3.1.4. If C′ = Cat1 is the 2-category of (small) categories. 
Definition 3.1.3 recovers the +usual notion of adjunction of functors. +Remark 3.1.5. ([Lur22, Tag 02CM]) Let F : C′ → C′′ be a 2-functor between 2-categories, and +(f, g) is a pair of adjoint morphisms in C′. Then (F(c), F(g)) is a pair of adjoint morphisms in C′′. +Proposition 3.1.6. (Formal Poincar´e Duality. I) Let f : X → S be a morphism in C. Suppose that +the 1-morphism A = 1X ∈ HomCS(X, S) is left adjoint to a 1-morphism B = I ∈ HomCS(S, X). +Then the functor +f!(−): D(X) −→ D(S) +admits a right adjoint given by the formula +f ∗(−) ⊗ I : D(S) −→ D(X). +Proof. First of all, it suffices to check that two functors are adjoint by passing to the corresponding +homotopy categories by (see [Lur22, Tag 02FX]), so we can argue with the associated homotopy +catetories. +We consider the (co)-representable 2-functor hS : CS → Cat1. +Remark 3.1.5 guarantees that +� +hS(A), hS(B) +� +is a pair of adjoint functors between the categories hS(X) and hS(S). Then Re- +mark 3.1.2 provides us with the identifications hS(X) ≃ D(X), hS(S) ≃ D(S), hS(A) = f!(−) and +hS(B) = f ∗(−) ⊗ I. In particular, we conclude that f! is left adjoint to f ∗(−) ⊗ I. +□ +3.2. Trace-cycle theories. In this section, we “decategorify” Poincar´e Duality and reduce it +to constructing two morphisms subject to two commutativity relations. The main tool for this +decategorification process will be the 2-category of cohomological correspondences CS. +We recall that throughout this section we have fixed a weak 6-functor formalism D: Corr(C) → +Cat∞. +Definition 3.2.1. Let f : X → Y be a morphism in C. A trace theory on f is a pair (ωf, trf) of +an invertible object ωf ∈ D(X) and a morphism +trf : f! (ωf) → 1Y +in the homotopy category D(Y ). +Construction 3.2.2. We point out that proper base-change implies that any base change of a +morphism with a trace theory (ωf, trf) admits a canonical trace theory given by (g′∗ ωf, g∗(trf)). +More precisely, let +X′ +X +Y ′ +Y +f′ +g′ +f +g +be a Cartesian diagram in C. Then proper base-change tells us that the natural morphism +g∗f! ωf +∼ +−→ f ′ +! (g′)∗ ωf + +26 +BOGDAN ZAVYALOV +is an isomorphism. Therefore, the pullback g∗(trf) defines a trace map +trf′ := g∗ (trf) : f ′ +! +� +g′∗ωf +� +→ 1Y ′. +Warning 3.2.3. The construction of trf′ depends on the choice of g: Y ′ → Y . However, this will +never cause any confusion in the examples where we apply this construction. +For the next definition, we fix a morphism f : X → Y with the diagonal morphism +∆: X → X ×Y X +and the projections p1, p2 : X ×Y X → X. +Definition 3.2.4. A trace-cycle theory on f is a triple (ωf, trf, cl∆) of +(1) an invertible object ωf ∈ D(X), +(2) a trace morphism +trf : f! ωf → 1Y +in the homotopy category D(Y ), +(3) a cycle map +cl∆ : ∆!1X −→ p∗ +2 ωf +in the homotopy category D(X ×S X) +such that +1X +p1,! (∆!1X) +1X +p1,! (p∗ +2 ωf) , +∼ +id +p1,!(cl∆) +trp1 +(7) +ωf +p2,! (p∗ +1ωf ⊗ ∆!1X) +p2,!(p∗ +1ωf ⊗ p∗ +2ωf) +ωf +1X ⊗ ωf +p2,!p∗ +1ωf ⊗ ωf, +∼ +id +p2,!(id⊗cl∆) +≀ +∼ +trp2 ⊗id +(8) +commute in D(X) (with the right vertical arrow in the second diagram being the projection formula +isomorphism). +Remark 3.2.5. The name trace-cycle theory comes from the fact that, in the case of the ´etale +6-functor formalism, the morphism cl∆ is equivalent to a class in H2d +∆ (X ×Y X, Z/nZ(d)), which +comes from the cycle class of the diagonal. +Remark 3.2.6. Commutativity of the first diagram in Definition 3.2.4 should be thought as a +formal way of saying that trace of the cycle class of a point is “universally” equal to 1. 
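Since the two commutative diagrams (7) and (8) of Definition 3.2.4 are displayed graphically in the original, we record a tikz-cd sketch of how we read them, in the notation fixed above; this is only a re-drawing and assumes the tikz-cd package.
% Diagram (7): the trace of the cycle class of the diagonal is the identity.
\[
\begin{tikzcd}
\mathbf{1}_X \arrow[r, "\sim"] \arrow[d, "\mathrm{id}"'] & p_{1,!}\bigl(\Delta_{!}\mathbf{1}_X\bigr) \arrow[d, "p_{1,!}(\mathrm{cl}_{\Delta})"] \\
\mathbf{1}_X & p_{1,!}\bigl(p_{2}^{*}\omega_f\bigr) \arrow[l, "\mathrm{tr}_{p_1}"]
\end{tikzcd}
\]
% Diagram (8): the analogous compatibility after tensoring with \omega_f; the
% right vertical arrow is the projection formula isomorphism.
\[
\begin{tikzcd}[column sep=large]
\omega_f \arrow[r, "\sim"] \arrow[d, "\mathrm{id}"'] & p_{2,!}\bigl(p_{1}^{*}\omega_f \otimes \Delta_{!}\mathbf{1}_X\bigr) \arrow[r, "p_{2,!}(\mathrm{id}\otimes\mathrm{cl}_{\Delta})"] & p_{2,!}\bigl(p_{1}^{*}\omega_f \otimes p_{2}^{*}\omega_f\bigr) \arrow[d, "\wr"] \\
\omega_f & \mathbf{1}_X \otimes \omega_f \arrow[l, "\sim"] & p_{2,!}p_{1}^{*}\omega_f \otimes \omega_f \arrow[l, "\mathrm{tr}_{p_2}\otimes\mathrm{id}"]
\end{tikzcd}
\]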
+Remark 3.2.7. Similarly to Constrution 3.2.2, one can pullback trace-cycle theories along any +morphism Y ′ → Y in C. +Now we are ready to show the main result of this section: +Theorem 3.2.8. (Formal Poincar´e Duality II) Let f : X → S be a morphism in C. Suppose that +f admits a trace-cycle theory (ωf, trf, cl∆). Then +f! (−): D(X) → D(S) +admits a right adjoint given by the formula +f ∗(−) ⊗ ωf : D(S) → D(X). + +POINCAR´E DUALITY REVISITED +27 +Proof. By Proposition 3.1.6, it suffices to verify that A = 1X ∈ HomCS(X, S) is left adjoint to +B = ωf ∈ HomCS(S, X) in the 2-category of cohomological correspondences CS. +Step 1. +Construction of the counit ǫ: A ◦ B → idS. +By definition, the composition A ◦ B +corresponds to +f! (ωf) ∈ D(S) = HomCS(S, S). +We also note the the identity morphism idS is given by 1S since S ×S S = S. We define the counit +2-morphism +ǫ: f! (ωf) → 1S +to be the trace morphism trf. +Step 2. Construction of the unit η: idX → B◦A. By definition, the composition B◦A corresponds +to the object p∗ +2 (ωf) ∈ D(X ×S X), and the identity 1-morphism idX corresponds to the object +∆!1X. Thus we define the unit 2-morphism +η: ∆!1X → p∗ +2 (ωf) +to be the cycle morphism cl∆. +Step 3. Verification of the axiom (Z1). One needs to check that the composition +A +ρ−1 +A +−−→ +∼ +A ◦ idX +idA◦η +−−−→ A ◦ (B ◦ A) +αA,B,A +−−−−→ +∼ +(A ◦ B) ◦ A +ǫ◦idB +−−−→ idS ◦ A +λA +−−→ +∼ +A +is equal to the identity morphism. After unravelling the definitions, this verification essentially +boils down to the definition of a trace-cycle theory. We explain this verification in more detail for +the convenience of the reader. +We make the diagram explicit: +(1) First, we see that A ◦ idX is equal to the +A ◦ idX = p1,! (p∗ +21X ⊗ ∆!1X) = p1,! ∆! 1X ∈ D(X). +The right unit constraint ρ−1 +A is identified with the natural isomorphism +1X +∼ +−→ p1,!∆!1X +coming from the fact that p1 ◦ ∆ = idX; +(2) the composition A ◦ (B ◦ A) is the object +A ◦ (B ◦ A) = p1,! (p∗ +2 ωf) ∈ D(X) +and the morphism idX ◦ η is given by p1,!(cl∆); +(3) the composition (A ◦ B) ◦ A is given by f ∗f! ωf and the associativity constraint αA,B,A is +the inverse of the base change isomorphism +f ∗f! ωf → p1,! (p∗ +2ωf) ; +(4) idS ◦ A is just equal to 1X since the diagonal S → S ×S S is the identity morphism. And +the composition ǫ ◦ idA is equal to +f ∗(trf): f ∗(f!ωf) → 1X; +(5) finally, the left unit constraint λA is the identity morphism because the diagonal S → S×SS +is the identity morphism. + +28 +BOGDAN ZAVYALOV +After making all these identifications, we see that the composition αA,β,A ◦ (idA ◦ η) is equal +to trp1 by the very definition of trp1. Therefore, the axiom (Z1) boils down to checking that the +diagram +1X +p1,! (∆!1X) +1X +p1,! (p∗ +2ωf) +∼ +id +p1,!(cl∆) +trp1 +commutes. We finish the proof by noting that this is part of the definition of a trace-cycle theory. +Step 4. Verification of the axiom (Z2). The verification is essentially the same as the one in +Step 3. After unravelling all the definitions, the axiom boils down to the commutativity of the +second diagram in Definition 3.2.4. +□ +Corollary 3.2.9. Let f : X → S be as in Theorem 3.2.8, and S′ → S is a morphism in C, and +f ′ : X′ → S′ is the base change of f along g. Then the functor +f ′ +!(−): D(X′) → D(S′) +admits a right adjoint given by the formula +(f ′)∗(−) ⊗ (g′)∗ (ωf) : D(S′) → D(X′), +where g′ : X′ → X is the base-change morphism. +Proof. By Remark 3.2.7, we can pullback the trace-cycle theory on f to a trace cycle theory on f ′. 
+Then we denote by C′ the slice category C/S′ and restrict the 6-functor formalism D on Corr(C′) to +apply Theorem 3.2.8 to f ′. +□ +Remark 3.2.10. We note that Corollary 3.2.9 is already a quite non-trivial statement. It is not +clear why duality for f should imply duality for f ′ from first principles. +3.3. Cohomological smoothness. The main goal of this section is to show how Theorem 3.2.8 +can be used to formulate a pretty minimalistic set of assumptions that ensures that any smooth +morphism is cohomologically smooth (see Definition 2.3.6). This statement should be thought like +a version of Poincar´e Duality without identifying the dualizing object. +We recall that throughout this section we have fixed a weak 6-functor formalism D: Corr(C) → +Cat∞. +Theorem 3.3.1. Let f : X → Y be a morphism in C with a trace-cycle theory (ωf, trf, cl∆). Then +f is cohomologically smooth (see Definition 2.3.7). +Proof. This follows directly from Theorem 3.2.8 and Corollary 3.2.9. +□ +Remark 3.3.2. It is not hard to see that f : X → Y is cohomologically smooth if and only if f +admits a trace-cycle theory. Indeed, we put ωf := f !1Y , and trf : f! ωf → 1Y to be the counit of +the (f!, f !)-adjunction. Then we note that Definition 2.3.6 implies that +1X ≃ ∆!p! +11X ≃ ∆!p∗ +2 ωf. +Therefore, we define the cycle morphism cl∆ : ∆!1X → p∗ +2ωf to be counit the (∆!, ∆!)-adjunction. +We leave it to the reader to verify that the triple (ωf, trf, cl∆) satisfies the assumptions of Defini- +tion 3.2.4. + +POINCAR´E DUALITY REVISITED +29 +Theorem 3.3.3. Suppose that D is a 6-functor formalism (see Definition 4.2.9). Then the relative +projective line g: P1 +S → S admits a trace-cycle theory (ωg, trg, cl∆) if and only if every smooth +morphism f : X → Y is cohomologically smooth (with respect to D). +Proof. The “if” part follows directly from Remark 3.3.2. So we prove the “only if” part. +By Lemma 2.3.16(1), we can argue analytically locally on X and Y . Therefore, [Zav23, Lemma +5.8] implies that we may assume that X is ´etale over the relative disk Dd +Y (resp. affine space Ad +Y ). +Now Lemma 2.3.16(2) and Remark 2.3.8 ensure that it suffices to show that the natural projection +Dd +Y → Y (resp. Ad +Y → Y ) is cohomologically smooth. Then we use Remark 2.3.8 once again +to reduce the question further to the case of the one-dimensional relative disk D1 +Y → Y (resp. +A1 +Y → Y ). In this case, it suffices to show it for the relative projective line P1 +Y → Y compactifying +the relative disk (resp. affine line). In this case, the result follows Theorem 3.3.1. +□ +4. Dualizing object +Theorem 3.3.3 gives a minimalistic condition that implies Poincar´e Duality up to computing the +dualizing object ωf. Thus the question of proving the full version of Poincar´e Duality reduces to +computing the dualizing object. +In this section, we show that (under a relatively mild assumption) there is always a “formula” +for the dualizing object f !1Y in terms of the relative tangent bundle Tf. The formula says that ωf +is equal to 0∗ +Xg!1X, where g: VX(Tf) → X is the total space of the relative tangent bundle and 0X +is the zero section. In particular, it implies that, for the purpose of computing f !1Y , it suffices to +assume that f is the total space of a vector bundle and make the computating in a “neighborhood” +of the zero section. In the next section, we will use this to show that, in the presence of first Chern +classes, one can fully trivialize f !1Y (up to the appropriate Tate twists). 
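For ease of reference, the formula promised in the previous paragraph can be displayed as follows; this is only a restatement of Theorems 4.2.8 and 4.2.12 below in the notation C_X(−) of Variant 4.1.3 (LaTeX sketch, assuming amsmath):

\[
  \omega_f \;=\; f^{!}1_Y \;\simeq\; 0_X^{*}\,g^{!}1_X \;=\; C_X(\mathcal{T}_f),
  \qquad g\colon \mathbb{V}_X(\mathcal{T}_f)\to X \text{ the total space of } \mathcal{T}_f,\ 
  0_X \text{ its zero section.}
\]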
+We prove the desired formula in two steps: we first use Verdier’s diagonal trick to reduce the +question of computing ωf for a general smooth morphism to the question of computing s∗ωf for +a smooth morphism f with a section s. Then we use a version of the deformation to the normal +cone to reduce further to the case, where f is the total space of the (normal) vector bundle. +The methods of this section are essentially independent of Section 3. Therefore, we always put +into our assumptions that any smooth morphism in C is cohomologically smooth with respect to D +(see Definition 2.3.6). Theorem 3.3.3 shows that this is equivalent to the existence of a trace-cycle +theory on the relative projective line. +Throughout this section, we fix a locally noetherian analytic adic space S (resp. a scheme S). +We denote by C the category of locally finite type (resp. locally finitely presented) adic S-spaces +(resp. S-schemes), and fix a 6-functor formalism D: Corr(C) → Cat∞. +4.1. Verdier’s diagonal trick. We start the discussion by reviewing a version of Verdier’s diagonal +trick. +Proposition 4.1.1. Let f : X → Y be a cohomologically smooth morphism in C, ∆: X → X ×Y X +the relative diagonal, and p: X ×Y X → X is the projection onto the first factor. Then there is a +canonical isomorphism +∆∗p!1X ≃ f !1Y . + +30 +BOGDAN ZAVYALOV +Proof. We consider the commutative diagram +X +X ×Y X +X +X +Y. +∆ +id +id +q +p +f +f +Then we have a sequence of isomorphisms: +f !1Y ≃ ∆∗q∗f !1Y +≃ ∆∗p!f ∗1Y +≃ ∆∗p!1X. +The first isomorphism follows from the equality q ◦ ∆ = id. The second isomorphism follows from +the base change condition in the definition of cohomological smoothness. The third isomorphism +is trivial. +□ +We note that Proposition 4.1.1 allows us to reduce the question of computing f ! for a general +smooth morphism f to the question of computing s∗f !1X in the case f has a section s. For our +later convenience, we axiomize this construction. We recall that Pic(D(Y )) denotes the group of +the isomorphism classes of invertible objects in D(Y ). +Construction 4.1.2. Let f : X → Y be a cohomologically smooth morphism in C with a section +s. Then we denote by C(f, s) ∈ Pic(D(Y )) the object +C(f, s) := s∗f !1Y . +By definition of cohomological smoothness, the formation of C(f, s) commutes with an arbitrary +base change Y ′ → Y . +For the rest of this section, we assume that all smooth morphisms in C are cohomologically +smooth with respect to D. +Variant 4.1.3. Let f : VX(E) → X be the total space of a vector bundle E on X with the zero +section s: X → VX(E). Then we define CX(E) ∈ Pic(D(X)) as +CX(E) = C(f, s) ∈ D(X). +Remark 4.1.4. Using this notation, Proposition 4.1.1 tells us that, for a smooth morphism f : X → +Y , we have a canonical isomorphism f !1Y ≃ C(p, ∆). Our goal is to relate C(p, ∆) to CX(Tf), +where Tf is the total space of the relative tangent bundle. This will be done in the next section +using (a version) of the deformation to the normal cone. +In the rest of this section, we would like to show that CY (−) defines an additive morphism from +K0(Vect(Y )) to Pic(D(Y )), where K0(Vect(Y )) is the Grothendieck group of vector bundle on Y . +This will not play any role in this paper, but it seems to be of independent interest as it defines an +interesting invariant of a 6-functor formalism. + +POINCAR´E DUALITY REVISITED +31 +Lemma 4.1.5. Assume that all smooth morphisms in C are cohomologically smooth with respect +to D, and let X be an object of C. 
Then the construction CX(E) defines an additive homomorphism +CX : K0(Vect(X)) → Pic(D(X)). +Proof. The only thing that we need to show is that, for any short exact sequence of vector bundle +0 → E′ +i−→ E π−→ E′′ → 0, +there is an isomorphism +CX(E) ≃ CX(E′) ⊗ CX(E′′). +For this, we denote the structure morphism of VX(E) by f and the zero section by 0E, similarly for +f ′, f ′′ and 0E′ and 0E′′ Now we consider the commutative diagram +VX(E′) +VX(E) +X +VX(E′′) +X. +i +f′ +π +f +id +0E′ +0E′′ +0E +f′′ +(9) +Now the result follows from the following sequence of isomorphisms: +CX(E) = 0∗ +E +� +f !1X +� +≃ 0∗ +E +� +π!(f ′′)!1X +� +≃ 0∗ +Eπ∗ � +(f ′′)!1X +� +⊗ 0∗ +E +� +π!1V(E′′) +� +≃ 0∗ +E′′ +� +(f ′′)!1X +� +⊗ 0∗ +E′ +� +i∗π!1V(E′′) +� +≃ 0∗ +E′′ +� +(f ′′)!1X +� +⊗ 0∗ +E′ +� +(f ′)!1X +� += CX(E′′) ⊗ CX(E′). +The first equality holds by definition. +The second isomorphism comes from the equality f = +f ′′ ◦ π. The third isomorphism comes from invertibility of (f ′′)!1X and Lemma 2.1.6. The fourth +isomorphism comes from the equalities π ◦ 0E = 0E′′ and 0E = i ◦ 0E′. The fifth isomorphism comes +from the fact that π is cohomologically smooth, and so formation of π!1 commutes with arbitrary +base change. And sixth equality holds by definition. +□ +4.2. Deformation to the normal cone. Our goal in this section is to fulfil the promise made +in Remark 4.1.4 and show that C(p, ∆) = CX(Tf). We are going to do this via deforming (or, +actually, specializing) to the normal cone. The idea of using deformation to the normal cone to +compute the dualizing object is due to Dustin Clausen. In particular, a version of this argument is +used in [CS22, Lecture XIII] to compute the dualizing object in the 6-functor formalism of liquid +sheaves on complex-analytic spaces. + +32 +BOGDAN ZAVYALOV +We give two slightly different arguments for the formula C(p, ∆) = CX(Tf) under two different +assumptions on the 6-functor formalism D. +4.2.1. Motivic 6-functor formalisms. In this subsection, we show that C(p, ∆) = CX(Tf) under +the assumption that D is A1-invariant in the strong sense: +Definition 4.2.1. Let C be the category of locally finite type (resp. locally finitely presented) adic +S-spaces (resp. S-schemes). A 6-functor formalism D: Corr(C) → Cat∞ is motivic if +(1) A1-invariant (see Definition 2.1.10), +(2) any smooth moprhism f in C is cohomologically smooth with respect to D. +The main idea of the proof is to deform a Zariski-closed immersion s: Y → X into the zero +section of its normal cone. The construction of the deformation to the normal cone uses blow-ups, +so we refer to [Zav23, Section 6] for the detailed discussion of the Proj and blow-up construction +in the adic world, and to [Zav23, Section 5] for the notion of an lci (Zariski-closed) immersion. In +the case of schemes, these notions are standard. +Construction 4.2.2. (Deformation to the normal cone) Let Z +i֒−→ X be an lci S-immersion. Then +the deformation to the normal cone DZ(X) is the S-space +DZ(X) := BlZ×S0S +� +X ×S A1 +S +� +− BlZ(X). +By definition, it admits a morphism π: DZ(X) → A1 +X. +Moreover, by functoriality, there is a +morphism +DZ(Z) = A1 +Z +�i−→ DZ(X) +making the diagram +A1 +Z +DZ(X) +A1 +X +�i +π +commute. +Remark 4.2.3. (Local construction) +(1) Suppose first that X = Spec A and Z = Spec A/I for a regular ideal I ⊂ A. Then [Ful98, +§5.1, end of p.51] implies that DZ(X) has a very concrete description as the spectrum of +the Rees algebra. More precisely18, +DZ(X) ≃ Spec +� +n∈Z +InT −n. 
+Moreover, under this isomorphism, the natural morphism π: DZ(X) → A1 +X is equal to the +morphism +Spec +� +n∈Z +InT −n → Spec A[T] +induces by the natural morphism A[T] → � +n∈Z InT −n. The fiber over 0X is isomorphic to +Spec � +n≤0 In/In+1, the total space of the normal bundle19. +18In the formula below, the convention is that In = A for n < 0. +19Here we use the lci assumption to make sure that I/I2 is projective and In/In+1 = Symn +A/II/I2. + +POINCAR´E DUALITY REVISITED +33 +(2) Now if Z ⊂ X is a general lci S-imersion of pure codimension c (either in the analytic +or algebraic world). +Then DZ(X) can be alternatively defined via gluing (and relative +analytification20) the local algebraic construction. +Remark 4.2.4. Similarly to the algebraic geometry (or by deducing using the local description in +Remark 4.2.3(1)), one sees that there is a commutative diagram +Gm,Z +A1 +Z +Z +Gm,X +DZ(X) +VZ(NZ/X) +Gm,X +A1 +X +X, +i×idGm,S +�i +0Z +≀ +π +0X +where 0X and 0Z are the corresponding zero sections. +Now we apply this construction in one particular example when f : X → Y is a smooth morphism, +and i = s: Y → X is a Zariski-closed immersion that is a section of f (it is automatically an lci +immersion by [Zav23, Cor. 5.10]). In this case, we slightly change our notation as follows: +Notation 4.2.5. In the situation as above, we denote DZ(X) by � +X. It fits into the following +commutative diagram +Gm,X +� +X +VY (Ns) +Gm,Y +A1 +Y +Y, +f×Gm +�f +f0 +s×Gm +�s +0Y +s0 +(10) +where �f : � +X → A1 +Y is the composition � +X → A1 +X → A1 +Y , �s is the morphism previously denoted by +�i, and s0 is the zero section of the total space of the normal cone of Y inside X. Remark 4.2.4 +implies that �f is smooth in this case. +Proposition 4.2.6. Suppose the 6-functor formalism D is motivic (see Definition 4.2.1). +Let +f : X → Y be a smooth morphism, s: Y → X a Zariski-closed section of f, and �f : � +X → A1 +Y and +�s: A1 +Y → � +X be as in Notation 4.2.5. Then the invertible object +C( �f, �s) = �s∗ �f !1A1 +Y ∈ Pic +� +D +� +A1 +Y +� � +lies in the essential image of the pullback functor Pic (D (Y )) → Pic +� +D +� +A1 +Y +�� +. +Proof. Step 1. Localize on Y and reduce to a simpler situation. We first note that Lemma 2.1.11 +ensures that the functor +g∗ : Pic (D (Y )) → Pic +� +D +� +A1 +Y +�� +is fully faithful for any Y ∈ C. Therefore, using the analytic (resp. Zariski) descent, we can check +that an object lies in the essential image of g∗ locally on Y . +20See [Hub93, Prop. 3.8]. + +34 +BOGDAN ZAVYALOV +We fix a point y ∈ Y , so [Zav23, Lemma 5.8] ensures that we can find an open s(y) ∈ U ⊂ X +such that f|U : U → Y factors as +U +r−→ Dd +Y → Y +such that r is ´etale and s(y) ∈ r−1(0Y ) = Y ∩ U. Now we replace U with f −1(s(Y ) ∩ U) ∩ U to get +an open U ⊂ X such that +(1) s(y) ∈ U; +(2) if V := f(U) ⊂ Y is the (open) image of U in Y , then s(V ) ⊂ U; +(3) the morphism f|U : U → V factors as the composition +U +r−→ Dd +V → V +such that r is ´etale and r−1(0V ) = s(V ). +Now we consider the square +U +XV +X +V +V +Y, +f|U +fV +f +s|V +id +s|V +s +where all horizontal arrows are open immersion, and the right square is Cartesian. Now we use +that +C(� +fV , � +sV ) = �s∗ +V �f ! +V 1 ∈ Pic(D(A1 +V )) +depends only on the open neighborhood of the section s(V ) to get a canonical identification +C( �f, �s)|A1 +V ≃ C(� +fV , � +sV ) ≃ C(� +f|U, � +s|V ) ∈ Pic +� +D +� +A1 +V +�� +. 
+In other words, since we are allowed to argue locally on Y , we may replace the pair (f, s) by the +pair (f|U, s|V ) to assume that f : X → Y factors as +X +r−→ Dd +Y +h−→ Y +with an ´etale r and s(Y ) = r−1(0Y ). +Step 2. Reduce further to the case of the relative affine space Dd +Y → Y with the zero section +s = 0Y . We consider the Cartesian square +Y +X +Y +Dd +Y . +s +id +r +0Y + +POINCAR´E DUALITY REVISITED +35 +Since the formation of the deformation of the normal cone commutes with ´etale base change (for +this, use [Zav23, Lemma 5.5, 5.7], and [Zav21c, Remark B.4.7]), we get a Cartesian square +A1 +Y +A1 +Y +� +X +� +Dd +Y +A1 +Y +A1 +Y . +�s +id +�0Y +�f +�h +id +Since the formation of C(−, −) commutes with arbitrary base change, we conclude that +C( �f, �s) ≃ C(�h,�0Y ) ∈ Pic(D(A1 +Y )). +Therefore, it suffices to show the claim for X = Dd +Y with f : Dd +Y → Y being the natural projection, +and s = 0Y the zero section. Using that the formation of C commutes with arbitrary base change, +we can reduce further to the case S = Y . +Step 3. The case of the natural projection f : X = Dd +S → S and the zero section 0S. Since +the question is local on S (see Step 1), we can assume that S = Spa(OS(S), O+ +S (S)) is a strongly +noetherian Tate affinoid. Denote the d-dimensional relative Tate algebra by +A = OS(S)⟨T1, . . . , Td⟩ +with the ideal I = (T1, . . . , Td) ⊂ A. In this case, Remark 4.2.3(1) tells us that � +Dd +S is isomorphic +to the relative analytification of the A-algebra +Rees(A) := +� +n∈Z +Int−n, +where In = A if n ≤ 0. Then, similarly to the situation in algebraic geometry, one checks that the +unique OS(S)-linear ring homomorphism +OS(S)⟨X1, . . . , Xd⟩[T] → +� +n∈Z +Int−n +sending Xi to Tit−1 and T to t is an isomorphism. Therefore, after passing to the relative analyti- +fication, we see that we have a canonical isomorphism +� +Dd +S ≃ Dd +S ×S A1 +S +such that the projection �f : � +Dd +S → A1 +S corresponds to the projection onto the second factor, and +the section �0S : A1 +S → � +Dd +S corresponds to the “zero”-section +idDd × 0S : A1 +S → Dd +S ×S A1 +S. + +36 +BOGDAN ZAVYALOV +In particular, there is a commutative square +A1 +S +S +� +Dd +S +A1 +S +A1 +S +S, +�0S +0S +�f +g +g +where each square is Cartesian. +Since the formation of C(f, s) commutes with arbitrary base +change, we conclude that +C +� +�f,�0S +� +≃ g∗C +� +g, 0S +� +. +This finishes the proof. +□ +Corollary 4.2.7. In the notation of Proposition 4.2.6, there is a canonical isomorphism +C(f, s) ≃ CY (Ns) ∈ D(Y ), +where Ns is the normal bundle of s(Y ) in X. +Proof. Consider the deformation to the normal cone construction: +Gm,X +� +X +VY (Ns) +Gm,Y +A1 +Y +Y, +f×Gm +�f +f0 +s×Gm +�s +0Y +s0 +Then we know that the the formation of C( �f, �s) commutes with arbitrary base change21. Therefore +we get isomorphisms +C( �f, �s)|0Y ≃ C(f0, 0Y ) = CY (Ns) ∈ D(Y ), +C( �f, �s)|1Y ≃ C(f, s). +Now we note that Proposition 4.2.6 comes as a pullback from D(Y ), so we get a canonical identi- +fication of the “fibers” +C(f, s) ≃ C( �f, �s)|1Y ≃ C( �f, �s)|0Y ≃ CS(Ns). +□ +Theorem 4.2.8. Suppose the 6-functor formalism D is motivic. +Let f : X → Y be a smooth +morphism. Then there is a canonical isomorphism +f !1Y ≃ CX(Tf) ∈ D(X), +where Tf is the relative tangent bundle of f and CX(Tf) is from Variant 4.1.3. +21This step implicitly uses that �f is a smooth morphism. This can either be seen from the proof of Proposition 4.2.6 +or from the local description in Remark 4.2.3 + +POINCAR´E DUALITY REVISITED +37 +Proof. 
Proposition 4.1.1 says that +f !1Y ≃ ∆∗p!1X = C(p, ∆), +where p: X ×Y X → X is the projection onto the first factor, and ∆: X → X ×Y X is the diagonal +morphism. Then [Zav21c, Lemma B.7.3] ensures that we can decompose ∆ as +X +i−→ U +j−→ X ×Y X, +where i is a Zariski-closed immersion, and j is an open immersion. Then we see that +C(p, ∆) = ∆∗p!1X ≃ i∗j∗p!1X ≃ i∗(p ◦ j)!1X = C(i, p ◦ j). +Now clearly i is a Zariski-closed section of a smooth morphism g := p ◦ j : U → X. So the result +follows directly from Corollary 4.2.7 and the observation that the normal bundle of the (relative) +diagonal is equal to the (relative) tangent bundle Tf. +□ +4.2.2. Geometric 6-functor formalisms. In this section, we perform the deformation to the normal +cone type argument under a different assumption on D. +Definition 4.2.9. A 6-functor formalism D: Corr(C) → Cat∞ is pre-geometric if, for every object +Y ∈ C and an invertible object L ∈ Pic(P1 +Y ), there is an isomorphism L|0Y ∼= L|1Y inside D(Y ). +A 6-functor formalism D: Corr(C) → Cat∞ is geometric if any smooth moprhism f in C is +cohomologically smooth with respect to D. +To adapt the proof of Theorem 4.2.8 to a geometric 6-functor formalism D, we need to introduce +the projective version of Construction 4.2.2 +Construction 4.2.10. (Projective deformation to the normal cone) Let Z +i֒−→ X be an lci S- +immersion. Then the projective deformation to the normal cone PDZ(X) is the S-space +PDZ(X) := BlZ×S0S +� +X ×S P1 +S +� +− BlZ(X). +By definition, it admits a morphism π: PDZ(X) → P1 +X. Moreover, by functoriality, there is a +morphism +PDZ(Z) = P1 +Z +�i−→ PDZ(X) +making the diagram +P1 +Z +PDZ(X) +P1 +X +�i +π +commute. +Similarly to Notation 4.2.5, we specialize Construction 4.2.10 to the case when f : X → Y is a +smooth morphism, and i = s: Y → X is a Zariski-closed immersion that is a section of f (it is +automatically an lci immersion by [Zav23, Cor. 5.10]). In this case, we slightly change our notation +as follows: + +38 +BOGDAN ZAVYALOV +Notation 4.2.11. In the situation as above, we denote PDZ(X) by � +X. It fits into the following +commutative diagram +A1 +X +� +X +VY (Ns) +A1 +Y +P1 +Y +Y, +f×A1 +S +�f +f0 +s×A1 +S +j +�s +0Y +s0 +(11) +where j is the open complement to the zero section 0Y : Y → P1 +Y . +Theorem 4.2.12. Suppose the 6-functor formalism D is geometric. Let f : X → Y be a smooth +morphism. Then there is an isomorphism +f !1Y ∼= CX(Tf) ∈ D(X), +where Tf is the relative tangent bundle of f and CX(Tf) is from Variant 4.1.3. +Proof. The same proof as in Theorem 4.2.8 reduces the question to proving that C(f, s) ≃ CY (Ns) +for a smooth morphism f : X → Y with a Zariski-closed section s and a geometric 6-functor +formalism D. Then we use the projective deformation to the normal cone +A1 +X +� +X +VY (Ns) +A1 +Y +P1 +Y +Y +f×A1 +S +�f +f0 +s×A1 +S +j +�s +0Y +s0 +and the fact that, for an invertible object C( �f, �s) ∈ D(P1 +Y ), the fibers over 1Y and 0Y are isomorphic +to conclude that there is a sequence of isomorphisms +C(f, s) ≃ C( �f, �s)|1Y ∼= C( �f, �s)|0Y ≃ C(f0, 0Y ) = CY (Ns) ∈ D(Y S). +□ +Remark 4.2.13. In practice, the isomorphism L|1Y ≃ L|0Y in Definition 4.2.9, can be always +achieved to be “canonical”. This would make the isomorphism in Theorem 4.2.12 also canonical. In +particular, this should apply to the potential crystalline or prismatic 6-functor formalisms. However, +it seems annoying to explicitly spell out what this ”canonicity” should mean in an abstract 6-functor +formalism, so we do not discuss it here. 
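To summarize, both the motivic and the geometric arguments run through the same specialization chain, which merely collects isomorphisms already appearing in Corollary 4.2.7 and in the proof of Theorem 4.2.12 (LaTeX sketch, assuming amsmath; all symbols are those of Notations 4.2.5 and 4.2.11):

\[
  C(f,s) \;\simeq\; C(\widetilde{f},\widetilde{s})\big|_{1_Y}
         \;\cong\;  C(\widetilde{f},\widetilde{s})\big|_{0_Y}
         \;\simeq\; C(f_0, 0_Y) \;=\; C_Y(\mathcal{N}_s).
\]

The outer identifications come from the fact that the formation of C(−, −) commutes with base change, while the middle comparison of fibers is supplied by Proposition 4.2.6 (via A1-invariance, in the motivic case) or by the defining property of a pre-geometric formalism applied to the invertible object C( �f, �s) ∈ Pic(D(P1_Y)) (in the geometric case).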
5. First Chern classes

We note that Theorem 3.3.3 and Theorem 4.2.8 (or Theorem 4.2.12) together already imply a big part of Poincaré Duality. More precisely, Theorem 3.3.3 gives a minimalistic way to check that all smooth morphisms are cohomologically smooth with respect to a 6-functor formalism D, and Theorem 4.2.8 gives a "formula" for the dualizing object ωf = f!1Y.

However, in many cases, the dualizing object has a particularly nice description as a tensor power of the "Tate object" (e.g. the relative reduced cohomology of the projective line). This description is not automatic and does not hold for all (geometric) 6-functor formalisms (e.g. it fails for the (solid) quasi-coherent 6-functors). Therefore, this further trivialization requires a new argument.

In this section, we give different conditions that imply that a 6-functor formalism D automatically satisfies the strongest possible version of Poincaré Duality. The strategy is to use Chern classes both to construct the trace map for the relative projective line and to trivialize the dualizing object.

We get essentially the optimal result if D satisfies the excision axiom (see Definition 2.1.8); in this case, the existence of a theory of first Chern classes (see Definition 5.2.8) implies Poincaré Duality. After unravelling the definition, a theory of first Chern classes essentially boils down to a sufficiently functorial additive assignment of a first Chern class c1(L) to a line bundle L, subject to the constraint that it satisfies the projective bundle formula for the relative projective line.

For a general 6-functor formalism, the results are slightly less nice and we need to put more assumptions on D in order to get Poincaré Duality. We need to assume that D is either A1-invariant or pre-geometric (see Definition 4.2.9), that there is a strong theory of first Chern classes c1 (see Definition 5.2.8), and that there is a theory of cycle maps underlying c1. Even though the results are not as strong as in the excision case, these conditions seem not that hard to verify in practice.

For the rest of the section, we fix a locally noetherian analytic adic space S (resp. a scheme S). We denote by C the category of locally finite type (resp. locally finitely presented) adic S-spaces (resp. S-schemes), and fix a 6-functor formalism D: Corr(C) → Cat∞. We also fix an invertible object 1S⟨1⟩ ∈ D(S).

5.1. Notation. In this section, we fix some notation that we will freely use later. We recall that we fixed an invertible object 1S⟨1⟩ ∈ D(S) for the rest of this section.

Notation 5.1.1.
(1) (Tate objects) For an integer d ≥ 0, we define the Tate objects
1S⟨d⟩ := 1S⟨1⟩⊗d ∈ D(S).
Using that 1S⟨1⟩ is invertible, we extend this formula to negative integers d by setting
1S⟨d⟩ := (1S⟨−d⟩)∨ ∈ D(S).
(2) (Tate twists) In general, for a morphism f: X → S, an object F ∈ D(X), and an integer d, we define its Tate twist
F⟨d⟩ := F ⊗ f∗1S⟨d⟩ ∈ D(X).
In particular, the object 1X⟨d⟩ ∈ D(X) is defined to be f∗1S⟨d⟩.

5.2. Theory of first Chern classes. The main goal of this section is to define the notion of a theory of first Chern classes and verify some of its formal properties.

We start the section by giving a precise definition of a theory of first Chern classes. This will be convenient to do in the ∞-categorical setting to automatically keep track of all higher coherences.
+One nice feature of this definition, is that it allows us to define localized first Chern classes for free, +while in the 1-categorical approach, it seems to be extra data. +Recall that we have fixed a 6-functor formalism +D: Corr(C) → Cat∞ +with an invertible object 1S⟨1⟩ ∈ D(S). +Notation 5.2.1. We write Can for the site whose underlying category is the category C and whose +coverings are analytic open coverings (resp. Zariski open coverings). +We consider sheaf of abelian group +O× : Cop +an → ModZ +defined by X �→ O× +X(X). We can compose it with the natural morphism ModZ → D(Z), to get +the ∞-functor O× : Cop +an → D(Z). This functor is not a D(Z)-valued sheaf (in the sense [Lur18, +Def. 1.3.1.1]). +Notation 5.2.2. The sheafification of the D(Z)-valued functor O× is the functor +RΓan(−, O×): Cop +an → D(Z). +By [Cla21, L. 3, Cor. 11], the values of this functor on an object X ∈ C are canonically identified with +RΓan(X, O× +X) justifying the name. In what follows, we will usually consider the functor RΓan(−, O×) +as an Sp-valued sheaf by compositing with the natural “forgetful” functor D(Z) → D(Sp). +Notation 5.2.3. We also consider absolute cohomology as an Sp-valued functor +RΓ(−, 1⟨c⟩): Cop +an → Sp +that sends an object X ∈ C to RΓ(X, 1X⟨c⟩) = HomX(1X, 1X⟨c⟩). One easily checks that it is a +Sp-valued sheaf due to the fact that D satisfies analytic descent. +Definition 5.2.4. A weak theory of first Chern classes on a 6-functor formalism D is a morphism +c1 : RΓan(−, O×)[1] → RΓ(−, 1⟨1⟩) +of Sp-valued sheaves on Can. +This definition may seem a bit random at first. +However, it does have a strong connection +to is classically called a theory of (additive) first Chern classes. We will see in a moment that +this definition, in particular, assigns a cohomology class to each line bundle. Furthermore, this +assignment is sufficiently functorial so, in the presence of the excision axiom, it even allows us to +assign “localized” classes to a line bundle with a trivialization. It also encodes functoriality and +additivity of this classes. +In the following remark, we partially unravel the content of Definition 5.2.4. +Remark 5.2.5. +(1) (First Chern classes) By passing to H0, a weak theory of first Chern classes +gives a group homomorphism +H1 +an(X, O× +X) → H0(X, 1X⟨1⟩). + +POINCAR´E DUALITY REVISITED +41 +Recall that the group H1 +an(X, O× +X) classifies the isomorphism classes of line bundles on X, +so, for each isomorphism class of line bundles L, a weak theory of first Chern classes assigns +the first Chern class of L as an element +c1(L) ∈ H0(X, 1⟨1⟩) = HomD(X)(1X, 1X⟨1⟩). +For our purposes, it will be convenient to also consider this class as a homotopy class of +morphisms +c1(L): 1X → 1X⟨1⟩. +(2) (Additivity) Since c1 is a map of spectra, we see that localized first Chern classes are +additive. If L and L′ two isomorphism classes of line bundles on X, then +c1(L) + c1(L′) = c1(L ⊗ L′). +(3) (Base Change) The formation of c1(L) commutes with arbitrary base due to functoriality +of c1. More precisely, if Y → X is a morphism in C. Then we have an equality of classes +f ∗� +c1(L) +� += c1 +� +f ∗L +� +∈ HomD(Y )(1Z′, 1Y ⟨1⟩). +Now we show that if D satisfies the excision axiom (see Definition 2.1.8), then one can also define +the localized version of the usual first Chern classes: +Remark 5.2.6. +(1) (Localized first Chern classes) More generally, let Z +i֒−→ X be a Zariski- +closed subset with the complement U. 
Then the group +H0 +� +fib +� +RΓan(X, O× +X) → RΓan(U, O× +U) +� +[1] +� += H1 +Z(X, O× +X) +classifies22 isomorphism classes of pairs (L, φU) of a line bundle L and a trivialization +φ: OU → L|U on U. Therefore, for any such isomorphism class, a weak theory of first +Chern classes assigns the localized Chern class of (L, ϕU) as an element23 +c1(L, ϕU) ∈ H0 +� +fib (RΓ(X, 1X⟨1⟩) → RΓ(U, 1U⟨1⟩)) +� +≃ H0 +Z(X, 1X⟨1⟩) = HomD(X)(i∗1Z → 1X⟨1⟩). +Again, for our purposes, it will also be convenient to think about the localized first Chern +class as of a homotopy class of morphisms +c1(L, ϕU): i∗1Z → 1X⟨1⟩. +Non-localized first Chern classes can be recovered from this construction by taking Z = X. +(2) (Additivity) Since c1 is a map of spectra, we see that localized first Chern classes are +additive. If (L, ϕU) and (L′, ϕ′ +U) two isomorphism classes of line bundles with a trivialization +on U, then +c1(L, ϕU) + c1(L′, ϕ′ +U) = c1(L ⊗ L′, ϕU ⊗ ϕ′ +U). +22Even though this fact is well-known, it does not seem to be explicitly formulated in the literature. The interested +reader may adapt the argument used in [Ols15, 2.13] to this situation. +23Use the excision sequence from Remark 2.1.9 for the second isomorphism below. + +42 +BOGDAN ZAVYALOV +(3) (Base Change) The formation of c1(L, ϕU) commutes with arbitrary base due to functori- +ality of c1. More precisely, if +Z′ +Z +Y +X +i′ +f′ +i +f +is a Cartesian diagram in C. Then we have an equality of classes +f ∗� +c1(L, ϕU) +� += c1 +� +f ∗L, f ∗(ϕU) +� +∈ HomD(Y )(i′ +∗1Z′, 1Y ⟨1⟩). +In other words, the diagram +f ∗i∗1Z +f ∗(1X⟨1⟩) +i′ +∗1Z′ +1Y ⟨1⟩ +≀ +f∗� +c1(L,ϕU) +� +≀ +c1 +� +f∗L,f∗ϕU +� +commutes (up to homotopy), where the left vertical map is the base-change morphism. +(4) (Localization) Now we discuss another instance of functoriality of c1. Let i1 and i2 +Z1 +Z2 +X +i1 +i2 +be Zariski-closed immersions with open complements U1 and U2 respectively, and (L, ϕU1) +a pair of a line bundle and its trivialization on U1. Then the diagram +i2,∗1Z2 +i1,∗1Z1 +1X⟨1⟩ +c1(L,ϕU1|U2) +c1(L,ϕU1) +commutes (up to homotopy). +Construction 5.2.7. Suppose that f : X → Y is a morphism is C, and c: f ∗1Y = 1X → 1X⟨1⟩ is +a morphism in D(X). By the (f ∗, f∗)-adjunction, this uniquely defines a morphism +adjc: 1Y → f∗1X⟨1⟩. +Unless there is some possible confusion, we will denote the morphism adjc simply by c. Applying +the same construction to tensor powers of c, we get morphisms +ck : 1Y → f∗1X⟨k⟩. +We note that, for k = 0, we get simply the adjunction morphism that we denote by +f ∗ : 1Y → f∗1X. +Now we apply this construction to the projective bundle f : PY (E) → Y for some vector bundle +E on Y of rank d + 1 (see [Zav23, Def. 6.14]) and the first Chern class morhism of the universal line +bundle: +c1 = c1(O(1)): 1PY (E) → 1PY (E)⟨1⟩. + +POINCAR´E DUALITY REVISITED +43 +Then Construction 5.2.7 gives us a morphism +d +� +k=0 +ck +1⟨d − k⟩: +d +� +k=0 +1Y ⟨d − k⟩ → f∗1PY (E)⟨d⟩. +Definition 5.2.8. A theory of first Chern classes is a weak theory of first Chern classes c1 such +that, for the relative projective line f : P1 +S → S, the morphism +c1 + f ∗⟨1⟩: 1S ⊕ 1S⟨1⟩ → f∗1P1 +S⟨1⟩. +is an isomorphism. +A strong theory of first Chern classes is a weak theory of first Chern classes c1 such that, for any +integer d ≥ 1 and the relative projective space f : Pd +S → S, the morphism +d +� +k=0 +ck +1⟨d − k⟩: +d +� +k=0 +1S⟨d − k⟩ → f∗1Pd +S⟨d⟩. +is an isomorphism. +Remark 5.2.9. 
Definition 5.2.8 implies that, if c1 is a theory of first Chern classes, then +1S⟨−1⟩ ≃ Cone +� +1S → f∗1P1 +S +� +. +So the invertible object 1S⟨1⟩ is unique up to an isomorphism, and axiomitizes the “Tate twist”. +Lemma 5.2.10. (Projective Bundle Formula) Let c1 be a theory of strong first Chern classes, Y +an element of C, and f : PY (E) → Y a projective bundle for a vector bundle E of rank d + 1. Then +the morphism +d +� +k=0 +ck +1⟨d − k⟩: +d +� +k=0 +1Y ⟨d − k⟩ → f∗1PY (E)⟨d⟩. +is an isomorphism. If c1 is a theory first Chern classes, the same holds for vector bundles of rank 2. +Proof. Since D is an analytic sheaf, we can check that �d +i=0 ck +1⟨d−k⟩ is an isomorphism analytically +locally on Y . Therefore, we may and do assume that E is a trivial vector bundle of rank d. In +this case, the result follows from Definition 5.2.8, proper base change, and the fact that c1(O(1)) +commutes with base change along Y → S. +□ +Now we show that a strong theory of first Chern classes automatically implies that the braiding +morphism +s: 1S⟨1⟩⊗2 → 1S⟨1⟩⊗2 +is homotopic to the identity morphism. This will be used later to simplify the second diagram in +Definition 3.2.4 in the presence of a strong theory of first Chern classes. +Lemma 5.2.11. Let c1 be a theory of strong first Chern classes on a 6-functor formalism D. Then +the braiding morphism +s: 1S⟨1⟩⊗2 → 1S⟨1⟩⊗2 +is homotopic to the identity morphism. +Proof. Firstly, it suffices to prove the analogous claim for 1S⟨−1⟩. The key is that 1S⟨−1⟩ can be +realized as a direct summand of the “relative” cohomology of P2 +S . + +44 +BOGDAN ZAVYALOV +We first fix the relative projective space f : P2 +S → S. Now we note that f∗ is a right-adjoint to +a symmetric monoidal functor f ∗, so it is lax-monoidal. In particular, for every object F ∈ D(P2 +S) +with the braiding morphism sF : F⊗2 → F⊗2, we have a commutative diagram +(f∗F)⊗2 +f∗(F⊗2) +(f∗F)⊗2 +f∗(F⊗2). +sf∗F +∪ +f∗(sF) +∪ +(12) +in the homotopy category D(S). +Now we consider the (twisted) first Chern class morphism c1(O(1))⟨−1⟩: 1P2 +S⟨−1⟩ → 1P2 +S. Then +similarly to Construction 5.2.7, we get the morphism +adjc1 : 1S⟨−1⟩ → f∗1P2 +S. +The same construction applied to c1(O(1))⟨−1⟩⊗2 : 1P2 +S⟨−1⟩⊗2 → 1⊗2 +P2 +S produces the morphism +adjc2 +1 : 1S⟨−1⟩⊗2 → f∗(1⊗2 +P2 +S). +A formal diagram chase implies that the diagram +1S⟨−1⟩⊗2 +� +f∗1P2 +S +�⊗2 +f∗ +� +1⊗2 +P2 +S +� +adjc1⊗adjc1 +adjc2 +1 +∪ +commutes in D(S). +Definition 5.2.8 (with maps twisted by 1⟨−2⟩) implies that adjc2 +1 realizes +1S⟨−1⟩⊗2 as a direct summand of f∗ +� +1⊗2 +P2 +S +� +. Now we consider the commutative diagram +1S⟨−1⟩⊗2 +� +f∗1P2 +S +�⊗2 +f∗ +� +1⊗2 +P2 +S +� +1S⟨−1⟩⊗2 +� +f∗1P2 +S +�⊗2 +f∗ +� +1⊗2 +P2 +S +� +, +s +adjc1⊗adjc1 +adjc2 +1 +∪ +sf∗(1) +f∗(s1) +adjc1⊗adjc1 +adjc2 +1 +∪ +where s stand for the braiding morphisms. The left square commutes by the definition of a sym- +metric monoidal category, and the right square commutes due to Diagram (12). Since adjc2 +1 splits, +it suffices to show that f∗(s1) is equal to id. But this is clear since the braiding morphism of the +unit object is homotopic to the identity morphism. +□ +In the next couple of sections, we will show how a theory of first Chern classes can be used to +prove the full version of Poincar´e Duality. + +POINCAR´E DUALITY REVISITED +45 +5.3. Theory of cycle maps. The main goal of this section is to axiomitize a theory of cycle maps +(for divisors) on a 6-functor formalism D “compatible” with a weak theory of first Chern classes +c1 on D. 
Then we show that, if D satisfies the excision axiom, one can canonically construct such +a theory from any weak theory of first Chern classes. +5.3.1. Definitions. In this subsection, we explain the definition of a theory of cycle maps (for +divisors) and what it means for a theory of first Chern classes to underlie a theory of cycle maps. +As previously, we fix an invertible object 1S⟨1⟩ ∈ D(S) and always consider (weak) theories of +first Chern Classes with respect to this invertible object. +Definition 5.3.1. Let i: D ֒→ X be an effective Cartier divisor with the associated coherent ideal +sheaf I = ker(OX → i∗OD) ⊂ OX (see [Zav23, Def. 5.3]). The associated line bundle OX(D) := I∨ +is the dual of I, we denote its dual by OX(−D) (that is simply just a different name for I). +Definition 5.3.2. A theory of cycles maps (for effective Cartier divisors) cl• on a 6-functor for- +malism D: Corr(C) → Cat∞ is a collection of morphisms +cli : i∗1Y → 1X⟨1⟩ in the homotopy category D(X) +for each effective Cartier divisor i: Y → X such that they satisfy transversal base change, i.e., for +any Cartesian diagram +Y ′ +Y +X′ +X +g′ +i′ +i +g +such that the vertical arrows are effective Cartier divisors, the diagram +g∗i∗1Y +g∗(1X⟨1⟩) +i′ +∗1Y ′ +1X′⟨1⟩ +≀ +g∗(cli) +≀ +cli′ +commutes in D(X). +Definition 5.3.3. A weak theory of first Chern classes c1 underlies a theory of cycle maps cl• if, +for every effective Cartier divisor i: Y → X, the composition +1X → i∗1Y +cli +−→ 1X⟨1⟩ +is equal to c1(OX(Y )) in the homotopy category D(X). +For the next remark, we fix a weak theory of first Chern classes c1 underlying a theory of cycle +maps cl•. +Remark 5.3.4. Let f : X → Y be a morphism in C, and i: D ֒→ X an effective Cartier divisor. +We can apply Construction 5.2.7 to the composition morphism +1X +i∗1D +1X⟨1⟩ +c1(OX(D)) +cli +to get the morphism c: 1Y → f∗1X⟨1⟩. Then c has an alternative description as the composition +1Y −→ f∗i∗1D +f∗(cli) +−−−−→ f∗(1X)⟨1⟩. + +46 +BOGDAN ZAVYALOV +5.3.2. Constructing cycle maps. The main goal of this subsection is to show that, if D satisfies the +excision axiom, then any weak theory of first Chern classes c1 canonically underlies a theory of +cycle maps. +Warning 5.3.5. We do not know a way to extract a theory of cycle maps from a weak theory of +first Chern classes without the excision axiom. However, in practice, all 6-functor formalisms with +a (strong) theory of first Chern classes admit a compatible theory of cycle maps. Therefore, it may +be possible that there is a weaker assumption on D allowing the (canonically) construct cycle maps +from first Chern classes. +For the rest of this section, we fix a 6-functor formalism D satisfying the excision axiom and a +weak theory of first Chern classes c1. +To construct cycle clases, we note that an effective Cartier divisor D comes with the canonical +short exact sequence (see Definition 5.3.1): +0 → OX(−D) → OX → i∗OD → 0. +By passing to duals, we get a morphism OX → OX(D) that is an isomorphism over U := X \ D. +We denote its restriction on U by an isomorphism +ϕU : OU +≃ +−→ OX(D)|U. +Now, in the presence of the excision axiom, we can give the following definition: +Definition 5.3.6. A cycle map (relative to c1) of an effective divisor D ⊂ X is a homotopy class +of morphisms +cli : i∗1D → 1X⟨1⟩ +equal to c1(OX(D), ϕU) ∈ H0 +D(X, 1X⟨1⟩) = HomD(X)(i∗1D, 1X⟨1⟩) (see Remark 5.2.6(1). +Lemma 5.3.7. Let D be a 6-functor formalism satisfying the excision axiom, and c1 is a weak +theory of first Chern classes on D. 
Then the construction of cycle maps cl• from Definition 5.3.6 +defines a theory of cycle maps (see Definition 5.3.2) such that c1 underlies cl• (see Definition 5.3.3). +Proof. We need to check two things: cycle maps commute with transversal base change and, for +each effective Cartier divisor i: Y ֒→ X, the composition +1X → i∗1Y +cli +−→ 1X⟨1⟩ +is equal to c1(OX(Y )). +The first claim is automatic from Remark 5.2.6(3) and [Zav23, Lemma 5.7]. The second claim +is automatic from Remark 5.2.6(4) by taking Z1 = Y and Z2 = X. +□ +5.4. Cycle map of a point. In this section, we construct the (naive) cycle map of the (“zero”) +section on the relative projective space fd : Pd +S → S. We do not develop a robust theory of cycle +maps for all lci closed immersions of higher co-dimension, instead we give an ad hoc construction in +this particular case. The theory of higher dimensional cycle classes can be developed if D satisfies +the excision axiom (following the strategy of defining cycle classes in ´etale cohomology developed +in [Fuj02]), but we are not aware of a way of doing this for a general D so we do not discuss it in +this paper. The ad hoc construction mentioned above is enough for all purposes of this paper. +Before we go into details, we point out that this construction will be used both in establishing +Poincar´e Duality for A1-invariant or pre-geometric (see Definition 2.1.10 and Definition 4.2.9) 6- +functor formalisms with a strong theory of first Chern classes underlying a theory of cycle maps, + +POINCAR´E DUALITY REVISITED +47 +and in proving that a theory of first Chern classes is automatically a strong theory of first Chern +classes if D satisfies the excision axiom. +For the rest of this section, we fix a 6-functor formalism D with a theory of weak first Chern +classes c1 underlying a theory of cycle maps cl• (see Definition 5.3.3). +We fix a relative projective space fd : Pd +Y → Y with homogenenous coordinates X1, . . . , Xd+1 +and a set of d + 1-standard Y -hyperplanes +H1, . . . , Hd, Hd+1 ⊂ Pd +Y +given as the vanishing locus of the homogeneous coordinate Xi respectively. We note that the +intersection H1 ∩ H2 ∩ . . . Hd is canonically isomorphic to Y and the natural embedding +s: H1 ∩ H2 ∩ . . . Hd = Y → Pd +Y +defines the “zero” section of Pd +Y . We also denote by id : Hd → Pd +Y the natural immersion of Hd +into Pd +Y , and by s′ : Y → Hd the closed immersion of H1 ∩ H2 ∩ . . . Hd into Hd. In particular, we +have the following commutative diagram: +Y +Hd +Pd +Y . +s +s′ +id +Definition 5.4.1. (Naive Cycle map of the (“zero”) section) We define the naive cycle map of s +(relative to c1, cl•) to be the homotopy class of morphisms cls : s∗1Y → 1Pd +Y ⟨d⟩ inductively obtained +by the following rule: +(1) if d = 1, s is an effective Cartier divisors, so cls is the cycle map of the corresponding +effective Cartier divisor; +(2) if d > 1, we suppose that we defined cls for all d′ < d (so, in particular, it is defined for s′), +and define cli as the composition +s∗1Y ≃ id,∗s′ +∗1S +id,∗(cls′) +−−−−−→ id,∗1Hd⟨d − 1⟩ +id1⟨d−1⟩⊗clid +−−−−−−−−→ 1Pd +Y ⟨d⟩, +where cls′ is defined due to the induction hypothesis and clid is the cycle map of an effective +Cartier divisor. +Warning 5.4.2. The definition 5.4.1, a priori, depends on the choice of coordinates on Pd +Y . In +particular, it is not clear that the cycle map cli does not change if we permute coordinates on Pd +Y . +Lemma 5.4.3. 
Let c1 be a weak theory of first Chern classes on D underlying a theory of cycle +maps cl•, fd : Pd +Y → Y is the relative projective space, and the morphism +cls : s∗1Y → 1Pd +Y ⟨d⟩ +is the naive cycle map from Definition 5.4.1. Then the diagram +1Pd +Y +s∗1Y +1Pd +Y ⟨d⟩. +c1(OPd +Y /Y (1))⊗d +adjs +cls +commutes in D(Pd +Y ), where adjs is the canonical morphism coming from the (s∗, s∗)-adjunction. + +48 +BOGDAN ZAVYALOV +Proof. We argue by induction. If d = 1, the claim follows directly from Remark 5.3.4. +Now we suppose the claim is know for all d′ < d and wish to show it for d. +Note that, in +particular, the induction hypothesis applies to the morphism s′ : Y → Hd ≃ Pd−1 +Y +. In particular, +we conclude that the diagram +id,∗1Hd +id,∗s′ +∗1Y +id,∗1Hd⟨d − 1⟩. +id,∗ +� +c1(OHd/Y (1)) +⊗d−1 +� +id,∗(adjs′) +id,∗(cls′) +commutes in D(Pd +Y ). +Now note that OHd/Y (1) ≃ i∗ +dOPd +Y /Y (1) to conclude that the following +diagram commutes in D(Pd +Y ): +1Pd +Y +1Pd +Y ⟨d − 1⟩ +s∗1Y +id,∗1Hd +id,∗s′ +∗1Y +id,∗1Hd⟨d − 1⟩. +adjid +adjs +c1 +� +OPd +Y /Y (1) +�⊗d−1 +adjid +≀ +id,∗ +� +c1(OHd/Y (1)) +⊗d−1 +� +id,∗(adjs′) +id,∗(cls′) +(13) +By definition of a (weak) theory of first Chern classes underlying a theory of cycle maps (see +Definition 5.3.3), we also get a commutative diagram +1Pd +Y ⟨d − 1⟩ +id,∗1Hd⟨d − 1⟩ +1Pd +Y ⟨d⟩ +adjid +id1⟨d−1⟩⊗c1 +� +OPd +Y /Y (1) +� +id1⟨d−1⟩⊗clid +(14) +Therefore, we may combine Diagram (13) and Diagram (14) to conclude that the composition +1Pd +Y +adjs +−−→ s∗1Y +cls +−→ 1Pd +Y ⟨d⟩. +is equal (in the homotopy category D(Pd +Y )) to the following composition: +1Pd +Y +c1 +� +OPd +Y /Y (1) +�⊗d−1 +−−−−−−−−−−−−−→ 1Pd +Y ⟨d − 1⟩ +id1⟨d−1⟩⊗c1(OPd +Y /Y (1)) +−−−−−−−−−−−−−−−→ 1Pd +Y ⟨d⟩ +that is just equal to c1(OPd +Y /Y (1))⊗d. This finishes the proof. +□ + +POINCAR´E DUALITY REVISITED +49 +5.5. First Chern classes and excision. The main goal of this section is to show that, if D satisfies +the excision axiom, then any theory of first Chern classes on D is automatically a strong theory +of first Chern classes (see Definition 5.2.8). More precisely, we have to show that the projective +bundle formula for the P1 +S implies the projective bundle formula for all higher dimensional relative +projective spaces in the presence of the excision axiom. We show this by induction on d cutting +Pd +S into a closed subspace Pd−1 +S +and an open complement Ad +S. To deal with the open complement, +we use the naive cycle map of the zero section from Definition 5.4.1. +For the rest of the section, we fix a 6-functor formalism D satisfying the excision axiom, and a +theory of first Chern classes c1. We also fix an object Y ∈ C. +Setup 5.5.1. We denote by 0Y : Y → Ad +Y the zero section. This fits into the following commutative +diagram: +Y +Ad +Y +Pd +Y +Pd−1 +Y +≃ Hd+1 +Y, +0Y +s +id +g +j +fd +fd−1 +id+1 +where fd, fd+1, and g are the structure morphisms, j is the natural open immersion, and s is the +“zero” section from the discussion above Definition 5.4.1. +Definition 5.5.2. (Naive cycle map of the zero section) We define the naive cycle map of 0Y to +be the homotopy class of morphisms +cl0Y : 0Y,∗1Y → 1Ad +Y ⟨d⟩ +equal to j∗(cls), where cls is from Definition 5.4.1. More precisely, cl0Y is obtained as the compo- +sition +0Y,∗1Y ≃ j∗i∗1Y +j∗(cls) +−−−−→ j∗1Pd +Y ⟨d⟩ ≃ 1Ad +Y . +Remark 5.5.3. Alternatively, one can repeat Definition 5.4.1 in the affine case, and define cl0Y to +be the composition of d − 1 cycle maps of divisors. +Lemma 5.5.4. 
Following the notion from Setup 5.5.1, let cd +1 : 1Y → fd,∗1Pd +Y ⟨d⟩ be the morphism +obtained by applying Construction 5.2.7 to c1(OPd +Y /Y (1))⊗d. Then the diagram +1Y +g!1Ad +Y ⟨d⟩ +fd,∗1Pd +Y ⟨d⟩ +g!(cl0Y ) +cd +1 +can +commutes in (the homotopy category) D(Y ). + +50 +BOGDAN ZAVYALOV +Proof. Essentially by construction, we have the following commutative diagram +1Y +g!1Ad +Y ⟨d⟩ +fd,∗1Pd +Y ⟨d⟩. +g!(cl0Y ) +fd,∗(cls) +can +Thus, we are only left to identify fd,∗(cls) with cd +1. This follows from Remark 5.3.4 and Lemma 5.4.3. +□ +Lemma 5.5.5. Suppose D satisfies the excision axiom, and c1 is a theory of first Chern classes. +Following the notion from Setup 5.5.1, then there is a morphism of exact triangles +1Y +�d +k=0 1Y ⟨d − k⟩ +�d−1 +k=0 1Y ⟨d − k⟩ +g!1Ad +Y ⟨d⟩ +fd,∗1Pd +Y ⟨d⟩ +fd−1,∗1Pd−1 +Y +⟨d⟩ +g!(cl0Y ) +�d +k=0 ck +1⟨d−k⟩ +�d−1 +k=0 ck +1⟨d−k⟩ +in D(S), where the left lower map is the evident inclusion and the right lower map is the evident +projection. +Proof. The upper exact triangle is evident, and the lower exact triangle comes by applying fd,∗ = fd,! +to the excision fiber sequence (see Remark 2.1.9) +j!1Ad +Y ⟨d⟩ → 1Pd +Y ⟨d⟩ → id+1,∗1Pd−1 +Y +⟨d⟩. +Lemma 5.5.4 ensures that the left square commutes. So using the axioms of triangulated categories, +we can extend this commutative square to a morphism of exact triangles: +1Y +�d +k=0 1Y ⟨d − k⟩ +�d−1 +k=0 1Y ⟨d − k⟩ +g!1Ad +Y ⟨d⟩ +fd,∗1Pd +Y ⟨d⟩ +fd−1,∗1Pd−1 +Y +⟨d⟩. +g!(cl0Y ) +�d +k=0 ck +1⟨d−k⟩ +c +The only thing we are left to show is to compute c. It suffices to do separately on each direct +summand 1Y ⟨d − k⟩. Then we use that the upper exact triangle is split to see that c|1Y ⟨d−k⟩ must +be equal to the composition +1Y ⟨d − k⟩ +ci +1⟨d−k⟩ +−−−−−→ fd,∗1Pd +Y ⟨d⟩ can +−−→ fd−1,∗1Pd−1 +Y +⟨d⟩. +Using the first Chern classes commute with pullbacks and OPd +Y /Y (1)|Pd−1 +Y += OPd−1 +Y +/Y (1), one easily +sees that the composition is equal to +ck +1⟨d − k⟩: 1Y ⟨d − k⟩ → fd−1,∗1Pd−1 +Y +⟨d⟩. +□ + +POINCAR´E DUALITY REVISITED +51 +Lemma 5.5.6. Suppose that D satisfies the excision axiom, and c1 is a theory of first Chern +classes. Let g: Ad +Y → Y be a relative affine space, and 0Y : Y → Ad +Y be the zero section. Then the +natural morphism +1Y +g!(cl0Y ) +−−−−−→ g! +� +1Ad +Y ⟨d⟩ +� +is an isomorphism for any d. +Proof. We prove this claim by induction on d. +Step 1. Base of induction. Here, we follow the notation of Setup 5.5.1 with d = 1. In this +case, we note that the Zariski-closed immersion i2 : P0 +Y → P1 +Y is the “∞”-section of P1 +Y . So the +commutative diagram from Lemma 5.5.5 simplifies to the following form: +1Y +1Y ⊕ 1Y ⟨1⟩ +1Y ⟨1⟩ +g!1A1 +Y ⟨1⟩ +f∗1P1 +Y ⟨1⟩ +1Y ⟨1⟩. +g!(cl0Y ) +c1+f∗⟨1⟩ +id +The right vertical map is clearly an isomorphism, and the middle vertical arrow is an isomorphism +by Lemma 5.2.10. Therefore, we conclude that g!(cl0Y ) is also an isomorphism finishing this step. +Step 2. Inductive argument. We suppose that we know the result for integers < d and deduce it +for 2 ≤ d. For this, we consider the commutative diagram +Y +Ad−1 +Y +Ad +Y +Ad−1 +Y +Y, +i +i +id +0Y +id +j +f +g +h +where i is the zero section of Ad +Y , and j is the Zariski-closed immersion realizing Ad−1 +Y +inside Ad +Y +as the vanishing locus of the last coordinate. We warn the reader that this notation is different +from the one used in Setup 5.5.1. 
+By Remark 5.5.3, we have an equality (up to canonical identifications24) +cl0Y = clj⟨1⟩ ◦ j∗(cli), +(15) +24In this proof, we will ignore canonical identifications and write “=” meaning canonically isomorphic. This does +not cause any problems because our goal is to show that a well-defined morphism is an isomorphism. + +52 +BOGDAN ZAVYALOV +where cli is the naive cycle of the zero section i: Y → Ad−1 +Y +. Therefore, we have the following +sequence of equalities +g!(cl0S) = g! +� +clj⟨1⟩ ◦ j∗ (cli) +� += g! +� +clj⟨1⟩ +� +◦ g! +� +j∗ (cli) +� += h! +� +f!(clj⟨1⟩) +� +◦ h! +� +f! (j∗ (cli)) +� += h! +� +f!(clj⟨1⟩) +� +◦ h! +� +cli +� +. +The first equality comes from Equation (15). The second equality comes from the fact that g! is a +functor. The third equality comes from the fact that g = h ◦ f. The fourh equality comes from the +fact f ◦ j = id and j! = j∗ (because j is a closed immersion). +Now we note that the induction hypothesis implies that h!(cli) is an isomorphism. Similarly, we +note that the induction hypothesis implies that f!(clj) is an isomorphism by applying it to relative +A1-morphism f : Ad+1 +Y +→ Ad +Y . Therefore, we conclude that the composition +g!(cl0S) = h! +� +f!(clj⟨1⟩) +� +◦ h! +� +cli +� +must be an isomorphism as well. +□ +Theorem 5.5.7. Suppose that D satisfies the excision axiom, and c1 is a theory of first Chern +classes. Then c1 is a strong theory of first Chern classes (see Definition 5.2.8). +Proof. Following the notation of Definition 5.2.8, we need to show that the morphism +d +� +k=0 +ck +1⟨d − k⟩: +d +� +k=0 +1S⟨d − k⟩ → fd,∗1Pd +S⟨d⟩ +is an isomorphism for the relative projective space fd : Pd +S → S for any d ≥ 1. For d = 1, this +is the definition of a theory of first Chern classes. For d > 1, this follows from Lemma 5.5.5 and +Lemma 5.5.6 by an evident inductive argument. +□ +5.6. Trace morphisms. The main goal of this section is to construct the trace morphism for the +relative projective line from a theory of first Chern classes. Then we show that any theory of first +Chern classes underlying a theory of cycle maps (see Definition 5.3.3) admits a trace-cycle theory +on the relative projective line (see Definition 3.2.4). When combined with Theorem 3.3.3, this +already shows that any smooth morphism is cohomologically smooth with respect to a 6-functor +formalism with a theory of first Chern classes. +As previously, we fix an invertible object 1S⟨1⟩ ∈ D(S). In this section, we also fix a theory of +first Chern Classes with respect 1S⟨1⟩ (see Definition 5.2.8). +5.6.1. Recovering trace morphisms. Now we discuss the construction of the trace morphism for the +relative projective line. It comes as the “inverse” of the first Chern class morphism. More precisely, +we fix the relative projective line f : P1 +Y → Y and recall that Lemma 5.2.10 provides us with the +isomorphism +c1 + f ∗⟨1⟩: 1Y ⊕ 1Y ⟨1⟩ → f∗1P1 +Y ⟨1⟩. +(16) + +POINCAR´E DUALITY REVISITED +53 +We denote by (c1)−1 : f∗1P1 +Y ⟨1⟩ → 1Y the projection onto the first component of the decomposi- +tion (16). +Construction 5.6.1. The trace map trf : f∗1P1 +Y ⟨1⟩ → 1Y is the morphism +(c1)−1 : f∗1P1 +Y ⟨1⟩ → 1Y . +Remark 5.6.2. The formation of trf commutes with arbitrary base change. This formally follows +from the fact that c1(OP1 +Y /Y (1)) commutes with arbitrary base change. +Warning 5.6.3. This construction is well-defined only if we assume that c1 is a theory of first +Chern classes, and not merely a weak theory of first Chern classes. 
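In other words, the trace map of Construction 5.6.1 can be displayed as the composite below (a restatement in LaTeX, assuming amsmath; here pr_1 is just our notation for the projection onto the first summand of decomposition (16), and no new data is introduced):

\[
  \mathrm{tr}_f \colon\; f_{*}1_{\mathbb{P}^1_Y}\langle 1\rangle
  \;\xrightarrow{\;(c_1 + f^{*}\langle 1\rangle)^{-1}\;}\;
  1_Y \oplus 1_Y\langle 1\rangle
  \;\xrightarrow{\;\mathrm{pr}_{1}\;}\;
  1_Y .
\]

Equivalently, trf is characterized by the two identities trf ◦ adjc1(O(1)) = id and trf ◦ f∗⟨1⟩ = 0, which are exactly the identities used in the proof of Proposition 5.6.5 below.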
+For the later reference, it will also be convenient to discuss a more general construction of +trace morphisms for a strong theory of first Chern classes (see Definition 5.2.8). In this situation, +Lemma 5.2.10 provides us with the isomorphism +d +� +k=0 +ck +1⟨d − k⟩: +d +� +k=0 +1Y ⟨d − k⟩ → f∗1PY (E)⟨d⟩. +(17) +for any object Y ∈ C, a rank d + 1 vector bundle E, and the corresponding projective bundle +f : PY (E) → Y. +As above, it make sense to define (cd +1)−1 : f∗1PY (E)⟨d⟩ → 1Y to be the projection onto the last +component of decomposition (17). +Construction 5.6.4. In the notation as above, the trace map trf : f∗1PY (E)⟨d⟩ → 1Y is the +morphism +(cd +1)−1 : f∗1PY (E)⟨d⟩ → 1Y . +5.6.2. Properties of the trace morphism. Our next goal is to show that, if c1 is a theory of first +Chern classes underlying a theory of cycle maps cl•, then the triple (1P1 +S⟨1⟩, trf, cl∆) satisfies the +definition of a trace-cycle theory (see Definition 3.2.4) where f : P1 +S → S is the relative projective +line. For this, we will actually show a stronger statement: +Proposition 5.6.5. Let c1 be a theory of first Chern classes on D underlying a theory of cycle +maps cl• (see Definition 5.3.3), f : P1 +Y → Y the relative projective line, and s ∈ P1 +Y (Y ) a section. +Let trf : f∗1P1 +Y ⟨1⟩ → 1Y be the trace morphism from Construction 5.6.1. Then the diagram +1Y +f∗ (s∗1Y ) +1Y +f∗1P1 +Y ⟨d⟩ +∼ +Id +f∗(cls) +trf +commutes in D(Y ). +Whenever we use Construction 5.2.7 in the following proof, we use the notation adjc to distinguish +Chern morphisms on the base and morphisms adjoint to Chern morphisms on P1 +Y . +Proof. We first note that Remark 5.3.4 implies that f∗(cls) (up to a canonical identification f∗s∗1Y ≃ +1Y ) is equal to +adjc1(O(s)): 1Y → f∗1P1 +Y ⟨1⟩, + +54 +BOGDAN ZAVYALOV +where O(s) is the line bundle corresponding to the effective Cartier divisor s: S → P1 +S. We wish +to show that +trf ◦adjc1(O(s)) = id. +(18) +[Zav23, Cor. 7.10] (resp. its schematic counterpart) implies that there is a decomposition of S into +clopen subspaces S = ⊔i∈ISi with the induced morphisms +fi : P1 +Si → Si, si : Si → P1 +Si +such that, OP1 +Si(si) = f ∗ +i Li ⊗ OP1 +Si/Si(ni) for some L ∈ Pic(Si) and integers ni. Equation (18) can +be checked on each Si separately, so we can assume that O(s) ≃ f ∗L ⊗ O(n) for a line bundle L on +S and an integer n. +By restricting onto a fiber, one concludes that n = 1, so we have an isomorphism +O(S) ≃ f ∗L ⊗ O(1). +Therefore, we see that +adjc1(O(s)) =adj c1(f ∗L) +adj c1(O(1)): 1Y → f∗1P1 +Y ⟨1⟩. +By definition, we know that trf ◦adjc1(O(1)) = id. Thus we reduce the question to showing that +trf ◦adjc1(f ∗L) = 0 for any line bundle L on S. For this, we note that +c1(f ∗L) = f ∗c1(L). +Therefore, after unravelling Construction 5.2.7, we get that adjc1(f ∗L) is equal to the composition +1Y +c1(L) +−−−→ 1Y ⟨1⟩ +f∗⟨1⟩ +−−−→ f∗1P1 +Y ⟨1⟩. +By definition of the trace map, we have trf ◦f ∗⟨1⟩ = 0. Therefore, this formally implies that +trf ◦adjc1(f ∗L) = 0 +finishing the proof. +□ +Proposition 5.6.5 already has some non-trivial consequences: +Corollary 5.6.6. Let c1 be a strong theory of first Chern classes on D underlying a theory of cycle +maps cl•. Then the triple +(1P1 +S⟨1⟩, trf, cl∆) +forms a trace-cycle theory on the relative projective line f : P1 +S → S. In particular, any smooth +morphism in C is cohomologically smooth with respect to D (see Definition 2.3.7). +Proof. In this proof, we will freely identify +p∗ +11P1 +S ≃ 1P1 +S×SP1 +S ≃ p∗ +21P1 +S. 
+Thus, the cycle map of the diagonal takes the form +cl∆ : ∆!1P1 +S → 1P1 +S×SP1 +S⟨1⟩ +defining a cycle map in the sense of Definition 3.2.4. +Now commutativity of the first diagram in Definition 3.2.4 follows directly from Proposition 5.6.5 +by taking Y = P1 +S, f = p1, and s = ∆. We wish to establish commutativity of the second diagram. +For brevity, we denote P1 +S by X and P1 +S ×S P1 +S by X2. We have to check that the composition +1X⟨1⟩ ≃ p2,! (1X2⟨1⟩ ⊗ ∆!1X) +p2,!(id⊗cl∆) +−−−−−−−→ p2,!(1X2⟨1⟩⊗1X2⟨1⟩) ≃ p2,!1X2⟨1⟩⊗1X⟨1⟩ +trp2 ⊗id +−−−−−→ 1X⟨1⟩ + +POINCAR´E DUALITY REVISITED +55 +is equal to the identity morphism (in the homotopy category D(X)). For this, we first note that +Lemma 5.2.11 implies that the diagram +1X2⟨1⟩ ⊗ ∆!1X +1X2⟨1⟩ ⊗ 1X2⟨1⟩ +∆!1X ⊗ 1X2⟨1⟩ +≀ +id⊗cl∆ +cl∆⊗id +commutes in D(X2), where the left vertical map is the braiding morphism. Therefore, we have the +following commutative diagram +1X⟨1⟩ +p2,! (1X2⟨1⟩ ⊗ ∆!1X) +p2,!(1X2⟨1⟩ ⊗ 1X2⟨1⟩) +p2,! (∆!1X ⊗ 1X2⟨1⟩) +p2,!(1X2⟨1⟩ ⊗ 1X2⟨1⟩) +1X⟨1⟩ +1X ⊗ 1X⟨1⟩ +p2,!1X2⟨1⟩ ⊗ 1X⟨1⟩, +∼ +id +≀ +p2,!(id⊗cl∆) +id +p2,!(cl∆⊗id) +≀ +≀ +∼ +p2,!(cl∆)⊗id +where the two bottom vertical maps come from the projection formula. Therefore, it suffices to +show that the composition +1X⟨1⟩ +p2,!(cl∆)⊗id +−−−−−−−→ p2,!1X2⟨1⟩ ⊗ 1X⟨1⟩ +trp2 ⊗id +−−−−−→ 1X⟨1⟩ +is equal to the identity morphism (in the homotopy category D(X)). For this, it suffices to show +that +trp2 ◦ p2,!(cl∆) = id. +This follows from Proposition 5.6.5 by taking Y = P1 +S, f = p2, and s = ∆. +Overall, this proves that (1P1 +S⟨1⟩, trf, cl∆) forms a trace-cycle theory. The “in particular” claim +follows directly from Theorem 3.3.3. +□ +Now we discuss another consequence of Proposition 5.6.5: we show that a 6-functor formalism +D satisfying the excision axiom and admitting a theory of first Chern classes is automatically +A1-invariant (see Definition 2.1.10). For this, we need the following construction: +Construction 5.6.7. Let f : P1 +Y → Y be the relative projective line with a trace morphism +tr: f∗1P1 +Y ⟨1⟩ → 1Y . By the (f∗, f !)-adjunction, it also defines the adjoint trace morphism +adjtr: 1P1 +Y ⟨1⟩ → f ! (1Y ) . +Lemma 5.6.8. Let D be a 6-functor formalism satisfying the excision axiom, and c1 is a theory +of first Chern classes. Then D is motivic (see Definition 4.2.1). +Proof. Firstly, we note that Lemma 5.3.7 constructs a theory of cycle maps underlying c1. Fur- +thermore, Theorem 5.5.7 implies that c1 is a strong theory of first Chern classes. +Therefore, +Corollary 5.6.6 ensures that any smooth morphism is cohomologically smooth. So we only need to +show that D is A1-invariant. +We fix a relative affine line g: A1 +Y → Y and compactify it to a relative projective line f : P1 +Y → Y . +The complement of A1 +Y in P1 +Y forms a section s: Y → P1 +Y . Then Definition 5.3.6 defines a theory +cycle maps underlying c1. In particular, it defines a morphism +s∗1Y → 1P1 +Y ⟨1⟩. + +56 +BOGDAN ZAVYALOV +Using Proposition 5.6.5, it is essentially formal to verify that the following diagram commutes: +s∗1Y +s∗s!f !1Y +1P1 +Y ⟨1⟩ +f !1Y . +cls +≃ +adj +adjtr +Therefore, Corollary 5.6.6 and Theorem 3.2.8 ensure that adj tr is an isomorphism, and so we get +an exact triangle +s∗1Y +cls +−→ 1P1 +Y ⟨1⟩ can +−−→ j∗1A1 +Y ⟨1⟩, +where j : A1 +Y → A1 +Y is the natural open immersion. Now we apply f∗ (and Remark 5.3.4) to this +sequence to get an exact triangle +1Y +c1 +−→ f∗1P1 +Y ⟨1⟩ → g∗1A1 +Y ⟨1⟩. 
+In particular, we have a commutative diagram of exact triangles +1Y +1Y ⊕ 1Y ⟨1⟩ +1Y ⟨1⟩ +1Y +f∗1P1 +Y ⟨1⟩ +g∗1A1 +Y ⟨1⟩. +id +c1+f∗⟨1⟩ +g∗⟨1⟩ +c1 +Now the definition of the first Chern classes and the 2-out-of-3 property implies that +1Y ⟨1⟩ → g∗1A1 +Y ⟨1⟩ +is an isomorphism. Since 1Y ⟨1⟩ is an invertible sheaf, this formally implies that the natural mor- +phism 1Y → g∗1A1 +Y is an isomorphism as well. +□ +5.7. Poincar´e Duality. The first goal of this section is to show that a strong theory of first Chern +classes c1 underlying a theory of cycle maps (see Definition 5.2.8 and Definition 5.3.3) implies the +strongest version of Poincar´e Duality under the additional assumption that D is either A1-invariant +or pre-geometric (see Definition 4.2.9). The second goal is to show that, if D satisfies the excision +axiom, it suffices to assume that D admits a theory of first Chern classes. +We now briefly sketch the idea behind the proof. Corollary 5.6.6 reduces the question of proving +Poincar´e Duality to the question of computing dualizing object f !1Y . For this, we use Theorem 4.2.8 +(or Theorem 4.2.12) to reduce the question to computing C(Tf). This is done via compactifying +Tf to a projective bundle and the (naive) cycle map of a point from Definition 5.4.1. +For the rest of this section, we fix a 6-functor formalism D with a strong theory of first Chern +classes c1 underlying a theory of cycle maps cl• (see Definition 5.3.3). +We start by defining the adjoint to the trace map from Construction 5.6.4. More precisely, let +Y be an object of C, E is a vector bundle on Y of rank d + 1, and +f : PY (E) → Y +be the corresponding projective bundle. Then Construction 5.6.4 defines the trace morphism +trf : f∗1PY (E)⟨d⟩ → 1Y + +POINCAR´E DUALITY REVISITED +57 +Construction 5.7.1. Let f : PY (E) → Y and trf be as above. By the (f∗, f !)-adjunction, trf +uniquely defines the adjoint trace morphism +adjtr: 1PY (E)⟨d⟩ → f ! (1Y ) . +in D(PY (E)). +Now suppose that E = Od+1 +Y +, so PY (E) = Pd +Y . Then Definition 5.4.1 defines the (cycle) class of +the “zero” section +cls : s∗1Y → 1Pd +Y ⟨d⟩. +Construction 5.7.2. In the notation as above, cls uniquely defines the adjoint cycle map morphism +adjcls : 1Y → s! � +1Pd +Y ⟨d⟩ +� +. +in D(Y ). +Lemma 5.7.3. Let c1 be a strong theory of first Chern classes on D underlying a theory of cycle +maps cl•, and f : Pd +Y → Y is the relative projective space, and the following diagram +1Y +s!1Pd +Y ⟨d⟩ +s!f !1 +adjcls +∼ +s!(adjtrf ) +commutes in D(Y ). +Proof. By passing to adjoints, it suffices to show that the diagram +f∗s∗1Y +f∗1Pd +Y ⟨d⟩ +1Y +f∗(cls) +∼ +h +trf +commutes in D(Y ). Lemma 5.4.3 and a formal argument with adjoints (similar to Remark 5.3.4) +implies that the composition +1Y +h−1 +−−→ f∗s∗1Y +f∗(cls) +−−−−→ f∗1Pd +Y ⟨d⟩ +is equal to the morphism adjoint to cd +1(OPd +Y /Y (1)): 1Pd +Y → 1Pd +Y ⟨d⟩. In other words, this composition +is equal to the morphism +cd +1 : 1Y → f∗1Pd +Y ⟨d⟩ +from Construction 5.2.7 applied to c = c1 +� +OPd +Y /Y (1) +� +. +Therefore, the question boils down to +showing that the composition +1Y +cd +1 +−→ f∗1Pd +Y ⟨d⟩ +trf +−−→ 1Y +is the identity morphism (in D(Y )). However, this follows from the definition of the trace morphism +(see Construction 5.6.4). +□ +Now we turn to the proof of Poincar´e Duality. In the process of the proof, we will need the +following simple (but useful) lemma: + +58 +BOGDAN ZAVYALOV +Lemma 5.7.4. 
Let D be a closed symmetric monoidal additive category with a unit object 1, and let L be an invertible object. Suppose that L ≃ 1 ⊕ X. Then X ≃ 0.
Proof. If L is an invertible object, then the natural evaluation morphism
L ⊗ L^∨ → 1
must be an isomorphism. Now we write
L ⊗ L^∨ ≃ (1 ⊕ X) ⊗ (1 ⊕ X)^∨ ≃ (1 ⊕ X) ⊗ (1 ⊕ X^∨) ≃ 1 ⊕ X ⊕ X^∨ ⊕ (X ⊗ X^∨)
to conclude that X = X^∨ = 0. □
Now we specialize to the case of a vector bundle of the form E′ = E ⊕ O on an object Y ∈ C. Then the relative projective bundle
f : P_Y(E ⊕ O) → Y
has a canonical section s : Y → P_Y(E ⊕ O) corresponding to the quotient p : E ⊕ O → O.
Lemma 5.7.5. Let c_1 be a strong theory of first Chern classes on D underlying a theory of cycle maps cl_•, Y an object of C, E a vector bundle of rank d on Y, and
f : P_Y(E ⊕ O) → Y
the relative projective bundle with the canonical section s. Then the natural morphism
s^!(adj tr_f) : s^! 1_{P_Y(E ⊕ O)}⟨d⟩ → s^! f^! 1_Y
is an isomorphism, where adj tr_f is the morphism from Construction 5.7.1.
Proof. We first note that the question is local on Y, so we can assume that E ≃ O_Y^{⊕ d}. Then P_Y(E ⊕ O) ≃ P^d_Y, and s corresponds to the “zero” section defined just before Definition 5.4.1.
Now we note that s^! 1_{P^d_Y} is an invertible object. Indeed, Corollary 5.6.6 (and Definition 2.3.6) implies that f^! 1_Y is an invertible object. Therefore, Lemma 2.1.6 implies that
1_Y ≃ s^! f^! 1_Y ≃ s^! 1_{P^d_Y} ⊗ s^* f^! 1_Y.
Since s^* f^! 1_Y is invertible and s^! 1_{P^d_Y} is dual to it, we formally conclude that s^! 1_{P^d_Y} is invertible as well.
Now we note that Construction 5.7.2 defines a morphism
adj cl_s : 1_Y → s^! 1_{P^d_Y}⟨d⟩.
Lemma 5.7.3 implies that the composition
1_Y → s^! 1_{P^d_Y}⟨d⟩ → s^! f^! 1_Y ≃ 1_Y,
with the first map adj cl_s and the second map s^!(adj tr_f), is the identity morphism (in the homotopy category D(Y)). So 1_Y is a direct summand of the invertible object s^! 1_{P^d_Y}⟨d⟩. Therefore, Lemma 5.7.4 implies that both adj cl_s and s^!(adj tr_f) must be isomorphisms. □
Theorem 5.7.6. Suppose that a 6-functor formalism D is either A^1-invariant or pre-geometric (see Definition 4.2.9), and let c_1 be a strong theory of first Chern classes on D underlying a theory of cycle maps cl_•. Let f : X → Y be a smooth morphism of pure relative dimension d (see [Hub96, Def. 1.8.1]). Then the right adjoint to the functor
f_! : D(X) → D(Y)
is given by the formula
f^!(−) = f^*(−) ⊗ 1_X⟨d⟩ : D(Y) → D(X).
Proof. Corollary 5.6.6 and Lemma 5.2.11 already imply that any smooth morphism f : X → Y is cohomologically smooth. Thus the question of computing f^! boils down to computing the dualizing object ω_f = f^! 1_Y.
Now Theorem 4.2.8 (if D is A^1-invariant) and Theorem 4.2.12 (if D is pre-geometric) imply that f^! 1_Y is given by the formula
f^! 1_Y ≃ C_X(T_f) ≃ s^* g^! 1_X,
where g : V_X(T_f) → X is the total space of the (relative) tangent bundle, and s is the zero section. We may compactify g to the morphism
g : P := P_X(T_f^∨ ⊕ O_X) → X
(we keep the same letter for the compactification; the dual bundle T_f^∨ appears because of the conventions used in [Zav23, Def. 6.14]), where s corresponds to the “zero” section defined just before Definition 5.4.1. Therefore, it suffices to show that
s^* g^! 1_X ≃ 1_X⟨d⟩.
For this, we note that
1_X ≃ s^! g^! 1_X ≃ s^! 1_P ⊗ s^* g^! 1_X,
where the second isomorphism follows from Lemma 2.1.6 and the fact that g^! 1_X is invertible due to cohomological smoothness. Thus, it suffices to produce an isomorphism
s^! 1_P ≃ 1_X⟨−d⟩.
This follows from Lemma 5.7.5 and Lemma 2.1.6. □
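For orientation, we also record the internal-Hom reformulation of Theorem 5.7.6; it is not used in what follows, and it is a formal consequence of the theorem together with the identity Hom_Y(f_! F, G) ≃ f_* Hom_X(F, f^! G), where Hom denotes the internal Hom (this identity itself follows formally from the projection formula and the adjunctions available in any 6-functor formalism in the sense used here). For f : X → Y smooth of pure relative dimension d and any F ∈ D(X), G ∈ D(Y), one gets
Hom_Y(f_! F, G) ≃ f_* Hom_X(F, f^* G ⊗ 1_X⟨d⟩);
in particular, taking F = 1_X and G = 1_Y yields Hom_Y(f_! 1_X, 1_Y) ≃ f_* 1_X⟨d⟩.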
Theorem 5.7.7. Let D be a 6-functor formalism satisfying the excision axiom (see Definition 2.1.8) and admitting a theory of first Chern classes c_1. Suppose that f : X → Y is a smooth morphism of pure relative dimension d. Then the right adjoint to the functor
f_! : D(X) → D(Y)
is given by the formula
f^!(−) = f^*(−) ⊗ 1_X⟨d⟩ : D(Y) → D(X).
Proof. Firstly, we note that Lemma 5.3.7 constructs a theory of cycle maps underlying c_1. Furthermore, Theorem 5.5.7 ensures that c_1 is a strong theory of first Chern classes. Then Lemma 5.6.8 implies that D is A^1-invariant (or even motivic). Thus the result follows from Theorem 5.7.6. □
6. Poincaré Duality in examples
In this section, we apply Theorem 5.7.7 to two particular examples of 6-functor formalisms: ℓ-adic étale sheaves on locally noetherian analytic adic spaces (resp. schemes) developed by R. Huber in [Hub96], and “solid almost O+/p-ϕ-modules” on p-adic adic spaces developed by L. Mann in [Man22b].
In the first example, we recover Poincaré Duality previously established by R. Huber in [Hub96, Thm 7.5.3]. The proof is essentially formal: after unravelling all the definitions, Theorem 5.7.7 tells us that, for the purpose of proving Poincaré Duality, it suffices to construct a theory of first Chern classes and compute the cohomology of the relative projective line. Both things are particularly easy in the case of étale sheaves: the theory of first Chern classes comes from the Kummer exact sequence, and the computation of étale cohomology of the projective line essentially boils down to proving Pic(P^1_C) ≃ Z. This proof completely avoids the rather elaborate construction of the trace map and the verification of Deligne’s fundamental lemma (see [Hub96, §7.2–7.4]). The same proof applies to ℓ-adic sheaves on schemes and simplifies the argument as well.
Then we apply the same methods to the theory of “solid almost O+/p-ϕ-modules”. The proof of Poincaré Duality for ℓ-adic sheaves applies essentially verbatim in this context. The main new ingredient is to verify that this 6-functor formalism satisfies the excision axiom; this is not automatic in this situation. Nevertheless, the approach taken in this paper simplifies the proof of Poincaré Duality established in [Man22b, Cor. 3.9.25]. In particular, it avoids any usage of Grothendieck Duality on the special fiber, and any explicit computations related to the “p-adic nearby cycles” on the formal model of D^1_C.
6.1. ℓ-adic duality. The main goal of this section is to give an essentially formal proof of Poincaré Duality for étale cohomology of schemes and (locally noetherian) adic spaces. The proof is almost uniform in both setups: the only difference is the computation of the cohomology groups of the projective line.
In this section, we fix a locally noetherian analytic adic space S (resp. a scheme S) and an integer n invertible in O_S. We emphasize that, in the case of adic spaces, we do not make the assumption that n is invertible in O^+_S until the very end. In what follows, C denotes the category of locally finite type adic S-spaces (resp. locally finitely presented S-schemes).
We begin the section by defining the theory of étale first Chern classes. Before we start the construction, we advise the reader to take a look at Section 5.2, since we will follow the notations introduced there. In particular, we recall that, in order to speak of (weak) first Chern classes, we first fix an invertible object 1_S⟨1⟩ ∈ D(S).
Definition 6.1.1. We define the Tate twist as 1_S⟨1⟩ := μ_n[2] ∈ D(S_ét; Z/nZ).
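(Under the conventions of Section 5.2, this choice determines all twists: 1_S⟨d⟩ = (1_S⟨1⟩)^{⊗ d} ≃ μ_n^{⊗ d}[2d] for d ≥ 0, so ⟨d⟩ is the d-fold Tate twist combined with a cohomological shift by 2d; this is merely an unwinding of the definition above.)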
+This object is +clearly invertible, so it fits into the assumptions of Section 5. +Now we recall that there is a natural Kummer exact sequence +0 → µn → Gm +f�→fn +−−−−→ Gm → 0 +on X´et for any X ∈ C. This sequence is functorial in X, so defines a morphism of D(Z)-valued +presheaves: +Gm[1] c−→ µn[2]: Cop → D(Z). +By passing to the derived ´etale sheafifications (see [Cla21, L. 3, Cor. 11]), we get a morphism of +D(Z)-valued sheaves +RΓ´et(−, Gm)[1] c−→ RΓ´et(−, µn)[2]: Cop → D(Z). +Definition 6.1.2. A theory of ´etale first Chern classes is the homomorphism of D(Z)-valued ana- +lytic sheaves +c´et +1 : RΓan(−, O×)[1] → RΓ´et(−, µn)[2] = RΓ(−; 1⟨1⟩) +obtained as the composition +RΓan(−, O×)[1] → RΓ´et(−, Gm)[1] c−→ RΓ´et(−, µn)[2]: Cop → D(Z), +where the first map is the natural morphism from the analytic cohomology of O× to the ´etale +cohomology of Gm. + +POINCAR´E DUALITY REVISITED +61 +Construction 6.1.3. Let X be an adic S-space. Then, after passing to H0(−), Definition 6.1.2 +defines a homomorphism +c´et +1 : Pic(X) ≃ H1 +an(X, O× +X) → H2(X, µn). +In what follows, we slightly abuse the notation and do not distinguish between these two versions +of the homomorphism c´et +1 . +Now we will later need to know that c´et +1 is a theory of first Chern classes in the sense of Defini- +tion 5.2.4 (if n is invertible in O+ +S ). Concretely, this means that we have to show that the natural +morphism +c´et +1 (O(1)) + f ∗ : Z/nZS ⊕ µn[2] → Rf∗µn,P1 +S[2] +is an isomorphism for the relative projective line f : P1 +S → S. We will show this claim with the +assumption that n is only invertible in OS. +In the rest of this section, we do the computations entirely in the analytic context. +In the +algebraic case, the computation is standard (see [Fu11, Thm. 7.2.9]). +We start with the case when S is a “geometric point”. More explicitly, we fix an algebraically +closed non-archimedean field C and assume that S = Spa(C, OC). +Lemma 6.1.4. Let X be a 1-dimensional rigid-analytic variety over S = Spa(C, OC), and n an +integer invertible in C. Then +(1) the natural morphism µn(C) → H0(X; µn) is an isomorphism if X is connected; +(2) we have Hi(X, µn) = 0 for i ≥ 3; +(3) the first Chern class c´et +1 : Pic(X)/n → H2(X, µn) is an isomorphism (see Construction 6.1.3). +Proof. Step 0. The morphism µn(C) → H0(X; µn) is an isomorphism if X is connected. Since C is +algebraically closed, we can choose a non-canonical isomorphism µn ≃ Z/nZ. Therefore, it suffices +to show that the natural morphism +Z/nZ → H0(X, Z/nZ) +is an isomorphism for a connected X. This is a standard result that we leave to the interested +reader. +To prove the other parts, we consider the morphism of sites π: X´et → Xan. +Step 1. Riπ∗µn = 0 for i ≥ 2. It suffices to show that the stalk (Riπ∗µn)x = 0 for every x ∈ X. +Now [Hub96, Cor. 2.4.6] ensures that, for each integer i and x ∈ X, +� +Riπ∗µn +� +x ≃ Hi � +Spa +� +K (x) , K (x)+� +, µn +� +. +Thus [Zav23, Lemma 9.2] implies that it suffices to prove the vanishing for rank-1 points x ∈ X. +In this case, +Hi(Spa +� +K (x) , OK(x) +� +, µn) ≃ Hi +cont(GK(x), µn). +So it suffices to show that GK(x) is of cohomological degree 1 for any x ∈ X. This follows from +[Hub96, Cor. 1.8.8 and Lemma 2.8.3]26 or one can adapt the proof of [Ber93, Lemma 5.2.5]. +Step 2. R1π∗Gm = 0. 
We first note that [Hub96, (2.2.7)] implies that the natural morphism +Pic(U) ≃ H1 +an(U, O× +U) → H1 +´et(U, Gm) +26The henselization in [Hub96, Lemma 2.8.3] disappears in the rank-1 case because OK is henselian with respect +to its pseudo-uniformizer ̟ and m = rad(̟) (see [Sta23, Tag 09XJ]). + +62 +BOGDAN ZAVYALOV +is an isomorphism (alternatively, this can be deduced from [KL19, Thm 2.5.11]). Therefore, the +definition of higher pushforwards imply that R1π∗Gm is the sheafification (in the analytic topology +on X) of the presheaf +U �→ Pic(U). +Since any class α ∈ Pic(U) trivializes analytically locally on U, we conclude the sheafification of +this presheaf is zero. +Step 3. Finish the proof. The Kummer exact sequence +0 → µn → Gm +·n +−→ Gm → 0 +implies that we have an exact triangle +Rπ∗µn → Rπ∗Gm +·n +−→ Rπ∗Gm. +(19) +Note that π∗Gm = O× +X, so Steps (1) and (2) imply that (19) stays exact after applying τ ≤1 to +Rπ∗Gm. Thus we get the following exact triangle +Rπ∗µn → O× +X +f�→fn +−−−−→ O× +X. +Since Hi(Xan, O× +X) = 0 for i ≥ 2 by [Hub96, Cor. 1.8.8] and [Sta23, Tag 0A3G], we conclude that +Hi(X, µn) = 0 for i ≥ 3 and the natural morphism +Pic(X)/n ≃ H1(Xan, O× +X)/n → H2(X, µn) +is an isomorphism. After unravelling the definitions, one sees that this morphism coincides with c1 +from Construction 6.1.3. +□ +Corollary 6.1.5. Let X = P1 +C be the (analytic) projective line over Spa(C, OC), and n an integer +invertible in C. Then +(1) the natural morphism µn(C) → H0(P1 +C, µn) is an isomorphism; +(2) we have Hi(P1 +C, µn) = 0 for i ≥ 3; +(3) the unique homomorphism c1 : Z/nZ → H2(P1 +C, µn) sending 1 to c1(O(1)) is an isomor- +phism. +Proof. This follows formally from Lemma 6.1.4 and the fact that the morphism +Z → Pic(P1 +C), +sending n to O(n), is an isomorphism. The latter fact follows from [Zav23, Cor. 7.10]. +□ +Now we go back to the case of a general locally noetherian analytic adic base S. Then we consider +the relative (analytic) projective line f : P1 +S → S. This comes with the “universal” line bundle O(1) +(see [Zav23, Rmk. 6.13] for the construction in the analytic setup). The first Chern class c1(O(1)) +defines a morphism +c1(O(1)): Z/nZP1 +S → µn[2]. +in the (triangulated) derived category D(P1 +S; Z/nZ). Due to the (f ∗, Rf∗)-adjunction, c1(O(1)) +defines a morphism +c´et +1 (O(1)): Z/nZS → Rf∗µn,P1 +S[2]. + +POINCAR´E DUALITY REVISITED +63 +Proposition 6.1.6. Let f : P1 +S → S be the relative (analytic) projective line over S, and n an +integer invertible in S. Then the natural morphism +c´et +1 (O(1)) + f ∗ : Z/nZS ⊕ µn[2] → Rf∗µn,P1 +S[2] +is an isomorphism27. +Proof. It suffices to show that the morphism c´et +1 (O(1)) + f ∗ is an isomorphism on stalks. [Zav23, +Lemma 9.3] ensures that Rf∗ preserves overconvergent sheaves, so it is sufficient on stalks over +rank-1 points. Now we note that the formation of first Chern classes commute with arbitrary base +change (similarly to Remark 5.2.6(3)), [Hub96, Prop. 2.6.1] ensures that it suffices to prove the claim +under the additional assumption that S = Spa(C, OC) for an algebraically closed, non-archimedean +field C. Then the result follows directly from Corollary 6.1.5. +□ +Theorem 6.1.7. Let S be a locally noetherian analaytic adic space, n an integer invertible in O+ +S , +and D´et(−; Z/nZ): Corr(C) → Cat∞ be the 6-functor formalism formalism constructed in [Zav23, +Thm. 8.4 and Rmk. 8.5]. 
Then +(1) D´et(−; Z/nZ) satisfies the excision axiom (see Definition 2.1.8); +(2) Definition 6.1.2 defines a theory of first Chern classes on D´et(−; Z/nZ) (see Definition 5.2.8) +with 1S⟨1⟩ = µn[2]. +Proof. It is essentially obvious that D´et(−; Z/nZ) satisfies the excision axiom. More precisely, it +suffices to show that, for any locally finite type adic S-space X, a complex F ∈ D´et(X; Z/nZ), and +a Zariski-closed immersion i: Z → X, the triangle +j!j∗F → F → i∗i∗F +is exact, where j : U → X is the open complement of Z. This is clear by arguing on stalks. The +fact that c1 is a theory of first Chern classes follows directly from Proposition 6.1.6. +□ +Before we state the general version of Poincar´e Duality, we recall that the Tate twist Z/nZ(m) +is by definition the ´etale sheaf µ⊗m +n +(with the obvious meaning if m is negative). Likewise, for a +sheaf F ∈ D(X´et; Z/nZ), we denote its Tate twist F ⊗ Z/nZ(m) simply by F(m). +Theorem 6.1.8. Let Y be a locally noetherian analytic adic space, and f : X → Y a smooth +morphism is of pure dimension d, and n is an integer invertible in O+ +Y . Then the functor +Rf! : D(X´et; Z/nZ) → D(Y´et; Z/nZ) +admits a right adjoint given by the formula +f ∗(d)[2d]: D(Y´et; Z/nZ) → D(X´et; Z/nZ). +Proof. Put S = Y and consider the ´etale 6-functor formalism D´et(−; Z/nZ) that associates to X +the ∞-derived category D(X´et; Z/nZ) (see [Zav23, Thm. 8.4 and Rmk. 8.5]). Then Theorem 6.1.7 +implies that D´et satisfies the excision axiom and admits a theory of first Chern classes with 1S⟨1⟩ = +µn[2]. Thus the result follows from Theorem 5.7.7. +□ +Remark 6.1.9. The proof of Theorem 6.1.8 works in essentially the same way for the 6-functor +formalism of ´etale Z/nZ-sheaves on schemes (see [Zav23, Rmk. 8.6] for the construction of ´etale +6-functor formalism). In particular, this reproves the classical Poincar´e Duality in the theory of +´etale cohomology of schemes. +27The notation “f ∗” means the natural morphism µn[2] → Rf∗µn,P1 +S[2] coming as the unit of the (f ∗, Rf∗)- +adjunction. + +64 +BOGDAN ZAVYALOV +Remark 6.1.10. Note that the only place, where we used that n is invertible in O+ +S (as opposed +to being invertible in OS) is to make sure that the categories D´et(−; Z/nZ) can be arranged into +a 6-functor formalism. If n is not invertible in O+ +S , the problem is that the proper base change +formula does not hold in general. In the next section, we work around this issue by using another +6-functor formalism closely related to the p-adic cohomology of p-adic rigid-analytic spaces. +6.2. p-adic duality. The goal of this section is to give a new proof of Poincar´e Duality for O+/p- +ϕ-modules”. +In what follows, we fix a locally noetherian analytic adic space S with a morphism S → +Spa(Qp, Zp), and C the category of locally finite type adic S-spaces. +Now we briefly sketch the construction of the 6-functor formalism of O+/p-(ϕ-)modules developed +in [Man22b]. We will not discuss the full construction of this formalism here; instead we only sketch +the part that are important for the discussion of this section, and refer to [Man22b] for the thorough +construction of this 6-functor formalism. +To begin with, we recall that [Man22b, Thm. 3.6.12 and Prop. 3.9.13] define28 two (closely related) +6-functor formalisms +Da +□(−; O+/p): Corr(C) → Cat∞, +and +Da +□(−; O+/p)ϕ : Corr(C) → Cat∞. 
+These two 6-functor formalisms are defined in a significantly more general setup, that generality +will not play a huge role in our discussion beyond the point that we can evaluate Da +□(−; O+/p) on +strictly totally disconnected perfectoids over S (which are essentially never locally finite type over +S). +We briefly discuss the construction of the category Da +□(X; O+/p) in [Man22b]. +First, for a +(strictly) totally disconnected perfectoid space with a map Spa(R, R+) → S, one puts +Da +□(Spa(R, R+); O+/p) = Da +□(R+/p) +the almost category of solid R+/p-modules (see [Man22b, Def. 3.1.2]). Then one shows that this +assignment satisfies (hyper-)descent in the v-topology (see [Man22b, Thm. 3.1.27 and Def. 3.1.3]) +on (strictly) totally disconnected perfectoid spaces over S. +After that, Mann formally extends +Da +□(X; O+/p) to all adic S-spaces by descent. +This category comes equipped with the usual 4 +functors: f∗, f ∗, Hom, and ⊗. The question of defining the shriek functors is quite subtle and we +refer to [Man22b, §3.6] for their construction. +The ϕ-version of Da +□(X; O+/p) is defined as the equalizer (in the ∞-categorical sense) +Da +□(X; O+/p)ϕ := eq +� +Da +□(−; O+/p) +ϕ−id +−−−→ Da +□(−; O+/p) +� +. +Then [Man22b, Prop. 3.9.13] extends the 6-functors to Da +□(X; O+/p)ϕ. +Our first goal is to show that both of these 6-functor formalisms satisfy the excision axiom +(see Definition 2.1.8). This will allow us to apply Theorem 5.7.7 to this situation and reduce the +question of proving Poincar´e Duality to the question of constructing a theory of first Chern classes +and computing the cohomology groups of the projective line P1 +C. +One useful tool in proving the excision axiom will be the (sub)category of discrete objects +Da +□(X; O+ +X/p)ω ⊂ Da +□(X; O+ +X/p) introduced in [Man22b, Def. 3.2.17]. +If X admits a map29 to +28See also [Man22b, Prop. 3.5.14] to conclude that any locally finite type morphism of analytic adic spaces is bdcs +in the sense of [Man22b, Defn. 3.6.9]. +29This condition ensures that X ∈ XΛ +v in the sense of [Man22b, Def. 3.2.5]. + +POINCAR´E DUALITY REVISITED +65 +an affinoid perfectoid space Spa(R, R+) [Man22b, Prop. 3.3.16] justifies the name and shows that +there is a functorial equivalence +Da +□(X; O+ +X/p)ω ≃ Shv�(X´et; O+,a +X /p)oc +between discrete objects in Da +□(X; O+,a +X /p) and overconvergent objects in the left-completed ∞- +derived category of ´etale sheaves of almost O+ +X/p-modules (see [Man22b, Prop. 3.3.16]). +Lemma 6.2.1. Let X = Spa(R, R+) be a strictly totally disconnected perfectoid space over S, +i: Z → X is a Zariski-closed affinoid perfectoid subspace (in the sense of [Sch17, Def. 5.7]), and +j : U → X is the open complement. Then +j!O+,a +U /p → O+,a +X /p → i∗O+,a +Z /p +is a fiber sequence in Da +□(X; O+/p). +Proof. Step 1. j!O+,a +U /p is discrete. We first consider the morphism π: |X| → π0(X) from [Sch17, +Lemma 7.3]. Since Z is Zariski-closed, it is both closed under generalizations and specializations. +Thus the same holds for U, so the natural morphism U → π−1(π(U)) is an isomorphism. Since π +is a quotient morphism, we conclude that U ′ := π(U) must be open in π0(X). +Now recall that π0(X) is a profinite set. So clopen subsets form a base of topology on π0(X). +Therefore U ′ = ∪i∈IU ′ +i is a filtered union of clopen subset U ′ +i (in particular, they are quasi-compact). +Thus we conclude that U = ∪i∈Iπ−1(U ′ +i) is a filtered union of clopen subspaces of X. We denote +the pre-image U ′ +i by ji : Ui → X. 
Then, by construction (see [Man22b, Lemma 3.6.2]), we have +j!O+,a +U /p ≃ colim ji,!O+,a +Ui /p. +Since each ji is clopen, we conclude that ji,! = ji,∗. Thus each ji,!O+,a +Ui /p = ji,∗O+,a +Ui /p is discrete by +[Man22b, Lemma 3.3.10(ii)]. So the colimit is also discrete by [Man22b, Lemma 3.2.19]. +Step 2. Reduce to the case X = Spa(C, C+). Now we note that i∗O+,a +Z /p is discrete by [Man22b, +Lemma 3.3.10]. So we can check that the morphism +j!O+,a +U /p → fib +� +O+,a +X /p → i∗O+,a +Z /p +� +(20) +is an isomorphism in Da +□(X; O+ +X/p)ω ≃ Shv�(X´et; O+,a +X /p)oc. However, a property of a map being +an isomorphism in Shv�(X´et; O+,a +X /p)oc can be checked on stalks. Therefore, it suffices to prove the +claim after a pullback30 along each morphism +Spa(C, C+) → X, +where C is an algebraically closed non-archimedean field, and C+ ⊂ C is an open bounded valuation +ring. +But this is essentially obvious: note that Z ×X Spa(C, C+) is a Zariski-closed subspace +of Spa(C, C+), so it is either empty or equal to Spa(C, C+). +In both cases, Morphism (20) is +tautologically an isomorphism. +□ +Lemma 6.2.2. The 6-functor formalisms Da +□(−; O+/p) and Da +□(−; O+/p)ϕ satisfy the excision +axiom. +Proof. We fix a locally finite type adic S-space X, a Zariski-closed immersion Z +i֒−→ X, and the open +complement U +j֒−→ X. We wish to show that, for any F ∈ Da +□(X; O+/p) (resp. F ∈ Da +□(X; O+/p)ϕ), +the natural morphism +j!j∗F → fib (F → i∗i∗F) +30Here, we implicitly use base change for both j! and i∗ + +66 +BOGDAN ZAVYALOV +is an isomorphism. Since the forgetful functor +Da +□(X; O+/p)ϕ → Da +□(X; O+/p) +commutes with limits, all 6-functors, and is conservative (see [Man22b, Lem. 3.9.12]), it is sufficient +to prove that Da +□(−; O+/p) satisfies excision. +For this, we note that the projection formulas for i∗ and j! imply that it suffices to show that +the natural morphism +j!O+,a +U /p → fib +� +O+,a +X /p → i∗O+,a +Z /p +� +is an isomorphism. By v-descent and proper base change, it can be checked on the basis of strictly +totally disconnected perfectoid spaces. [Zav23, Lemma 5.2] ensures that Zariski-closed immersions +of locally noetherian analytic adic spaces pullback to Zariski-closed subsets of affinoid perfectoid. +Therefore, the result follows from Lemma 6.2.1. +□ +Now we discuss the computation of the cohomology groups of the projective line, and the con- +struction of first Chern classes. An important tool to deal with these questions is the Riemann- +Hilbert functor from [Man22b, §3.9]. We follow the notation of [Man22b], and denote by D´et(X; Fp) +the left-completed ∞-derived category31 of ´etale sheaves of Fp-modules on X. +We also denote +by D´et(X; Fp)oc ⊂ D´et(X; Fp) the full ∞-subcategory spanned by overconvergent sheaves (see +[Man22b, Def. 3.9.17]). Then [Man22b, Def. 3.9.21] defines the Riemann-Hilbert functor +− ⊗ O+,a +X /p: D´et(X; Fp)oc → Da +□(X; O+ +X/p)ϕ. +If X admits a map to an affinoid perfectoid field Spa(R, R+), then (essentially by construction) the +following diagram +D´et(X; Fp)oc +Da +□(X; O+ +X/p)ϕ +Shv�(X´et; O+,a +X /p)oc +Da +□(X; O+ +X/p) +−⊗O+,a +X /p +−⊗O+,a +X /p +can +(21) +commutes up to a homotopy, where the left vertical functor is (the left completion) of the naive +(derived) tensor product functor, and the bottom horizontal functor is the canonical identification +of Shv� +´et(X; O+,a +X /p)oc with the subcategory of discrete objects in Da +□(X; O+ +X/p). +Definition 6.2.3. The p-adic Tate twist O+,a +X /p(i) ∈ Da +□(X; O+ +X/p)ϕ (resp. 
O+,a +X /p(i) ∈ Da +□(X; O+ +X/p)) +is the image of the Tate twist Fp(i) under the Riemann-Hilbert functor, i.e., +O+,a +X /p(i) ≃ Fp(i) ⊗ O+,a +X /p. +Warning 6.2.4. In the next lemma, we follow the terminology of [Man22b] and do not write R +for the derived functors on the category of Fp-sheaves. +Lemma 6.2.5. Let f : X → Y be a proper morphism in C, and k an integer. Then the natural +morphism +� +f´et,∗Fp(k) +� +⊗ O+,a +Y +/p → f∗ +� +O+,a +X /p(k) +� +is an isomorphism in Da +□(Y ; O+ +Y /p)ϕ. +31It may be more appropriate to denote this category by �D´et(X; Fp) or Shv�(X´et; Fp), but we prefer to stick to +the notation used in [Man22b]. The reason to use this notation is that the left completed version naturally arises as +the “derived” category of ´etale Fp-sheaves on the associated diamond X♦. + +POINCAR´E DUALITY REVISITED +67 +Proof. The claim is v-local on the base, so we can assume that Y (and, therefore, X) admits a +morphism to an affinoid perfectoid space Spa(R, R+). Then we wish to leverage Diagram (21) to +reduce the question to the classical Primitive Comparison Theorem. +More precisely, we first note that the forgetful functor Da +□(X; O+ +X/p)ϕ → Da +□(X; O+ +X/p) is con- +servative by [Man22b, Lem. 3.9.12(i)]. Thus, it suffices to show that the corresponding morphism +f´et,∗Fp(k) ⊗ O+,a +Y +/p → f∗O+,a +X /p(k) +is an isomorphism in Da +□(Y ; O+ +Y /p). Now we note that [Man22b, Prop. 3.3.16 and Lemmas 3.3.10(ii), +3.3.15(iii)] imply that the diagram +Shv�(X´et; O+,a +X /p)oc +Da +□(X; O+ +X/p) +Shv�(Y´et; O+,a +X /p)oc +Da +□(Y ; O+ +X/p) +f´et,∗ +f∗ +(22) +commutes up to a homotopy. Therefore, Diagram (21) ensures that it suffices to show that the +natural morphism +� +f´et,∗Fp(k) +� +⊗ O+,a +Y +/p → f´et,∗O+,a +X /p(k) +is an isomorphism in Shv�(Y´et; O+,a +Y +/p). More explicitly, we reduced the question to showing that, +for each k and d, the natural morphism +Rdf´et,∗Fp(k) ⊗Fp O+ +Y /p → Rdf´et,∗O+ +X/p(k) +is an almost isomorphism of ´etale O+ +Y /p-module. This follows from the standard Primitive Compar- +ison Theorem from the p-adic Hodge theory, see [Sch13, Cor. 5.11] or [Zav21a, Lemma 6.3.7]. +□ +Now we are ready to define first Chern classes on Da +□(−; O+/p)ϕ and Da +□(−; O+/p). For this, we +note that the Riemann-Hilbert functor D´et(X; Fp)ov → Da +□(X; O+ +X/p)ϕ sends the constant sheaf +Fp to the unit object O+ +X/p, and so it defines a functorial in X morphism: +RΓ´et(X, µp) → RΓ(X, O+,a +X /p(1)) := HomDa +□(X;O+ +X/p)ϕ(O+,a +X /p, O+,a +X /p(1)). +Definition 6.2.6. We define the Tate twist as 1S⟨1⟩ := O+,a +S /p(1)[2] ∈ Da +□(S; O+ +S /p)ϕ. This object +is invertible (since − ⊗ O+,a +S /p is symmetric monoidal), so it fits into the assumptions of Section 5. +Definition 6.2.7. A theory of first Chern classes on the 6-functor formalism Da +□(−; O+/p)ϕ is the +morphism of Sp-valued presheaves +cϕ +1 : RΓan(−, O×)[1] → RΓ(−, O+,a/p)[2] = RΓ(−, 1⟨1⟩) +obtained as the composition +RΓan(−; O×)[1] +c´et +1 +−→ RΓ´et(−; µp)[2] → RΓ(−; O+/p(1))[2], +where the first morphism comes from Definition 6.1.2. +Theorem 6.2.8. Let S be a locally noetherian analaytic adic space over Spa(Qp, Zp). Then +(1) Da +□(−; O+/p)ϕ satisfies the excision axiom; +(2) cϕ +1 is a theory of first Chern classes on Da +□(−; O+/p)ϕ. + +68 +BOGDAN ZAVYALOV +Proof. Lemma 6.2.2 ensures that Da +□(−; O+/p)ϕ satisfies the excision sequence. 
To show that c1 is +a theory of first Chern classes, we have to show that the natural morphism +cϕ +1 (O(1)) + f ∗ : O+,a +S /p ⊕ O+,a +S /p(1)[2] → f∗ +� +O+,a +P1 +S /p(1)[2] +� +is an isomorphism. For this, we use the commutative diagram +� +Fp ⊕ µp[2] +� +⊗ O+,a +S /p +f´et,∗ (µp[2]) ⊗ O+,a +S /p +O+,a +S /p ⊕ O+,a +S /p(1)[2] +f∗ (O+/p(1)[2]) . +(c´et +1 (O(1))+f∗ +´et)⊗O+/p +cϕ +1 (O(1))+f∗ +The left vertical arrow is an isomorphism by definition, the right vertical arrow is an isomorphism +by Lemma 6.2.5, and the top horizontal map is an isomorphism by Proposition 6.1.6. Therefore, +the bottom horizontal arrow must be an isomorphism as well finishing the proof. +□ +Theorem 6.2.9. Let Y be a locally noetherian analytic adic space over Spa(Qp, Zp), and f : X → +Y a smooth morphism of pure dimension d. Then the functor +f! : Da +□(X; O+ +X/p)ϕ → Da +□(Y ; O+ +Y /p)ϕ +admits a right adjoint given by the formula +f ∗ ⊗ O+,a +X /p(d)[2d]: Da +□(Y ; O+ +Y /p)ϕ → Da +□(X; O+ +X/p)ϕ. +Proof. This is a direct consequence of Theorem 6.2.8 and Theorem 5.7.7. +□ +Remark 6.2.10. Essentially the same proof applies to the 6-functor formalism Da +□(−; O+/p). +References +[Ber93] +V. G. Berkovich. ´Etale cohomology for non-Archimedean analytic spaces. Inst. Hautes ´Etudes Sci. Publ. +Math., (78):5–161 (1994), 1993. +[Bha22] +B. Bhatt. Prismatic f-gauges. https://www.math.ias.edu/~bhatt/teaching/mat549f22/lectures.pdf, +2022. +[BL22a] +B. Bhatt and J. Lurie. Absolute prismatic cohomology. https://arxiv.org/pdf/2201.06120.pdf, 2022. +[BL22b] +B. Bhatt and J. Lurie. The prismatization of p-adic formal schemes. https://arxiv.org/abs/2201.06124, +2022. +[Cla21] +D. Clausen. Algebraic de rham cohomology. https://sites.google.com/view/algebraicderham/home, +2021. +[CS19] +D. +Clausen +and +P. +Scholze. +Lectures +on +condensed +mathematics. +http://people.mpim-bonn.mpg.de/scholze/Condensed.pdf, 2019. +[CS22] +D. +Clausen +and +P. +Scholze. +Condensed +mathematics +and +complex +geometry. +https://people.mpim-bonn.mpg.de/scholze/Complex.pdf, 2022. +[Dri22] +V. Drinfeld. Prismatization. https://arxiv.org/abs/2005.04746, 2022. +[FS21] +L. +Fargues +and +P. +Scholze. +Geometrization +of +the +local +langlands +correspondence. +https://arxiv.org/abs/2102.13459, 2021. +[Fu11] +L. Fu. Etale cohomology theory, volume 13 of Nankai Tracts in Mathematics. World Scientific Publishing +Co. Pte. Ltd., Hackensack, NJ, 2011. +[Fuj02] +K. Fujiwara. A proof of the absolute purity conjecture (after Gabber). In Algebraic geometry 2000, Azumino +(Hotaka), volume 36 of Adv. Stud. Pure Math., pages 153–183. Math. Soc. Japan, Tokyo, 2002. +[Ful98] +W. Fulton. Intersection theory, volume 2 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. +A Series of Modern Surveys in Mathematics. Springer-Verlag, Berlin, second edition, 1998. +[GH15] +D. Gepner and R. Haugseng. Enriched ∞-categories via non-symmetric ∞-operads. Adv. Math., 279:575– +716, 2015. + +POINCAR´E DUALITY REVISITED +69 +[GR17] +D. Gaitsgory and N. Rozenblyum. A study in derived algebraic geometry. Vol. I. Correspondences and +duality, volume 221 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, +RI, 2017. +[GS16] +R. Garner and M. Shulman. Enriched categories as a free cocompletion. Adv. Math., 289:1–94, 2016. +[HA] +J. Lurie. Higher algebra. https://www.math.ias.edu/~lurie/papers/HA.pdf, 2017. +[Hau15] +R. Haugseng. Rectification of enriched ∞-categories. Algebr. Geom. Topol., 15(4):1931–1982, 2015. +[Hub93] +R. Huber. 
Bewertungsspektrum und rigide Geometrie, volume 23 of Regensburger Mathematische Schriften. Universität Regensburg, Fachbereich Mathematik, Regensburg, 1993.
[Hub96] R. Huber. Étale cohomology of rigid analytic varieties and adic spaces. Friedr. Vieweg & Sohn, Braunschweig, 1996.
[JY21] N. Johnson and D. Yau. 2-dimensional categories. Oxford University Press, Oxford, 2021.
[Kha22] A. A. Khan. Absolute Poincaré duality in étale cohomology. Forum Math. Sigma, 10:Paper No. e99, 2022.
[KL19] K. Kedlaya and R. Liu. Relative p-adic Hodge theory, II: imperfect period rings. https://arxiv.org/pdf/1602.06899.pdf, 2019.
[Lur18] J. Lurie. Spectral algebraic geometry. https://www.math.ias.edu/~lurie/papers/SAG-rootfile.pdf, 2018.
[Lur22] J. Lurie. Kerodon. https://kerodon.net, 2022.
[LZ17] Y. Liu and W. Zheng. Enhanced adic formalism and base change for higher Artin stacks. https://arxiv.org/abs/1211.5948, 2017.
[LZ22] Q. Lu and W. Zheng. Categorical traces and a relative Lefschetz-Verdier formula. Forum Math. Sigma, 10:Paper No. e10, 24, 2022.
[Man22a] L. Mann. The 6-functor formalism for Zℓ- and Qℓ-sheaves on diamonds. https://arxiv.org/abs/2209.08135, 2022.
[Man22b] L. Mann. A p-adic 6-functor formalism in rigid-analytic geometry. https://arxiv.org/abs/2206.02022, 2022.
[Ols15] M. Olsson. Borel-Moore homology, Riemann-Roch transformations, and local terms. Adv. Math., 273:56–123, 2015.
[Sch13] P. Scholze. p-adic Hodge theory for rigid-analytic varieties. Forum Math. Pi, 1:e1–77, 2013.
[Sch17] P. Scholze. Étale cohomology of diamonds. https://arxiv.org/abs/1709.07343, 2017.
[Sch22] P. Scholze. Six-functor formalisms. https://people.mpim-bonn.mpg.de/scholze/SixFunctors.pdf, 2022.
[SGA IV] A. Grothendieck. Théorie des topos et cohomologie étale des schémas. Lecture Notes in Mathematics, Vol. 269. Springer-Verlag, Berlin-New York. Séminaire de Géométrie Algébrique du Bois-Marie 1963–1964 (SGA 4), Dirigé par M. Artin et J. L. Verdier. Avec la collaboration de N. Bourbaki, P. Deligne et B. Saint-Donat.
[Sta23] The Stacks project authors. The Stacks project. https://stacks.math.columbia.edu, 2023.
[Tan22] L. Tang. Syntomic cycle classes and prismatic Poincaré duality. https://arxiv.org/abs/2210.14279, 2022.
[Zav21a] B. Zavyalov. Almost coherent modules and almost coherent sheaves. https://arxiv.org/abs/2110.10773, 2021.
[Zav21b] B. Zavyalov. Mod-p Poincaré duality in p-adic analytic geometry. https://arxiv.org/abs/2111.01830, 2021.
[Zav21c] B. Zavyalov. Quotients of admissible formal schemes and adic spaces by finite groups. https://arxiv.org/abs/2102.02762, 2021.
[Zav23] B. Zavyalov. Notes on adic geometry. https://bogdanzavyalov.com/refs/adic_notes.pdf, 2023.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='AG] 10 Jan 2023 POINCAR´E DUALITY REVISITED BOGDAN ZAVYALOV Abstract.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' We revisit Poincar´e Duality in the context of an abstract 6-functor formalism.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' In particular, we provide a small list of assumptions that implies Poincar´e Duality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' As an application, we give new uniform (and essentially formal) proofs of some previously established Poincar´e Duality results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Contents 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Introduction 1 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Abstract six functor formalisms 12 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Abstract Poincar´e Duality 23 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Dualizing object 29 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' First Chern classes 39 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Poincar´e Duality in examples 59 References 68 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Introduction 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Historical overview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Six functor formalisms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Historically, the first 6-functor formalism was introduced by A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Grothendieck in [SGA IV] in the context of ´etale cohomology of Spec Z[1/n]-schemes.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' To explain what this means, we note that ´etale cohomology come with the assignment X �→ D(X) = D(X´et;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Z/nZ) that sends a Spec Z[1/n]-scheme X to the derived category of ´etale sheaves of Z/nZ-modules on X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' This recovers the absolute ´etale cohomology via the formula RΓ (X´et;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Z/nZ) ≃ RHomD(X´et;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='Z/nZ) � Z/nZX, Z/nZX � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' It turns out that this assignment comes equipped with 6-operations � f ∗, Rf∗, ⊗L, RHom, Rf!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=', Rf !' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='� that satisfy the following list of “axioms”: Axioms 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (1) the tensor product ⊗L defines the structure of a symmetric monoidal cat- egory on D(X);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (2) every second functor is right adjoint to the previous one;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (3) the pullback functor f ∗ is symmetric monoidal;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (4) Rf!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' commutes with base change;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (5) Rf!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' satisfies the projection formula.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' 1 2 BOGDAN ZAVYALOV Since then, it turned out that many other cohomology theories come equipped with the cor- responding 6-functor formalisms (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='g.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' D-modules, mixed Hodge modules, etc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=').' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' More precisely, it often happens that interesting cohomology theories admit “coefficient” theories X �→ D(X) accom- panied by 6-operations1 � f ∗, f∗, ⊗, Hom, f!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=', f !' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='� satisfying the same set of axioms and recovering the corresponding cohomology complexes via the formula RΓ(X) = HomD(X)(1X, 1X), where 1X is the unit object of D(X).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' However, it is somewhat difficult to make the definition of a 6-functor formalism precise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' To point out the main difficulty, we stick our attention to the projection formula.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' For a morphism f : X → Y , there is no canonical morphism between f!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (F ⊗ f ∗G) and f!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' F ⊗ G, so Axiom 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1(5) should really specify, for every f : X → Y , an isomorphism f!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (− ⊗ f ∗−) ≃ f!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (−) ⊗ − of functors D(X) × D(Y ) → D(Y ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Then the natural question is how functorial this isomorphism is, how well it interacts with composition of morphisms or base change, and etc, etc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Answering these questions would involve further choices of equivalences between equivalences that we would like to also be functorial in some precise way.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' But these higher coherences are pretty difficult to spell out explicitly making it hard to give a precise definition of a 6-functor formalism.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' This problem has been recently beautifully resolved by defining2 a 6-functor formalism to be an ∞-functor D: Corr → Cat∞ from the appropriate category of correspondences to the ∞-category of ∞-categories.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' This idea originally goes back to J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Lurie, and was first spelled out by D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Gaitsgory and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Rozenblyum in [GR17].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Unfortunately, some of their claims still seem to be unproven, so we instead use a recent (weaker) version of the formalization of a 6-functor formalism due to L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Mann [Man22b] (based on the work of Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Liu and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Zheng, see [LZ17]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' We review this theory in Section 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Recent examples of six functors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Recently, there has been a huge rise of interest in construct- ing new 6-functor formalisms (see [LZ17], [Sch17], [CS19], [CS22], [Man22b], [Man22a]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' What unites all these examples (and all interesting previous examples) is that they all satisfy a version of Poincar´e Duality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Namely, in each of these 6-functor formalisms, any smooth morphism f : X → Y admits an invertible object ωf ∈ D(X) and an equivalence f !' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (−) ≃ f ∗(−) ⊗ ωf of functors D(Y ) → D(X).' 
Furthermore, in most of these examples, it is possible to give an easy formula for the dualizing object ω_f. Despite this similarity, the proofs of Poincaré Duality in each particular context are pretty hard and require a lot of work specific to each situation. As far as we are aware, there is no uniform approach. The main goal of this paper is to provide a uniform approach to the question of proving Poincaré Duality, also simplifying previously existing proofs. However, before we discuss our results, we wish to discuss two examples of proofs of Poincaré Duality in more detail, to show how each of these 6-functor formalisms depends on the specifics of the situation.

Footnote 1: From now on, we follow the notation that suppresses R's except for the RΓ notation.
Footnote 2: We refer to Definition 2.3.10 for the actual definition of a 6-functor formalism used in this paper.

First Example (ℓ-adic étale sheaves in analytic geometry). In [Sch17], P. Scholze proves a (weaker) version of Poincaré Duality for (ℓ-adic) étale sheaves on diamonds (see [Sch17, Prop. 24.4]). Using standard reductions, it suffices to consider the case of the relative unit ball D^1_X → X over a diamond X. In this case, approximation arguments and the comparison with Huber's theory reduce the question to the usual étale Poincaré Duality for D^1_X → X over a strongly noetherian analytic adic space X, which has been established before by R. Huber in [Hub96, Thm. 7.5.3]. Therefore, the crux of the argument lies in the proof of [Hub96, Thm. 7.5.3], which we discuss in more detail now.

Huber's proof of Poincaré Duality follows the strategy of proving Poincaré Duality in étale cohomology of schemes: one first constructs the trace map by reducing to the case of curves, and then one proves Deligne's fundamental lemma. We note that both steps are specific to étale sheaves and use almost all prior results established in the book. Furthermore, the proof of the adic version of Deligne's fundamental lemma uses non-trivial results from the theory of étale cohomology of schemes, making the proof not intrinsic to adic spaces. One extra difficulty in Huber's proof is the need to work with fibers over points of higher rank: these fibers do not admit any structure of an adic space, and so they can be treated only in a somewhat artificial way.

Remark 1.1.2. We note that [Sch17] is logically independent of [Hub96] except for two facts: quasi-compact base change (see [Hub96, Thm. 4.1.1(c)]) and Poincaré Duality. Therefore, it seems desirable to give proofs of these facts entirely in the realm of diamonds, making [Sch17] independent of [Hub96]. We do not have anything to say about the first question, but Theorem 1.3.2 provides a new soft proof of Poincaré Duality that is essentially independent of the results in [Hub96].

Second Example (solid almost O^+/p-ϕ-modules). Another example that we want to consider in more detail is the 6-functor formalism of "solid almost O^+/p-ϕ-modules" X ↦ D^a_□(X; O^+_X/p)^ϕ developed by L. Mann in [Man22b]. This 6-functor formalism satisfies Poincaré Duality (see [Man22b, Thm. 3.10.20]). In order to prove this, Mann reduces the general question to the case of the torus T^1_X → X over a strictly totally disconnected X. In this situation, he proves a strong version of v-descent for D^a_□(A^+/p), and then argues by choosing a formal model of T^1 and performing explicit computations related to the Faltings trace map, to reduce the question to (solid almost) Grothendieck duality on the mod-p fiber of the formal model. This argument is also specific to this particular 6-functor formalism: the formal model considerations are not available in most other geometric situations, and the reduction to Grothendieck duality is very specific to the p-adic situation.
In this paper, we give a soft proof of Poincaré Duality for D^a_□(X; O^+_X/p)^ϕ that (essentially) only uses the computation of the cohomology groups of the projective line.

1.2. Our results.

1.2.1. Formulation of the questions. We fix a base scheme (resp. locally noetherian analytic adic space) S, the category C of locally finitely presented S-schemes (resp. locally finite type adic S-spaces), and a 6-functor formalism D : Corr(C) → Cat_∞ (see Definition 2.3.10). As mentioned in Section 1.1.2, all interesting examples of 6-functor formalisms satisfy Poincaré Duality. In order to make this precise, we follow [Sch17] and introduce the following terminology:

Definition 1.2.1. (Definition 2.3.6 and Definition 2.3.7) A morphism f : X → Y is called weakly cohomologically smooth (with respect to D) if
(1) the co-projection morphism f^!(1_Y) ⊗ f^*(−) → f^!(−) from Notation 2.1.5(2) is an equivalence;
(2) the dualizing object ω_f := f^!(1_Y) is an invertible object of D(X), and it commutes with an arbitrary base change Y′ → Y, i.e., for any Cartesian diagram in C with horizontal morphisms g′ : X′ → X and g : Y′ → Y and vertical morphisms f′ : X′ → Y′ and f : X → Y, the natural morphism (g′)^* f^!(1_Y) → (f′)^!(1_{Y′}) from Notation 2.1.5(3) is an isomorphism.

A morphism f : X → Y is called cohomologically smooth (with respect to D) if, for any morphism g : Y′ → Y in C, the base change f′ : X′ → Y′ is weakly cohomologically smooth.

Then the question of proving Poincaré Duality reduces to the following two (essentially independent) questions:

Question 1.2.2. What is a minimalistic set of conditions on D that would ensure that any smooth morphism f : X → Y is cohomologically smooth (with respect to D)?

Question 1.2.3. If every smooth morphism is cohomologically smooth, is there a reasonable formula for the dualizing object ω_f? Is there a minimalistic set of conditions on D that would ensure that ω_f is equal to the Tate twist (appropriately defined)?

The main goal of this paper is to give positive answers to both questions. Our answer to Question 1.2.2 is optimal: it gives a characterization of all such D. For Question 1.2.3, it seems harder to get an optimal answer; however, we give some results that cover all interesting examples of 6-functor formalisms established up to the present moment.

Remark 1.2.4. Somewhat surprisingly, our answers are uniform for schemes and adic spaces.
Furthermore, the same results can be achieved in any "geometry" satisfying the property that, for any f : X → Y, the diagonal morphism X → X ×_Y X is "locally closed", and admitting a reasonable notion of vector bundles and blow-ups (e.g. complex-analytic spaces, formal geometry, derived schemes, etc.). However, it seems hard to make precise what the word "geometry" should mean, so we stick to the examples of schemes and adic spaces in this paper.

Before we discuss the main results of this paper, we want to point out the main problem in answering these questions, especially in the situation of an abstract 6-functor formalism. Suppose that we have somehow guessed the correct formula for the dualizing object ω_f. Then the question of proving Poincaré Duality essentially boils down to the question of constructing an isomorphism

  Hom_{D(X)}(F, f^*G ⊗ ω_f) ≃ Hom_{D(Y)}(f_! F, G),

functorial in F ∈ D(X) and G ∈ D(Y). Now the problem is that we have almost no control over the categories D(X) and D(Y) for a general 6-functor formalism D. This is probably not a big issue in the classical 6-functor formalisms, but it becomes a serious issue in the recent 6-functor formalisms (for example, [CS22] or [Man22b]), where the categories D(X) are defined abstractly via descent, so one does not have good control over D(X) for a general X. Therefore, the main problem is to prove adjunction without really understanding the involved categories. Miraculously, it turns out to be possible, as we explain in the next section.
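For readability, here is a LaTeX rendering of the desired adjunction, together with an equivalent formulation in terms of the right adjoint f^!; the restatement via f^! is our paraphrase (matching the shape of Theorems 1.2.16 and 1.2.18 below), not an additional claim.

\[
\operatorname{Hom}_{D(X)}\bigl(F,\ f^*G \otimes \omega_f\bigr)
\;\simeq\;
\operatorname{Hom}_{D(Y)}\bigl(f_! F,\ G\bigr),
\qquad F \in D(X),\ G \in D(Y),
\]
which, since it is functorial in both variables, amounts to an identification of the right adjoint of f_! as
\[
f^!(G) \;\simeq\; f^* G \otimes \omega_f .
\]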
1.2.2. Our Answers. Now we are ready to discuss the answers to Questions 1.2.2 and 1.2.3 that we obtain in this paper.

To answer Question 1.2.2, we separate the exact conditions needed to prove Poincaré Duality for one particular morphism f. We do this via the concept of a trace-cycle theory. For this, we fix a morphism f : X → Y with the diagonal morphism ∆ : X → X ×_Y X and the projections p_1, p_2 : X ×_Y X → X.

Definition 1.2.5. (Definition 3.2.4) A trace-cycle theory on f is a triple (ω_f, tr_f, cl_∆) of
(1) an invertible object ω_f ∈ D(X),
(2) a trace morphism tr_f : f_! ω_f → 1_Y in the homotopy category D(Y),
(3) a cycle map cl_∆ : ∆_! 1_X → p_2^* ω_f in the homotopy category D(X ×_Y X),
such that the following two diagrams commute in D(X) (see Footnote 3). The first is the square formed by the canonical isomorphism 1_X ≃ p_{1,!}(∆_! 1_X), the map p_{1,!}(cl_∆) : p_{1,!}(∆_! 1_X) → p_{1,!}(p_2^* ω_f), the trace tr_{p_1} : p_{1,!}(p_2^* ω_f) → 1_X, and the identity of 1_X; its commutativity says that the composite
  1_X ≃ p_{1,!}(∆_! 1_X) → p_{1,!}(p_2^* ω_f) → 1_X
equals the identity. The second is the diagram formed by the canonical isomorphism ω_f ≃ p_{2,!}(p_1^* ω_f ⊗ ∆_! 1_X), the map p_{2,!}(id ⊗ cl_∆) : p_{2,!}(p_1^* ω_f ⊗ ∆_! 1_X) → p_{2,!}(p_1^* ω_f ⊗ p_2^* ω_f), the map tr_{p_2} ⊗ id : p_{2,!}(p_1^* ω_f) ⊗ ω_f → 1_X ⊗ ω_f, and the identity of ω_f, with the right vertical arrow being the projection formula isomorphism p_{2,!}(p_1^* ω_f ⊗ p_2^* ω_f) ≃ p_{2,!}(p_1^* ω_f) ⊗ ω_f; its commutativity says that the composite
  ω_f ≃ p_{2,!}(p_1^* ω_f ⊗ ∆_! 1_X) → p_{2,!}(p_1^* ω_f ⊗ p_2^* ω_f) ≃ p_{2,!}(p_1^* ω_f) ⊗ ω_f → 1_X ⊗ ω_f ≃ ω_f
equals the identity.

Theorem 1.2.6. (Theorem 3.3.1, Remark 3.3.2) Let f : X → Y be a morphism in C. Then f is cohomologically smooth if and only if f admits a trace-cycle theory (ω_f, tr_f, cl_∆).
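For the reader who prefers to see the diagrams, here is a possible LaTeX rendering (assuming the tikz-cd package) of the two commutative diagrams of Definition 1.2.5, reconstructed from the objects and arrows listed above; the layout in the published paper may differ.

\begin{tikzcd}
1_X \arrow[r, "\sim"] \arrow[d, "{\mathrm{id}}"'] & p_{1,!}(\Delta_! 1_X) \arrow[d, "{p_{1,!}(\mathrm{cl}_\Delta)}"] \\
1_X & p_{1,!}(p_2^*\,\omega_f) \arrow[l, "{\mathrm{tr}_{p_1}}"]
\end{tikzcd}
\qquad
\begin{tikzcd}[column sep=large]
\omega_f \arrow[r, "\sim"] \arrow[d, "{\mathrm{id}}"'] & p_{2,!}(p_1^*\omega_f \otimes \Delta_! 1_X) \arrow[r, "{p_{2,!}(\mathrm{id}\otimes\mathrm{cl}_\Delta)}"] & p_{2,!}(p_1^*\omega_f \otimes p_2^*\omega_f) \arrow[d, "\wr"] \\
\omega_f & 1_X \otimes \omega_f \arrow[l, "\sim"] & p_{2,!}(p_1^*\omega_f) \otimes \omega_f \arrow[l, "{\mathrm{tr}_{p_2}\otimes\mathrm{id}}"]
\end{tikzcd}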
Remark 1.2.7. The main point of Theorem 1.2.6 is that it allows us to "decategorify" the question of Poincaré Duality and reduce it to the question of constructing two morphisms and verifying the commutativity of two diagrams. In particular, one does not need to understand the categories D(X) and D(Y) themselves (only maps between very specific objects).

Theorem 1.2.6 is sufficiently strong to answer Question 1.2.2 in full generality:

Theorem 1.2.8. (Theorem 3.3.3) The relative projective line g : P^1_S → S admits a trace-cycle theory (ω_g, tr_g, cl_∆) if and only if every smooth morphism f : X → Y is cohomologically smooth (with respect to D).

Footnote 3: See Construction 3.2.2 for the precise definition of tr_{p_i}. Roughly, it is just the corresponding base change of tr_f.
Theorem 1.2.8 implies that, in the presence of a trace-cycle theory on the relative projective line, the question of proving the full version of Poincaré Duality boils down to the question of computing the dualizing object ω_f = f^! 1_Y for any smooth morphism f : X → Y. In general, this is a pretty hard question. To see that there could not be any "trivial" formula for the dualizing object, one could think about the case of the (solid) quasi-coherent 6-functor formalism D_□(−; O) on locally finite type (derived) Z-schemes (see [CS19]). In this situation, for a smooth morphism f : X → Y of pure dimension d, the dualizing object is given by Ω^d_{X/Y}[d]. In particular, this object remembers the geometry of f in a non-trivial way.

Nevertheless, we are able to give a formula for the dualizing object of any smooth morphism f : X → Y under some extra assumptions on the 6-functor formalism D. For the next construction, we assume that all smooth morphisms are cohomologically smooth with respect to D.

Construction 1.2.9. (Variant 4.1.3) Let f : V_X(E) → X be the total space of a vector bundle E on X, with the zero section s : X → V_X(E). Then we define C_X(E) ∈ D(X) as C_X(E) := s^* f^! 1_X ∈ D(X).
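As a quick sanity check (our own illustration, not a statement from the paper): in the classical étale formalism D((−)_ét; Z/nZ) of Theorem 1.3.1 below, the total space f : V_X(E) → X of a vector bundle E of rank r is smooth of pure relative dimension r, so the duality formula quoted there gives

\[
C_X(E) \;=\; s^* f^! 1_X \;\simeq\; s^*\bigl(f^* 1_X\,(r)[2r]\bigr) \;\simeq\; 1_X(r)[2r],
\]

i.e. in this formalism C_X(E) remembers only the rank of E.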
Theorem 1.2.10. (Theorem 4.2.8 and Theorem 4.2.12) Suppose the 6-functor formalism D is motivic or geometric (see Definition 4.2.1 and Definition 4.2.9). Let f : X → Y be a smooth morphism. Then there is a canonical isomorphism f^! 1_Y ≃ C_X(T_f) ∈ D(X), where T_f is the relative tangent bundle of f.

Remark 1.2.11. Theorem 1.2.8 implies that any A^1-invariant 6-functor formalism (see Definition 2.1.10) with a trace-cycle theory on the relative projective line P^1_S → S is motivic in the sense of Definition 4.2.1. In particular, Theorem 1.2.10 applies in this case.

Theorem 1.2.10 answers the first part of Question 1.2.3, at least under some further assumptions on D. Now we discuss the second part of Question 1.2.3. The main tool in answering this question will be the notion of first Chern classes. To introduce an abstract notion of first Chern classes, we need to introduce some notation.

Notation 1.2.12. For the rest of this section, we fix an invertible object 1_S⟨1⟩ ∈ D(S). For each f : X → S, we define 1_X⟨1⟩ := f^* 1_S⟨1⟩ ∈ D(X).
For each integer d ≥ 0, we define 1_X⟨d⟩ := 1_X⟨1⟩^⊗d ∈ D(X). For d ≤ 0, we define 1_X⟨d⟩ := (1_X⟨−d⟩)^∨ ∈ D(X).

Definition 1.2.13. (Definition 5.2.4, Definition 5.2.8) A weak theory of first Chern classes on a 6-functor formalism D is a morphism (see Footnote 4) of Sp-valued sheaves (see Footnote 5)

  c_1 : RΓ_an(−, O^×)[1] → RΓ(−, 1⟨1⟩) : C^op → Sp.

A theory of first Chern classes is a weak theory of first Chern classes c_1 such that, for the relative projective line f : P^1_S → S, the morphism

  c_1 + f^*⟨1⟩ : 1_S ⊕ 1_S⟨1⟩ → f_* 1_{P^1_S}⟨1⟩

is an isomorphism (see Footnote 6). A strong theory of first Chern classes is a weak theory of first Chern classes c_1 such that, for any integer d ≥ 1 and the relative projective space f : P^d_S → S, the morphism

  ⊕_{k=0}^{d} c_1^k⟨d − k⟩ : ⊕_{k=0}^{d} 1_S⟨d − k⟩ → f_* 1_{P^d_S}⟨d⟩

is an isomorphism.

Footnote 4: See Notation 5.2.3 for the definition of RΓ(−, 1⟨1⟩).
Footnote 5: This definition is written in the context of adic spaces. In the case of schemes, one has to replace RΓ_an(−, O^×)[1] with RΓ_Zar(−, O^×)[1].
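To unpack how the last two conditions relate (this is our own unwinding; the precise meaning of the morphisms c_1^k⟨d − k⟩ is only fixed in Construction 5.2.7, referenced in Footnote 6): for d = 1 the morphism in the strong condition reads

\[
\bigoplus_{k=0}^{1} c_1^{k}\langle 1-k\rangle
\;:\;
1_S\langle 1\rangle \oplus 1_S
\;\longrightarrow\;
f_*\, 1_{\mathbb{P}^1_S}\langle 1\rangle,
\]

which, up to permuting the two summands, is the morphism c_1 + f^*⟨1⟩ : 1_S ⊕ 1_S⟨1⟩ → f_* 1_{P^1_S}⟨1⟩ from the definition of a theory of first Chern classes; so a strong theory is, in particular, a theory of first Chern classes.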
Remark 1.2.14. Definition 5.2.8 implies that, if c_1 is a theory of first Chern classes, then 1_S⟨−1⟩ ≃ Cone(1_S → f_* 1_{P^1_S}). So the invertible object 1_S⟨1⟩ is unique up to isomorphism, and it axiomatizes the "Tate twist".

Remark 1.2.15. A weak theory of first Chern classes is roughly just a sufficiently functorial, additive way to assign first Chern classes c_1(L) ∈ H^0(X, 1_X⟨1⟩) to any line bundle L on a space X. A theory of first Chern classes is a weak theory satisfying the projective bundle formula for P^1_S → S. A strong theory of first Chern classes is a weak theory of first Chern classes satisfying the projective bundle formula for P^d_S → S for all d ≥ 1.
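One way to see the identification in Remark 1.2.14 (a sketch under assumptions not spelled out in this introduction: that the projection formula is available for f_* along the proper map f : P^1_S → S, e.g. because f_! ≃ f_* there, and that the isomorphism of Definition 1.2.13 matches the summand 1_S⟨1⟩ with the twisted unit map):

\[
f_*\, 1_{\mathbb{P}^1_S}
\;\simeq\; f_*\bigl(1_{\mathbb{P}^1_S}\langle 1\rangle \otimes f^* 1_S\langle -1\rangle\bigr)
\;\simeq\; \bigl(f_*\, 1_{\mathbb{P}^1_S}\langle 1\rangle\bigr) \otimes 1_S\langle -1\rangle
\;\simeq\; \bigl(1_S \oplus 1_S\langle 1\rangle\bigr) \otimes 1_S\langle -1\rangle
\;\simeq\; 1_S\langle -1\rangle \oplus 1_S,
\]

and under this chain of identifications the unit map 1_S → f_* 1_{P^1_S} becomes the inclusion of the summand 1_S, so its cone is 1_S⟨−1⟩.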
With that definition at hand, we give an answer to the second part of Question 1.2.3 in the following two theorems:

Theorem 1.2.16. (Theorem 5.7.7) Let D be a 6-functor formalism satisfying the excision axiom (see Definition 2.1.8) and admitting a theory of first Chern classes c_1. Suppose that f : X → Y is a smooth morphism of pure relative dimension d. Then the right adjoint to the functor f_! : D(X) → D(Y) is given by the formula

  f^!(−) = f^*(−) ⊗ 1_X⟨d⟩ : D(Y) → D(X).

Remark 1.2.17. Theorem 1.2.16 is essentially the best possible answer to Question 1.2.3 in the presence of the excision axiom. It reduces the question of proving Poincaré Duality to constructing a (weak) theory of first Chern classes and computing the cohomology of the projective line.
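For orientation (our own specialization, using the étale applications stated in Section 1.3 below): in the étale formalism D((−)_ét; Z/nZ) one takes 1_X⟨1⟩ = (Z/nZ)(1)[2], so the formula of Theorem 1.2.16 becomes

\[
f^!(-) \;\simeq\; f^*(-) \otimes 1_X\langle d\rangle \;=\; f^*(-)\,(d)[2d],
\]

which is exactly the shape of the right adjoints in Theorems 1.3.1 and 1.3.2.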
We also prove a version of Theorem 1.2.16 without assuming that D satisfies the excision axiom. Unfortunately, this result is not as strong, though it seems to be sufficiently strong to apply to the potential crystalline and prismatic 6-functor formalisms:

Theorem 1.2.18. (Theorem 5.7.6) Suppose that a 6-functor formalism D is either A^1-invariant or pre-geometric (see Definition 2.1.10 and Definition 4.2.9), let c_1 be a strong theory of first Chern classes on D underlying a theory of cycle maps cl_• (see Definition 5.3.3), and let f : X → Y be a smooth morphism of pure relative dimension d. Then the right adjoint to the functor f_! : D(X) → D(Y) is given by the formula

  f^!(−) = f^*(−) ⊗ 1_X⟨d⟩ : D(Y) → D(X).
Remark 1.2.19. The condition that D is pre-geometric is satisfied if, for example, for every space Y and every invertible object L ∈ D(P^1_Y) on the relative projective line f : P^1_Y → Y, there is an invertible object N ∈ D(Y) with an isomorphism f^* N ≅ L.

Footnote 6: See Construction 5.2.7 for the precise meaning of the morphisms c_1 and c_1^k in the formulas of Definition 1.2.13.

1.3. Applications.

1.3.1. Simplification of the previous proofs. Using Theorem 1.2.16, we can give simpler proofs of previously established Poincaré Dualities. Firstly, we can give new, easier proofs of étale Poincaré Duality in different settings:

Theorem 1.3.1. ([SGA IV, Exp. XVIII, Thm. 3.2.5], Remark 6.1.9) Let Y be a scheme, f : X → Y a smooth morphism of pure dimension d, and n an integer invertible in O_Y. Then the functor Rf_! : D(X_ét; Z/nZ) → D(Y_ét; Z/nZ) admits a right adjoint given by the formula f^*(d)[2d] : D(Y_ét; Z/nZ) → D(X_ét; Z/nZ).

Theorem 1.3.2. ([Hub96, Thm. 7.5.3], Theorem 6.1.8) Let Y be a locally noetherian analytic adic space, f : X → Y a smooth morphism of pure dimension d, and n an integer invertible in O^+_Y. Then the functor Rf_! : D(X_ét; Z/nZ) → D(Y_ét; Z/nZ) admits a right adjoint given by the formula f^*(d)[2d] : D(Y_ét; Z/nZ) → D(X_ét; Z/nZ).

Remark 1.3.3. Our results are slightly stronger than the classical versions appearing in [SGA IV, Exp. XVIII, Thm. 3.2.5] and [Hub96, Thm. 7.5.3], respectively. Namely, we do not assume that f is separated, and we do not make any boundedness assumptions on the derived categories D(X_ét; Z/nZ) and D(Y_ét; Z/nZ).

Remark 1.3.4. As mentioned in Subsection 1.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2, this gives a new proof of Poincar´e Duality making [Sch17] almost independent of [Hub96].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Before we go into the proofs of Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1 and Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2, we mention that these results formally imply a big part of the standard foundational results in the theory of ´etale cohomology.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Application 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (Cohomological purity) If i: X → Y is a (Zariski)-closed immersion of smooth S-schemes (resp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' adic spaces) of pure dimension dX and dY respectively, then Ri!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='Z/nZ ≃ Z/nZ(−c)[−2c], where c = dY − dY .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' This follows directly from Poincar´e Duality and the isomorphism Ri!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' ◦ Rf !' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Y ≃ Rf !' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' X, where fX and fY are the structure morphisms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Application 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (Smooth base change) Theorem 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1 (resp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2) and Proposi- tion 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='9 imply the smooth base change in ´etale cohomology7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' For the next application, we recall that [Zav23, Lemma 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2] provides a categorical description for the category of constructible sheaves.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Namely, it identifies Db,≥0 cons(X´et;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Z/nZ) with the subcategory of compact objects in Db,≥0(X´et;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Z/nZ) for any qcqs scheme (resp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' qcqs locally noetherian adic space) X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' 7We are not aware of any other proof of smooth base change simpler than the original proof in [SGA IV, Exp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' XVI, Cor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' The classical proof of Poincar´e Duality uses smooth base as an input.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Therefore, one cannot deduce smooth base change from the classical proof of Poincar´e Duality and Proposition 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='9.' 
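To spell out the deduction in Application 1.3.6, here is a sketch in our own words (with the shorthand ⟨d⟩ := (d)[2d]; the compatibility of the duality isomorphism with base change is the point we take Proposition 2.3.9 to supply). Let f: X → Y be smooth of pure dimension d, let g: Y' → Y be arbitrary, and let f': X' → Y' and g': X' → X be the base changes. Passing to right adjoints in proper (shriek) base change and then substituting Poincaré Duality gives

\[
g^* \circ Rf_! \simeq Rf'_! \circ (g')^*
\;\Longrightarrow\;
f^! \circ Rg_* \simeq Rg'_* \circ (f')^!
\;\Longrightarrow\;
f^* \circ Rg_* \simeq Rg'_* \circ (f')^*,
\]

where the last step uses f^! ≃ f^*⟨d⟩ and (f')^! ≃ (f')^*⟨d⟩ and cancels the invertible twist ⟨d⟩; the final isomorphism is exactly smooth base change.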
Application 1.3.7. (Preservation of constructible sheaves) If f: X → Y is a smooth qcqs morphism, then Rf_! restricts to the functor Rf_!: D^{(b)}_cons(X_ét; Z/nZ) → D^{(b)}_cons(Y_ét; Z/nZ). For this, we can assume that Y is qcqs, then the discussion above implies that we only need to show that (the restriction) Rf_!: D^{≥−N}(X_ét; Z/nZ) → D^{≥−N}(Y_ét; Z/nZ) preserves compact objects for any integer N. This can be easily seen from the fact that the right adjoint Rf^! = f^*(d)[2d] commutes with infinite direct sums and is of finite cohomological dimension.

For the next application, we recall that [Zav23, Lemma 10.1] identifies D^{(b)}_lisse(X_ét; Z/nZ) with the category of dualizable objects in D(X_ét; Z/nZ).

Application 1.3.8. (Preservation of lisse sheaves) If f: X → Y is proper and smooth, then Rf_* restricts to the functor Rf_*: D^{(b)}_lisse(X_ét; Z/nZ) → D^{(b)}_lisse(Y_ét; Z/nZ). By the discussion above, it suffices to show that Rf_* preserves dualizable objects. Now using Poincaré Duality, it is formal to see that, for a dualizable object L, Rf_*L is also dualizable with the dual Rf_*(L^∨(d)[2d]).

Now we briefly discuss the proofs of Theorem 1.3.1 and Theorem 1.3.2. Our strategy is to use Theorem 1.2.16 to reduce the question to constructing first Chern classes (in a sufficiently functorial manner) and verifying the projective bundle formula for the relative projective line. The construction of the first Chern classes comes from the Kummer short exact sequence (see Definition 6.1.2), so the question of proving Poincaré Duality essentially boils down to the question of computing cohomology of the relative projective line. For this, one can reduce to the case of S = Spec C or S = Spa(C, O_C) for an algebraically closed (non-archimedean) field C. Then this computation is standard in both theories.
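For orientation, the expected answer over such a C is the standard one (recalled here for the reader's convenience, not as a statement established at this point): for n as in Theorem 1.3.1 (resp. Theorem 1.3.2),

\[
R\Gamma\big(\mathbf{P}^1_{C,\mathrm{\acute{e}t}},\ \mathbf{Z}/n\mathbf{Z}\big)\ \simeq\ \mathbf{Z}/n\mathbf{Z}\ \oplus\ \mathbf{Z}/n\mathbf{Z}(-1)[-2],
\]

that is, H^0 ≃ Z/nZ, H^2 ≃ Z/nZ(−1), and all other cohomology groups vanish; the relative version of this computation over S amounts to the projective bundle formula for P^1_S → S.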
Apart from the computation of cohomology of the projective line, the proofs in the analytic and algebraic situations are uniform.

Another concrete example of Poincaré Duality that we consider in this paper is the version of Poincaré Duality for the 6-functor formalism of "solid almost O^+/p-ϕ-modules" D^a_□(X; O^+_X/p)_ϕ developed by L. Mann in [Man22b]. In this context, we can give a new proof of the following result:

Theorem 1.3.9. ([Man22b, Thm. 3.10.20], Theorem 6.2.9) Let Y be a locally noetherian analytic adic space over Spa(Q_p, Z_p), and f: X → Y a smooth morphism of pure dimension d. Then the functor f_!: D^a_□(X; O^+_X/p)_ϕ → D^a_□(Y; O^+_Y/p)_ϕ admits a right adjoint given by the formula f^*(−) ⊗ O^{+,a}_X/p(d)[2d]: D^a_□(Y; O^+_Y/p)_ϕ → D^a_□(X; O^+_X/p)_ϕ.
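Unwinding what the adjunction in Theorem 1.3.9 asserts (our paraphrase of "admits a right adjoint given by the formula"): for every F ∈ D^a_□(X; O^+_X/p)_ϕ and G ∈ D^a_□(Y; O^+_Y/p)_ϕ there is a functorial isomorphism

\[
\mathrm{Hom}\big(f_!\,F,\ G\big)\ \simeq\ \mathrm{Hom}\big(F,\ f^*G \otimes O^{+,a}_X/p\,(d)[2d]\big).
\]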
The proof of Theorem 1.3.9 follows the same strategy as the one of Theorem 1.3.2: we define first Chern classes and then compute cohomology of the relative projective line P^1_Y → Y. The two main complications come from the fact that it is not, a priori, clear that this 6-functor formalism satisfies the excision axiom, and that the definition of this 6-functor formalism is so abstract that it seems difficult to compute even cohomology of the projective line from first principles. However, it turns out that the verification of the excision axiom is not that hard, and we resolve the second issue via the Primitive Comparison Theorem, which reduces the computation to the computation in étale cohomology. Besides these relatively minor points, the proof of Theorem 1.3.9 is essentially identical to that of Theorem 1.3.2.

1.3.2. Potential new examples of Poincaré Duality. Recently, V. Drinfeld [Dri22] and B. Bhatt–J. Lurie [BL22b] gave a new (stacky) perspective on prismatic cohomology. Namely, for a bounded prism (A, I) and a bounded p-adic formal scheme X over A/I, they construct its (relative derived) prismatization stack WCart_{X/A}. For an lci X, this comes equipped with an isomorphism D_qc(WCart_{X/A}) ≃ D̂_crys((X/A)_∆, O_∆) between the ∞-categories of quasi-coherent sheaves on WCart_{X/A} and prismatic O_∆-crystals on X. Therefore, it is reasonable to expect that D_qc(WCart_{X/A}) provides a reasonable coefficient theory for (relative) prismatic cohomology. Unfortunately, this assignment cannot be promoted to a 6-functor formalism because this is already impossible for D_qc(−) (even on schemes); the problem being that the open immersion pullback j^* does not admit a left adjoint. In the case of (derived) schemes, P. Scholze and D. Clausen [CS19] were able to enlarge the category D_qc(−) to the category of all solid modules D_□(−) to get a 6-functor formalism on (derived) schemes. Therefore, it is reasonable to expect that an appropriately defined ∞-category D_□(WCart_{X/A}) of solid sheaves on the stack WCart_{X/A} should give the correct coefficient theory for prismatic cohomology and admit a 6-functor formalism. Furthermore, L. Tang has recently proven Poincaré Duality for prismatic cohomology of smooth and proper p-adic formal A/I-schemes (see [Tan22, Theorem 1.2]). This makes it reasonable to expect that this potential 6-functor formalism should satisfy the full version of Poincaré Duality with all solid coefficients.
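To illustrate the obstruction for D_qc(−) mentioned above (a minimal example of our own, in the underived setting): take the open immersion j: G_m = Spec k[t, t^{-1}] ↪ A^1 = Spec k[t], so that j^* is the localization M ↦ M[t^{-1}]. A left adjoint to j^* would force j^* to commute with arbitrary products, but the natural map

\[
\Big(\prod_{n\ge 0} k[t]\Big)[t^{-1}]\ \longrightarrow\ \prod_{n\ge 0}\ k[t][t^{-1}]
\]

is not surjective: the element (t^{-n})_{n≥0} of the target has no preimage, since a single power of t would have to clear all denominators at once. Hence j^* does not preserve products and cannot admit a left adjoint; this is the defect that the passage to solid modules D_□(−) in [CS19] circumvents.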
Once this 6-functor formalism D is constructed, Theorem 1.2.18 reduces Poincaré Duality to the question of constructing (strong) first Chern classes, cycle class maps for divisors, and showing that D is pre-geometric (see Definition 4.2.9). We expect that, under the correct formalization of D_□(WCart_{X/A}), all these questions should follow from already existing results:
(1) (First Chern classes) A strong theory of prismatic first Chern classes has already been constructed in [BL22a, Notation 7.5.3 and Variant 9.1.6];
(2) (Cycle maps for divisors) we expect that a theory of cycle maps should follow from [Tan22, Construction 5.32];
(3) (D is pre-geometric) By Remark 1.2.19, it suffices to show that every invertible object on P^1_Y comes from Y. At least for an lci Y, we expect that there should be an equivalence of the ∞-categories of invertible objects Pic(D_□(WCart_{Y/A})) ≃ Pic(D_qc(WCart_{Y/A})). This would reduce the question to showing that any prismatic line bundle on P^1_{B/J} comes from a line bundle on Spec B/J for any morphism of bounded prisms (A, I) → (B, J). This can be explicitly seen by showing that the pullback along the natural morphism P^1_B → WCart_{P^1_{B/J}/B} is fully faithful on line bundles and then using first Chern class considerations to trivialize the pullback. We do not spell out the precise argument as it is beyond the scope of this paper.

We expect that similar considerations should apply to the absolute prismatizations X^∆, X^N, and X^syn introduced in [Dri22] and [Bha22].

1.4. Strategy of the proof. Now we discuss the strategy of our proof of Theorem 1.2.16:

(1) (Section 3.2) We start by proving Theorem 1.2.6. The main step in the proof is to "de-categorify" the question. The key idea is to use the 2-category of cohomological correspondences originally introduced in [LZ22] and reviewed in Section 2.2. After we establish Theorem 1.2.6, we show that it implies Theorem 1.2.8, implying that any smooth morphism is cohomologically smooth if P^1_S → S admits a trace-cycle theory.

(2) (Section 4) The next goal is to deduce a formula for the dualizing object f^!1_Y for a smooth morphism f: X → Y. Via a version of Verdier's diagonal trick and deformation to the normal cone, we show8 that f^!1_Y ≃ C_X(T_f) ∈ D(X), where C_X(T_f) is defined in Construction 1.2.9. Now the question of proving Poincaré Duality boils down to the question of constructing a trace-cycle theory for P^1_S → S and then computing C_X(T_f) for every smooth morphism f: X → Y.

(3) (Sections 5.2-5.5) We introduce the notion of a theory of first Chern classes. Then we show that, in the presence of the excision axiom, the existence of a theory of first Chern classes automatically implies A^1-invariance of D, the existence of cycle maps for divisors, and the projective bundle formula.

(4) (Section 5.6) Then we construct the trace morphism for projective bundles in the presence of a theory of first Chern classes. Then we show that, for the projective line f: P^1_S → S, the triple (1_{P^1_S}⟨1⟩, tr_f, cl_∆) forms a trace-cycle theory; this is essentially just a formal diagram chase.

(5) (Section 5.7) Finally, the question of proving Theorem 1.2.6 boils down to the question of computing f^!1_Y ≃ C_X(T_f) for every smooth morphism f: X → Y of relative pure dimension d. For this, we compactify the morphism g: V_X(T_f) → X to the morphism g̅: P_X(T_f^∨ ⊕ O) → X with the "zero" section s: X → P_X(T_f^∨ ⊕ O). Then the question reduces to constructing an isomorphism s^*g̅^!1_X ≃ 1_X⟨d⟩. Roughly, the morphism comes from the trace map constructed in the previous step. In order to show that this is an isomorphism, we can work locally on X. Thus we can assume that T_f is a trivial vector bundle, so P_X(T_f^∨ ⊕ O) ≃ P^d_X. Then the cycle map of a point gives an inverse to this map.

1.5. Terminology. We say that an analytic adic space X is locally noetherian if there is an open covering by affinoids X = ⋃_{i∈I} Spa(A_i, A_i^+) with strongly noetherian Tate A_i. Sometimes, such spaces are called locally strongly noetherian. We follow [Hub96, Def. 1.3.3] for the definition of locally finite type, locally weakly finite type, and locally +-weakly finite type morphisms of locally noetherian adic spaces.

For a Grothendieck abelian category A, we denote by D(A) its triangulated derived category and by 𝒟(A) its ∞-enhancement.

8 At least under the assumptions of Theorem 1.2.10 that we will prove in later steps.

For a symmetric monoidal ∞-category C^⊗, we denote by Pic(C^⊗) the full ∞-subcategory of C consisting of invertible objects. We also denote by Pic(C^⊗) the group of isomorphism classes of invertible objects in C^⊗.

1.6. Acknowledgements. We heartily thank Ofer Gabber and Peter Scholze for their questions after the author's presentation of his previous work on p-adic Poincaré Duality [Zav21b] at the RAMpAGe seminar and the Oberwolfach workshop respectively; this was the starting point of this paper. We are also very grateful to Peter Scholze for numerous illuminating conversations, which have greatly influenced the development of this paper. The paper owes a huge intellectual debt to these conversations. We thank Marc Hoyois for suggesting the argument of Proposition 2.2.6, Ko Aoki and Peter Haine for explaining some necessary ∞-categorical background to the author, and Adeel Khan for patiently answering the author's questions on his paper [Kha22]. We also thank Toni Annala, Bhargav Bhatt, Dustin Clausen, Dmitry Kaledin, Dmitry Kubrak, Shizhang Li, Lucas Mann, and Emanuel Reinecke for many interesting conversations. We thank the Max Planck Institute and the Institute for Advanced Study for funding and excellent working conditions during the author's stay at these institutes.

2. Abstract six functor formalisms

In this section, we remind the reader of the notion of a 6-functor formalism and give some constructions that will be important for the rest of the paper. In particular, we fix the notation that will be freely used in the rest of the paper. After that, we construct the 2-category of cohomological correspondences that will play a crucial role in the proof of Poincaré Duality.

For the rest of the section, we fix C, a category of locally finite type adic S-spaces (resp. a category of locally finitely presented S-schemes).

2.1. 6-functor formalisms I. In this section, we discuss the general notion of a 6-functor formalism. Since this is the main object of study of this paper, we have decided to spend this section explicitly setting up all the notation that we will use later. We also wish to convey the idea that almost all familiar structures on the classical 6-functor formalisms can be defined in this abstract situation in a similar manner.
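For orientation before the abstract definitions (our gloss, not part of the construction recalled below): the example to keep in mind is the étale formalism of Theorem 1.3.1, where for a scheme X one takes

\[
D(X) = D(X_{\mathrm{\acute{e}t}};\ \mathbf{Z}/n\mathbf{Z}),
\qquad
f^* \dashv Rf_*,\qquad Rf_! \dashv f^!,
\]

together with ⊗ and the internal Hom; the definitions below axiomatize exactly this kind of package of six operations and their compatibilities.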
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' We start by recalling that Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Liu and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Zheng have defined a symmetric monoidal9 ∞-category Corr(C) := Corr(C)all,all of correspondences in C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' We do not explain the full construction here and instead refer to [LZ17, Prop.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='3] (and to [Man22b, Def.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='4] for a nice exposition).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' However, we specify some lower dimensional data that will be useful for us later: Remark 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (1) objects of Corr(C) coincide with objects of C, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' locally finite type adic S-spaces;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (2) 1-edges between X and Y are given by correspondences of the form Z X Y ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' 9See [HA, Def.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='7] for the precise definition of this notion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' POINCAR´E DUALITY REVISITED 13 (3) in the homotopy category hCorr(C), the composition of morphisms X ← T → Y and Y ← S → Z is given by the following outer correspondence (in red): T ×Y S T S X Y Z;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (4) the tensor product X ⊗ Y of two objects X and Y is their cartesian product X ×S Y .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' In the next definition, we consider the Cartesian symmetric monoidal structure on Cat∞ the ∞-category of (small) ∞-categories.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Definition 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' ([Man22b, Def.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='7]) A weak 6-functor formalism is a lax symmetric-monoidal10 functor D: Corr(C) → Cat∞ such that (1) for each morphism f : X → Y in C, the functors D([X id ←− X f−→ Y ]): D → D(Y ) and D([Y f←− X id −→ X]): D(Y ) → D(X) admit right adjoints;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (2) for each X ∈ C, the symmetric monoidal ∞-category D(X) is closed (in the sense of [HA, Def.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='15]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' The associated homotopy 1-category hD(X) is denoted by D(X).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Remark 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' One can compose D with the functor h: Cat∞ → Cat≃ 1 to the (2, 1)-category of categories that sends an ∞-category X to its homotopy category hX.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' By the universal property of homotopy 2-categories, this functor (essentially) uniquely descends to the functor D := hD: h2 Corr(C) → Cat≃ 1 such that D(X) = hD(X).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Remark 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' The data of a weak 6-functor formalism is a very dense piece of data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Below, we mention some consequences of this definition, and refer to [Man22b, Def.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='6, Def.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='7, Prop.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='8] for the discussion on how to derive these consequences from Definition 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (1) for each X ∈ C, a closed symmetric monoidal ∞-category D(X).' 
(1) for each X ∈ C, a closed symmetric monoidal ∞-category D(X). We denote the tensor product functor and the inner Hom functor by − ⊗ −: D(X) × D(X) → D(X) and Hom_X(−, −): D(X)^op × D(X) → D(X);
(2) for each morphism f: X → Y in C, we have a symmetric monoidal functor f^*: D(Y) → D(X), and a functor f_!: D(X) → D(Y);
(3) for each f: X → Y, f^* and f_! admit right adjoints that we denote by f_*: D(X) → D(Y) and f^!: D(Y) → D(X);
(4) the functor f_! satisfies the projection formula, i.e., there is an isomorphism f_!(−) ⊗ (−) ≃ f_!(− ⊗ f^*(−)) of functors D(X) × D(Y) → D(Y);
(5) the functor f_! satisfies proper base-change, i.e., for any Cartesian diagram

    X′ −−g′−→ X
     │ f′       │ f
     ↓          ↓
    Y′ −−g−−→ Y,

there is a specified isomorphism of functors g^* ◦ f_! ≃ f′_! ◦ (g′)^*;
(6) a lot of higher coherences...
Footnote 10: By a lax symmetric-monoidal functor, we mean a functor of the associated ∞-operads, see [HA, Def. 2.1.2.7].
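In compact form, and only as a summary of (1)–(3) above (no additional data is being asserted), a weak 6-functor formalism provides three adjoint pairs for every f: X → Y in C:

    (− ⊗ A) ⊣ Hom_X(A, −)  on each D(X),      f^* ⊣ f_*,      f_! ⊣ f^!,

which together give the usual package of six functors (⊗, Hom, f^*, f_*, f_!, f^!), subject to the projection formula (4) and proper base-change (5).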
Notation 2.1.5.
(1) (Unit object) In what follows, we fix a unit object 1_S ∈ D(S). For each f: X → S in C, we denote by 1_X := f^*(1_S) the pullback of 1_S to X. It is a unit object in D(X) because f^* is a (symmetric) monoidal functor;
(2) (Co-projection morphism) for any f: X → Y in C, there is a natural morphism of functors w_{(−),(−)}: f^!(−) ⊗ f^*(−) → f^!(− ⊗ −) from D(Y) × D(Y) to D(X) that is defined to be adjoint to the morphism

    f_!( f^!(−) ⊗ f^*(−) ) ≃ f_!( f^!(−) ) ⊗ (−) −(adj ⊗ id)→ (−) ⊗ (−);

(3) (Shriek base-change) If

    X′ −−g′−→ X
     │ f′       │ f
     ↓          ↓
    Y′ −−g−−→ Y

is a Cartesian diagram in C, there is a natural morphism (g′)^* ◦ f^! → (f′)^! ◦ g^* defined as an adjoint to

    f′_! ◦ (g′)^* ◦ f^! ≃ g^* ◦ f_! ◦ f^! −(g^*(adj))→ g^*,

where the first morphism is the proper base-change morphism.
For later use, we prove the following very general (but easy) lemma:
Lemma 2.1.6. Let f: X → Y be a morphism in C, and F, E objects of D(Y). Suppose that E is invertible. Then the co-projection morphism w_{F,E}: f^!F ⊗ f^*E → f^!(F ⊗ E) is an isomorphism.
Proof. Consider the morphism w_{F⊗E,E^{−1}}: f^!(F ⊗ E) ⊗ f^*(E^{−1}) → f^!(F). It induces a morphism w′: f^!(F ⊗ E) → f^!(F) ⊗ f^*(E). Using that projection morphisms compose well, one easily checks that w′ is the inverse to w up to a homotopy. □
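To spell out how w_{F⊗E,E^{−1}} induces w′ (a sketch, using only that f^* is monoidal and that E is invertible):

    w′ : f^!(F ⊗ E) ≃ f^!(F ⊗ E) ⊗ f^*(E^{−1}) ⊗ f^*(E) −(w_{F⊗E,E^{−1}} ⊗ id)→ f^!(F) ⊗ f^*(E),

where the first isomorphism uses f^*(E^{−1}) ⊗ f^*(E) ≃ f^*(E^{−1} ⊗ E) ≃ 1_X.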
Remark 2.1.7. We put the word “weak” in Definition 2.1.2 for the following reasons:
(1) in practice, the ∞-categories D(X) are stable (in the sense of [HA, Def. 1.1.1.9]), in particular additive. It seems reasonable to put this into the definition of a 6-functor formalism;
(2) also, in practice, the functor f_! is equal to f_* for a proper morphism f and it is left adjoint to f^* for an étale f. This also seems reasonable to put into the definition.
We fix these issues in Section 2.3. But before we do this, we discuss some further axioms that one can put on a weak 6-functor formalism D.
We first discuss excision. Let i: Z ↪ X be a Zariski-closed immersion and j: U ↪ X its open complement. In this case, proper base-change specifies a homotopy i^* j_! ≃ 0.
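To indicate where this homotopy comes from (a sketch; it assumes, as in the formalisms of interest, that D(∅) is the zero category): the square

    ∅ ≃ Z ×_X U −−→ U
         │              │ j
         ↓              ↓
         Z −−−−i−−−→ X

is Cartesian, so proper base-change identifies i^* ◦ j_! with a composite D(U) → D(∅) → D(Z), which is zero.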
The data of such a homotopy defines a commutative diagram

    j_! j^* −−→ id_{D(X)}
       │             │
       ↓             ↓
       0   −−→   i_* i^*                    (3)

in the ∞-category Fun(D(X), D(X)). In particular, it makes sense to ask if this diagram is Cartesian.
Definition 2.1.8. A weak 6-functor formalism D satisfies the excision axiom if Diagram (3) is Cartesian for any Zariski-closed S-immersion Z ⊂ X. An equivalent way to say this is that Diagram (3) defines an exact triangle of functors

    j_! j^* → id → i_* i^*.                    (4)

Remark 2.1.9. If D satisfies the excision axiom, we can pass to right adjoints in (4) to get an exact triangle of functors i_* i^! → id → j_* j^*.
Now we discuss the A^1-invariance of an abstract 6-functor formalism.
Definition 2.1.10. A weak 6-functor formalism D on C is A^1-invariant if, for every X ∈ C and the morphism f: A^1_X → X, the natural morphism 1_X → f_* 1_{A^1_X} is an isomorphism.
In the next lemma, we denote by Pic(D(X)) the ∞-subcategory of D(X) consisting of invertible objects.
Lemma 2.1.11. Let D be an A^1-invariant weak 6-functor formalism, X ∈ C, and f: A^1_X → X the natural morphism. Then the pullback functor f^*: Pic(D(X)) → Pic(D(A^1_X)) is fully faithful.
Proof. We fix two invertible objects L, L′ ∈ Pic(D(X)). Then the claim follows from the following sequence of isomorphisms:

    Hom(f^*L, f^*L′) ≃ Hom(L, f_* f^* L′) ≃ Hom(L, f_* 1_{A^1_X} ⊗ L′) ≃ Hom(L, L′).

The first isomorphism follows from the (f^*, f_*)-adjunction, the second isomorphism follows from the projection formula for invertible objects (argue as in the proof of Lemma 2.1.6). The last isomorphism follows from the A^1-invariance. □
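To spell out the second isomorphism (a sketch; it uses the projection formula for f_* against the invertible object L′, exactly as in the proof of Lemma 2.1.6):

    f_* f^* L′ ≃ f_*( 1_{A^1_X} ⊗ f^* L′ ) ≃ f_*( 1_{A^1_X} ) ⊗ L′,

and applying Hom(L, −) gives Hom(L, f_* f^* L′) ≃ Hom(L, f_* 1_{A^1_X} ⊗ L′).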
2.2. (∞, 2)-category of cohomological correspondences. The main goal of this section is to construct the (∞, 2)-category of cohomological correspondences, a 2-categorical variant of which was first introduced in [FS21, IV.2.3.3] (based on [LZ22]). We learnt^11 the arguments of this section from Marc Hoyois.
Footnote 11: Ko Aoki has informed the author that a similar construction has also been known to Adam Dauser.
In the rest of the paper, we will never need the (∞, 2)-version of this category; the 2-categorical version will be sufficient for all our applications. However, it seems that a rigorous explicit construction even of the associated 2-category is an extremely tedious exercise. Even though it is probably possible to do by hand, we are not aware of any place in the literature where this has been done in full detail. For instance, to verify the pentagon axiom in the context of étale cohomology, one needs to check that the pentagon diagram of 5 associativity constraints is commutative. Each associativity constraint includes 2 proper base-change morphisms and 2 projection formula morphisms (and a lot of implicit identifications). Each proper base-change and projection formula morphism is, in turn, constructed by decomposing a morphism into a composition of an étale and a proper morphism. Therefore, the pentagon axiom effectively involves at least 40 arrows. Even though it is probably formal that it commutes, it seems really tedious to prove it without some other machinery.
For this reason, we take another approach (explained to us by Marc Hoyois) that actually produces an (∞, 2)-categorical version of this category. Since, in this approach, it is essentially the same amount of pain to construct it as an (∞, 2)-category as to construct it simply as a 2-category, and the (∞, 2)-categorical version may be useful for other purposes, we write the proof in this generality. We then sketch how the same argument could be run entirely in the realm of 2-categories.
For the rest of the section, we fix a weak 6-functor formalism D: Corr(C) → Cat_∞ in the sense of Definition 2.1.2.
We start the section by giving an informal definition of the 2-categorical version of the category of correspondences. For this, we need to fix some notation:
Definition 2.2.1. Let X_1, X_2, X_3 be objects of C, and F ∈ D(X_1 ×_S X_2) and G ∈ D(X_2 ×_S X_3). Then the composition G ◦ F ∈ D(X_1 ×_S X_3) is equal to

    p_{1,3,!}( p_{1,2}^* F ⊗ p_{2,3}^* G ) ∈ D(X_1 ×_S X_3),

where p_{i,j}: X_1 ×_S X_2 ×_S X_3 → X_i ×_S X_j are the natural projections.
Lemma 2.2.2. Let X, Y, Z, W be objects of C, and F ∈ D(X ×_S Y), G ∈ D(Y ×_S Z), H ∈ D(Z ×_S W). Then
(1) there is a canonical isomorphism ∆_!1_X ≃ ∆_!1_X ◦ ∆_!1_X, where ∆: X → X ×_S X is the diagonal morphism;
(2) there is a canonical isomorphism H ◦ (G ◦ F) ≃ (H ◦ G) ◦ F.
Proof. We claim that both results are formal consequences of proper base-change and the projection formula.
We show the first part, and refer to [Sta23, Tag 0G0F] for the proof of the second part.
We first consider the Cartesian square

    X ×_S X −−(∆ ×_S id)−→ X ×_S X ×_S X
       │ p_1                      │ p_{1,2}
       ↓                          ↓
       X −−−−−−(∆)−−−−−→ X ×_S X.

Then proper base-change implies that p_{1,2}^* ∆_!(1_X) ≃ (∆ ×_S id)_!(1_{X ×_S X}), and similarly p_{2,3}^* ∆_!(1_X) ≃ (id ×_S ∆)_!(1_{X ×_S X}). Now we use the Cartesian square

    X −−−−−−(∆)−−−−−→ X ×_S X
       │ ∆                        │ id ×_S ∆
       ↓                          ↓
    X ×_S X −−(∆ ×_S id)−→ X ×_S X ×_S X,

the proper base-change theorem, and the projection formula to get a sequence of isomorphisms

    ∆_!1_X ◦ ∆_!1_X ≃ p_{1,3,!}( p_{1,2}^* ∆_!1_X ⊗ p_{2,3}^* ∆_!1_X )
                   ≃ p_{1,3,!}( (∆ ×_S id)_! 1_{X ×_S X} ⊗ (id ×_S ∆)_! 1_{X ×_S X} )
                   ≃ p_{1,3,!} (∆ ×_S id)_!( (∆ ×_S id)^* (id ×_S ∆)_! 1_{X ×_S X} )
                   ≃ p_{1,3,!} (∆ ×_S id)_! ∆_!(1_X)
                   ≃ ∆_!(1_X). □
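Two of the steps above deserve a word of explanation (a sketch): the fourth isomorphism applies proper base-change to the second Cartesian square, which gives (∆ ×_S id)^* (id ×_S ∆)_! ≃ ∆_! ∆^*, together with ∆^* 1_{X ×_S X} ≃ 1_X; the last isomorphism uses that lower-shriek functors are compatible with composition (this follows from the functoriality of D on correspondences) together with the equality p_{1,3} ◦ (∆ ×_S id) ◦ ∆ = ∆.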
Now we are ready to define the 2-category of cohomological correspondences.
Definition 2.2.3. ([FS21, IV.2.3.3]) The 2-category of cohomological correspondences C_S is the following 2-category:
(1) the objects of C_S are objects of C;
(2) for every two objects X, Y ∈ Ob(C_S), the Hom-category is defined as Hom_{C_S}(X, Y) = D(X ×_S Y);
(3) for every triple X_1, X_2, X_3 ∈ Ob(C_S), the composition functor Hom_{C_S}(X_2, X_3) × Hom_{C_S}(X_1, X_2) → Hom_{C_S}(X_1, X_3) is defined as (A, B) ↦ p_{1,3,!}( p_{1,2}^* B ⊗ p_{2,3}^* A ), where p_{i,j}: X_1 ×_S X_2 ×_S X_3 → X_i ×_S X_j is the projection onto X_i ×_S X_j;
(4) for every X ∈ Ob(C_S), the identity 1-morphism is id_X = ∆_!(1_X), where ∆: X → X ×_S X is the diagonal morphism;
(5) the unit and associativity constraints come from Lemma 2.2.2.
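As a small worked example of Definition 2.2.3 (a sketch; it assumes that S ×_S X ≃ X, i.e., that S is a unit for ×_S): Hom_{C_S}(S, X) = D(S ×_S X) ≃ D(X), so 1-morphisms S → X are just objects of D(X). Taking X_1 = S, X_2 = X, X_3 = Y in (3), the triple product X_1 ×_S X_2 ×_S X_3 is X ×_S Y, p_{1,2} becomes the projection X ×_S Y → X, p_{2,3} becomes the identity, and p_{1,3} becomes the projection X ×_S Y → Y, so composing u ∈ Hom_{C_S}(S, X) ≃ D(X) with a kernel F ∈ Hom_{C_S}(X, Y) = D(X ×_S Y) reads

    F ◦ u ≃ p_{1,3,!}( p_{1,2}^*(u) ⊗ F ) ∈ D(Y),

i.e., kernels act on objects by pulling back, tensoring, and pushing forward.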
In the rest of the section, we show that Definition 2.2.3 actually defines a 2-category. As explained at the beginning of this section, the hard part is to verify axiom (P) from [Lur22, Tag 007Q].
Lemma 2.2.4. Let D be a symmetric monoidal ∞-category such that each object X ∈ D is dualizable (in the sense of [HA, Def. 4.6.1.7 and Rem. 4.6.1.12]). Then D is a closed symmetric monoidal ∞-category.
Proof. Since D is symmetric monoidal, it suffices to show that D is right closed. In other words, we have to show that, for every object X ∈ D, the functor − ⊗ X: D → D admits a right adjoint. Since X is dualizable, there is a dual object X^∨ with the coevaluation and evaluation morphisms c: 1_D → X ⊗ X^∨ and e: X ⊗ X^∨ → 1_D.
We claim that the functor − ⊗ X^∨: D → D is right adjoint to − ⊗ X. Indeed, we define the unit and counit transformations explicitly as

    η: id −(id ⊗ c)→ id ⊗ X ⊗ X^∨,        ε: id ⊗ X ⊗ X^∨ ≃ X ⊗ X^∨ ⊗ id −(e ⊗ id)→ id.

One easily checks that this defines the desired adjunction. □
Lemma 2.2.5. Any object of the symmetric monoidal ∞-category Corr(C) is self-dual. In particular, the symmetric monoidal ∞-categorical structure on Corr(C) is closed.
Proof. Let X ∈ Corr(C) be an adic S-space with the structure morphism f: X → S. We wish to show that X is self-dual. For this, we define the co-evaluation morphism c: S → X ⊗ X to be represented by the correspondence S ←(f)− X −(∆)→ X ×_S X, where ∆ is the diagonal morphism. Likewise, we define the evaluation morphism e: X ⊗ X → S to be represented by the correspondence X ×_S X ←(∆)− X −(f)→ S. Then it is easy to check that these morphisms define a self-duality on X (see Remark 2.1.1). □
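Combining the two lemmas (a sketch; this is the formula that appears in the proof of Proposition 2.2.6 below): for a dualizable object X, the proof of Lemma 2.2.4 exhibits − ⊗ X^∨ as right adjoint to − ⊗ X, so the inner Hom is given by Hom(X, Y) ≃ Y ⊗ X^∨. Since every X ∈ Corr(C) is self-dual by Lemma 2.2.5, this specializes to

    Hom_{Corr(C)}(X, Y) ≃ Y ⊗ X^∨ ≃ Y ⊗ X ≃ X ×_S Y.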
Now we are ready to rigorously construct the category C_S, and even its (∞, 2)-enhancement. A crucial technical tool that we will use is the formalism of ∞-categories enriched in monoidal ∞-categories. We refer to [GH15] for a detailed discussion of this notion, and especially to [GH15, Def. 2.4.5].
Proposition 2.2.6. There is an (∞, 2)-category C_S^{(∞,2)} such that its 2-homotopy category h_2 C_S^{(∞,2)} is equivalent to C_S from Definition 2.2.3. In particular, C_S is indeed a 2-category.
Proof. Lemma 2.2.5 implies that every object in Corr(C) is self-dual. Therefore, Lemma 2.2.4 ensures that Corr(C) is a closed symmetric monoidal ∞-category with the inner Hom given by Hom_{Corr(C)}(X, Y) = X × Y. Hence, [GH15, Cor. 7.4.10] implies that Corr(C) is enriched over itself.
Now we use the lax-monoidal functor D: Corr(C) → Cat_∞ to transfer^12 the Corr(C)-enrichment on Corr(C) constructed above to a Cat_∞-enrichment on Corr(C). This defines the desired (∞, 2)-category^13 C_S^{(∞,2)} by [GH15, Def. 6.1.5 and Th. 5.4.6]^14. Essentially by construction, the associated 2-homotopy category h_2 C_S^{(∞,2)} is equivalent to C_S. □
Footnote 12: For this, look at [GH15, Def. 2.4.5, Def. 2.4.2, and Def. 2.2.14].
Footnote 13: We refer to [Hau15] for the relation with other models for the theory of (∞, 2)-categories.
Footnote 14: See also [GH15, Rem. 5.7.13] for the meaning of a somewhat confusing notation Cat^{(−)}_{(∞,k)}.
Remark 2.2.7. One can run the proof of Proposition 2.2.6 entirely in the realm of 2-categories. In this approach, one constructs a 2-category weakly enriched over Cat_1^≃ that is tautologically equivalent to C_S. More precisely, we mention the main changes that one needs to make in the proof of Proposition 2.2.6 to avoid any mention of (∞, 2)-categories. Firstly, one should use the notion of a monoidal 2-category^15 in place of the notion of a monoidal ∞-category. Secondly, one should replace enrichments in the sense of [GH15] with weak enrichments in the sense of [GS16, §3]. Thirdly, one should use the 2-categorical version of the category of correspondences. Lastly, one should replace the ∞-functor D with its 2-categorical version D from Remark 2.1.3. Then the same argument works in the world of 2-categories with the only^16 caveat that we do not know a reference for the fact that a closed monoidal 2-category is enriched over itself.
Footnote 15: See [JY21, Definition 12.1.3 and Explanation 12.1.4].
Footnote 16: Use [GS16, §13.2] to transfer a weak enrichment along a lax-monoidal functor.
2.3. 6-functor formalisms II. In this section, we follow [Sch22, Lecture VI] and define the notions of cohomologically étale and proper morphisms. In this paper we take a minimalistic approach that is sufficient for all our purposes; [Sch22, Lecture VI] contains a more thorough consideration of cohomologically proper and étale morphisms. These notions are needed to spell out the full definition of a 6-functor formalism that is used in this paper. For later reference, we also discuss the notion of cohomologically smooth morphisms in this section.
2.3.1. Cohomologically proper and étale morphisms. In this section, we fix a weak 6-functor formalism D: Corr(C) → Cat_∞ in the sense of Definition 2.1.2. We wish to axiomatize the conditions f_! = f_* and f^! = f^*; this will be done via the notions of cohomologically étale and cohomologically proper morphisms.
We start with the case of a monomorphism f: X → Y in C (i.e., the diagonal morphism ∆: X → X ×_Y X is an isomorphism).
In this case we have the following Cartesian diagram:

    X −−id−→ X
     │ id       │ f
     ↓          ↓
     X −−f−→ Y.                    (5)

Construction 2.3.1. Suppose that f: X → Y is a monomorphism in C. Then
(1) there is the natural transformation of functors D(X) → D(Y), α_f: f_! → f_*, defined as the adjoint to the proper base-change equivalence f^* f_! ≃ id_{D(X)} coming from Diagram (5);
(2) there is the natural transformation of functors D(Y) → D(X), β_f: f^! → f^*, defined as the shriek base-change morphism (see Notation 2.1.5(3)) applied to Diagram (5).
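To unpack both items (a sketch; it only uses that in Diagram (5) the horizontal and vertical identity arrows play the roles of g′ and f′): proper base-change gives the equivalence f^* ◦ f_! ≃ id_! ◦ id^* = id_{D(X)}, and under the (f^*, f_*)-adjunction this corresponds to the composite

    α_f : f_! −(unit)→ f_* f^* f_! ≃ f_* ,

while the shriek base-change morphism (g′)^* ◦ f^! → (f′)^! ◦ g^* of Notation 2.1.5(3) specializes, with g′ = f′ = id and g = f, to

    β_f : f^! → f^* .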
Definition 2.3.2. A monomorphism f: X → Y is cohomologically proper (resp. cohomologically étale) if the natural transformation α_f: f_! → f_* (resp. β_f: f^! → f^*) is an equivalence.

Now we move to the case of a general morphism f: X → Y in C and consider the commutative diagram

    X --∆--> X ×_Y X --p--> X
                 |q             |f
                 v              v
                 X -----f-----> Y                (6)

where q and p are the two projections and p ∘ ∆ = q ∘ ∆ = id_X. Note that ∆ is always a monomorphism, so it makes sense to ask whether ∆ is cohomologically proper (resp. cohomologically étale).

Construction 2.3.3. Let f: X → Y be a morphism in C with the diagonal morphism ∆: X → X ×_Y X. Then
(1) if ∆ is cohomologically proper, there is a natural transformation of functors D(X) → D(Y)

    α_f: f_! → f_*,

defined as the adjoint to the composition

    f^* f_! ≃ p_! q^* --p_!(adj_∆ ∘ q^*)--> p_! ∆_* ∆^* q^* ≃ p_! ∆_! ∆^* q^* ≃ id,

where the first isomorphism comes from proper base change, the second morphism is induced by the (∆^*, ∆_*)-adjunction, the third isomorphism comes from cohomological properness of ∆, and the last isomorphism comes from the fact that p ∘ ∆ = id_X and q ∘ ∆ = id_X;
(2) if ∆ is cohomologically étale, there is a natural transformation of functors D(Y) → D(X)

    β_f: f^! → f^*,

defined as the composition

    f^! ≃ ∆^* q^* f^! → ∆^* p^! f^* ≃ ∆^! p^! f^* ≃ f^*,

where the first isomorphism comes from the fact that q ∘ ∆ = id_X, the second morphism is induced by the shriek base change (see Notation 2.1.5(3)), the third isomorphism comes from cohomological étaleness of ∆, and the last isomorphism comes from the fact that p ∘ ∆ ≃ id_X.

Definition 2.3.4. A morphism f: X → Y in C is cohomologically proper (resp. cohomologically étale) if the diagonal morphism ∆: X → X ×_Y X is cohomologically proper (resp. cohomologically étale) in the sense of Definition 2.3.2, and the natural transformation α_f: f_! → f_* (resp. β_f: f^! → f^*) is an equivalence.

Lemma 2.3.5. Let g: Y′ → Y be a cohomologically étale morphism. Then
(1) the co-projection morphism g^!(−) ⊗ g^*(−) → g^!(− ⊗ −) is an equivalence of functors (see Notation 2.1.5);
(2) for any Cartesian diagram

    X′ --g′--> X
    |f′         |f
    v           v
    Y′ --g---> Y

in C, the natural transformation (g′)^* ∘ f^! → (f′)^! ∘ g^* is an isomorphism (see Notation 2.1.5(3)).

Proof. The first claim follows from the equality g^* = g^!. The second claim follows from proper base change by passing to right adjoints. □

2.3.2. Cohomologically smooth morphisms. We follow [Sch17] and introduce the notion of a cohomologically smooth morphism; the idea is to require the morphism f: X → Y to satisfy Poincaré Duality "up to a trivialization of the dualizing object f^! 1_Y". In this section, we fix a weak 6-functor formalism D: Corr(C) → Cat_∞ in the sense of Definition 2.1.2.

Definition 2.3.6. A morphism f: X → Y in C is called weakly cohomologically smooth (with respect to D) if
(1) the co-projection morphism f^!(1_Y) ⊗ f^*(−) → f^!(−) from Notation 2.1.5(2) is an equivalence;
(2) the dualizing object ω_f := f^!(1_Y) is an invertible object of D(X), and it commutes with an arbitrary base change Y′ → Y, i.e., for any Cartesian diagram

    X′ --g′--> X
    |f′         |f
    v           v
    Y′ --g---> Y

in C, the natural morphism (g′)^* f^!(1_Y) → (f′)^!(1_{Y′}) from Notation 2.1.5(3) is an isomorphism.

Definition 2.3.7. A morphism f: X → Y in C is called cohomologically smooth (with respect to D) if, for any morphism g: Y′ → Y in C, the base change f′: X′ → Y′ is weakly cohomologically smooth.

Remark 2.3.8. Definition 2.3.7 formally implies that cohomologically smooth morphisms are closed under composition and (arbitrary) base change.

We first mention some formal properties of this definition:

Lemma 2.3.9. Let

    X′ --g′--> X
    |f′         |f
    v           v
    Y′ --g---> Y

be a cartesian square in C. Then
(1) the natural morphism f′_* ∘ (g′)^! → g^! ∘ f_* is an isomorphism;
(2) (Cohomologically smooth base change) the natural morphism g^* ∘ f_* → (f′)_* ∘ (g′)^* is an isomorphism if g is cohomologically smooth;
(3) the natural morphism (g′)^* ∘ f^! → (f′)^! ∘ g^* is an isomorphism if either f or g is cohomologically smooth.

All these claims are well-known; we spell out the proof only for the reader's convenience.

Proof. The proof of (1) is formal: it follows from proper base change by passing to right adjoints. The proof of (2) is also essentially formal (and well-known). The assumption that g is cohomologically smooth implies that there is an invertible object ω_g ∈ D(Y′) such that g^!(−) ≃ g^*(−) ⊗ ω_g and (g′)^!(−) ≃ (g′)^*(−) ⊗ (f′)^* ω_g. Then it is clear that (1) implies an equivalence g^* ∘ f_* ≃ (f′)_* ∘ (g′)^* (one way to spell this out is sketched after the proof). The main subtlety is to check that this isomorphism is the inverse of the natural morphism. For this, one uses (the first) commutative diagram from the proof of [LZ17, Lemma 4.1.13]. Now we show (3). If f is cohomologically smooth, the statement follows from the definition of cohomological smoothness. If g is cohomologically smooth, one can argue similarly to (2): using the notion of cohomological smoothness, it is easy to construct an equivalence (g′)^* ∘ f^! ≃ (f′)^! ∘ g^*. To see that this equivalence coincides with the natural morphism, one should use (the second) commutative diagram from the proof of [LZ17, Lemma 4.1.13]. □
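The equivalence claimed in step (2) of the proof above can be written out as a chain of isomorphisms; the following is our sketch under the stated assumptions (ω_g invertible, claim (1)), using only the identity f′_*(A ⊗ (f′)^*L) ≃ f′_*(A) ⊗ L for an invertible object L, which follows from the (f′^*, f′_*)-adjunction:

    g^* f_*(−) ⊗ ω_g ≃ g^! f_*(−) ≃ f′_* (g′)^!(−) ≃ f′_*((g′)^*(−) ⊗ (f′)^* ω_g) ≃ f′_* (g′)^*(−) ⊗ ω_g;

since tensoring with the invertible object ω_g is an equivalence, this yields g^* ∘ f_* ≃ f′_* ∘ (g′)^*.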
2.3.3. 6-functor formalisms. Now we are ready to give the definition of a 6-functor formalism that will be used in this paper:

Definition 2.3.10. A 6-functor formalism is a weak 6-functor formalism D: Corr(C) → Cat_∞ such that
(1) for each X ∈ C, the ∞-category D(X) is stable and presentable;
(2) D^*|_{C^op}: C^op → Cat_∞ satisfies analytic (resp. Zariski, in the case of schemes) descent, i.e., for any analytic open covering U = {U_i → X}_{i ∈ I}, the natural morphism

    D(X) → lim_{n ∈ ∆} ∏_{i_1, ..., i_n ∈ I} D(U_{i_1} ×_X ··· ×_X U_{i_n})

is an equivalence (an unwinding for a covering by two opens is spelled out after Remark 2.3.11 below);
(3) every proper morphism f is cohomologically proper (footnote 17). In particular, for any proper morphism f: X → Y, there is a canonical identification f_! = f_*;
(4) every étale morphism f is cohomologically étale. In particular, for any étale morphism f: X → Y, there is a canonical identification f^! = f^*.

Remark 2.3.11. The same definition makes sense if we everywhere replace the category C with the category C′ of +-weakly finite type adic S-spaces. In the adic world, this version is actually useful for constructing 6-functor formalisms in the sense of Definition 2.3.10 because it is easier to construct compactifications in the category C′ (see [Hub96, §5.1]).

Footnote 17: Strictly speaking, we should first require that any Zariski-closed immersion is cohomologically proper in the sense of Definition 2.3.2. Then it makes sense to require that any proper morphism is cohomologically proper in the sense of Definition 2.3.4.
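To illustrate condition (2) of Definition 2.3.10, here is an informal unwinding for a covering by two opens U_1, U_2 (writing U_{ij} = U_i ×_X U_j, and so on); the cosimplicial indexing follows the usual Čech nerve and is our paraphrase, not a statement from the paper:

    D(X) ≃ lim( ∏_i D(U_i) ⇉ ∏_{i,j} D(U_{ij}) ⇶ ∏_{i,j,k} D(U_{ijk}) ⋯ ),

i.e., an object of D(X) amounts to objects F_i ∈ D(U_i) together with compatible identifications of their restrictions over the U_{ij}, subject to higher coherences over the U_{ijk} and beyond.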
Remark 2.3.12. If D is a 6-functor formalism, all the functors f^*, f_*, f^!, f_!, ⊗, Hom are exact in the sense of [HA, Prop. 1.1.4.1] (i.e., they commute with finite limits and colimits). Indeed, all of them are either left or right adjoints, so they commute with all colimits or all limits, respectively. But then [HA, Prop. 1.1.4.1] implies that they must be exact.

Remark 2.3.13. For the most part of the paper, we do not need to assume that the D(X) are stable ∞-categories. However, we lack any examples of non-stable 6-functor formalisms, so we prefer to put stability of D(X) into the definition. In the unstable case, the upper shriek functor i^! usually does not exist even for a Zariski-closed immersion i.

Remark 2.3.14. We recall that any stable ∞-category is canonically enriched over Sp, the ∞-category of spectra (see [GH15, Ex. 7.4.14 and Prop. 4.8.2.18]). In particular, for a 6-functor formalism D, D(X) is naturally enriched over Sp for every X ∈ C.
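For concreteness, the adjoint pairs behind Remark 2.3.12 are the following (our tabulation, using only the standard adjunctions of a 6-functor formalism):

    f^* ⊣ f_*,    f_! ⊣ f^!,    (−) ⊗ F ⊣ Hom(F, −);

the left adjoints f^*, f_!, (−) ⊗ F commute with all colimits, the right adjoints f_*, f^!, Hom(F, −) with all limits, and in the stable setting either property implies exactness.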
Notation 2.3.15. (Different Homs) For any two objects F, G ∈ D(X), we denote their inner Hom by ℋom_X(F, G) ∈ D(X), their Hom-spectrum by Hom_X(F, G) ∈ Sp, and the Hom-group in the associated triangulated category D(X) by Hom_{D(X)}(F, G). The relation between these objects is the following:

    Hom_X(1_X, ℋom_X(F, G)) ≃ Hom_X(F, G),
    H^0(Hom_X(F, G)) = Hom_{D(X)}(F, G).

We first show that, for a 6-functor formalism, the notion of a cohomologically smooth morphism (see Definition 2.3.7) is sufficiently local:

Lemma 2.3.16. Let D be a 6-functor formalism. Then
(1) the notion of a cohomologically smooth morphism is analytically (resp. Zariski) local on X and Y;
(2) étale morphisms are cohomologically smooth.

Proof. The first claim is formal from analytic (resp. Zariski) descent and Lemma 2.3.5(2). For the second claim, it suffices to show that étale morphisms are weakly cohomologically smooth, since étale morphisms are closed under pullbacks. Now weak cohomological smoothness follows from Lemma 2.3.5(1). □

3. Abstract Poincaré Duality

The main goal of this section is to give a "formal" proof of (a weak version of) Poincaré Duality in any 6-functor formalism. We recall that the usual proof of Poincaré Duality in étale cohomology is inductive and does not really tell the exact input one has to check to get Poincaré Duality for one particular smooth morphism f. We abstract out this condition. Surprisingly, it turns out that one needs a very limited amount of extra data. We give such a characterization in terms of trace-cycle theories (see Definition 3.2.4). It roughly says that, in order to prove Poincaré Duality, one only needs to construct a trace morphism for f and a cycle map of the relative diagonal with some natural compatibilities. After that, we give a minimalistic set of hypotheses that ensures that any smooth morphism is cohomologically smooth. This step reduces the question of proving Poincaré Duality to the question of computing the dualizing object. This question is studied in more detail in the next two sections.

For the rest of the section, we fix a locally noetherian analytic adic space S (resp. a scheme S). We denote by C the category of locally finite type (resp. locally finitely presented) adic S-spaces (resp. S-schemes), and fix a weak 6-functor formalism D: Corr(C) → Cat_∞ (see Definition 2.3.10). In what follows, we will freely use the terminology of Section 2. In particular, for each X ∈ C, we denote the associated stable ∞-category by D(X) and its triangulated homotopy category by D(X) (the same symbol; which of the two is meant will always be clear from context).

3.1. Formal Poincaré Duality. In this section, we use the 2-category of cohomological correspondences C_S to reduce the question of proving Poincaré Duality to the question of constructing an adjoint to a 1-morphism in the 2-category of cohomological correspondences C_S (see Definition 2.2.3). We start by considering the (co-)representable 2-functor h_S = Hom_{C_S}(S, −): C_S → Cat_1, a 2-functor from the 2-category of cohomological correspondences to the 2-category of categories (see [JY21, §8.2] for the (dual) theory of representable functors in the 2-categorical context). It turns out that h_S is quite easy to describe explicitly. For this, it will be convenient to introduce the notion of a Fourier–Mukai functor:
Definition 3.1.1. Let X_1, X_2 be objects in C, and let F ∈ D(X_1 ×_S X_2). Then the Fourier–Mukai functor

    FM_F: D(X_1) → D(X_2)

is defined by the rule

    G ↦ p_{2,!}(p_1^* F ⊗ G),

where p_i: X_1 ×_S X_2 → X_i is the natural projection.

Remark 3.1.2. Explicitly, the functor h_S is quite easy to describe:
(1) to every object X ∈ C_S, it associates the category h_S(X) = D(X);
(2) to every pair of objects X, Y ∈ C_S, it associates the functor

    FM_(−): D(X ×_S Y) → Fun_{Cat_1}(D(X), D(Y)),    F ↦ FM_F.

It is also possible to describe the identity and composition constraints in terms of the projection formula and proper base change. We do not do this here because we will never explicitly need it.
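As a sanity check on Definition 3.1.1 (our example, not one taken from the paper): take X_2 = S, so that X_1 ×_S X_2 ≅ X_1, p_1 is the identity, and p_2 is the structure morphism f: X_1 → S. Then for a kernel F ∈ D(X_1) the Fourier–Mukai functor becomes

    FM_F(G) = f_!(F ⊗ G): D(X_1) → D(S),

i.e., "twist by F and take compactly supported pushforward."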
We also recall the definition of adjoint morphisms in a 2-category. For this, we fix a 2-category C′, objects C and D of C′, and a pair f: C → D, g: D → C of 1-morphisms in C′.

Definition 3.1.3. ([Lur22, Tag 02CG]) An adjunction between f and g is a pair of 2-morphisms (η, ε), where η: id_C → g ∘ f is a morphism in the category Hom_{C′}(C, C) and ε: f ∘ g → id_D is a morphism in the category Hom_{C′}(D, D), which satisfy the following compatibility conditions:
(Z1) The composition

    f --ρ_f^{-1}, ∼--> f ∘ id_C --id_f ∘ η--> f ∘ (g ∘ f) --α_{f,g,f}, ∼--> (f ∘ g) ∘ f --ε ∘ id_f--> id_D ∘ f --λ_f, ∼--> f

is the identity 2-morphism from f to f. Here λ_f and ρ_f are the left and right unit constraints of the 2-category C′ (see [Lur22, Tag 00EW]) and α_{f,g,f} is the associativity constraint for the 2-category C′.
(Z2) The composition

    g --λ_g^{-1}, ∼--> id_C ∘ g --η ∘ id_g--> (g ∘ f) ∘ g --α_{g,f,g}^{-1}, ∼--> g ∘ (f ∘ g) --id_g ∘ ε--> g ∘ id_D --ρ_g, ∼--> g

is the identity 2-morphism from g to g.

Remark 3.1.4. If C′ = Cat_1 is the 2-category of (small) categories, then Definition 3.1.3 recovers the usual notion of an adjunction of functors.

Remark 3.1.5. ([Lur22, Tag 02CM]) Let F: C′ → C′′ be a 2-functor between 2-categories, and let (f, g) be a pair of adjoint morphisms in C′. Then (F(f), F(g)) is a pair of adjoint morphisms in C′′.
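In the special case C′ = Cat_1 of Remark 3.1.4, the 2-morphisms η and ε are ordinary natural transformations and the constraints λ, ρ, α are identities, so (Z1) and (Z2) unwind to the familiar triangle identities (this unwinding is ours):

    (ε f) ∘ (f η) = id_f    and    (g ε) ∘ (η g) = id_g,

i.e., for every object c of C (resp. d of D), the composite f(c) → f(g(f(c))) → f(c) (resp. g(d) → g(f(g(d))) → g(d)) is the identity.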
Proposition 3.1.6. (Formal Poincaré Duality I) Let f: X → S be a morphism in C. Suppose that the 1-morphism A = 1_X ∈ Hom_{C_S}(X, S) is left adjoint to a 1-morphism B = I ∈ Hom_{C_S}(S, X). Then the functor

    f_!(−): D(X) → D(S)

admits a right adjoint given by the formula

    f^*(−) ⊗ I: D(S) → D(X).

Proof. First of all, it suffices to check that two functors are adjoint after passing to the corresponding homotopy categories (see [Lur22, Tag 02FX]), so we can argue with the associated homotopy categories. We consider the (co-)representable 2-functor h_S: C_S → Cat_1. Remark 3.1.5 guarantees that (h_S(A), h_S(B)) is a pair of adjoint functors between the categories h_S(X) and h_S(S). Then Remark 3.1.2 provides us with the identifications h_S(X) ≃ D(X), h_S(S) ≃ D(S), h_S(A) = f_!(−), and h_S(B) = f^*(−) ⊗ I. In particular, we conclude that f_! is left adjoint to f^*(−) ⊗ I. □

3.2. Trace-cycle theories. In this section, we "decategorify" Poincaré Duality and reduce it to constructing two morphisms subject to two commutativity relations.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' The main tool for this decategorification process will be the 2-category of cohomological correspondences CS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' We recall that throughout this section we have fixed a weak 6-functor formalism D: Corr(C) → Cat∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Definition 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Let f : X → Y be a morphism in C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' A trace theory on f is a pair (ωf, trf) of an invertible object ωf ∈ D(X) and a morphism trf : f!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (ωf) → 1Y in the homotopy category D(Y ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Construction 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' We point out that proper base-change implies that any base change of a morphism with a trace theory (ωf, trf) admits a canonical trace theory given by (g′∗ ωf, g∗(trf)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' More precisely, let X′ X Y ′ Y f′ g′ f g be a Cartesian diagram in C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Then proper base-change tells us that the natural morphism g∗f!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' ωf ∼ −→ f ′ !' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (g′)∗ ωf 26 BOGDAN ZAVYALOV is an isomorphism.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Therefore, the pullback g∗(trf) defines a trace map trf′ := g∗ (trf) : f ′ !' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' � g′∗ωf � → 1Y ′.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Warning 3.' 
Warning 3.2.3. The construction of tr_{f′} depends on the choice of g: Y′ → Y. However, this will never cause any confusion in the examples where we apply this construction.

For the next definition, we fix a morphism f: X → Y with the diagonal morphism ∆: X → X ×_Y X and the projections p_1, p_2: X ×_Y X → X.

Definition 3.2.4. A trace-cycle theory on f is a triple (ω_f, tr_f, cl_∆) of
(1) an invertible object ω_f ∈ D(X),
(2) a trace morphism tr_f: f_!ω_f → 1_Y in the homotopy category D(Y),
(3) a cycle map cl_∆: ∆_!1_X → p_2^*ω_f in the homotopy category D(X ×_Y X)
such that the composite

  1_X ≃ p_{1,!}(∆_!1_X) --p_{1,!}(cl_∆)--> p_{1,!}(p_2^*ω_f) --tr_{p_1}--> 1_X   (7)

and the composite

  ω_f ≃ p_{2,!}(p_1^*ω_f ⊗ ∆_!1_X) --p_{2,!}(id ⊗ cl_∆)--> p_{2,!}(p_1^*ω_f ⊗ p_2^*ω_f) ≃ p_{2,!}(p_1^*ω_f) ⊗ ω_f --tr_{p_2} ⊗ id--> 1_X ⊗ ω_f ≃ ω_f   (8)

are equal to the identity in D(X) (the second isomorphism in (8) being the projection formula isomorphism). Here tr_{p_1} and tr_{p_2} denote the trace maps obtained by applying Construction 3.2.2 to the two projections, which are base changes of f.
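For readability, the two commutativity conditions may be rendered in LaTeX as equalities of composites; this is only a restatement of diagrams (7) and (8) above, with no new content:

\begin{align*}
(7)\quad \mathrm{id}_{1_X} &= \Big(1_X \xrightarrow{\;\sim\;} p_{1,!}(\Delta_!1_X) \xrightarrow{\;p_{1,!}(\mathrm{cl}_\Delta)\;} p_{1,!}(p_2^*\omega_f) \xrightarrow{\;\mathrm{tr}_{p_1}\;} 1_X\Big),\\
(8)\quad \mathrm{id}_{\omega_f} &= \Big(\omega_f \xrightarrow{\;\sim\;} p_{2,!}(p_1^*\omega_f\otimes\Delta_!1_X) \xrightarrow{\;p_{2,!}(\mathrm{id}\otimes\mathrm{cl}_\Delta)\;} p_{2,!}(p_1^*\omega_f\otimes p_2^*\omega_f)\\
&\qquad\qquad \xrightarrow{\;\sim\;} p_{2,!}(p_1^*\omega_f)\otimes\omega_f \xrightarrow{\;\mathrm{tr}_{p_2}\otimes\,\mathrm{id}\;} 1_X\otimes\omega_f \xrightarrow{\;\sim\;} \omega_f\Big).
\end{align*}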
Remark 3.2.5. The name trace-cycle theory comes from the fact that, in the case of the étale 6-functor formalism, the morphism cl_∆ is equivalent to a class in H^{2d}_∆(X ×_Y X, Z/nZ(d)), which comes from the cycle class of the diagonal.

Remark 3.2.6. Commutativity of the first diagram in Definition 3.2.4 should be thought of as a formal way of saying that the trace of the cycle class of a point is "universally" equal to 1.

Remark 3.2.7. Similarly to Construction 3.2.2, one can pull back trace-cycle theories along any morphism Y′ → Y in C.

Now we are ready to show the main result of this section:

Theorem 3.2.8. (Formal Poincaré Duality II) Let f: X → S be a morphism in C. Suppose that f admits a trace-cycle theory (ω_f, tr_f, cl_∆).
Then f_!(−): D(X) → D(S) admits a right adjoint given by the formula f^*(−) ⊗ ω_f: D(S) → D(X).

Proof. By Proposition 3.1.6, it suffices to verify that A = 1_X ∈ Hom_{C_S}(X, S) is left adjoint to B = ω_f ∈ Hom_{C_S}(S, X) in the 2-category of cohomological correspondences C_S.

Step 1. Construction of the counit ε: A ◦ B → id_S. By definition, the composition A ◦ B corresponds to f_!(ω_f) ∈ D(S) = Hom_{C_S}(S, S). We also note that the identity morphism id_S is given by 1_S since S ×_S S = S. We define the counit 2-morphism ε: f_!(ω_f) → 1_S to be the trace morphism tr_f.

Step 2. Construction of the unit η: id_X → B ◦ A. By definition, the composition B ◦ A corresponds to the object p_2^*(ω_f) ∈ D(X ×_S X), and the identity 1-morphism id_X corresponds to the object ∆_!1_X. Thus we define the unit 2-morphism η: ∆_!1_X → p_2^*(ω_f) to be the cycle morphism cl_∆.

Step 3.
Verification of the axiom (Z1). One needs to check that the composition

  A --ρ_A^{-1}--> A ◦ id_X --id_A ◦ η--> A ◦ (B ◦ A) --α_{A,B,A}--> (A ◦ B) ◦ A --ε ◦ id_A--> id_S ◦ A --λ_A--> A

is equal to the identity morphism. After unravelling the definitions, this verification essentially boils down to the definition of a trace-cycle theory. We explain this verification in more detail for the convenience of the reader. We make the diagram explicit:
(1) First, we see that A ◦ id_X is equal to A ◦ id_X = p_{1,!}(p_2^*1_X ⊗ ∆_!1_X) = p_{1,!}∆_!1_X ∈ D(X). The right unit constraint ρ_A^{-1} is identified with the natural isomorphism 1_X → p_{1,!}∆_!1_X coming from the fact that p_1 ◦ ∆ = id_X;
(2) the composition A ◦ (B ◦ A) is the object A ◦ (B ◦ A) = p_{1,!}(p_2^*ω_f) ∈ D(X), and the morphism id_A ◦ η is given by p_{1,!}(cl_∆);
(3) the composition (A ◦ B) ◦ A is given by f^*f_!ω_f, and the associativity constraint α_{A,B,A} is the inverse of the base change isomorphism f^*f_!ω_f → p_{1,!}(p_2^*ω_f);
(4) id_S ◦ A is just equal to 1_X since the diagonal S → S ×_S S is the identity morphism, and the composition ε ◦ id_A is equal to f^*(tr_f): f^*(f_!ω_f) → 1_X;
(5) finally, the left unit constraint λ_A is the identity morphism because the diagonal S → S ×_S S is the identity morphism.

After making all these identifications, we see that the composition (ε ◦ id_A) ◦ α_{A,B,A} is equal to tr_{p_1} by the very definition of tr_{p_1}. Therefore, the axiom (Z1) boils down to checking that the composite 1_X ≃ p_{1,!}(∆_!1_X) --p_{1,!}(cl_∆)--> p_{1,!}(p_2^*ω_f) --tr_{p_1}--> 1_X is equal to the identity. We finish the proof by noting that this is part of the definition of a trace-cycle theory.

Step 4. Verification of the axiom (Z2). The verification is essentially the same as the one in Step 3. After unravelling all the definitions, the axiom boils down to the commutativity of the second diagram in Definition 3.2.4. □
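For the reader's convenience, the triangle (zig-zag) identities (Z1) and (Z2) for a unit/counit pair (η, ε) exhibiting A as left adjoint to B in a 2-category can be written, in the conventions of the composition displayed in Step 3 above (this is the generic 2-categorical statement, not anything specific to C_S), as:

\begin{align*}
(Z1)\quad \lambda_A \circ (\epsilon\circ\mathrm{id}_A) \circ \alpha_{A,B,A} \circ (\mathrm{id}_A\circ\eta) \circ \rho_A^{-1} &= \mathrm{id}_A,\\
(Z2)\quad \rho_B \circ (\mathrm{id}_B\circ\epsilon) \circ \alpha_{B,A,B}^{-1} \circ (\eta\circ\mathrm{id}_B) \circ \lambda_B^{-1} &= \mathrm{id}_B,
\end{align*}

where λ and ρ denote the left and right unit constraints and α the associativity constraint.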
Corollary 3.2.9. Let f: X → S be as in Theorem 3.2.8, let g: S′ → S be a morphism in C, and let f′: X′ → S′ be the base change of f along g. Then the functor f′_!(−): D(X′) → D(S′) admits a right adjoint given by the formula (f′)^*(−) ⊗ (g′)^*(ω_f): D(S′) → D(X′), where g′: X′ → X is the base-change morphism.

Proof. By Remark 3.2.7, we can pull back the trace-cycle theory on f to a trace-cycle theory on f′. Then we denote by C′ the slice category C_{/S′} and restrict the 6-functor formalism D to Corr(C′) to apply Theorem 3.2.8 to f′. □

Remark 3.2.10. We note that Corollary 3.2.9 is already a quite non-trivial statement. It is not clear from first principles why duality for f should imply duality for f′.
3.3. Cohomological smoothness. The main goal of this section is to show how Theorem 3.2.8 can be used to formulate a fairly minimalistic set of assumptions ensuring that any smooth morphism is cohomologically smooth (see Definition 2.3.6). This statement should be thought of as a version of Poincaré Duality without identifying the dualizing object. We recall that throughout this section we have fixed a weak 6-functor formalism D: Corr(C) → Cat_∞.

Theorem 3.3.1. Let f: X → Y be a morphism in C with a trace-cycle theory (ω_f, tr_f, cl_∆). Then f is cohomologically smooth (see Definition 2.3.7).

Proof. This follows directly from Theorem 3.2.8 and Corollary 3.2.9. □
Remark 3.3.2. It is not hard to see that f: X → Y is cohomologically smooth if and only if f admits a trace-cycle theory. Indeed, we put ω_f := f^!1_Y, and we take tr_f: f_!ω_f → 1_Y to be the counit of the (f_!, f^!)-adjunction. Then we note that Definition 2.3.6 implies that 1_X ≃ ∆^!p_1^!1_X ≃ ∆^!p_2^*ω_f. Therefore, we define the cycle morphism cl_∆: ∆_!1_X → p_2^*ω_f to be the counit of the (∆_!, ∆^!)-adjunction. We leave it to the reader to verify that the triple (ω_f, tr_f, cl_∆) satisfies the assumptions of Definition 3.2.4.
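To spell out the adjunction underlying the definition of cl_∆ in Remark 3.3.2 (a routine unwinding, recorded here for convenience): since p_1 ◦ ∆ = id_X and, by cohomological smoothness of f, p_1^!1_X ≃ p_2^*ω_f, one has

\[
\mathrm{Hom}_{D(X\times_Y X)}\big(\Delta_!1_X,\; p_2^*\omega_f\big) \;\simeq\; \mathrm{Hom}_{D(X)}\big(1_X,\; \Delta^!p_2^*\omega_f\big) \;\simeq\; \mathrm{Hom}_{D(X)}\big(1_X,\; \Delta^!p_1^!1_X\big) \;\simeq\; \mathrm{Hom}_{D(X)}\big(1_X,\; 1_X\big),
\]

and cl_∆ corresponds to id_{1_X} under these identifications; equivalently, it is the composite ∆_!1_X ≃ ∆_!∆^!(p_2^*ω_f) → p_2^*ω_f, the second map being the counit of the (∆_!, ∆^!)-adjunction.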
Theorem 3.3.3. Suppose that D is a 6-functor formalism (see Definition 4.2.9). Then the relative projective line g: P^1_S → S admits a trace-cycle theory (ω_g, tr_g, cl_∆) if and only if every smooth morphism f: X → Y is cohomologically smooth (with respect to D).

Proof. The "if" part follows directly from Remark 3.3.2. So we prove the "only if" part.

By Lemma 2.3.16(1), we can argue analytically locally on X and Y. Therefore, [Zav23, Lemma 5.8] implies that we may assume that X is étale over the relative disk D^d_Y (resp. affine space A^d_Y).
Now Lemma 2.3.16(2) and Remark 2.3.8 ensure that it suffices to show that the natural projection D^d_Y → Y (resp. A^d_Y → Y) is cohomologically smooth. Then we use Remark 2.3.8 once again to reduce the question further to the case of the one-dimensional relative disk D^1_Y → Y (resp. A^1_Y → Y). In this case, it suffices to show the claim for the relative projective line P^1_Y → Y compactifying the relative disk (resp. affine line), and then the result follows from Theorem 3.3.1. □

4. Dualizing object

Theorem 3.3.3 gives a minimalistic condition that implies Poincaré Duality up to computing the dualizing object ω_f. Thus the question of proving the full version of Poincaré Duality reduces to computing the dualizing object. In this section, we show that (under a relatively mild assumption) there is always a "formula" for the dualizing object f^!1_Y
in terms of the relative tangent bundle T_f. The formula says that ω_f is equal to 0_X^*g^!1_X, where g: V_X(T_f) → X is the total space of the relative tangent bundle and 0_X is the zero section. In particular, it implies that, for the purpose of computing f^!1_Y, it suffices to assume that f is the total space of a vector bundle and to perform the computation in a "neighborhood" of the zero section. In the next section, we will use this to show that, in the presence of first Chern classes, one can fully trivialize f^!1_Y (up to the appropriate Tate twists).

We prove the desired formula in two steps: we first use Verdier's diagonal trick to reduce the question of computing ω_f for a general smooth morphism to the question of computing s^*ω_f for a smooth morphism f with a section s. Then we use a version of the deformation to the normal cone to reduce further to the case where f is the total space of the (normal) vector bundle.

The methods of this section are essentially independent of Section 3. Therefore, we always put into our assumptions that any smooth morphism in C is cohomologically smooth with respect to D (see Definition 2.3.6). Theorem 3.3.3 shows that this is equivalent to the existence of a trace-cycle theory on the relative projective line.
Throughout this section, we fix a locally noetherian analytic adic space S (resp. a scheme S). We denote by C the category of locally finite type (resp. locally finitely presented) adic S-spaces (resp. S-schemes), and fix a 6-functor formalism D: Corr(C) → Cat_∞.

4.1. Verdier's diagonal trick. We start the discussion by reviewing a version of Verdier's diagonal trick.

Proposition 4.1.1. Let f: X → Y be a cohomologically smooth morphism in C, ∆: X → X ×_Y X the relative diagonal, and p: X ×_Y X → X the projection onto the first factor. Then there is a canonical isomorphism ∆^*p^!1_X ≃ f^!1_Y.

Proof. We consider the commutative diagram in which ∆: X → X ×_Y X is followed by the two projections q, p: X ×_Y X → X (so that q ◦ ∆ = id_X), and in which the square formed by p, q and the two copies of f: X → Y is Cartesian. Then we have a sequence of isomorphisms:

  f^!1_Y ≃ ∆^*q^*f^!1_Y ≃ ∆^*p^!f^*1_Y ≃ ∆^*p^!1_X.
The first isomorphism follows from the equality q ◦ ∆ = id. The second isomorphism follows from the base change condition in the definition of cohomological smoothness. The third isomorphism is trivial. □

We note that Proposition 4.1.1 allows us to reduce the question of computing f^! for a general smooth morphism f to the question of computing s^*f^!1_Y in the case when f has a section s. For our later convenience, we axiomatize this construction. We recall that Pic(D(Y)) denotes the group of isomorphism classes of invertible objects in D(Y).

Construction 4.1.2. Let f: X → Y be a cohomologically smooth morphism in C with a section s. Then we denote by C(f, s) ∈ Pic(D(Y)) the object C(f, s) := s^*f^!1_Y. By the definition of cohomological smoothness, the formation of C(f, s) commutes with an arbitrary base change Y′ → Y.
For the rest of this section, we assume that all smooth morphisms in C are cohomologically smooth with respect to D.

Variant 4.1.3. Let f: V_X(E) → X be the total space of a vector bundle E on X with the zero section s: X → V_X(E). Then we define C_X(E) ∈ Pic(D(X)) as C_X(E) = C(f, s) ∈ D(X).

Remark 4.1.4. Using this notation, Proposition 4.1.1 tells us that, for a smooth morphism f: X → Y, we have a canonical isomorphism f^!1_Y ≃ C(p, ∆). Our goal is to relate C(p, ∆) to C_X(T_f), where T_f is the relative tangent bundle of f. This will be done in the next section using (a version of) the deformation to the normal cone.

In the rest of this section, we would like to show that C_Y(−) defines an additive morphism from K_0(Vect(Y)) to Pic(D(Y)), where K_0(Vect(Y)) is the Grothendieck group of vector bundles on Y. This will not play any role in this paper, but it seems to be of independent interest, as it defines an interesting invariant of a 6-functor formalism.
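As an orienting illustration (an example not used in the sequel; the assumption specific to this illustration is that we work in the classical étale 6-functor formalism with Λ = Z/nZ-coefficients, n invertible on X): for a vector bundle E of rank r, étale Poincaré duality for the smooth structure morphism f: V_X(E) → X gives

\[
C_X(E) \;=\; s^{*} f^{!}\Lambda_X \;\simeq\; \Lambda(r)[2r],
\]

so that in this case the homomorphism C_X of the next lemma factors through the rank map K_0(Vect(X)) → Z.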
Lemma 4.1.5. Assume that all smooth morphisms in C are cohomologically smooth with respect to D, and let X be an object of C. Then the construction C_X(E) defines an additive homomorphism C_X: K_0(Vect(X)) → Pic(D(X)).

Proof. The only thing that we need to show is that, for any short exact sequence of vector bundles 0 → E′ --i--> E --π--> E′′ → 0, there is an isomorphism C_X(E) ≃ C_X(E′) ⊗ C_X(E′′). For this, we denote the structure morphism of V_X(E) by f and the zero section by 0_E, and similarly for f′, f′′ and 0_{E′}, 0_{E′′}. Now we consider the commutative diagram (9) relating V_X(E′), V_X(E), V_X(E′′) and X by means of i, π, the structure morphisms f′, f, f′′ and the zero sections 0_{E′}, 0_E, 0_{E′′}. Now the result follows from the following sequence of isomorphisms:

  C_X(E) = 0_E^*(f^!1_X)
         ≃ 0_E^*(π^!(f′′)^!1_X)
         ≃ 0_E^*π^*((f′′)^!1_X) ⊗ 0_E^*(π^!1_{V(E′′)})
         ≃ 0_{E′′}^*((f′′)^!1_X) ⊗ 0_{E′}^*(i^*π^!1_{V(E′′)})
         ≃ 0_{E′′}^*((f′′)^!1_X) ⊗ 0_{E′}^*((f′)^!1_X)
         = C_X(E′′) ⊗ C_X(E′).

The first equality holds by definition. The second isomorphism comes from the equality f = f′′ ◦ π.
The third isomorphism comes from the invertibility of (f′′)^!1_X and Lemma 2.1.6. The fourth isomorphism comes from the equalities π ◦ 0_E = 0_{E′′} and 0_E = i ◦ 0_{E′}. The fifth isomorphism comes from the fact that π is cohomologically smooth, and so the formation of π^!1 commutes with arbitrary base change. And the sixth equality holds by definition. □

4.2. Deformation to the normal cone. Our goal in this section is to fulfil the promise made in Remark 4.1.4 and show that C(p, ∆) = C_X(T_f). We are going to do this via deforming (or, actually, specializing) to the normal cone. The idea of using deformation to the normal cone to compute the dualizing object is due to Dustin Clausen. In particular, a version of this argument is used in [CS22, Lecture XIII] to compute the dualizing object in the 6-functor formalism of liquid sheaves on complex-analytic spaces.

We give two slightly different arguments for the formula C(p, ∆) = C_X(T_f) under two different assumptions on the 6-functor formalism D.
4.2.1. Motivic 6-functor formalisms. In this subsection, we show that C(p, ∆) = C_X(T_f) under the assumption that D is A^1-invariant in the strong sense:

Definition 4.2.1. Let C be the category of locally finite type (resp. locally finitely presented) adic S-spaces (resp. S-schemes). A 6-functor formalism D: Corr(C) → Cat_∞ is motivic if
(1) it is A^1-invariant (see Definition 2.1.10), and
(2) any smooth morphism f in C is cohomologically smooth with respect to D.

The main idea of the proof is to deform a Zariski-closed immersion s: Y → X into the zero section of its normal cone. The construction of the deformation to the normal cone uses blow-ups, so we refer to [Zav23, Section 6] for a detailed discussion of the Proj and blow-up constructions in the adic world, and to [Zav23, Section 5] for the notion of an lci (Zariski-closed) immersion. In the case of schemes, these notions are standard.

Construction 4.2.2. (Deformation to the normal cone) Let i: Z ↪ X be an lci S-immersion.
Construction 4.2.2. (Deformation to the normal cone) Let i: Z ↪ X be an lci S-immersion. Then the deformation to the normal cone D_Z(X) is the S-space

  D_Z(X) := Bl_{Z ×_S 0_S}(X ×_S A^1_S) − Bl_Z(X).

By definition, it admits a morphism π: D_Z(X) → A^1_X. Moreover, by functoriality, there is a morphism ĩ: D_Z(Z) = A^1_Z → D_Z(X) making the evident triangle over A^1_X commute (i.e., π ∘ ĩ is the map A^1_Z → A^1_X induced by i).

Remark 4.2.3. (Local construction) (1) Suppose first that X = Spec A and Z = Spec A/I for a regular ideal I ⊂ A. Then [Ful98, §5.1, end of p. 51] implies that D_Z(X) has a very concrete description as the spectrum of the Rees algebra. More precisely (with the convention that I^n = A for n < 0),

  D_Z(X) ≃ Spec ⊕_{n∈Z} I^n T^{−n}.

Moreover, under this isomorphism, the natural morphism π: D_Z(X) → A^1_X is equal to the morphism Spec ⊕_{n∈Z} I^n T^{−n} → Spec A[T] induced by the natural morphism A[T] → ⊕_{n∈Z} I^n T^{−n}. The fiber over 0_X is isomorphic to Spec ⊕_{n≥0} I^n/I^{n+1}, the total space of the normal bundle (here we use the lci assumption to ensure that I/I^2 is projective and I^n/I^{n+1} = Sym^n_{A/I}(I/I^2)).

(2) Now suppose Z ⊂ X is a general lci S-immersion of pure codimension c (either in the analytic or the algebraic world). Then D_Z(X) can alternatively be defined by gluing (and relative analytification, see [Hub93, Prop. 3.8]) the local algebraic construction.
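For orientation, and using only the convention I^n = A for n < 0 recalled above, one can spell out the two ends of the local picture directly on the Rees algebra (a standard computation recorded here only as a reader's aid, cf. [Ful98, §5.1]):

  (⊕_{n∈Z} I^n T^{−n})[T^{−1}] = A[T, T^{−1}],    so    π^{−1}(G_{m,X}) ≃ X ×_S G_{m,S},
  (⊕_{n∈Z} I^n T^{−n})/(T) ≃ ⊕_{n≥0} I^n/I^{n+1},    so    π^{−1}(0_X) ≃ V_Z(N_{Z/X}).

In other words, π is an isomorphism onto G_{m,X} away from zero, while its fiber over the zero section is the total space of the normal bundle; this is exactly the picture used in Remark 4.2.4 below.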
Remark 4.2.4. Similarly to algebraic geometry (or by deducing it from the local description in Remark 4.2.3(1)), one sees that there is a commutative diagram comparing the rows G_{m,Z} ⊂ A^1_Z ⊃ Z, then G_{m,X} ⊂ D_Z(X) ⊃ V_Z(N_{Z/X}), and then G_{m,X} ⊂ A^1_X ⊃ X, with vertical maps i × id_{G_{m,S}}, ĩ, π, and the corresponding zero sections 0_Z and 0_X. In particular, D_Z(X) restricts to G_{m,X} over G_{m,X} ⊂ A^1_X, and the fiber of π over 0_X is the total space V_Z(N_{Z/X}) of the normal cone.

Now we apply this construction in one particular example, when f: X → Y is a smooth morphism and i = s: Y → X is a Zariski-closed immersion that is a section of f (it is automatically an lci immersion by [Zav23, Cor. 5.10]). In this case, we slightly change our notation as follows:

Notation 4.2.5. In the situation as above, we denote D_Z(X) by X̃. It fits into the following commutative diagram (10): the columns are the smooth morphisms with sections (f × G_m: G_{m,X} → G_{m,Y}, s × G_m), (f̃: X̃ → A^1_Y, s̃) and (f_0: V_Y(N_s) → Y, s_0), lying over G_{m,Y} ⊂ A^1_Y ⊃ 0_Y(Y). Here f̃: X̃ → A^1_Y is the composition X̃ → A^1_X → A^1_Y, s̃ is the morphism previously denoted by ĩ, and s_0 is the zero section of the total space of the normal cone of Y inside X. Remark 4.2.4 implies that f̃ is smooth in this case.
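For later use, note what diagram (10) gives after restricting along the two S-points 1_Y, 0_Y: Y → A^1_Y (this is only a restatement of the diagram, recorded here because it is exactly how the deformation is used in Corollary 4.2.7 below):

  (f̃, s̃) ×_{A^1_Y} {1_Y} ≃ (f: X → Y, s),        (f̃, s̃) ×_{A^1_Y} {0_Y} ≃ (f_0: V_Y(N_s) → Y, s_0).

Thus the pair (f, s) is deformed, inside the single smooth family f̃ over A^1_Y, to the zero section of the normal bundle.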
Proposition 4.2.6. Suppose the 6-functor formalism D is motivic (see Definition 4.2.1). Let f: X → Y be a smooth morphism, s: Y → X a Zariski-closed section of f, and let f̃: X̃ → A^1_Y and s̃: A^1_Y → X̃ be as in Notation 4.2.5. Then the invertible object

  C(f̃, s̃) = s̃^* f̃^! 1_{A^1_Y} ∈ Pic(D(A^1_Y))

lies in the essential image of the pullback functor Pic(D(Y)) → Pic(D(A^1_Y)).

Proof. Step 1. Localize on Y and reduce to a simpler situation. We first note that Lemma 2.1.11 ensures that the functor g^*: Pic(D(Y)) → Pic(D(A^1_Y)) is fully faithful for any Y ∈ C. Therefore, using analytic (resp. Zariski) descent, we can check that an object lies in the essential image of g^* locally on Y.
We fix a point y ∈ Y, so [Zav23, Lemma 5.8] ensures that we can find an open s(y) ∈ U ⊂ X such that f|_U: U → Y factors as U → D^d_Y → Y with the first map r étale and s(y) ∈ r^{−1}(0_Y) = Y ∩ U. Now we replace U with f^{−1}(s(Y) ∩ U) ∩ U to get an open U ⊂ X such that
(1) s(y) ∈ U;
(2) if V := f(U) ⊂ Y is the (open) image of U in Y, then s(V) ⊂ U;
(3) the morphism f|_U: U → V factors as the composition U → D^d_V → V with the first map r étale and r^{−1}(0_V) = s(V).

Now we consider the diagram with rows U → X_V → X and V → V → Y (the second map being the open immersion V ⊂ Y), vertical morphisms f|_U, f_V, f and sections s|_V, s|_V, s, where all horizontal arrows are open immersions and the right-hand square is Cartesian (so X_V = X ×_Y V). Now we use that C(f̃_V, s̃_V) = s̃_V^* f̃_V^! 1 ∈ Pic(D(A^1_V)) depends only on an open neighborhood of the section s(V) to get a canonical identification

  C(f̃, s̃)|_{A^1_V} ≃ C(f̃_V, s̃_V) ≃ C(f̃|_U, s̃|_V) ∈ Pic(D(A^1_V)).

In other words, since we are allowed to argue locally on Y, we may replace the pair (f, s) by the pair (f|_U, s|_V) and assume that f: X → Y factors as X → D^d_Y → Y with the first map r étale, the second map h the projection, and s(Y) = r^{−1}(0_Y).

Step 2. Reduce further to the case of the relative affine space D^d_Y → Y with the zero section s = 0_Y. We consider the Cartesian square with horizontal maps s: Y → X and 0_Y: Y → D^d_Y, and vertical maps id_Y and r: X → D^d_Y.
Since the formation of the deformation to the normal cone commutes with étale base change (for this, use [Zav23, Lemma 5.5, 5.7] and [Zav21c, Remark B.4.7]), we get a Cartesian square relating (f̃: X̃ → A^1_Y, s̃) to (h̃: D̃^d_Y → A^1_Y, 0̃_Y). Since the formation of C(−, −) commutes with arbitrary base change, we conclude that

  C(f̃, s̃) ≃ C(h̃, 0̃_Y) ∈ Pic(D(A^1_Y)).

Therefore, it suffices to show the claim for X = D^d_Y with f: D^d_Y → Y the natural projection and s = 0_Y the zero section. Using that the formation of C commutes with arbitrary base change, we can reduce further to the case S = Y.

Step 3. The case of the natural projection f: X = D^d_S → S and the zero section 0_S. Since the question is local on S (see Step 1), we can assume that S = Spa(O_S(S), O^+_S(S)) is a strongly noetherian Tate affinoid. Denote the d-dimensional relative Tate algebra by A = O_S(S)⟨T_1, . . . , T_d⟩ with the ideal I = (T_1, . . . , T_d) ⊂ A. In this case, Remark 4.2.3(1) tells us that D̃^d_S is isomorphic to the relative analytification of the A-algebra

  Rees(A) := ⊕_{n∈Z} I^n t^{−n},  where I^n = A if n ≤ 0.
Then, similarly to the situation in algebraic geometry, one checks that the unique O_S(S)-linear ring homomorphism

  O_S(S)⟨X_1, . . . , X_d⟩[T] → ⊕_{n∈Z} I^n t^{−n}

sending X_i to T_i t^{−1} and T to t is an isomorphism. Therefore, after passing to the relative analytification, we see that we have a canonical isomorphism D̃^d_S ≃ D^d_S ×_S A^1_S such that the projection f̃: D̃^d_S → A^1_S corresponds to the projection onto the second factor, and the section 0̃_S: A^1_S → D̃^d_S corresponds to the "zero" section, i.e., the base change of 0_S: S → D^d_S along A^1_S → S.

In particular, there is a commutative diagram, with each square Cartesian, exhibiting 0̃_S: A^1_S → D̃^d_S and f̃: D̃^d_S → A^1_S as the base changes of 0_S: S → D^d_S and of the projection ḡ: D^d_S → S along g: A^1_S → S. Since the formation of C(f, s) commutes with arbitrary base change, we conclude that

  C(f̃, 0̃_S) ≃ g^* C(ḡ, 0_S).

This finishes the proof. □

Corollary 4.2.7. In the notation of Proposition 4.2.6, there is a canonical isomorphism C(f, s) ≃ C_Y(N_s) ∈ D(Y), where N_s is the normal bundle of s(Y) in X.

Proof. Consider the deformation to the normal cone construction, i.e., diagram (10) with columns (f × G_m, s × G_m), (f̃, s̃) and (f_0, s_0) over G_{m,Y} ⊂ A^1_Y ⊃ 0_Y(Y). Then we know that the formation of C(f̃, s̃) commutes with arbitrary base change (this step implicitly uses that f̃ is a smooth morphism; this can be seen either from the proof of Proposition 4.2.6 or from the local description in Remark 4.2.3).
Therefore we get isomorphisms

  C(f̃, s̃)|_{0_Y} ≃ C(f_0, 0_Y) = C_Y(N_s) ∈ D(Y),        C(f̃, s̃)|_{1_Y} ≃ C(f, s).

Now we note that, by Proposition 4.2.6, C(f̃, s̃) comes as a pullback from D(Y), so we get a canonical identification of the "fibers"

  C(f, s) ≃ C(f̃, s̃)|_{1_Y} ≃ C(f̃, s̃)|_{0_Y} ≃ C_Y(N_s). □

Theorem 4.2.8. Suppose the 6-functor formalism D is motivic. Let f: X → Y be a smooth morphism. Then there is a canonical isomorphism

  f^! 1_Y ≃ C_X(T_f) ∈ D(X),

where T_f is the relative tangent bundle of f and C_X(T_f) is from Variant 4.1.3.
Proof. Proposition 4.1.1 says that f^! 1_Y ≃ Δ^* p^! 1_X = C(p, Δ), where p: X ×_Y X → X is the projection onto the first factor, and Δ: X → X ×_Y X is the diagonal morphism. Then [Zav21c, Lemma B.7.3] ensures that we can decompose Δ as X → U → X ×_Y X, where the first map i is a Zariski-closed immersion and the second map j is an open immersion. Then we see that

  C(p, Δ) = Δ^* p^! 1_X ≃ i^* j^* p^! 1_X ≃ i^* (p ∘ j)^! 1_X = C(p ∘ j, i).

Now clearly i is a Zariski-closed section of the smooth morphism g := p ∘ j: U → X. So the result follows directly from Corollary 4.2.7 and the observation that the normal bundle of the (relative) diagonal is equal to the (relative) tangent bundle T_f. □
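In summary (this is only a compressed restatement of the proof just given, with g = p ∘ j and N_i the normal bundle of the section i), the argument is the chain of identifications

  f^! 1_Y ≃ Δ^* p^! 1_X = C(p, Δ) ≃ C(g, i) ≃ C_X(N_i) ≃ C_X(T_f),

where the second-to-last isomorphism is Corollary 4.2.7 applied to the smooth morphism g with its Zariski-closed section i.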
4.2.2. Geometric 6-functor formalisms. In this section, we perform the deformation to the normal cone type argument under a different assumption on D.

Definition 4.2.9. A 6-functor formalism D: Corr(C) → Cat_∞ is pre-geometric if, for every object Y ∈ C and every invertible object L ∈ Pic(P^1_Y), there is an isomorphism L|_{0_Y} ≅ L|_{1_Y} inside D(Y). A 6-functor formalism D: Corr(C) → Cat_∞ is geometric if any smooth morphism f in C is cohomologically smooth with respect to D.

To adapt the proof of Theorem 4.2.8 to a geometric 6-functor formalism D, we need to introduce the projective version of Construction 4.2.2.

Construction 4.2.10. (Projective deformation to the normal cone) Let i: Z ↪ X be an lci S-immersion. Then the projective deformation to the normal cone PD_Z(X) is the S-space

  PD_Z(X) := Bl_{Z ×_S 0_S}(X ×_S P^1_S) − Bl_Z(X).

By definition, it admits a morphism π: PD_Z(X) → P^1_X. Moreover, by functoriality, there is a morphism ĩ: PD_Z(Z) = P^1_Z → PD_Z(X) making the evident triangle over P^1_X commute.
Similarly to Notation 4.2.5, we specialize Construction 4.2.10 to the case when f: X → Y is a smooth morphism, and i = s: Y → X is a Zariski-closed immersion that is a section of f (it is automatically an lci immersion by [Zav23, Cor. 5.10]). In this case, we slightly change our notation as follows:

Notation 4.2.11. In the situation as above, we denote PD_Z(X) by X̃. It fits into the following commutative diagram (11): the columns are the smooth morphisms with sections (f × A^1_S: A^1_X → A^1_Y, s × A^1_S), (f̃: X̃ → P^1_Y, s̃) and (f_0: V_Y(N_s) → Y, s_0), lying over A^1_Y → P^1_Y ← 0_Y(Y), where j: A^1_Y → P^1_Y is the open complement to the zero section 0_Y: Y → P^1_Y.

Theorem 4.2.12. Suppose the 6-functor formalism D is geometric. Let f: X → Y be a smooth morphism. Then there is an isomorphism

  f^! 1_Y ≅ C_X(T_f) ∈ D(X),

where T_f is the relative tangent bundle of f and C_X(T_f) is from Variant 4.1.3.
Proof. The same proof as in Theorem 4.2.8 reduces the question to proving that C(f, s) ≃ C_Y(N_s) for a smooth morphism f: X → Y with a Zariski-closed section s and a geometric 6-functor formalism D. Then we use the projective deformation to the normal cone (diagram (11)) and the fact that, for the invertible object C(f̃, s̃) ∈ D(P^1_Y), the fibers over 1_Y and 0_Y are isomorphic, to conclude that there is a sequence of isomorphisms

  C(f, s) ≃ C(f̃, s̃)|_{1_Y} ≅ C(f̃, s̃)|_{0_Y} ≃ C(f_0, 0_Y) = C_Y(N_s) ∈ D(Y). □

Remark 4.2.13. In practice, the isomorphism L|_{1_Y} ≃ L|_{0_Y} in Definition 4.2.9 can always be arranged to be "canonical". This would make the isomorphism in Theorem 4.2.12 also canonical. In particular, this should apply to the potential crystalline or prismatic 6-functor formalisms. However, it seems annoying to explicitly spell out what this "canonicity" should mean in an abstract 6-functor formalism, so we do not discuss it here.
5. First Chern classes

We note that Theorem 3.3.3 and Theorem 4.2.8 (or Theorem 4.2.12) together already imply a big part of Poincaré Duality. More precisely, Theorem 3.3.3 gives a minimalistic way to check that all smooth morphisms are cohomologically smooth with respect to a 6-functor formalism D, and Theorem 4.2.8 gives a "formula" for the dualizing object ω_f = f^! 1_Y. However, in many cases, the dualizing object has a particularly nice description as the tensor power of the "Tate object" (e.g. relative reduced cohomology of the projective line). This description is not automatic and does not happen for all (geometric) 6-functor formalisms (e.g. this is false for the (solid) quasi-coherent 6-functors). Therefore, this further trivialization requires some new argument.

In this section, we give different conditions that imply that a 6-functor formalism D automatically satisfies the strongest possible version of Poincaré Duality. The strategy is to use Chern classes to both construct the trace map for the relative projective line and trivialize the dualizing object.
We get essentially the optimal result if D satisfies the excision axiom (see Definition 2.1.8); in this case, the existence of a theory of first Chern classes (see Definition 5.2.8) implies Poincaré Duality. After unravelling the definition, a theory of first Chern classes essentially boils down to a sufficiently functorial additive assignment of a first Chern class c_1(L) to a line bundle L, with the constraint that it satisfies the projective bundle formula for the relative projective line.

For a general 6-functor formalism, the results are slightly less nice and we need to put more assumptions on D in order to get Poincaré Duality. We need to assume that D is either A^1-invariant or pre-geometric (see Definition 4.2.9), that there is a strong theory of first Chern classes c_1 (see Definition 5.2.8), and that there is a theory of cycle maps underlying c_1. Even though the results are not as strong as in the excision case, these conditions seem not that hard to verify in practice.
For the rest of the section, we fix a locally noetherian analytic adic space S (resp. a scheme S). We denote by C the category of locally finite type (resp. locally finitely presented) adic S-spaces (resp. S-schemes), and fix a 6-functor formalism D: Corr(C) → Cat_∞. We also fix an invertible object 1_S⟨1⟩ ∈ D(S).

5.1. Notation. In this section, we fix some notation that we will freely use later. We recall that we have fixed an invertible object 1_S⟨1⟩ ∈ D(S) for the rest of this section.

Notation 5.1.1.
(1) (Tate objects) For a non-negative integer d ≥ 0, we define the Tate objects 1_S⟨d⟩ := 1_S⟨1⟩^{⊗d} ∈ D(S). Using that 1_S⟨1⟩ is invertible, we extend the above formula to negative integers d by setting 1_S⟨d⟩ := (1_S⟨−d⟩)^∨ ∈ D(S).
(2) (Tate twists) In general, for a morphism f: X → S, an object F ∈ D(X), and an integer d, we define its Tate twist F⟨d⟩ := F ⊗ f^* 1_S⟨d⟩ ∈ D(X). In particular, the object 1_X⟨d⟩ ∈ D(X) is defined to be f^* 1_S⟨d⟩.
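For orientation, we record two identities that follow immediately from invertibility of 1_S⟨1⟩ and the definitions above (stated here only as a reader's aid):

  1_S⟨d⟩ ⊗ 1_S⟨e⟩ ≃ 1_S⟨d + e⟩  for all d, e ∈ Z,        (F⟨d⟩)⟨e⟩ ≃ F⟨d + e⟩  and  F⟨0⟩ ≃ F  for all F ∈ D(X).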
5.2. Theory of first Chern classes. The main goal of this section is to define the notion of a theory of first Chern classes and verify some of its formal properties. We start the section by giving a precise definition of a theory of first Chern classes. This will be convenient to do in the ∞-categorical setting to automatically keep track of all higher coherences. One nice feature of this definition is that it allows us to define localized first Chern classes for free, while in the 1-categorical approach this seems to be extra data.

Recall that we have fixed a 6-functor formalism D: Corr(C) → Cat_∞ with an invertible object 1_S⟨1⟩ ∈ D(S).

Notation 5.2.1. We write C_an for the site whose underlying category is the category C and whose coverings are analytic open coverings (resp. Zariski open coverings). We consider the sheaf of abelian groups O^×: C_an^op → Mod_Z defined by X ↦ O^×_X(X). We can compose it with the natural morphism Mod_Z → D(Z) to get the ∞-functor O^×: C_an^op → D(Z). This functor is not a D(Z)-valued sheaf (in the sense of [Lur18, Def. 1.3.1.1]).
Notation 5.2.2. The sheafification of the D(Z)-valued functor O^× is the functor RΓ_an(−, O^×): C_an^op → D(Z). By [Cla21, L. 3, Cor. 11], the values of this functor on an object X ∈ C are canonically identified with RΓ_an(X, O^×_X), justifying the name. In what follows, we will usually consider the functor RΓ_an(−, O^×) as an Sp-valued sheaf by composing with the natural "forgetful" functor D(Z) → Sp.

Notation 5.2.3. We also consider absolute cohomology as an Sp-valued functor RΓ(−, 1⟨c⟩): C_an^op → Sp that sends an object X ∈ C to RΓ(X, 1_X⟨c⟩) = Hom_X(1_X, 1_X⟨c⟩). One easily checks that it is an Sp-valued sheaf due to the fact that D satisfies analytic descent.

Definition 5.2.4. A weak theory of first Chern classes on a 6-functor formalism D is a morphism

  c_1: RΓ_an(−, O^×)[1] → RΓ(−, 1⟨1⟩)

of Sp-valued sheaves on C_an.

This definition may seem a bit random at first. However, it does have a strong connection to what is classically called a theory of (additive) first Chern classes. We will see in a moment that this definition, in particular, assigns a cohomology class to each line bundle.
Furthermore, this assignment is sufficiently functorial so that, in the presence of the excision axiom, it even allows us to assign "localized" classes to a line bundle with a trivialization. It also encodes functoriality and additivity of these classes. In the following remark, we partially unravel the content of Definition 5.2.4.

Remark 5.2.5.
(1) (First Chern classes) By passing to H^0, a weak theory of first Chern classes gives a group homomorphism H^1_an(X, O^×_X) → H^0(X, 1_X⟨1⟩). Recall that the group H^1_an(X, O^×_X) classifies the isomorphism classes of line bundles on X, so, for each isomorphism class of line bundles L, a weak theory of first Chern classes assigns the first Chern class of L as an element c_1(L) ∈ H^0(X, 1⟨1⟩) = Hom_{D(X)}(1_X, 1_X⟨1⟩). For our purposes, it will be convenient to also consider this class as a homotopy class of morphisms c_1(L): 1_X → 1_X⟨1⟩.
(2) (Additivity) Since c_1 is a map of spectra, we see that first Chern classes are additive: if L and L′ are two isomorphism classes of line bundles on X, then c_1(L) + c_1(L′) = c_1(L ⊗ L′).
(3) (Base change) The formation of c_1(L) commutes with arbitrary base change, due to functoriality of c_1. More precisely, if f: Y → X is a morphism in C, then we have an equality of classes f^*(c_1(L)) = c_1(f^* L) ∈ Hom_{D(Y)}(1_Y, 1_Y⟨1⟩).
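For instance (a trivial consequence of (2), recorded only as a sanity check): since O_X ⊗ O_X ≃ O_X, additivity gives

  c_1(O_X) = c_1(O_X ⊗ O_X) = c_1(O_X) + c_1(O_X),   and hence   c_1(O_X) = 0 in H^0(X, 1_X⟨1⟩);

equivalently, c_1 sends the trivial line bundle to 0, as it must, being induced by a group homomorphism out of H^1_an(X, O^×_X).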
Now we show that if D satisfies the excision axiom (see Definition 2.1.8), then one can also define the localized version of the usual first Chern classes:

Remark 5.2.6.
(1) (Localized first Chern classes) More generally, let i: Z ↪ X be a Zariski-closed subset with complement U. Then the group

H^0( fib( RΓ_an(X, O×_X) → RΓ_an(U, O×_U) )[1] ) = H^1_Z(X, O×_X)

classifies^22 isomorphism classes of pairs (L, φ_U) of a line bundle L and a trivialization φ_U: O_U → L|_U on U. Therefore, for any such isomorphism class, a weak theory of first Chern classes assigns the localized Chern class of (L, φ_U) as an element^23

c_1(L, φ_U) ∈ H^0( fib( RΓ(X, 1_X⟨1⟩) → RΓ(U, 1_U⟨1⟩) ) ) ≃ H^0_Z(X, 1_X⟨1⟩) = Hom_{D(X)}(i_*1_Z, 1_X⟨1⟩).

Again, for our purposes, it will also be convenient to think about the localized first Chern class as a homotopy class of morphisms c_1(L, φ_U): i_*1_Z → 1_X⟨1⟩. Non-localized first Chern classes can be recovered from this construction by taking Z = X.
(2) (Additivity) Since c_1 is a map of spectra, we see that localized first Chern classes are additive: if (L, φ_U) and (L′, φ′_U) are two isomorphism classes of line bundles with a trivialization on U, then c_1(L, φ_U) + c_1(L′, φ′_U) = c_1(L ⊗ L′, φ_U ⊗ φ′_U).

^22 Even though this fact is well-known, it does not seem to be explicitly formulated in the literature. The interested reader may adapt the argument used in [Ols15, 2.13] to this situation.
^23 Use the excision sequence from Remark 2.1.9 for the second isomorphism below.

(3) (Base Change) The formation of c_1(L, φ_U) commutes with arbitrary base change due to functoriality of c_1. More precisely, if

Z′ --f′--> Z
 |i′        |i
 v          v
 Y --f--> X

is a Cartesian diagram in C, then we have an equality of classes f^*(c_1(L, φ_U)) = c_1(f^*L, f^*(φ_U)) ∈ Hom_{D(Y)}(i′_*1_{Z′}, 1_Y⟨1⟩). In other words, the composite f^*i_*1_Z --f^*(c_1(L, φ_U))--> f^*(1_X⟨1⟩) ≃ 1_Y⟨1⟩ agrees, up to homotopy, with the composite f^*i_*1_Z ≃ i′_*1_{Z′} --c_1(f^*L, f^*φ_U)--> 1_Y⟨1⟩, where the first identification is the base-change morphism.
(4) (Localization) Now we discuss another instance of functoriality of c_1. Let i_1: Z_1 ↪ X and i_2: Z_2 ↪ X be Zariski-closed immersions with Z_1 ⊆ Z_2 and with open complements U_1 and U_2 respectively, and let (L, φ_{U_1}) be a pair of a line bundle and its trivialization on U_1. Then the composition

i_{2,*}1_{Z_2} → i_{1,*}1_{Z_1} --c_1(L, φ_{U_1})--> 1_X⟨1⟩

is homotopic to c_1(L, φ_{U_1}|_{U_2}): i_{2,*}1_{Z_2} → 1_X⟨1⟩, where the first map is the canonical one.

Construction 5.2.7. Suppose that f: X → Y is a morphism in C, and c: f^*1_Y = 1_X → 1_X⟨1⟩ is a morphism in D(X). By the (f^*, f_*)-adjunction, this uniquely defines a morphism adj_c: 1_Y → f_*1_X⟨1⟩. Unless there is some possible confusion, we will denote the morphism adj_c simply by c. Applying the same construction to tensor powers of c, we get morphisms c^k: 1_Y → f_*1_X⟨k⟩.
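To make the adjunction step in Construction 5.2.7 explicit (a routine unwinding, not something stated separately in the text), adj_c corresponds to c under Hom_{D(X)}(f^*1_Y, 1_X⟨1⟩) ≃ Hom_{D(Y)}(1_Y, f_*1_X⟨1⟩), and can therefore be written as the composite

\[
\mathrm{adj}_c \colon\; \mathbf{1}_Y \xrightarrow{\ \mathrm{unit}\ } f_* f^* \mathbf{1}_Y = f_* \mathbf{1}_X \xrightarrow{\ f_*(c)\ } f_* \mathbf{1}_X \langle 1 \rangle .
\]

Likewise, c^k: 1_Y → f_*1_X⟨k⟩ is the adjoint of the k-fold tensor power c^{⊗k}: 1_X = 1_X^{⊗k} → (1_X⟨1⟩)^{⊗k} = 1_X⟨k⟩.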
We note that, for k = 0, we get simply the adjunction morphism, which we denote by f^*: 1_Y → f_*1_X. Now we apply this construction to the projective bundle f: P_Y(E) → Y for some vector bundle E on Y of rank d + 1 (see [Zav23, Def. 6.14]) and the first Chern class morphism of the universal line bundle c_1 = c_1(O(1)): 1_{P_Y(E)} → 1_{P_Y(E)}⟨1⟩. Then Construction 5.2.7 gives us a morphism

⊕_{k=0}^{d} c_1^k⟨d − k⟩: ⊕_{k=0}^{d} 1_Y⟨d − k⟩ → f_*1_{P_Y(E)}⟨d⟩.

Definition 5.2.8. A theory of first Chern classes is a weak theory of first Chern classes c_1 such that, for the relative projective line f: P^1_S → S, the morphism

c_1 + f^*⟨1⟩: 1_S ⊕ 1_S⟨1⟩ → f_*1_{P^1_S}⟨1⟩

is an isomorphism. A strong theory of first Chern classes is a weak theory of first Chern classes c_1 such that, for any integer d ≥ 1 and the relative projective space f: P^d_S → S, the morphism

⊕_{k=0}^{d} c_1^k⟨d − k⟩: ⊕_{k=0}^{d} 1_S⟨d − k⟩ → f_*1_{P^d_S}⟨d⟩

is an isomorphism.
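For concreteness, the strong condition can be written out summand by summand; for d = 2 (this is only an unwinding of Definition 5.2.8, with the k = 0 summand mapping in via the twisted adjunction morphism f^* from the discussion above) it asks that

\[
\big(f^*\langle 2\rangle,\ c_1\langle 1\rangle,\ c_1^{2}\big) \colon\;
\mathbf{1}_S\langle 2\rangle \,\oplus\, \mathbf{1}_S\langle 1\rangle \,\oplus\, \mathbf{1}_S
\;\longrightarrow\; f_*\, \mathbf{1}_{\mathbf{P}^2_S}\langle 2\rangle
\]

be an isomorphism, the summand 1_S⟨2 − k⟩ mapping in via c_1^k⟨2 − k⟩ for k = 0, 1, 2.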
Remark 5.2.9. Definition 5.2.8 implies that, if c_1 is a theory of first Chern classes, then 1_S⟨−1⟩ ≃ Cone(1_S → f_*1_{P^1_S}). So the invertible object 1_S⟨1⟩ is unique up to an isomorphism, and axiomatizes the "Tate twist".

Lemma 5.2.10 (Projective Bundle Formula). Let c_1 be a strong theory of first Chern classes, Y an element of C, and f: P_Y(E) → Y a projective bundle for a vector bundle E of rank d + 1. Then the morphism

⊕_{k=0}^{d} c_1^k⟨d − k⟩: ⊕_{k=0}^{d} 1_Y⟨d − k⟩ → f_*1_{P_Y(E)}⟨d⟩

is an isomorphism. If c_1 is a theory of first Chern classes, the same holds for vector bundles of rank 2.

Proof. Since D is an analytic sheaf, we can check that ⊕_{k=0}^{d} c_1^k⟨d − k⟩ is an isomorphism analytically locally on Y. Therefore, we may and do assume that E is a trivial vector bundle of rank d + 1. In this case, the result follows from Definition 5.2.8, proper base change, and the fact that c_1(O(1)) commutes with base change along Y → S. □

Now we show that a strong theory of first Chern classes automatically implies that the braiding morphism s: 1_S⟨1⟩^{⊗2} → 1_S⟨1⟩^{⊗2} is homotopic to the identity morphism. This will be used later to simplify the second diagram in Definition 3.2.4 in the presence of a strong theory of first Chern classes.
Lemma 5.2.11. Let c_1 be a strong theory of first Chern classes on a 6-functor formalism D. Then the braiding morphism s: 1_S⟨1⟩^{⊗2} → 1_S⟨1⟩^{⊗2} is homotopic to the identity morphism.

Proof. Firstly, it suffices to prove the analogous claim for 1_S⟨−1⟩. The key is that 1_S⟨−1⟩ can be realized as a direct summand of the "relative" cohomology of P^2_S.

We first fix the relative projective space f: P^2_S → S. Now we note that f_* is a right adjoint to the symmetric monoidal functor f^*, so it is lax monoidal. In particular, for every object F ∈ D(P^2_S) with the braiding morphism s_F: F^{⊗2} → F^{⊗2}, we have a commutative diagram

(f_*F)^{⊗2} --∪--> f_*(F^{⊗2})
     |s_{f_*F}          |f_*(s_F)
     v                  v
(f_*F)^{⊗2} --∪--> f_*(F^{⊗2})          (12)

in the homotopy category D(S). Now we consider the (twisted) first Chern class morphism c_1(O(1))⟨−1⟩: 1_{P^2_S}⟨−1⟩ → 1_{P^2_S}. Then, similarly to Construction 5.2.7, we get the morphism adj_{c_1}: 1_S⟨−1⟩ → f_*1_{P^2_S}. The same construction applied to c_1(O(1))⟨−1⟩^{⊗2}: 1_{P^2_S}⟨−1⟩^{⊗2} → 1_{P^2_S}^{⊗2} produces the morphism adj_{c_1^2}: 1_S⟨−1⟩^{⊗2} → f_*(1_{P^2_S}^{⊗2}).
A formal diagram chase implies that the composite

1_S⟨−1⟩^{⊗2} --adj_{c_1} ⊗ adj_{c_1}--> (f_*1_{P^2_S})^{⊗2} --∪--> f_*(1_{P^2_S}^{⊗2})

is homotopic to adj_{c_1^2} in D(S). Definition 5.2.8 (with maps twisted by 1⟨−2⟩) implies that adj_{c_1^2} realizes 1_S⟨−1⟩^{⊗2} as a direct summand of f_*(1_{P^2_S}^{⊗2}). Now we consider the commutative diagram

1_S⟨−1⟩^{⊗2} --adj_{c_1} ⊗ adj_{c_1}--> (f_*1_{P^2_S})^{⊗2} --∪--> f_*(1_{P^2_S}^{⊗2})
     |s                                      |s_{f_*(1)}              |f_*(s_1)
     v                                       v                        v
1_S⟨−1⟩^{⊗2} --adj_{c_1} ⊗ adj_{c_1}--> (f_*1_{P^2_S})^{⊗2} --∪--> f_*(1_{P^2_S}^{⊗2}),

where s stands for the braiding morphisms. The left square commutes by the definition of a symmetric monoidal category, and the right square commutes due to Diagram (12). Since adj_{c_1^2} splits, it suffices to show that f_*(s_1) is equal to id. But this is clear since the braiding morphism of the unit object is homotopic to the identity morphism. □

In the next couple of sections, we will show how a theory of first Chern classes can be used to prove the full version of Poincaré Duality.

5.3. Theory of cycle maps. The main goal of this section is to axiomatize a theory of cycle maps (for divisors) on a 6-functor formalism D "compatible" with a weak theory of first Chern classes c_1 on D. Then we show that, if D satisfies the excision axiom, one can canonically construct such a theory from any weak theory of first Chern classes.
5.3.1. Definitions. In this subsection, we explain the definition of a theory of cycle maps (for divisors) and what it means for a theory of first Chern classes to underlie a theory of cycle maps. As previously, we fix an invertible object 1_S⟨1⟩ ∈ D(S) and always consider (weak) theories of first Chern classes with respect to this invertible object.

Definition 5.3.1. Let i: D ↪ X be an effective Cartier divisor with the associated coherent ideal sheaf I = ker(O_X → i_*O_D) ⊂ O_X (see [Zav23, Def. 5.3]). The associated line bundle O_X(D) := I^∨ is the dual of I; we denote its dual by O_X(−D) (which is simply a different name for I).

Definition 5.3.2. A theory of cycle maps (for effective Cartier divisors) cl_• on a 6-functor formalism D: Corr(C) → Cat_∞ is a collection of morphisms cl_i: i_*1_Y → 1_X⟨1⟩ in the homotopy category D(X), one for each effective Cartier divisor i: Y → X, such that they satisfy transversal base change, i.e., for any Cartesian diagram

Y′ --g′--> Y
 |i′        |i
 v          v
X′ --g--> X

such that the vertical arrows are effective Cartier divisors, the composite g^*i_*1_Y --g^*(cl_i)--> g^*(1_X⟨1⟩) ≃ 1_{X′}⟨1⟩ agrees (up to homotopy) with the composite g^*i_*1_Y ≃ i′_*1_{Y′} --cl_{i′}--> 1_{X′}⟨1⟩.
Definition 5.3.3. A weak theory of first Chern classes c_1 underlies a theory of cycle maps cl_• if, for every effective Cartier divisor i: Y → X, the composition 1_X → i_*1_Y --cl_i--> 1_X⟨1⟩ is equal to c_1(O_X(Y)) in the homotopy category D(X).

For the next remark, we fix a weak theory of first Chern classes c_1 underlying a theory of cycle maps cl_•.

Remark 5.3.4. Let f: X → Y be a morphism in C, and i: D ↪ X an effective Cartier divisor. We can apply Construction 5.2.7 to the composition morphism

1_X → i_*1_D --cl_i--> 1_X⟨1⟩

(which equals c_1(O_X(D)) by Definition 5.3.3) to get the morphism c: 1_Y → f_*1_X⟨1⟩. Then c has an alternative description as the composition

1_Y → f_*i_*1_D --f_*(cl_i)--> f_*(1_X)⟨1⟩.

5.3.2. Constructing cycle maps. The main goal of this subsection is to show that, if D satisfies the excision axiom, then any weak theory of first Chern classes c_1 canonically underlies a theory of cycle maps.
Warning 5.3.5. We do not know a way to extract a theory of cycle maps from a weak theory of first Chern classes without the excision axiom. However, in practice, all 6-functor formalisms with a (strong) theory of first Chern classes admit a compatible theory of cycle maps. Therefore, it may be possible that there is a weaker assumption on D allowing one to (canonically) construct cycle maps from first Chern classes.

For the rest of this section, we fix a 6-functor formalism D satisfying the excision axiom and a weak theory of first Chern classes c_1. To construct cycle classes, we note that an effective Cartier divisor D comes with the canonical short exact sequence (see Definition 5.3.1)

0 → O_X(−D) → O_X → i_*O_D → 0.

By passing to duals, we get a morphism O_X → O_X(D) that is an isomorphism over U := X \ D. We denote its restriction to U by the isomorphism φ_U: O_U ≃ O_X(D)|_U. Now, in the presence of the excision axiom, we can give the following definition:

Definition 5.3.6. A cycle map (relative to c_1) of an effective divisor D ⊂ X is the homotopy class of morphisms cl_i: i_*1_D → 1_X⟨1⟩ equal to c_1(O_X(D), φ_U) ∈ H^0_D(X, 1_X⟨1⟩) = Hom_{D(X)}(i_*1_D, 1_X⟨1⟩) (see Remark 5.2.6(1)).
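As a toy illustration of the objects entering Definition 5.3.6 (a standard example, not taken from the text): let X = A^1_Y with coordinate t, let D = V(t) with its closed immersion i: D ↪ X, and let U = X \ D. Then, under the usual identifications,

\[
\mathcal{I} = t\,\mathcal{O}_X = \mathcal{O}_X(-D), \qquad
\mathcal{O}_X(D) = \mathcal{I}^{\vee} \cong t^{-1}\mathcal{O}_X, \qquad
\varphi_U \colon \mathcal{O}_U \xrightarrow{\ \sim\ } \mathcal{O}_X(D)|_U,
\]

and the canonical map O_X → O_X(D) is the evident inclusion O_X ⊂ t^{-1}O_X, which becomes an isomorphism once t is inverted; the localized class c_1(O_X(D), φ_U) then lives in H^0_D(X, 1_X⟨1⟩).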
Lemma 5.3.7. Let D be a 6-functor formalism satisfying the excision axiom, and let c_1 be a weak theory of first Chern classes on D. Then the construction of cycle maps cl_• from Definition 5.3.6 defines a theory of cycle maps (see Definition 5.3.2) such that c_1 underlies cl_• (see Definition 5.3.3).

Proof. We need to check two things: cycle maps commute with transversal base change and, for each effective Cartier divisor i: Y ↪ X, the composition 1_X → i_*1_Y --cl_i--> 1_X⟨1⟩ is equal to c_1(O_X(Y)). The first claim is automatic from Remark 5.2.6(3) and [Zav23, Lemma 5.7]. The second claim is automatic from Remark 5.2.6(4) by taking Z_1 = Y and Z_2 = X. □
5.4. Cycle map of a point. In this section, we construct the (naive) cycle map of the ("zero") section on the relative projective space f_d: P^d_S → S. We do not develop a robust theory of cycle maps for all lci closed immersions of higher codimension; instead, we give an ad hoc construction in this particular case. The theory of higher dimensional cycle classes can be developed if D satisfies the excision axiom (following the strategy of defining cycle classes in étale cohomology developed in [Fuj02]), but we are not aware of a way of doing this for a general D, so we do not discuss it in this paper. The ad hoc construction mentioned above is enough for all purposes of this paper.

Before we go into details, we point out that this construction will be used both in establishing Poincaré Duality for A^1-invariant or pre-geometric (see Definition 2.1.10 and Definition 4.2.9) 6-functor formalisms with a strong theory of first Chern classes underlying a theory of cycle maps, and in proving that a theory of first Chern classes is automatically a strong theory of first Chern classes if D satisfies the excision axiom.

For the rest of this section, we fix a 6-functor formalism D with a weak theory of first Chern classes c_1 underlying a theory of cycle maps cl_• (see Definition 5.3.3).
We fix a relative projective space f_d: P^d_Y → Y with homogeneous coordinates X_1, . . . , X_{d+1} and the set of d + 1 standard Y-hyperplanes H_1, . . . , H_d, H_{d+1} ⊂ P^d_Y given as the vanishing loci of the homogeneous coordinates X_i, respectively. We note that the intersection H_1 ∩ H_2 ∩ . . . ∩ H_d is canonically isomorphic to Y, and the natural embedding s: H_1 ∩ H_2 ∩ . . . ∩ H_d = Y → P^d_Y defines the "zero" section of P^d_Y. We also denote by i_d: H_d → P^d_Y the natural immersion of H_d into P^d_Y, and by s′: Y → H_d the closed immersion of H_1 ∩ H_2 ∩ . . . ∩ H_d into H_d. In particular, we have the following commutative diagram:

Y --s′--> H_d --i_d--> P^d_Y,        s = i_d ∘ s′.
Definition 5.4.1 (Naive cycle map of the ("zero") section). We define the naive cycle map of s (relative to c_1, cl_•) to be the homotopy class of morphisms cl_s: s_*1_Y → 1_{P^d_Y}⟨d⟩ inductively obtained by the following rule:
(1) if d = 1, s is an effective Cartier divisor, so cl_s is the cycle map of the corresponding effective Cartier divisor;
(2) if d > 1, we suppose that we have defined cl_s for all d′ < d (so, in particular, it is defined for s′), and define cl_s as the composition

s_*1_Y ≃ i_{d,*}s′_*1_Y --i_{d,*}(cl_{s′})--> i_{d,*}1_{H_d}⟨d − 1⟩ --id_{1⟨d−1⟩} ⊗ cl_{i_d}--> 1_{P^d_Y}⟨d⟩,

where cl_{s′} is defined due to the induction hypothesis and cl_{i_d} is the cycle map of an effective Cartier divisor.

Warning 5.4.2. Definition 5.4.1, a priori, depends on the choice of coordinates on P^d_Y. In particular, it is not clear that the cycle map cl_s does not change if we permute the coordinates on P^d_Y.

Lemma 5.4.3. Let c_1 be a weak theory of first Chern classes on D underlying a theory of cycle maps cl_•, let f_d: P^d_Y → Y be the relative projective space, and let cl_s: s_*1_Y → 1_{P^d_Y}⟨d⟩ be the naive cycle map from Definition 5.4.1. Then the composition

1_{P^d_Y} --adj_s--> s_*1_Y --cl_s--> 1_{P^d_Y}⟨d⟩

is homotopic to c_1(O_{P^d_Y/Y}(1))^{⊗d} in D(P^d_Y), where adj_s is the canonical morphism coming from the (s^*, s_*)-adjunction.
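For instance, unwinding Definition 5.4.1 for d = 2 in the notation above (this is only an illustration, not an additional construction): the naive cycle map of the zero section s: Y → P^2_Y is the composite

\[
s_*\mathbf{1}_Y \;\simeq\; i_{2,*}\, s'_*\mathbf{1}_Y
\;\xrightarrow{\ i_{2,*}(\mathrm{cl}_{s'})\ }\; i_{2,*}\mathbf{1}_{H_2}\langle 1\rangle
\;\xrightarrow{\ \mathrm{cl}_{i_2}\langle 1\rangle\ }\; \mathbf{1}_{\mathbf{P}^2_Y}\langle 2\rangle,
\]

where cl_{s′} is the cycle map of the effective Cartier divisor s′: Y ↪ H_2 ≃ P^1_Y and cl_{i_2} is the cycle map of the hyperplane H_2 ⊂ P^2_Y, twisted by ⟨1⟩. Lemma 5.4.3 then says that precomposing this with adj_s recovers c_1(O_{P^2_Y/Y}(1))^{⊗2}.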
Proof. We argue by induction. If d = 1, the claim follows directly from Remark 5.3.4. Now we suppose the claim is known for all d′ < d and wish to show it for d. Note that, in particular, the induction hypothesis applies to the morphism s′: Y → H_d ≃ P^{d−1}_Y. In particular, we conclude that the composition

i_{d,*}1_{H_d} --i_{d,*}(adj_{s′})--> i_{d,*}s′_*1_Y --i_{d,*}(cl_{s′})--> i_{d,*}1_{H_d}⟨d − 1⟩

is homotopic to i_{d,*}(c_1(O_{H_d/Y}(1))^{⊗(d−1)}) in D(P^d_Y). Now note that O_{H_d/Y}(1) ≃ i_d^* O_{P^d_Y/Y}(1), and that adj_s factors as 1_{P^d_Y} --adj_{i_d}--> i_{d,*}1_{H_d} --i_{d,*}(adj_{s′})--> i_{d,*}s′_*1_Y ≃ s_*1_Y, to conclude that the two compositions

1_{P^d_Y} --adj_s--> s_*1_Y ≃ i_{d,*}s′_*1_Y --i_{d,*}(cl_{s′})--> i_{d,*}1_{H_d}⟨d − 1⟩,
1_{P^d_Y} --c_1(O_{P^d_Y/Y}(1))^{⊗(d−1)}--> 1_{P^d_Y}⟨d − 1⟩ --adj_{i_d}--> i_{d,*}1_{H_d}⟨d − 1⟩

are homotopic; this is the content of the commutative diagram (13). By definition of a (weak) theory of first Chern classes underlying a theory of cycle maps (see Definition 5.3.3), we also get a commutative diagram (14): the composition

1_{P^d_Y}⟨d − 1⟩ --adj_{i_d}--> i_{d,*}1_{H_d}⟨d − 1⟩ --id_{1⟨d−1⟩} ⊗ cl_{i_d}--> 1_{P^d_Y}⟨d⟩

is homotopic to id_{1⟨d−1⟩} ⊗ c_1(O_{P^d_Y/Y}(1)). Therefore, we may combine Diagram (13) and Diagram (14) to conclude that the composition

1_{P^d_Y} --adj_s--> s_*1_Y --cl_s--> 1_{P^d_Y}⟨d⟩

is equal (in the homotopy category D(P^d_Y)) to the composition

1_{P^d_Y} --c_1(O_{P^d_Y/Y}(1))^{⊗(d−1)}--> 1_{P^d_Y}⟨d − 1⟩ --id_{1⟨d−1⟩} ⊗ c_1(O_{P^d_Y/Y}(1))--> 1_{P^d_Y}⟨d⟩,

which is just equal to c_1(O_{P^d_Y/Y}(1))^{⊗d}. This finishes the proof. □
5.5. First Chern classes and excision. The main goal of this section is to show that, if D satisfies the excision axiom, then any theory of first Chern classes on D is automatically a strong theory of first Chern classes (see Definition 5.2.8). More precisely, we have to show that the projective bundle formula for P^1_S implies the projective bundle formula for all higher dimensional relative projective spaces in the presence of the excision axiom. We show this by induction on d, cutting P^d_S into a closed subspace P^{d−1}_S and an open complement A^d_S. To deal with the open complement, we use the naive cycle map of the zero section from Definition 5.4.1.

For the rest of the section, we fix a 6-functor formalism D satisfying the excision axiom and a theory of first Chern classes c_1. We also fix an object Y ∈ C.

Setup 5.5.1. We denote by 0_Y: Y → A^d_Y the zero section. It fits into the following commutative diagram of morphisms over Y:

Y --0_Y--> A^d_Y --j--> P^d_Y <--i_{d+1}-- P^{d−1}_Y ≃ H_{d+1},        s = j ∘ 0_Y,

where f_d, f_{d−1}, and g are the structure morphisms (of P^d_Y, P^{d−1}_Y, and A^d_Y, respectively), j is the natural open immersion, and s is the "zero" section from the discussion above Definition 5.4.1.
Definition 5.5.2 (Naive cycle map of the zero section). We define the naive cycle map of 0_Y to be the homotopy class of morphisms cl_{0_Y}: 0_{Y,*}1_Y → 1_{A^d_Y}⟨d⟩ equal to j^*(cl_s), where cl_s is from Definition 5.4.1. More precisely, cl_{0_Y} is obtained as the composition

0_{Y,*}1_Y ≃ j^*s_*1_Y --j^*(cl_s)--> j^*1_{P^d_Y}⟨d⟩ ≃ 1_{A^d_Y}⟨d⟩.

Remark 5.5.3. Alternatively, one can repeat Definition 5.4.1 in the affine case, and define cl_{0_Y} to be the composition of d − 1 cycle maps of divisors.

Lemma 5.5.4. Following the notation from Setup 5.5.1, let c_1^d: 1_Y → f_{d,*}1_{P^d_Y}⟨d⟩ be the morphism obtained by applying Construction 5.2.7 to c_1(O_{P^d_Y/Y}(1))^{⊗d}.
Then c_1^d is homotopic to the composition

1_Y --g_!(cl_{0_Y})--> g_!1_{A^d_Y}⟨d⟩ --can--> f_{d,*}1_{P^d_Y}⟨d⟩

in (the homotopy category) D(Y).

Proof. Essentially by construction, the composition 1_Y --g_!(cl_{0_Y})--> g_!1_{A^d_Y}⟨d⟩ --can--> f_{d,*}1_{P^d_Y}⟨d⟩ is homotopic to f_{d,*}(cl_s): 1_Y ≃ f_{d,*}s_*1_Y → f_{d,*}1_{P^d_Y}⟨d⟩. Thus, we are only left to identify f_{d,*}(cl_s) with c_1^d. This follows from Remark 5.3.4 and Lemma 5.4.3. □

Lemma 5.5.5. Suppose D satisfies the excision axiom, and c_1 is a theory of first Chern classes. Following the notation from Setup 5.5.1, there is a morphism of exact triangles
Lemma 5.5.5. Suppose D satisfies the excision axiom, and c_1 is a theory of first Chern classes. Following the notation from Setup 5.5.1, there is then a morphism of exact triangles

  (1_Y → ⊕_{k=0}^{d} 1_Y⟨d−k⟩ → ⊕_{k=0}^{d−1} 1_Y⟨d−k⟩)  →  (g_!1_{A^d_Y}⟨d⟩ → f_{d,∗}1_{P^d_Y}⟨d⟩ → f_{d−1,∗}1_{P^{d−1}_Y}⟨d⟩)

in D(Y), whose components are g_!(cl_{0_Y}), ⊕_{k=0}^{d} c^k_1⟨d−k⟩ and ⊕_{k=0}^{d−1} c^k_1⟨d−k⟩, and where the map 1_Y → ⊕_{k=0}^{d} 1_Y⟨d−k⟩ is the evident inclusion and the map ⊕_{k=0}^{d} 1_Y⟨d−k⟩ → ⊕_{k=0}^{d−1} 1_Y⟨d−k⟩ is the evident projection.

Proof. The first exact triangle is evident, and the second exact triangle comes by applying f_{d,∗} = f_{d,!} to the excision fiber sequence (see Remark 2.1.9)

  j_!1_{A^d_Y}⟨d⟩ → 1_{P^d_Y}⟨d⟩ → i_{d+1,∗}1_{P^{d−1}_Y}⟨d⟩.

Lemma 5.5.4 ensures that the leftmost square (involving 1_Y, ⊕_{k=0}^{d} 1_Y⟨d−k⟩, g_!1_{A^d_Y}⟨d⟩ and f_{d,∗}1_{P^d_Y}⟨d⟩) commutes. So using the axioms of triangulated categories, we can extend this commutative square to a morphism of exact triangles with components g_!(cl_{0_Y}), ⊕_{k=0}^{d} c^k_1⟨d−k⟩ and some morphism c : ⊕_{k=0}^{d−1} 1_Y⟨d−k⟩ → f_{d−1,∗}1_{P^{d−1}_Y}⟨d⟩. The only thing we are left to show is to compute c. It suffices to do this separately on each direct summand 1_Y⟨d−k⟩. Then we use that the first exact triangle is split to see that c|_{1_Y⟨d−k⟩} must be equal to the composition

  1_Y⟨d−k⟩ --c^k_1⟨d−k⟩--> f_{d,∗}1_{P^d_Y}⟨d⟩ --can--> f_{d−1,∗}1_{P^{d−1}_Y}⟨d⟩.

Using that first Chern classes commute with pullbacks and that O_{P^d_Y/Y}(1)|_{P^{d−1}_Y} = O_{P^{d−1}_Y/Y}(1), one easily sees that this composition is equal to c^k_1⟨d−k⟩ : 1_Y⟨d−k⟩ → f_{d−1,∗}1_{P^{d−1}_Y}⟨d⟩. □
Lemma 5.5.6. Suppose that D satisfies the excision axiom, and c_1 is a theory of first Chern classes. Let g : A^d_Y → Y be a relative affine space, and 0_Y : Y → A^d_Y be the zero section. Then the natural morphism

  g_!(cl_{0_Y}) : 1_Y → g_!(1_{A^d_Y}⟨d⟩)

is an isomorphism for any d.

Proof. We prove this claim by induction on d.

Step 1. Base of induction. Here, we follow the notation of Setup 5.5.1 with d = 1. In this case, we note that the Zariski-closed immersion i_2 : P^0_Y → P^1_Y is the "∞"-section of P^1_Y. So the commutative diagram from Lemma 5.5.5 simplifies to a morphism of exact triangles

  (1_Y → 1_Y ⊕ 1_Y⟨1⟩ → 1_Y⟨1⟩)  →  (g_!1_{A^1_Y}⟨1⟩ → f_∗1_{P^1_Y}⟨1⟩ → 1_Y⟨1⟩)

with components g_!(cl_{0_Y}), c_1 + f^∗⟨1⟩ and id. The right component is clearly an isomorphism, and the middle component is an isomorphism by Lemma 5.2.10. Therefore, we conclude that g_!(cl_{0_Y}) is also an isomorphism, finishing this step.

Step 2. Inductive argument. We suppose that we know the result for integers < d and deduce it for d ≥ 2. For this, we consider the commutative diagram expressing the factorizations 0_Y = j ∘ i and g = h ∘ f, where i : Y → A^{d−1}_Y is the zero section of A^{d−1}_Y, j : A^{d−1}_Y → A^d_Y is the Zariski-closed immersion realizing A^{d−1}_Y inside A^d_Y as the vanishing locus of the last coordinate, f : A^d_Y → A^{d−1}_Y, and h : A^{d−1}_Y → Y. We warn the reader that this notation is different from the one used in Setup 5.5.1. By Remark 5.5.3, we have an equality (up to canonical identifications; in this proof we will ignore canonical identifications and write "=" meaning canonically isomorphic, which does not cause any problems because our goal is to show that a well-defined morphism is an isomorphism)

  cl_{0_Y} = cl_j⟨1⟩ ∘ j_∗(cl_i),   (15)

where cl_i is the naive cycle map of the zero section i : Y → A^{d−1}_Y. Therefore, we have the following sequence of equalities:

  g_!(cl_{0_Y}) = g_!(cl_j⟨1⟩ ∘ j_∗(cl_i)) = g_!(cl_j⟨1⟩) ∘ g_!(j_∗(cl_i)) = h_!(f_!(cl_j⟨1⟩)) ∘ h_!(f_!(j_∗(cl_i))) = h_!(f_!(cl_j⟨1⟩)) ∘ h_!(cl_i).

The first equality comes from Equation (15). The second equality comes from the fact that g_! is a functor. The third equality comes from the fact that g = h ∘ f. The fourth equality comes from the fact that f ∘ j = id and j_! = j_∗ (because j is a closed immersion). Now we note that the induction hypothesis implies that h_!(cl_i) is an isomorphism. Similarly, we note that the induction hypothesis implies that f_!(cl_j) is an isomorphism, by applying it to the relative A^1-morphism f : A^d_Y → A^{d−1}_Y. Therefore, we conclude that the composition

  g_!(cl_{0_Y}) = h_!(f_!(cl_j⟨1⟩)) ∘ h_!(cl_i)

must be an isomorphism as well. □

Theorem 5.5.7. Suppose that D satisfies the excision axiom, and c_1 is a theory of first Chern classes. Then c_1 is a strong theory of first Chern classes (see Definition 5.2.8).

Proof. Following the notation of Definition 5.2.8, we need to show that the morphism

  ⊕_{k=0}^{d} c^k_1⟨d−k⟩ : ⊕_{k=0}^{d} 1_S⟨d−k⟩ → f_{d,∗}1_{P^d_S}⟨d⟩

is an isomorphism for the relative projective space f_d : P^d_S → S for any d ≥ 1. For d = 1, this is the definition of a theory of first Chern classes. For d > 1, this follows from Lemma 5.5.5 and Lemma 5.5.6 by an evident inductive argument. □
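The inductive argument may be spelled out as follows. Applying Lemma 5.5.5 (with Y = S), one obtains a morphism of exact triangles in which the left component g_!(cl_{0_S}) is an isomorphism by Lemma 5.5.6, while the right component ⊕_{k=0}^{d−1} c^k_1⟨d−k⟩ is the ⟨1⟩-twist of the morphism that the induction hypothesis (the statement for d − 1) declares to be an isomorphism. Since in a morphism of exact triangles any two of the three components being isomorphisms forces the third to be one as well, the middle component

  ⊕_{k=0}^{d} c^k_1⟨d−k⟩ : ⊕_{k=0}^{d} 1_S⟨d−k⟩ → f_{d,∗}1_{P^d_S}⟨d⟩

is an isomorphism, which is exactly what had to be shown.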
5.6. Trace morphisms. The main goal of this section is to construct the trace morphism for the relative projective line from a theory of first Chern classes. Then we show that any theory of first Chern classes underlying a theory of cycle maps (see Definition 5.3.3) admits a trace-cycle theory on the relative projective line (see Definition 3.2.4). When combined with Theorem 3.3.3, this already shows that any smooth morphism is cohomologically smooth with respect to a 6-functor formalism with a theory of first Chern classes.

As previously, we fix an invertible object 1_S⟨1⟩ ∈ D(S). In this section, we also fix a theory of first Chern classes with respect to 1_S⟨1⟩ (see Definition 5.2.8).

5.6.1. Recovering trace morphisms. Now we discuss the construction of the trace morphism for the relative projective line. It comes as the "inverse" of the first Chern class morphism. More precisely, we fix the relative projective line f : P^1_Y → Y and recall that Lemma 5.2.10 provides us with the isomorphism

  c_1 + f^∗⟨1⟩ : 1_Y ⊕ 1_Y⟨1⟩ → f_∗1_{P^1_Y}⟨1⟩.   (16)

We denote by (c_1)^{−1} : f_∗1_{P^1_Y}⟨1⟩ → 1_Y the projection onto the first component of the decomposition (16).

Construction 5.6.1. The trace map tr_f : f_∗1_{P^1_Y}⟨1⟩ → 1_Y is the morphism (c_1)^{−1} : f_∗1_{P^1_Y}⟨1⟩ → 1_Y.

Remark 5.6.2. The formation of tr_f commutes with arbitrary base change. This formally follows from the fact that c_1(O_{P^1_Y/Y}(1)) commutes with arbitrary base change.
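Two consequences of this definition are worth recording explicitly, since they are used directly in the proof of Proposition 5.6.5 below (in the notation adjc_1 of Construction 5.2.7, and granting that adjc_1(O_{P^1_Y/Y}(1)) is precisely the component denoted c_1 in the decomposition (16)):

  tr_f ∘ adjc_1(O_{P^1_Y/Y}(1)) = id_{1_Y}   and   tr_f ∘ f^∗⟨1⟩ = 0,

the first because tr_f is the projection onto the summand 1_Y of (16), and the second because f^∗⟨1⟩ is the inclusion of the complementary summand 1_Y⟨1⟩.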
Warning 5.6.3. This construction is well-defined only if we assume that c_1 is a theory of first Chern classes, and not merely a weak theory of first Chern classes.

For later reference, it will also be convenient to discuss a more general construction of trace morphisms for a strong theory of first Chern classes (see Definition 5.2.8). In this situation, Lemma 5.2.10 provides us with the isomorphism

  ⊕_{k=0}^{d} c^k_1⟨d−k⟩ : ⊕_{k=0}^{d} 1_Y⟨d−k⟩ → f_∗1_{P_Y(E)}⟨d⟩   (17)

for any object Y ∈ C, a rank d + 1 vector bundle E, and the corresponding projective bundle f : P_Y(E) → Y. As above, it makes sense to define (c^d_1)^{−1} : f_∗1_{P_Y(E)}⟨d⟩ → 1_Y to be the projection onto the last component of the decomposition (17).

Construction 5.6.4. In the notation as above, the trace map tr_f : f_∗1_{P_Y(E)}⟨d⟩ → 1_Y is the morphism (c^d_1)^{−1} : f_∗1_{P_Y(E)}⟨d⟩ → 1_Y.
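For orientation, one may specialize Construction 5.6.4 to d = 1 and E = O_Y^{⊕2}, so that P_Y(E) = P^1_Y. In that case (17) reads

  c^0_1⟨1⟩ + c^1_1 : 1_Y⟨1⟩ ⊕ 1_Y → f_∗1_{P^1_Y}⟨1⟩,

whose summands are, up to reordering, those of (16); the projection onto the last summand 1_Y then presumably agrees with the projection onto the first component of (16), so that for d = 1 Construction 5.6.4 recovers the trace map of Construction 5.6.1.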
5.6.2. Properties of the trace morphism. Our next goal is to show that, if c_1 is a theory of first Chern classes underlying a theory of cycle maps cl_•, then the triple (1_{P^1_S}⟨1⟩, tr_f, cl_∆) satisfies the definition of a trace-cycle theory (see Definition 3.2.4), where f : P^1_S → S is the relative projective line. For this, we will actually show a stronger statement:

Proposition 5.6.5. Let c_1 be a theory of first Chern classes on D underlying a theory of cycle maps cl_• (see Definition 5.3.3), f : P^1_Y → Y the relative projective line, and s ∈ P^1_Y(Y) a section. Let tr_f : f_∗1_{P^1_Y}⟨1⟩ → 1_Y be the trace morphism from Construction 5.6.1. Then the diagram formed by the canonical identification 1_Y ≃ f_∗(s_∗1_Y), the morphism f_∗(cl_s) : f_∗(s_∗1_Y) → f_∗1_{P^1_Y}⟨1⟩, the trace morphism tr_f : f_∗1_{P^1_Y}⟨1⟩ → 1_Y, and the identity Id : 1_Y → 1_Y commutes in D(Y) (equivalently, tr_f ∘ f_∗(cl_s) equals the identity under the canonical identification f_∗s_∗1_Y ≃ 1_Y).

Whenever we use Construction 5.2.7 in the following proof, we use the notation adjc to distinguish Chern morphisms on the base from morphisms adjoint to Chern morphisms on P^1_Y.

Proof. We first note that Remark 5.3.4 implies that f_∗(cl_s) (up to the canonical identification f_∗s_∗1_Y ≃ 1_Y) is equal to adjc_1(O(s)) : 1_Y → f_∗1_{P^1_Y}⟨1⟩, where O(s) is the line bundle corresponding to the effective Cartier divisor s : S → P^1_S. We wish to show that

  tr_f ∘ adjc_1(O(s)) = id.   (18)

[Zav23, Cor. 7.10] (resp. its schematic counterpart) implies that there is a decomposition of S into clopen subspaces S = ⊔_{i∈I} S_i with the induced morphisms f_i : P^1_{S_i} → S_i and s_i : S_i → P^1_{S_i} such that O_{P^1_{S_i}}(s_i) = f_i^∗L_i ⊗ O_{P^1_{S_i}/S_i}(n_i) for some L_i ∈ Pic(S_i) and integers n_i. Equation (18) can be checked on each S_i separately, so we can assume that O(s) ≃ f^∗L ⊗ O(n) for a line bundle L on S and an integer n. By restricting onto a fiber, one concludes that n = 1, so we have an isomorphism O(s) ≃ f^∗L ⊗ O(1). Therefore, we see that

  adjc_1(O(s)) = adjc_1(f^∗L) + adjc_1(O(1)) : 1_Y → f_∗1_{P^1_Y}⟨1⟩.

By definition, we know that tr_f ∘ adjc_1(O(1)) = id. Thus we reduce the question to showing that tr_f ∘ adjc_1(f^∗L) = 0 for any line bundle L on S. For this, we note that c_1(f^∗L) = f^∗c_1(L). Therefore, after unravelling Construction 5.2.7, we get that adjc_1(f^∗L) is equal to the composition

  1_Y --c_1(L)--> 1_Y⟨1⟩ --f^∗⟨1⟩--> f_∗1_{P^1_Y}⟨1⟩.

By definition of the trace map, we have tr_f ∘ f^∗⟨1⟩ = 0.
Therefore, this formally implies that tr_f ∘ adjc_1(f^∗L) = 0, finishing the proof. □

Proposition 5.6.5 already has some non-trivial consequences:

Corollary 5.6.6. Let c_1 be a strong theory of first Chern classes on D underlying a theory of cycle maps cl_•. Then the triple (1_{P^1_S}⟨1⟩, tr_f, cl_∆) forms a trace-cycle theory on the relative projective line f : P^1_S → S. In particular, any smooth morphism in C is cohomologically smooth with respect to D (see Definition 2.3.7).

Proof. In this proof, we will freely identify p_1^∗1_{P^1_S} ≃ 1_{P^1_S ×_S P^1_S} ≃ p_2^∗1_{P^1_S}. Thus, the cycle map of the diagonal takes the form cl_∆ : ∆_!1_{P^1_S} → 1_{P^1_S ×_S P^1_S}⟨1⟩, defining a cycle map in the sense of Definition 3.2.4. Now commutativity of the first diagram in Definition 3.2.4 follows directly from Proposition 5.6.5 by taking Y = P^1_S, f = p_1, and s = ∆.

We wish to establish commutativity of the second diagram. For brevity, we denote P^1_S by X and P^1_S ×_S P^1_S by X^2. We have to check that the composition

  1_X⟨1⟩ ≃ p_{2,!}(1_{X^2}⟨1⟩ ⊗ ∆_!1_X) --p_{2,!}(id ⊗ cl_∆)--> p_{2,!}(1_{X^2}⟨1⟩ ⊗ 1_{X^2}⟨1⟩) ≃ p_{2,!}1_{X^2}⟨1⟩ ⊗ 1_X⟨1⟩ --tr_{p_2} ⊗ id--> 1_X⟨1⟩

is equal to the identity morphism (in the homotopy category D(X)).
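Here the identification 1_X⟨1⟩ ≃ p_{2,!}(1_{X^2}⟨1⟩ ⊗ ∆_!1_X) used as the first step can presumably be unpacked as a routine application of the projection formula:

  p_{2,!}(1_{X^2}⟨1⟩ ⊗ ∆_!1_X) ≃ p_{2,!}∆_!(∆^∗1_{X^2}⟨1⟩ ⊗ 1_X) ≃ (p_2 ∘ ∆)_!(1_X⟨1⟩) ≃ 1_X⟨1⟩,

using ∆^∗1_{X^2}⟨1⟩ ≃ 1_X⟨1⟩ and p_2 ∘ ∆ = id.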
For this, we first note that Lemma 5.2.11 implies that the diagram formed by id ⊗ cl_∆ : 1_{X^2}⟨1⟩ ⊗ ∆_!1_X → 1_{X^2}⟨1⟩ ⊗ 1_{X^2}⟨1⟩, cl_∆ ⊗ id : ∆_!1_X ⊗ 1_{X^2}⟨1⟩ → 1_{X^2}⟨1⟩ ⊗ 1_{X^2}⟨1⟩, and the braiding morphism 1_{X^2}⟨1⟩ ⊗ ∆_!1_X ≃ ∆_!1_X ⊗ 1_{X^2}⟨1⟩ commutes in D(X^2). Therefore, we obtain a commutative diagram, in which the remaining identifications come from the projection formula, comparing the composition above with the composition

  1_X⟨1⟩ ≃ 1_X ⊗ 1_X⟨1⟩ --p_{2,!}(cl_∆) ⊗ id--> p_{2,!}1_{X^2}⟨1⟩ ⊗ 1_X⟨1⟩ --tr_{p_2} ⊗ id--> 1_X⟨1⟩.

Therefore, it suffices to show that this latter composition is equal to the identity morphism (in the homotopy category D(X)). For this, it suffices to show that tr_{p_2} ∘ p_{2,!}(cl_∆) = id. This follows from Proposition 5.6.5 by taking Y = P^1_S, f = p_2, and s = ∆. Overall, this proves that (1_{P^1_S}⟨1⟩, tr_f, cl_∆) forms a trace-cycle theory. The "in particular" claim follows directly from Theorem 3.3.3. □
Now we discuss another consequence of Proposition 5.6.5: we show that a 6-functor formalism D satisfying the excision axiom and admitting a theory of first Chern classes is automatically A^1-invariant (see Definition 2.1.10). For this, we need the following construction:

Construction 5.6.7. Let f : P^1_Y → Y be the relative projective line with a trace morphism tr : f_∗1_{P^1_Y}⟨1⟩ → 1_Y. By the (f_∗, f^!)-adjunction, it also defines the adjoint trace morphism adjtr : 1_{P^1_Y}⟨1⟩ → f^!(1_Y).

Lemma 5.6.8. Let D be a 6-functor formalism satisfying the excision axiom, and let c_1 be a theory of first Chern classes. Then D is motivic (see Definition 4.2.1).

Proof. Firstly, we note that Lemma 5.3.7 constructs a theory of cycle maps underlying c_1. Furthermore, Theorem 5.5.7 implies that c_1 is a strong theory of first Chern classes. Therefore, Corollary 5.6.6 ensures that any smooth morphism is cohomologically smooth. So we only need to show that D is A^1-invariant.

We fix a relative affine line g : A^1_Y → Y and compactify it to a relative projective line f : P^1_Y → Y. The complement of A^1_Y in P^1_Y forms a section s : Y → P^1_Y. Then Definition 5.3.6 defines a theory of cycle maps underlying c_1; in particular, it defines a morphism s_∗1_Y → 1_{P^1_Y}⟨1⟩. Using Proposition 5.6.5, it is essentially formal to verify that the square formed by cl_s : s_∗1_Y → 1_{P^1_Y}⟨1⟩, the canonical identification s_∗1_Y ≃ s_∗s^!f^!1_Y, the adjunction morphism s_∗s^!f^!1_Y → f^!1_Y, and adjtr : 1_{P^1_Y}⟨1⟩ → f^!1_Y commutes. Therefore, Corollary 5.6.6 and Theorem 3.2.8 ensure that adjtr is an isomorphism, and so we get an exact triangle

  s_∗1_Y --cl_s--> 1_{P^1_Y}⟨1⟩ --can--> j_∗1_{A^1_Y}⟨1⟩,

where j : A^1_Y → P^1_Y is the natural open immersion. Now we apply f_∗ (and Remark 5.3.4) to this sequence to get an exact triangle

  1_Y --c_1--> f_∗1_{P^1_Y}⟨1⟩ → g_∗1_{A^1_Y}⟨1⟩.

In particular, we have a commutative diagram of exact triangles whose first row is the triangle 1_Y → 1_Y ⊕ 1_Y⟨1⟩ → 1_Y⟨1⟩, whose second row is the triangle 1_Y → f_∗1_{P^1_Y}⟨1⟩ → g_∗1_{A^1_Y}⟨1⟩ above, and whose vertical maps are id, c_1 + f^∗⟨1⟩ and g^∗⟨1⟩. Now the definition of the first Chern classes and the 2-out-of-3 property imply that 1_Y⟨1⟩ → g_∗1_{A^1_Y}⟨1⟩ is an isomorphism. Since 1_Y⟨1⟩ is an invertible sheaf, this formally implies that the natural morphism 1_Y → g_∗1_{A^1_Y} is an isomorphism as well. □
5.7. Poincaré Duality. The first goal of this section is to show that a strong theory of first Chern classes c_1 underlying a theory of cycle maps (see Definition 5.2.8 and Definition 5.3.3) implies the strongest version of Poincaré Duality under the additional assumption that D is either A^1-invariant or pre-geometric (see Definition 4.2.9). The second goal is to show that, if D satisfies the excision axiom, it suffices to assume that D admits a theory of first Chern classes.

We now briefly sketch the idea behind the proof. Corollary 5.6.6 reduces the question of proving Poincaré Duality to the question of computing the dualizing object f^!1_Y. For this, we use Theorem 4.2.8 (or Theorem 4.2.12) to reduce the question to computing C(T_f). This is done via compactifying T_f to a projective bundle and the (naive) cycle map of a point from Definition 5.4.1.

For the rest of this section, we fix a 6-functor formalism D with a strong theory of first Chern classes c_1 underlying a theory of cycle maps cl_• (see Definition 5.3.3). We start by defining the adjoint to the trace map from Construction 5.6.4.
More precisely, let Y be an object of C, let E be a vector bundle on Y of rank d + 1, and let f: P_Y(E) → Y be the corresponding projective bundle. Then Construction 5.6.4 defines the trace morphism

tr_f: f_*1_{P_Y(E)}⟨d⟩ → 1_Y.

Construction 5.7.1. Let f: P_Y(E) → Y and tr_f be as above. By the (f_*, f^!)-adjunction, tr_f uniquely defines the adjoint trace morphism

adjtr: 1_{P_Y(E)}⟨d⟩ → f^!(1_Y)

in D(P_Y(E)).

Now suppose that E = O_Y^{d+1}, so P_Y(E) = P^d_Y. Then Definition 5.4.1 defines the (cycle) class of the “zero” section cl_s: s_*1_Y → 1_{P^d_Y}⟨d⟩.

Construction 5.7.2. In the notation as above, cl_s uniquely defines the adjoint cycle map morphism

adjcl_s: 1_Y → s^!(1_{P^d_Y}⟨d⟩)

in D(Y).
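For orientation, the following display is a sketch of where the four morphisms above live and how they correspond; it uses nothing beyond the (f_*, f^!)-adjunction quoted in Construction 5.7.1 and the analogous (s_*, s^!)-adjunction for the section s (the latter formulation is ours, not notation from the text).

\[
\operatorname{Hom}_{D(Y)}\bigl(f_*\,1_{\mathbb{P}_Y(E)}\langle d\rangle,\ 1_Y\bigr)\;\simeq\;\operatorname{Hom}_{D(\mathbb{P}_Y(E))}\bigl(1_{\mathbb{P}_Y(E)}\langle d\rangle,\ f^!\,1_Y\bigr),\qquad \mathrm{tr}_f\ \longleftrightarrow\ \mathrm{adjtr},
\]
\[
\operatorname{Hom}_{D(\mathbb{P}^d_Y)}\bigl(s_*\,1_Y,\ 1_{\mathbb{P}^d_Y}\langle d\rangle\bigr)\;\simeq\;\operatorname{Hom}_{D(Y)}\bigl(1_Y,\ s^!\,1_{\mathbb{P}^d_Y}\langle d\rangle\bigr),\qquad \mathrm{cl}_s\ \longleftrightarrow\ \mathrm{adjcl}_s.
\]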
Lemma 5.7.3. Let c_1 be a strong theory of first Chern classes on D underlying a theory of cycle maps cl_•, and let f: P^d_Y → Y be the relative projective space. Then the diagram formed by

adjcl_s: 1_Y → s^!1_{P^d_Y}⟨d⟩,   s^!(adjtr_f): s^!1_{P^d_Y}⟨d⟩ → s^!f^!1_Y,

and the canonical isomorphism 1_Y ≃ s^!f^!1_Y commutes in D(Y).

Proof. By passing to adjoints, it suffices to show that the diagram formed by

f_*(cl_s): f_*s_*1_Y → f_*1_{P^d_Y}⟨d⟩,   tr_f: f_*1_{P^d_Y}⟨d⟩ → 1_Y,

and the isomorphism h: f_*s_*1_Y ≃ 1_Y commutes in D(Y). Lemma 5.4.3 and a formal argument with adjoints (similar to Remark 5.3.4) imply that the composition

1_Y --h^{-1}--> f_*s_*1_Y --f_*(cl_s)--> f_*1_{P^d_Y}⟨d⟩

is equal to the morphism adjoint to c_1^d(O_{P^d_Y/Y}(1)): 1_{P^d_Y} → 1_{P^d_Y}⟨d⟩. In other words, this composition is equal to the morphism c_1^d: 1_Y → f_*1_{P^d_Y}⟨d⟩ from Construction 5.2.7 applied to c = c_1(O_{P^d_Y/Y}(1)). Therefore, the question boils down to showing that the composition

1_Y --c_1^d--> f_*1_{P^d_Y}⟨d⟩ --tr_f--> 1_Y

is the identity morphism (in D(Y)).
However, this follows from the definition of the trace morphism (see Construction 5.6.4). □

Now we turn to the proof of Poincaré Duality. In the process of the proof, we will need the following simple (but useful) lemma:

Lemma 5.7.4. Let D be a closed symmetric monoidal additive category with a unit object 1, and L an invertible object. Suppose that L = 1 ⊕ X. Then X ≃ 0.

Proof. If L is an invertible object, then the natural evaluation morphism L ⊗ L^∨ → 1 must be an isomorphism. Now we write

L ⊗ L^∨ ≃ (1 ⊕ X) ⊗ (1 ⊕ X)^∨ ≃ (1 ⊕ X) ⊗ (1 ⊕ X^∨) ≃ 1 ⊕ X ⊕ X^∨ ⊕ X ⊗ X^∨

to conclude that X = X^∨ = 0. □

Now we specialize to the case of a vector bundle of the form E′ = E ⊕ O on an object Y ∈ C. Then the relative projective bundle f: P_Y(E ⊕ O) → Y has a canonical section s: Y → P_Y(E ⊕ O) corresponding to the quotient E ⊕ O --p--> O.
Lemma 5.7.5. Let c_1 be a strong theory of first Chern classes on D underlying a theory of cycle maps cl_•, Y an object of C, E a vector bundle of rank d + 1 on Y, and let f: P_Y(E ⊕ O) → Y be the relative projective bundle with the canonical section s. Then the natural morphism

s^!(adjtr_f): s^!1_{P_Y(E⊕O)}⟨d⟩ → s^!f^!1_Y

is an isomorphism, where adjtr_f is from Construction 5.7.1.

Proof. We first note that the question is local on Y, so we can assume that E = O_Y^{⊕d+1}. So P_Y(E ⊕ O) ≃ P^d_Y, and s corresponds to the “zero” section defined just before Definition 5.4.1. Now we note that s^!1_{P^d_Y} is an invertible object. Indeed, Corollary 5.6.6 (and Definition 2.3.6) implies that f^!1_Y is an invertible object.
Therefore, Lemma 2.1.6 implies that

1_Y ≃ s^!f^!1_Y ≃ s^!1_{P^d_Y} ⊗ s^*f^!1_Y.

Since s^*f^!1_Y is invertible and s^!1_{P^d_Y} is dual to it, we formally conclude that s^!1_{P^d_Y} is invertible as well.

Now we note that Construction 5.7.2 defines a morphism adjcl_s: 1_Y → s^!1_{P^d_Y}⟨d⟩. Lemma 5.7.3 implies that the composition

1_Y --adjcl_s--> s^!1_{P^d_Y}⟨d⟩ --s^!(adjtr_f)--> s^!f^!1_Y ≃ 1_Y

is the identity morphism (in the homotopy category D(Y)).
So 1_Y is a direct summand of the invertible object s^!1_{P^d_Y}⟨d⟩. Therefore, Lemma 5.7.4 implies that both adjcl_s and s^!(adjtr_f) must be isomorphisms. □

Theorem 5.7.6. Suppose that a 6-functor formalism D is either A^1-invariant or pre-geometric. Let c_1 be a strong theory of first Chern classes on D underlying a theory of cycle maps cl_•, and let f: X → Y be a smooth morphism of pure relative dimension d (see [Hub96, Def. 1.8.1]). Then the right adjoint to the functor f_!: D(X) → D(Y) is given by the formula

f^!(−) = f^*(−) ⊗ 1_X⟨d⟩: D(Y) → D(X).
Proof. Corollary 5.6.6 and Lemma 5.2.11 already imply that any smooth morphism f: X → Y is cohomologically smooth. Thus the question of computing f^! boils down to computing the dualizing object ω_f = f^!1_Y. Now Theorem 4.2.8 (if D is A^1-invariant) and Theorem 4.2.12 (if D is pre-geometric) imply that f^!1_Y is given by the formula

f^!1_Y ≃ C_X(T_f) ≃ s^*g^!1_X,

where g: V_X(T_f) → X is the total space of the (relative) tangent bundle, and s is the zero section. We may compactify g to the morphism

g: P := P_X(T_f^∨ ⊕ O_X) → X

(the dual vector bundle T_f^∨ shows up due to the conventions used in [Zav23, Def. 6.14]), where s corresponds to the “zero” section defined just before Definition 5.4.1. Therefore, it suffices to show that s^*g^!1_X ≃ 1_X⟨d⟩.
For this, we note that

1_X ≃ s^!g^!1_X ≃ s^!1_P ⊗ s^*g^!1_X,

where the second isomorphism follows from Lemma 2.1.6 and the fact that g^!1_X is invertible due to cohomological smoothness. Thus, it suffices to produce an isomorphism s^!1_P ≃ 1_X⟨−d⟩. This follows from Lemma 5.7.5 and Lemma 2.1.6. □

Theorem 5.7.7. Let D be a 6-functor formalism satisfying the excision axiom (see Definition 2.1.8) and admitting a theory of first Chern classes c_1. Suppose that f: X → Y is a smooth morphism of pure relative dimension d. Then the right adjoint to the functor f_!: D(X) → D(Y) is given by the formula

f^!(−) = f^*(−) ⊗ 1_X⟨d⟩: D(Y) → D(X).
Proof. Firstly, we note that Lemma 5.3.7 constructs a theory of cycle maps underlying c_1. Furthermore, Theorem 5.5.7 ensures that c_1 is a strong theory of first Chern classes. Then Corollary 5.6.8 implies that D is A^1-invariant (or even motivic). Thus the result follows from Theorem 5.7.6. □
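Before turning to the examples, it may help to record the shape of duality that Theorem 5.7.7 yields; the display below is only a formal unwinding (our paraphrase, not an additional statement), combining the formula for f^! with the fact that f^! is by definition right adjoint to f_!: for A ∈ D(X) and B ∈ D(Y),

\[
\operatorname{Hom}_{D(Y)}\bigl(f_!A,\ B\bigr)\;\simeq\;\operatorname{Hom}_{D(X)}\bigl(A,\ f^!B\bigr)\;\simeq\;\operatorname{Hom}_{D(X)}\bigl(A,\ f^*B\otimes 1_X\langle d\rangle\bigr),
\]

which is the familiar form of Poincaré Duality for a smooth morphism of pure relative dimension d.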
6. Poincaré Duality in examples

In this section, we apply Theorem 5.7.7 to two particular examples of 6-functor formalisms: ℓ-adic étale sheaves on locally noetherian analytic adic spaces (resp. schemes) developed by R. Huber in [Hub96], and “solid almost O^+/p-ϕ-modules” on p-adic adic spaces developed by L. Mann in [Man22b].

In the first example, we recover Poincaré Duality previously established by R. Huber in [Hub96, Thm 7.5.3]. The proof is essentially formal: after unravelling all the definitions, Theorem 5.7.7 tells us that, for the purpose of proving Poincaré Duality, it suffices to construct a theory of first Chern classes and compute cohomology of the relative projective line. Both things are particularly easy in the case of étale sheaves: the theory of first Chern classes comes from the Kummer exact sequence, and the computation of étale cohomology of the projective line essentially boils down to proving Pic(P^1_C) ≃ Z. This proof completely avoids the quite elaborate construction of the trace map and verification of Deligne's fundamental lemma (see [Hub96, §7.2-7.4]). The same proof applies to ℓ-adic sheaves on schemes and simplifies the argument as well.

Then we apply the same methods to the theory of “solid almost O^+/p-ϕ-modules”. The proof of Poincaré Duality for ℓ-adic sheaves applies essentially verbatim in this context.
The main new ingredient is to verify that this 6-functor formalism satisfies the excision axiom; this is not automatic in this situation. Nevertheless, the approach taken in this paper simplifies the proof of Poincaré Duality established in [Man22b, Cor. 3.9.25]. In particular, it avoids any usage of Grothendieck Duality on the special fiber, and any explicit computations related to the “p-adic nearby cycles” on the formal model of D^1_C.

6.1. ℓ-adic duality. The main goal of this section is to give an essentially formal proof of Poincaré Duality for étale cohomology of schemes and (locally noetherian) adic spaces. The proof is almost uniform in both setups: the only difference is the computation of the cohomology groups of the projective line.

In this section, we fix a locally noetherian analytic adic space S (resp. a scheme S) and an integer n invertible in O_S. We emphasize that, in the case of adic spaces, we do not make the assumption that n is invertible in O^+_S until the very end. In what follows, C denotes the category of locally finite type adic S-spaces (resp. locally finitely presented S-schemes).
We begin the section by defining the theory of étale first Chern classes. Before we start the construction, we advise the reader to take a look at Section 5.2 since we will follow the notations introduced there. In particular, we recall that in order to speak of (weak) first Chern classes, we first fix an invertible object 1_S⟨1⟩ ∈ D(S).

Definition 6.1.1. We define the Tate twist as 1_S⟨1⟩ := μ_n[2] ∈ D(S_ét; Z/nZ).

This object is clearly invertible, so it fits into the assumptions of Section 5. Now we recall that there is a natural Kummer exact sequence

0 → μ_n → G_m --f ↦ f^n--> G_m → 0

on X_ét for any X ∈ C. This sequence is functorial in X, so defines a morphism of D(Z)-valued presheaves

G_m[1] --c--> μ_n[2]: C^op → D(Z).

By passing to the derived étale sheafifications (see [Cla21, L. 3, Cor. 11]), we get a morphism of D(Z)-valued sheaves

RΓ_ét(−, G_m)[1] --c--> RΓ_ét(−, μ_n)[2]: C^op → D(Z).
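As a sanity check (a sketch in classical terms, not part of the construction above; ∂ denotes the usual Kummer boundary map), for an adic S-space X the composition just described should recover, after passing to cohomology, the familiar first Chern class:

\[
\operatorname{Pic}(X)\;\simeq\;H^1_{\mathrm{an}}(X,\mathcal{O}_X^\times)\;\longrightarrow\;H^1_{\mathrm{\acute{e}t}}(X,\mathbb{G}_m)\;\xrightarrow{\ \partial\ }\;H^2_{\mathrm{\acute{e}t}}(X,\mu_n),
\]

and this composite is the map that will be denoted c_1^ét after passing to H^0 in Construction 6.1.3 below.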
Definition 6.1.2. A theory of étale first Chern classes is the homomorphism of D(Z)-valued analytic sheaves

c_1^ét: RΓ_an(−, O^×)[1] → RΓ_ét(−, μ_n)[2] = RΓ(−; 1⟨1⟩)

obtained as the composition

RΓ_an(−, O^×)[1] → RΓ_ét(−, G_m)[1] --c--> RΓ_ét(−, μ_n)[2]: C^op → D(Z),

where the first map is the natural morphism from the analytic cohomology of O^× to the étale cohomology of G_m.

Construction 6.1.3. Let X be an adic S-space. Then, after passing to H^0(−), Definition 6.1.2 defines a homomorphism

c_1^ét: Pic(X) ≃ H^1_an(X, O^×_X) → H^2(X, μ_n).

In what follows, we slightly abuse the notation and do not distinguish between these two versions of the homomorphism c_1^ét.

We will later need to know that c_1^ét is a theory of first Chern classes in the sense of Definition 5.2.4 (if n is invertible in O^+_S). Concretely, this means that we have to show that the natural morphism

c_1^ét(O(1)) + f^*: Z/nZ_S ⊕ μ_n[2] → Rf_*μ_{n,P^1_S}[2]

is an isomorphism for the relative projective line f: P^1_S → S. We will show this claim with the assumption that n is only invertible in O_S. In the rest of this section, we do the computations entirely in the analytic context. In the algebraic case, the computation is standard (see [Fu11, Thm. 7.2.9]).
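To fix what has to be proved, the displayed assertion can be read off degree by degree; the following is only a restatement of it, with the two summands matched with the images of f^* and of c_1^ét(O(1)) (compare Lemma 6.1.4 and Corollary 6.1.5 below):

\[
R^q f_*\,\mu_{n,\mathbb{P}^1_S}\;\simeq\;
\begin{cases}
\mu_n & q=0,\\
0 & q=1,\\
\mathbb{Z}/n\mathbb{Z} & q=2,\\
0 & q\geq 3.
\end{cases}
\]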
We start with the case when S is a “geometric point”. More explicitly, we fix an algebraically closed non-archimedean field C and assume that S = Spa(C, O_C).

Lemma 6.1.4. Let X be a 1-dimensional rigid-analytic variety over S = Spa(C, O_C), and n an integer invertible in C. Then

(1) the natural morphism μ_n(C) → H^0(X; μ_n) is an isomorphism if X is connected;
(2) we have H^i(X, μ_n) = 0 for i ≥ 3;
(3) the first Chern class c_1^ét: Pic(X)/n → H^2(X, μ_n) is an isomorphism (see Construction 6.1.3).

Proof. Step 0. The morphism μ_n(C) → H^0(X; μ_n) is an isomorphism if X is connected. Since C is algebraically closed, we can choose a non-canonical isomorphism μ_n ≃ Z/nZ.
Therefore, it suffices to show that the natural morphism Z/nZ → H^0(X, Z/nZ) is an isomorphism for a connected X. This is a standard result that we leave to the interested reader. To prove the other parts, we consider the morphism of sites π: X_ét → X_an.

Step 1. R^iπ_*μ_n = 0 for i ≥ 2. It suffices to show that the stalk (R^iπ_*μ_n)_x = 0 for every x ∈ X. Now [Hub96, Cor. 2.4.6] ensures that, for each integer i and x ∈ X,

(R^iπ_*μ_n)_x ≃ H^i(Spa(K(x), K(x)^+), μ_n).

Thus [Zav23, Lemma 9.2] implies that it suffices to prove the vanishing for rank-1 points x ∈ X. In this case, H^i(Spa(K(x), O_{K(x)}), μ_n) ≃ H^i_cont(G_{K(x)}, μ_n). So it suffices to show that G_{K(x)} is of cohomological dimension ≤ 1 for any x ∈ X. This follows from [Hub96, Cor. 1.8.8 and Lemma 2.8.3] (the henselization in [Hub96, Lemma 2.8.3] disappears in the rank-1 case because O_K is henselian with respect to its pseudo-uniformizer ϖ and m = rad(ϖ); see [Sta23, Tag 09XJ]), or one can adapt the proof of [Ber93, Lemma 5.2.5].
Step 2. R^1π_*G_m = 0. We first note that [Hub96, (2.2.7)] implies that the natural morphism

Pic(U) ≃ H^1_an(U, O^×_U) → H^1_ét(U, G_m)

is an isomorphism (alternatively, this can be deduced from [KL19, Thm 2.5.11]). Therefore, the definition of higher pushforwards implies that R^1π_*G_m is the sheafification (in the analytic topology on X) of the presheaf U ↦ Pic(U). Since any class α ∈ Pic(U) trivializes analytically locally on U, we conclude that the sheafification of this presheaf is zero.

Step 3. Finish the proof. The Kummer exact sequence 0 → μ_n → G_m --n--> G_m → 0 implies that we have an exact triangle

Rπ_*μ_n → Rπ_*G_m --n--> Rπ_*G_m.   (19)

Note that π_*G_m = O^×_X, so Steps (1) and (2) imply that (19) stays exact after applying τ^{≤1} to Rπ_*G_m. Thus we get the following exact triangle

Rπ_*μ_n → O^×_X --f ↦ f^n--> O^×_X.
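For the reader's convenience, here is a sketch of the long exact sequence obtained by taking analytic cohomology of this triangle, using the identification RΓ(X_an, Rπ_*μ_n) ≃ RΓ_ét(X, μ_n); the conclusions drawn in the next sentences are read off from it:

\[
\cdots \to H^{i-1}(X_{\mathrm{an}},\mathcal{O}_X^\times)\xrightarrow{\ (-)^n\ }H^{i-1}(X_{\mathrm{an}},\mathcal{O}_X^\times)\to H^{i}_{\mathrm{\acute{e}t}}(X,\mu_n)\to H^{i}(X_{\mathrm{an}},\mathcal{O}_X^\times)\xrightarrow{\ (-)^n\ }H^{i}(X_{\mathrm{an}},\mathcal{O}_X^\times)\to\cdots
\]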
Since H^i(X_an, O^×_X) = 0 for i ≥ 2 by [Hub96, Cor. 1.8.8] and [Sta23, Tag 0A3G], we conclude that H^i(X, μ_n) = 0 for i ≥ 3 and the natural morphism

Pic(X)/n ≃ H^1(X_an, O^×_X)/n → H^2(X, μ_n)

is an isomorphism. After unravelling the definitions, one sees that this morphism coincides with c_1 from Construction 6.1.3. □

Corollary 6.1.5. Let X = P^1_C be the (analytic) projective line over Spa(C, O_C), and n an integer invertible in C. Then

(1) the natural morphism μ_n(C) → H^0(P^1_C, μ_n) is an isomorphism;
(2) we have H^i(P^1_C, μ_n) = 0 for i ≥ 3;
(3) the unique homomorphism c_1: Z/nZ → H^2(P^1_C, μ_n) sending 1 to c_1(O(1)) is an isomorphism.

Proof. This follows formally from Lemma 6.1.4 and the fact that the morphism Z → Pic(P^1_C), sending n to O(n), is an isomorphism. The latter fact follows from [Zav23, Cor. 7.10]. □
Now we go back to the case of a general locally noetherian analytic adic base S. Then we consider the relative (analytic) projective line f: P^1_S → S. This comes with the “universal” line bundle O(1) (see [Zav23, Rmk. 6.13] for the construction in the analytic setup). The first Chern class c_1(O(1)) defines a morphism

  c_1(O(1)): (Z/nZ)_{P^1_S} → µ_n[2]

in the (triangulated) derived category D(P^1_S; Z/nZ). Due to the (f^*, Rf_∗)-adjunction, c_1(O(1)) defines a morphism c^ét_1(O(1)): (Z/nZ)_S → Rf_∗µ_{n,P^1_S}[2].

Proposition 6.1.6. Let f: P^1_S → S be the relative (analytic) projective line over S, and n an integer invertible in S. Then the natural morphism

  c^ét_1(O(1)) + f^*: (Z/nZ)_S ⊕ µ_n[2] → Rf_∗µ_{n,P^1_S}[2]

is an isomorphism^27.

^27 The notation “f^*” means the natural morphism µ_n[2] → Rf_∗µ_{n,P^1_S}[2] coming as the unit of the (f^*, Rf_∗)-adjunction.
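Unwinding the shift (an equivalent reformulation, not stated verbatim in the text), the proposition computes the higher direct images of µ_n along f degree by degree:

  R^0 f_* \mu_n \cong \mu_n, \qquad R^1 f_* \mu_n = 0, \qquad R^2 f_* \mu_n \cong (\mathbf{Z}/n\mathbf{Z})_S \ \text{(generated by the class of } \mathcal{O}(1)\text{)}, \qquad R^i f_* \mu_n = 0 \ \text{for } i \geq 3.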
Proof. It suffices to show that the morphism c^ét_1(O(1)) + f^* is an isomorphism on stalks. [Zav23, Lemma 9.3] ensures that Rf_∗ preserves overconvergent sheaves, so it is sufficient to check this on stalks over rank-1 points. Now we note that the formation of first Chern classes commutes with arbitrary base change (similarly to Remark 5.2.6(3)), so [Hub96, Prop. 2.6.1] ensures that it suffices to prove the claim under the additional assumption that S = Spa(C, O_C) for an algebraically closed, non-archimedean field C. Then the result follows directly from Corollary 6.1.5. □
Theorem 6.1.7. Let S be a locally noetherian analytic adic space, n an integer invertible in O^+_S, and D_ét(−; Z/nZ): Corr(C) → Cat_∞ be the 6-functor formalism constructed in [Zav23, Thm. 8.4 and Rmk. 8.5]. Then
(1) D_ét(−; Z/nZ) satisfies the excision axiom (see Definition 2.1.8);
(2) Definition 6.1.2 defines a theory of first Chern classes on D_ét(−; Z/nZ) (see Definition 5.2.8) with 1_S⟨1⟩ = µ_n[2].

Proof. It is essentially obvious that D_ét(−; Z/nZ) satisfies the excision axiom. More precisely, it suffices to show that, for any locally finite type adic S-space X, a complex F ∈ D_ét(X; Z/nZ), and a Zariski-closed immersion i: Z → X, the triangle

  j_! j^*F → F → i_* i^*F

is exact, where j: U → X is the open complement of Z. This is clear by arguing on stalks. The fact that c_1 is a theory of first Chern classes follows directly from Proposition 6.1.6. □

Before we state the general version of Poincaré Duality, we recall that the Tate twist Z/nZ(m) is by definition the étale sheaf µ_n^{⊗m} (with the obvious meaning if m is negative). Likewise, for a sheaf F ∈ D(X_ét; Z/nZ), we denote its Tate twist F ⊗ Z/nZ(m) simply by F(m).
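For completeness, the standard convention behind “the obvious meaning if m is negative” (spelled out here; it is not repeated in the text) is

  \mathbf{Z}/n\mathbf{Z}(m) := \mu_n^{\otimes m} \ (m \geq 0), \qquad \mathbf{Z}/n\mathbf{Z}(m) := \underline{\mathrm{Hom}}\big(\mu_n^{\otimes(-m)}, \mathbf{Z}/n\mathbf{Z}\big) \ (m < 0), \qquad \mathcal{F}(m) := \mathcal{F} \otimes \mathbf{Z}/n\mathbf{Z}(m).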
Theorem 6.1.8. Let Y be a locally noetherian analytic adic space, f: X → Y a smooth morphism of pure dimension d, and n an integer invertible in O^+_Y. Then the functor

  Rf_!: D(X_ét; Z/nZ) → D(Y_ét; Z/nZ)

admits a right adjoint given by the formula

  f^*(d)[2d]: D(Y_ét; Z/nZ) → D(X_ét; Z/nZ).
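Equivalently (this is only a restatement of the adjunction asserted in the theorem), writing f^! for the right adjoint: for F ∈ D(X_ét; Z/nZ) and G ∈ D(Y_ét; Z/nZ) there are natural isomorphisms

  \mathrm{Hom}_{D(Y_{\mathrm{\acute{e}t}};\,\mathbf{Z}/n\mathbf{Z})}\big(Rf_!\,\mathcal{F},\ \mathcal{G}\big) \;\cong\; \mathrm{Hom}_{D(X_{\mathrm{\acute{e}t}};\,\mathbf{Z}/n\mathbf{Z})}\big(\mathcal{F},\ f^*\mathcal{G}(d)[2d]\big), \qquad\text{i.e.}\qquad f^!\,\mathcal{G} \;\simeq\; f^*\mathcal{G}(d)[2d].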
Proof. Put S = Y and consider the étale 6-functor formalism D_ét(−; Z/nZ) that associates to X the ∞-derived category D(X_ét; Z/nZ) (see [Zav23, Thm. 8.4 and Rmk. 8.5]). Then Theorem 6.1.7 implies that D_ét satisfies the excision axiom and admits a theory of first Chern classes with 1_S⟨1⟩ = µ_n[2]. Thus the result follows from Theorem 5.7.7. □

Remark 6.1.9. The proof of Theorem 6.1.8 works in essentially the same way for the 6-functor formalism of étale Z/nZ-sheaves on schemes (see [Zav23, Rmk. 8.6] for the construction of the étale 6-functor formalism). In particular, this reproves the classical Poincaré Duality in the theory of étale cohomology of schemes.

Remark 6.1.10. Note that the only place where we used that n is invertible in O^+_S (as opposed to being invertible in O_S) is to make sure that the categories D_ét(−; Z/nZ) can be arranged into a 6-functor formalism. If n is not invertible in O^+_S, the problem is that the proper base change formula does not hold in general.
In the next section, we work around this issue by using another 6-functor formalism closely related to the p-adic cohomology of p-adic rigid-analytic spaces.

6.2. p-adic duality. The goal of this section is to give a new proof of Poincaré Duality for “O^+/p-ϕ-modules”. In what follows, we fix a locally noetherian analytic adic space S with a morphism S → Spa(Q_p, Z_p), and C the category of locally finite type adic S-spaces.

Now we briefly sketch the construction of the 6-functor formalism of O^+/p-(ϕ-)modules developed in [Man22b]. We will not discuss the full construction of this formalism here; instead we only sketch the parts that are important for the discussion of this section, and refer to [Man22b] for the thorough construction of this 6-functor formalism. To begin with, we recall that [Man22b, Thm. 3.6.12 and Prop. 3.9.13] define^28 two (closely related) 6-functor formalisms

  D^a_□(−; O^+/p): Corr(C) → Cat_∞   and   D^a_□(−; O^+/p)^ϕ: Corr(C) → Cat_∞.

^28 See also [Man22b, Prop. 3.5.14] to conclude that any locally finite type morphism of analytic adic spaces is bdcs in the sense of [Man22b, Defn. 3.6.9].
These two 6-functor formalisms are defined in a significantly more general setup; that generality will not play a huge role in our discussion beyond the point that we can evaluate D^a_□(−; O^+/p) on strictly totally disconnected perfectoids over S (which are essentially never locally finite type over S).

We briefly discuss the construction of the category D^a_□(X; O^+/p) in [Man22b]. First, for a (strictly) totally disconnected perfectoid space with a map Spa(R, R^+) → S, one puts

  D^a_□(Spa(R, R^+); O^+/p) = D^a_□(R^+/p),

the almost category of solid R^+/p-modules (see [Man22b, Def. 3.1.2]). Then one shows that this assignment satisfies (hyper-)descent in the v-topology (see [Man22b, Thm. 3.1.27 and Def. 3.1.3]) on (strictly) totally disconnected perfectoid spaces over S. After that, Mann formally extends D^a_□(X; O^+/p) to all adic S-spaces by descent. This category comes equipped with the usual 4 functors: f_∗, f^*, Hom, and ⊗.
The question of defining the shriek functors is quite subtle, and we refer to [Man22b, §3.6] for their construction. The ϕ-version of D^a_□(X; O^+/p) is defined as the equalizer (in the ∞-categorical sense)

  D^a_□(X; O^+/p)^ϕ := eq( D^a_□(X; O^+/p) ⇉ D^a_□(X; O^+/p) ),

the two parallel arrows being ϕ and id. Then [Man22b, Prop. 3.9.13] extends the 6-functors to D^a_□(X; O^+/p)^ϕ.
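Informally (an unwinding of the equalizer under the usual ∞-categorical formation of such limits; this gloss is not in the text), an object of D^a_□(X; O^+/p)^ϕ amounts to a pair

  (M, \varphi_M), \qquad M \in \mathcal{D}^a_\square(X; \mathcal{O}^+/p), \qquad \varphi_M\colon \varphi(M) \xrightarrow{\ \simeq\ } M,

with ϕ the endofunctor appearing in the displayed equalizer, and with morphisms required to be compatible with these identifications.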
Our first goal is to show that both of these 6-functor formalisms satisfy the excision axiom (see Definition 2.1.8). This will allow us to apply Theorem 5.7.7 to this situation and reduce the question of proving Poincaré Duality to the question of constructing a theory of first Chern classes and computing the cohomology groups of the projective line P^1_C.

One useful tool in proving the excision axiom will be the (sub)category of discrete objects D^a_□(X; O^+_X/p)^ω ⊂ D^a_□(X; O^+_X/p) introduced in [Man22b, Def. 3.2.17].
If X admits a map^29 to an affinoid perfectoid space Spa(R, R^+), [Man22b, Prop. 3.3.16] justifies the name and shows that there is a functorial equivalence

  D^a_□(X; O^+_X/p)^ω ≃ Ŝhv(X_ét; O^{+,a}_X/p)^oc

between discrete objects in D^a_□(X; O^{+,a}_X/p) and overconvergent objects in the left-completed ∞-derived category of étale sheaves of almost O^+_X/p-modules (see [Man22b, Prop. 3.3.16]).

^29 This condition ensures that X ∈ X^Λ_v in the sense of [Man22b, Def. 3.2.5].

Lemma 6.2.1. Let X = Spa(R, R^+) be a strictly totally disconnected perfectoid space over S, i: Z → X a Zariski-closed affinoid perfectoid subspace (in the sense of [Sch17, Def. 5.7]), and j: U → X the open complement. Then

  j_! O^{+,a}_U/p → O^{+,a}_X/p → i_∗ O^{+,a}_Z/p

is a fiber sequence in D^a_□(X; O^+/p).

Proof. Step 1. j_! O^{+,a}_U/p is discrete. We first consider the morphism π: |X| → π_0(X) from [Sch17, Lemma 7.3].
Since Z is Zariski-closed, it is both closed under generalizations and specializations. Thus the same holds for U, so the natural morphism U → π^{−1}(π(U)) is an isomorphism. Since π is a quotient morphism, we conclude that U′ := π(U) must be open in π_0(X). Now recall that π_0(X) is a profinite set, so clopen subsets form a base of the topology on π_0(X). Therefore U′ = ∪_{i∈I} U′_i is a filtered union of clopen subsets U′_i (in particular, they are quasi-compact). Thus we conclude that U = ∪_{i∈I} π^{−1}(U′_i) is a filtered union of clopen subspaces of X. We denote the pre-image of U′_i by j_i: U_i → X. Then, by construction (see [Man22b, Lemma 3.6.2]), we have

  j_! O^{+,a}_U/p ≃ colim_i j_{i,!} O^{+,a}_{U_i}/p.

Since each j_i is clopen, we conclude that j_{i,!} = j_{i,∗}. Thus each j_{i,!} O^{+,a}_{U_i}/p = j_{i,∗} O^{+,a}_{U_i}/p is discrete by [Man22b, Lemma 3.3.10(ii)].
So the colimit is also discrete by [Man22b, Lemma 3.2.19].

Step 2. Reduce to the case X = Spa(C, C^+). Now we note that i_∗ O^{+,a}_Z/p is discrete by [Man22b, Lemma 3.3.10]. So we can check that the morphism

  j_! O^{+,a}_U/p → fib( O^{+,a}_X/p → i_∗ O^{+,a}_Z/p )   (20)

is an isomorphism in D^a_□(X; O^+_X/p)^ω ≃ Ŝhv(X_ét; O^{+,a}_X/p)^oc. However, the property of a map being an isomorphism in Ŝhv(X_ét; O^{+,a}_X/p)^oc can be checked on stalks. Therefore, it suffices to prove the claim after a pullback^30 along each morphism Spa(C, C^+) → X, where C is an algebraically closed non-archimedean field and C^+ ⊂ C is an open bounded valuation ring. But this is essentially obvious: note that Z ×_X Spa(C, C^+) is a Zariski-closed subspace of Spa(C, C^+), so it is either empty or equal to Spa(C, C^+). In both cases, Morphism (20) is tautologically an isomorphism. □

^30 Here, we implicitly use base change for both j_! and i_∗.
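To spell out the final dichotomy (an informal restatement, using the base change mentioned in the footnote): after pullback along a point x = Spa(C, C^+) → X, Morphism (20) becomes

  x \in U:\quad \mathcal{O}^{+,a}/p \xrightarrow{\ \simeq\ } \mathcal{O}^{+,a}/p \ \ (\text{since } i_*\mathcal{O}^{+,a}_Z/p \text{ pulls back to } 0), \qquad\qquad x \in Z:\quad 0 \to 0 \ \ (\text{since } j_!\mathcal{O}^{+,a}_U/p \text{ pulls back to } 0 \text{ and the fiber vanishes}),

and in both cases the map is trivially an isomorphism.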
Lemma 6.2.2. The 6-functor formalisms D^a_□(−; O^+/p) and D^a_□(−; O^+/p)^ϕ satisfy the excision axiom.

Proof. We fix a locally finite type adic S-space X, a Zariski-closed immersion i: Z ↪ X, and the open complement j: U ↪ X. We wish to show that, for any F ∈ D^a_□(X; O^+/p) (resp. F ∈ D^a_□(X; O^+/p)^ϕ), the natural morphism

  j_! j^* F → fib( F → i_∗ i^* F )

is an isomorphism. Since the forgetful functor D^a_□(X; O^+/p)^ϕ → D^a_□(X; O^+/p) commutes with limits, all 6-functors, and is conservative (see [Man22b, Lem. 3.9.12]), it is sufficient to prove that D^a_□(−; O^+/p) satisfies excision. For this, we note that the projection formulas for i_∗ and j_! imply that it suffices to show that the natural morphism

  j_! O^{+,a}_U/p → fib( O^{+,a}_X/p → i_∗ O^{+,a}_Z/p )

is an isomorphism.
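Spelled out (a routine application of the projection formulas, recorded here only for convenience): for any F ∈ D^a_□(X; O^+/p) the projection formulas give

  j_!\,j^*\mathcal{F} \;\simeq\; \mathcal{F} \otimes j_!\big(\mathcal{O}^{+,a}_U/p\big), \qquad i_*\,i^*\mathcal{F} \;\simeq\; \mathcal{F} \otimes i_*\big(\mathcal{O}^{+,a}_Z/p\big),

so tensoring the displayed structure-sheaf sequence with F (tensoring preserves fiber sequences) recovers the excision sequence for F.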
By v-descent and proper base change, this can be checked on the basis of strictly totally disconnected perfectoid spaces. [Zav23, Lemma 5.2] ensures that Zariski-closed immersions of locally noetherian analytic adic spaces pull back to Zariski-closed subsets of affinoid perfectoids. Therefore, the result follows from Lemma 6.2.1. □

Now we discuss the computation of the cohomology groups of the projective line, and the construction of first Chern classes. An important tool to deal with these questions is the Riemann-Hilbert functor from [Man22b, §3.9]. We follow the notation of [Man22b], and denote by D_ét(X; F_p) the left-completed ∞-derived category^31 of étale sheaves of F_p-modules on X. We also denote by D_ét(X; F_p)^oc ⊂ D_ét(X; F_p) the full ∞-subcategory spanned by overconvergent sheaves (see [Man22b, Def. 3.9.17]).

^31 It may be more appropriate to denote this category by D̂_ét(X; F_p) or Ŝhv(X_ét; F_p), but we prefer to stick to the notation used in [Man22b]. The reason to use this notation is that the left-completed version naturally arises as the “derived” category of étale F_p-sheaves on the associated diamond X^♦.
Then [Man22b, Def. 3.9.21] defines the Riemann-Hilbert functor

  − ⊗ O^{+,a}_X/p: D_ét(X; F_p)^oc → D^a_□(X; O^+_X/p)^ϕ.

If X admits a map to an affinoid perfectoid field Spa(R, R^+), then (essentially by construction) the following diagram

  D_ét(X; F_p)^oc  --( − ⊗ O^{+,a}_X/p )-->  D^a_□(X; O^+_X/p)^ϕ
        |                                            |
        | − ⊗ O^{+,a}_X/p                            |
        v                                            v
  Ŝhv(X_ét; O^{+,a}_X/p)^oc  -----( can )----->  D^a_□(X; O^+_X/p)        (21)

commutes up to a homotopy, where the left vertical functor is (the left completion of) the naive (derived) tensor product functor, and the bottom horizontal functor is the canonical identification of Ŝhv(X_ét; O^{+,a}_X/p)^oc with the subcategory of discrete objects in D^a_□(X; O^+_X/p).

Definition 6.2.3. The p-adic Tate twist O^{+,a}_X/p(i) ∈ D^a_□(X; O^+_X/p)^ϕ (resp. O^{+,a}_X/p(i) ∈ D^a_□(X; O^+_X/p)) is the image of the Tate twist F_p(i) under the Riemann-Hilbert functor, i.e.,

  O^{+,a}_X/p(i) ≃ F_p(i) ⊗ O^{+,a}_X/p.
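In particular, for i = 1 (the only twist used for the first Chern classes below, and using F_p(1) = µ_p):

  \mathcal{O}^{+,a}_X/p(1) \;\simeq\; \mu_p \otimes \mathcal{O}^{+,a}_X/p \;\in\; \mathcal{D}^a_\square(X; \mathcal{O}^+_X/p)^{\varphi}.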
Warning 6.2.4. In the next lemma, we follow the terminology of [Man22b] and do not write R for the derived functors on the category of F_p-sheaves.

Lemma 6.2.5. Let f: X → Y be a proper morphism in C, and k an integer. Then the natural morphism

  ( f_{ét,∗} F_p(k) ) ⊗ O^{+,a}_Y/p → f_∗( O^{+,a}_X/p(k) )

is an isomorphism in D^a_□(Y; O^+_Y/p)^ϕ.

Proof. The claim is v-local on the base, so we can assume that Y (and, therefore, X) admits a morphism to an affinoid perfectoid space Spa(R, R^+). Then we wish to leverage Diagram (21) to reduce the question to the classical Primitive Comparison Theorem.
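For context, the absolute case of the Primitive Comparison Theorem invoked below (over a geometric point; hypotheses as in the cited [Sch13, Cor. 5.11] and [Zav21a, Lemma 6.3.7]) says, roughly, that for a proper rigid-analytic space X over a complete algebraically closed non-archimedean extension C of Q_p the natural maps

  H^i_{\mathrm{\acute{e}t}}(X, \mathbf{F}_p) \otimes_{\mathbf{F}_p} \mathcal{O}_C/p \;\longrightarrow\; H^i_{\mathrm{\acute{e}t}}(X, \mathcal{O}^+_X/p)

are almost isomorphisms for all i.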
More precisely, we first note that the forgetful functor D^a_□(X; O^+_X/p)^ϕ → D^a_□(X; O^+_X/p) is conservative by [Man22b, Lem. 3.9.12(i)]. Thus, it suffices to show that the corresponding morphism

  f_{ét,∗} F_p(k) ⊗ O^{+,a}_Y/p → f_∗ O^{+,a}_X/p(k)

is an isomorphism in D^a_□(Y; O^+_Y/p). Now we note that [Man22b, Prop. 3.3.16 and Lemmas 3.3.10(ii), 3.3.15(iii)] imply that the diagram

  Ŝhv(X_ét; O^{+,a}_X/p)^oc  ------>  D^a_□(X; O^+_X/p)
        | f_{ét,∗}                          | f_∗
        v                                   v
  Ŝhv(Y_ét; O^{+,a}_Y/p)^oc  ------>  D^a_□(Y; O^+_Y/p)        (22)

commutes up to a homotopy.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Therefore, Diagram (21) ensures that it suffices to show that the natural morphism � f´et,∗Fp(k) � ⊗ O+,a Y /p → f´et,∗O+,a X /p(k) is an isomorphism in Shv�(Y´et;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' O+,a Y /p).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' More explicitly, we reduced the question to showing that, for each k and d, the natural morphism Rdf´et,∗Fp(k) ⊗Fp O+ Y /p → Rdf´et,∗O+ X/p(k) is an almost isomorphism of ´etale O+ Y /p-module.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' This follows from the standard Primitive Compar- ison Theorem from the p-adic Hodge theory, see [Sch13, Cor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='11] or [Zav21a, Lemma 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' □ Now we are ready to define first Chern classes on Da □(−;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' O+/p)ϕ and Da □(−;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' O+/p).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' For this, we note that the Riemann-Hilbert functor D´et(X;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Fp)ov → Da □(X;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' O+ X/p)ϕ sends the constant sheaf Fp to the unit object O+ X/p, and so it defines a functorial in X morphism: RΓ´et(X, µp) → RΓ(X, O+,a X /p(1)) := HomDa □(X;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='O+ X/p)ϕ(O+,a X /p, O+,a X /p(1)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Definition 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='6.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' We define the Tate twist as 1S⟨1⟩ := O+,a S /p(1)[2] ∈ Da □(S;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' O+ S /p)ϕ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' This object is invertible (since − ⊗ O+,a S /p is symmetric monoidal), so it fits into the assumptions of Section 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Definition 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' A theory of first Chern classes on the 6-functor formalism Da □(−;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' O+/p)ϕ is the morphism of Sp-valued presheaves cϕ 1 : RΓan(−, O×)[1] → RΓ(−, O+,a/p)[2] = RΓ(−, 1⟨1⟩) obtained as the composition RΓan(−;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' O×)[1] c´et 1 −→ RΓ´et(−;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' µp)[2] → RΓ(−;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' O+/p(1))[2], where the first morphism comes from Definition 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Theorem 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Let S be a locally noetherian analaytic adic space over Spa(Qp, Zp).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Then (1) Da □(−;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' O+/p)ϕ satisfies the excision axiom;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (2) cϕ 1 is a theory of first Chern classes on Da □(−;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' O+/p)ϕ.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' 68 BOGDAN ZAVYALOV Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Lemma 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2 ensures that Da □(−;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' O+/p)ϕ satisfies the excision sequence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' To show that c1 is a theory of first Chern classes, we have to show that the natural morphism cϕ 1 (O(1)) + f ∗ : O+,a S /p ⊕ O+,a S /p(1)[2] → f∗ � O+,a P1 S /p(1)[2] � is an isomorphism.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' For this, we use the commutative diagram � Fp ⊕ µp[2] � ⊗ O+,a S /p f´et,∗ (µp[2]) ⊗ O+,a S /p O+,a S /p ⊕ O+,a S /p(1)[2] f∗ (O+/p(1)[2]) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' (c´et 1 (O(1))+f∗ ´et)⊗O+/p cϕ 1 (O(1))+f∗ The left vertical arrow is an isomorphism by definition, the right vertical arrow is an isomorphism by Lemma 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='5, and the top horizontal map is an isomorphism by Proposition 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Therefore, the bottom horizontal arrow must be an isomorphism as well finishing the proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' □ Theorem 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Let Y be a locally noetherian analytic adic space over Spa(Qp, Zp), and f : X → Y a smooth morphism of pure dimension d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Then the functor f!' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' : Da □(X;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' O+ X/p)ϕ → Da □(Y ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' O+ Y /p)ϕ admits a right adjoint given by the formula f ∗ ⊗ O+,a X /p(d)[2d]: Da □(Y ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' O+ Y /p)ϕ → Da □(X;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' O+ X/p)ϕ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' This is a direct consequence of Theorem 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='8 and Theorem 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' □ Remark 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Essentially the same proof applies to the 6-functor formalism Da □(−;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' O+/p).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' References [Ber93] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Berkovich.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' ´Etale cohomology for non-Archimedean analytic spaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Inst.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Hautes ´Etudes Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Publ.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=', (78):5–161 (1994), 1993.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Bha22] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Bhatt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Prismatic f-gauges.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='ias.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='edu/~bhatt/teaching/mat549f22/lectures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='pdf, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [BL22a] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Bhatt and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Lurie.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Absolute prismatic cohomology.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='org/pdf/2201.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='06120.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='pdf, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [BL22b] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Bhatt and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Lurie.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' The prismatization of p-adic formal schemes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://arxiv.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='org/abs/2201.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='06124, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Cla21] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Clausen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Algebraic de rham cohomology.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://sites.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='google.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='com/view/algebraicderham/home, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [CS19] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Clausen and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Scholze.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Lectures on condensed mathematics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' http://people.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='mpim-bonn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='mpg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='de/scholze/Condensed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='pdf, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [CS22] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Clausen and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Scholze.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Condensed mathematics and complex geometry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://people.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='mpim-bonn.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='mpg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='de/scholze/Complex.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='pdf, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Dri22] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Drinfeld.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Prismatization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='org/abs/2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='04746, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [FS21] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Fargues and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Scholze.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Geometrization of the local langlands correspondence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='org/abs/2102.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='13459, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Fu11] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Fu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Etale cohomology theory, volume 13 of Nankai Tracts in Mathematics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' World Scientific Publishing Co.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Pte.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Ltd.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=', Hackensack, NJ, 2011.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Fuj02] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Fujiwara.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' A proof of the absolute purity conjecture (after Gabber).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' In Algebraic geometry 2000, Azumino (Hotaka), volume 36 of Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Stud.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Pure Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=', pages 153–183.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Soc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Japan, Tokyo, 2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Ful98] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Fulton.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Intersection theory, volume 2 of Ergebnisse der Mathematik und ihrer Grenzgebiete.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Folge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' A Series of Modern Surveys in Mathematics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Springer-Verlag, Berlin, second edition, 1998.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [GH15] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Gepner and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Haugseng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Enriched ∞-categories via non-symmetric ∞-operads.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Adv.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=', 279:575– 716, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' POINCAR´E DUALITY REVISITED 69 [GR17] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Gaitsgory and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Rozenblyum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' A study in derived algebraic geometry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Correspondences and duality, volume 221 of Mathematical Surveys and Monographs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' American Mathematical Society, Providence, RI, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [GS16] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Garner and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Shulman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Enriched categories as a free cocompletion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=', 289:1–94, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [HA] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Lurie.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Higher algebra.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='ias.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='edu/~lurie/papers/HA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='pdf, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Hau15] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Haugseng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Rectification of enriched ∞-categories.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Algebr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Geom.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Topol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=', 15(4):1931–1982, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Hub93] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Huber.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Bewertungsspektrum und rigide Geometrie, volume 23 of Regensburger Mathematische Schriften.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Universit¨at Regensburg, Fachbereich Mathematik, Regensburg, 1993.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Hub96] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Huber.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' ´Etale cohomology of rigid analytic varieties and adic spaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Friedr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Vieweg & Sohn, Braun- schweig, 1996.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [JY21] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Johnson and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Yau.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' 2-dimensional categories.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Oxford University Press, Oxford, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Kha22] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Khan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Absolute Poincar´e duality in ´etale cohomology.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Forum Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Sigma, 10:Paper No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' e99, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [KL19] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Kedlaya and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Liu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Relative p-adic hodge theory, ii: Imperfect period rings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='org/pdf/1602.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='06899.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='pdf, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Lur18] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Lurie.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Spectral algebraic geometry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='ias.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='edu/~lurie/papers/SAG-rootfile.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='pdf, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Lur22] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Lurie.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Kerodon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://kerodon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='net, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [LZ17] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Liu and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Zheng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Enhanced adic formalism and base change for higher artin stacks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='org/abs/1211.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='5948, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [LZ22] Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Lu and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Zheng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Categorical traces and a relative Lefschetz-Verdier formula.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Forum Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Sigma, 10:Paper No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' e10, 24, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Man22a] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Mann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' The 6-functor formalism for Zℓ- and Qℓ-sheaves on diamonds.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='org/abs/2209.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='08135, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Man22b] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Mann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' A p-adic 6-functor formalism in rigid-analytic geometry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='org/abs/2206.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='02022, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Ols15] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Olsson.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Borel-Moore homology, Riemann-Roch transformations, and local terms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=', 273:56– 123, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Sch13] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Scholze.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' p-adic Hodge theory for rigid-analytic varieties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Forum Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Pi, 1:e1– 77, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Sch17] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Scholze.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' ´etale cohomology of diamonds.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='org/abs/1709.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='07343, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Sch22] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Scholze.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Six-functor formalisms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://people.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='mpim-bonn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='mpg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='de/scholze/SixFunctors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='pdf, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [SGA IV] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Grothendieck.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Th´eorie des topos et cohomologie ´etale des sch´emas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Lecture Notes in Mathematics, Vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' 269.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Springer-Verlag, Berlin-New York.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' S´eminaire de G´eom´etrie Alg´ebrique du Bois-Marie 1963–1964 (SGA 4), Dirig´e par M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Artin, et J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Verdier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Avec la collaboration de N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Bourbaki, P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Deligne et B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Saint-Donat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Sta23] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Stacks project authors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' The stacks project.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://stacks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='columbia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='edu, 2023.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Tan22] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Tang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Syntomic cycle classes and prismatic poincar´e duality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='org/abs/2210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='14279, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Zav21a] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Zavyalov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Almost coherent modules and almost coherent sheaves.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='org/abs/2110.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='10773, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Zav21b] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Zavyalov.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Mod-p poincar´e duality in p-adic analytic geometry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='org/abs/2111.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='01830, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Zav21c] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Zavyalov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Quotients of admissible formal schemes and adic space by finite groups.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='org/abs/2102.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='02762, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' [Zav23] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Zavyalov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' Notes on adic geometry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content=' https://bogdanzavyalov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='com/refs/adic_notes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/O9E2T4oBgHgl3EQfVQdj/content/2301.03821v1.pdf'} +page_content='pdf, 2023.' 
diff --git a/P9E5T4oBgHgl3EQfZA8C/content/2301.05577v1.pdf b/P9E5T4oBgHgl3EQfZA8C/content/2301.05577v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..abf83bb6b2d755f3d07cecde89f0868eb9d175fa
--- /dev/null
+++ b/P9E5T4oBgHgl3EQfZA8C/content/2301.05577v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:131a9798ee4fa13356146b31f3d2e3d115621e1a9bfb9e278aef821276eac7a3
+size 742437
diff --git a/P9E5T4oBgHgl3EQfZA8C/vector_store/index.faiss b/P9E5T4oBgHgl3EQfZA8C/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..092e01fbd9c80af6d63135a95a7a644c9295eb28
--- /dev/null
+++ b/P9E5T4oBgHgl3EQfZA8C/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:94235544fb9566223c7faf55546be552a9e940d8318b96920c7c9d71b88087c0
+size 3735597
diff --git a/P9E5T4oBgHgl3EQfZA8C/vector_store/index.pkl b/P9E5T4oBgHgl3EQfZA8C/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..c9ac02407b420ede53339e935005a3b0db0f7a52
--- /dev/null
+++ b/P9E5T4oBgHgl3EQfZA8C/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6fc267422cfcfcd549857a8d8a53b7b357e58f3198b2f3778282a41ff7efd754
+size 115533
diff --git a/Q9E4T4oBgHgl3EQfKgxv/content/tmp_files/2301.04930v1.pdf.txt b/Q9E4T4oBgHgl3EQfKgxv/content/tmp_files/2301.04930v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ada99195cd4fb20994f0c4b91ea45e00268da5f4
--- /dev/null
+++ b/Q9E4T4oBgHgl3EQfKgxv/content/tmp_files/2301.04930v1.pdf.txt
@@ -0,0 +1,694 @@
MNRAS 000, 1–7 (2022)    Preprint 13 January 2023    Compiled using MNRAS LaTeX style file v3.0

Closed field line vortices in planetary magnetospheres

Zoltan Nemeth,1★
1Wigner Research Centre for Physics, Konkoly-Thege Miklós út 29-33., Budapest H-1121, Hungary
★ E-mail: nemeth.zoltan@wigner.hu

Accepted XXX. Received YYY; in original form ZZZ

ABSTRACT
In a rotation-dominated magnetosphere, there is a region where closed field lines rotate around the planet, and also a region where the open field lines stretch away from the planet, forming the lobes of the magnetotail. This paper shows that there could be a third, significantly different region, where the closed field lines form twisted vortex structures anchored in the magnetotail. Such patterns form when there are significant plasma sources inside the magnetosphere and the time scale of the plasmoid formation process is substantially longer than the planetary rotation period. In the presence of vortices, the Dungey and Vasyliunas cycles act differently. The Dungey flow does not penetrate the central region of the polar cap. Tail reconnection events are rare, thus leaving the plasma enough time to participate in the essentially 3-dimensional vortex-forming plasma motion. The above conditions are fulfilled for Saturn. We discovered vortex-like patterns in the plasma and magnetic field data measured by the Cassini spacecraft in the nightside magnetosphere of Saturn. The plasma whirling around in these vortices never reaches the dayside; instead, it performs a retrograde motion in the high latitude regions of the magnetotail. Low-energy plasma data suggest that the observed patterns correspond to the closed field line vortices.
+Key words: magnetic fields – plasmas – planets and satellites: general – planets and satellites: magnetic fields +1 INTRODUCTION +It is generally accepted that the magnetospheres of giant planets +consist of two topologically distinct regions: the region of closed +field in the equatorial and mid-latitudes, where both ends of the field +lines are connected to the planet; and the region of open field, where +the field lines have only one foot-point anchored in the ionosphere, +either in the northern or in the southern polar cap. The plasma content +of the closed field region is thought to co-rotate with the planet, +exhibiting some extent of lag (sub-corotation) (Bunce et al. 2003; +Cowley & Bunce 2003; Cowley et al. 2004). The field and plasma +form more-or-less axisymmetric shells in this region, similar to the L- +shells of a (quasi-)dipole field configuration. According to the current +models, these shells move as rigid bodies: their angular velocity is +constant along the field lines from the northern hemisphere through +the magnetic equator to the southern hemisphere. The amount of lag +(sub-corotation) is a function of the latitude only (or equivalently: +a function of the flux-function describing the axisymmetric field +(Cowley & Bunce 2003; Cowley et al. 2004), and the gradient of +this function determines the nature of the magnetosphere-ionosphere +interaction, for example, the position and properties of the auroral +ovals (Cowley & Bunce 2001; Bunce et al. 2003; Cowley & Bunce +2003; Cowley et al. 2004). +The foot-points of the open field lines also (sub)co-rotate with the +planet; this motion twists the field lines into a spiral pattern (Isbell +et al. 1984; Vasyliunas 1983). The mechanism of this field line twist +and the formation of the Parker spiral can be described in a common +framework, as was shown by Vasyliunas (1983). +★ E-mail: nemeth.zoltan@wigner.hu +Two important properties differentiate giant planet magneto- +spheres from that of terrestrial planets: their fast rotation rate and +the existence of intense plasma sources inside the magnetospheres. +Due to their fast rotation, these are entirely rotation-dominated mag- +netospheres (Brice & Ioannidis 1970); they lack the convection- +dominated closed field region found outside the plasmasphere in the +terrestrial magnetosphere. The intense plasma sources inflict signifi- +cant loading on the field lines, which (together with the fast rotation) +leads to field line deformation and the formation of a dense plasma +sheet near the magnetic equator (see e.g. Gledhill 1967; Hill 1974; +Hill et al. 1974; Acuna et al. 1983; Persoon et al. 2005; Gombosi +et al. 2009; Arridge et al. 2007, 2008; Nemeth et al. 2011). It also +explains the above-mentioned co-rotation lag, since the ionospheric +interaction needs to accelerate the new material continuously intro- +duced to the field lines (Hill 1979, 1980). The loading and stretching +of field lines also give rise to a so-called planetary wind (Hill 1974; +Michel & Sturrock 1974), in which the loaded field lines undergo +centrifugal instability and move outward from the planet. This is the +basis of the Vasyliunas cycle (Vasyliunas 1983), in which a plasmoid +forming reconnection process removes the excess plasma from the +loaded closed field lines. The reconnection also shortens the emp- +tied field lines, which then return to the vicinity of the planet thus +enabling the cycle to start over. 
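To see why mass loading and fast rotation push flux tubes outward, it helps to compare the centrifugal and gravitational accelerations acting on rigidly corotating plasma. The short sketch below is an illustrative estimate, not a calculation taken from the paper; it uses a rotation period of roughly 10.5 h (the value used later in this text) and standard values for Saturn's equatorial radius and gravitational parameter, and it ignores the fact that real outer-magnetospheric plasma sub-corotates.

```python
import numpy as np

# Rigid corotation at Saturn: centrifugal vs. gravitational acceleration.
R_S = 6.0268e7            # Saturn equatorial radius [m]
GM = 3.793e16             # Saturn gravitational parameter [m^3/s^2]
period_s = 10.5 * 3600.0  # planetary rotation period [s]
omega = 2.0 * np.pi / period_s

# Distance beyond which centrifugal acceleration exceeds gravity for
# rigidly corotating material (the synchronous distance, ~1.9 R_S).
r_sync = (GM / omega**2) ** (1.0 / 3.0)
print(f"synchronous distance: {r_sync / R_S:.2f} R_S")

for r_rs in (5.0, 10.0, 20.0):
    r = r_rs * R_S
    a_cf = omega**2 * r   # centrifugal acceleration for rigid corotation
    a_g = GM / r**2       # gravitational acceleration
    # Real plasma lags corotation, so these ratios are upper limits.
    print(f"r = {r_rs:4.1f} R_S: centrifugal/gravity = {a_cf / a_g:7.1f}")
```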
In the open field region, the Dungey cycle (Dungey 1961) governs the dynamics, in which closed field lines at the dayside magnetopause are opened up and connected to interplanetary magnetic field lines of the solar wind in a reconnection process. These opened-up field lines are convected tailward by the solar wind flow and form the northern and southern open lobes of the magnetospheric tail. A tail-side reconnection and the subsequent motion of the newly closed field lines towards the dayside close the cycle.
The above-summarized picture of giant planet magnetospheres rests on several implicit assumptions. The first is that the magnetosphere is essentially axisymmetric – meaning that the obvious deviations from axial symmetry do not essentially alter the nature of the flow patterns around the planet. Another assumption is that the flow pattern in the ionosphere alone determines the plasma flow everywhere in the magnetosphere. A third important assumption is that (in a steady state) in every planetary period the mass-loaded closed field lines are emptied by some process, and thus can revert to the essentially co-rotating behavior of empty field lines.
In this paper, we will examine the validity of the above assumptions, and the consequences of deviations from the assumed behavior for the global structure of giant planet magnetospheres. Since these consequences have experimentally verifiable aspects, we compare those with data measured by the Cassini spacecraft while orbiting planet Saturn.
2 THEORY
Planetary magnetospheres are manifestly not axisymmetric. The vectors of the solar wind velocity and the planetary magnetic dipole represent two nonparallel preferred directions, the existence of which is incompatible with axial symmetry. (In theory, the planetary dipole may lie in the ecliptic plane, but even in such a system the two vectors coincide only at summer and winter solstice.) In other words, the effect of the solar wind deforms the magnetosphere, which manifests as a compression on the dayside and a relative elongation on the nightside. If the nature of the plasma flow in the magnetosphere remains essentially the same as in the axisymmetric case, all the closed field lines still rotate around the planet, although they are somewhat deformed during their round tour – compressed on the dayside and expanded on the nightside. Is this really the only effect of the symmetry breaking on closed field lines? In order to answer this question, we need to investigate the geometry of open and closed field lines in more detail.
It is generally assumed that those lines which are anchored at the dayside poleward of the cusps are open field lines. These are the field lines that initially point towards the Sun, but bend back (tailward) after a while. (In stricter terms, the sign of the X component of their tangent vector changes in the polar region of the magnetosphere in a coordinate system in which the X-axis is parallel to the Sun-planet line.) Are they necessarily open field lines? If we start with a simple planetary dipole and disturb it only with Chapman-Ferraro currents, which constrain the planetary field in a cavity inside a perfectly conducting flow, we find that the asymmetry is already present, but all the field lines are closed (Mead 1964). Such a configuration can be seen in fig 4 of Mead (1964).
There is a critical latitude, above which field lines originating on the dayside pass over the poles and cross the magnetic equator on the nightside. If we add a current sheet (finite in the X direction, very narrow in the direction (Z) of the dipole moment vector, and infinite in the direction (Y) perpendicular to both), we find that the field lines are still closed (Fig. 1).
Figure 1. Magnetic field of a dipole with Chapman-Ferraro currents and a tail current sheet. All field lines connected to the planet are closed. Planetary rotation moves the field line in position 1 to position 5, and vice versa.
If we add a homogeneous northward field (representing the Interplanetary Magnetic Field (IMF)), the field lines are still closed, and the volume of the magnetosphere is more constrained. Only adding a southward IMF will create open field lines. If we consider the dynamics of this process, we arrive at the Dungey cycle: the southward-directed IMF field lines arriving at the magnetopause reconnect with the closed field lines, and thus open magnetospheric field lines are created. In other words, it is the Dungey cycle which dynamically creates the open field. Without it (if e.g. the IMF remains northward for a longer time) the magnetosphere is closed, which includes the bent-back field lines originating on the dayside above the critical latitude and closing on the nightside. What happens with these field lines during the rotation of the planet?
Since the footpoints of these field lines are anchored in the ionosphere, these footpoints circle around the planet together with the ionospheric plasma. Although the plasma in and near the polar cap can exhibit significant sub-corotation (Stallard et al. 2004; Stallard et al. 2019), it is impossible for the footpoints not to rotate with the planet. However small the pro-grade plasma motion in the ionosphere, it will carry the footpoints around the planet. On the contrary, the middle point of these field lines anchored in the dense equatorial plasma sheet will always remain on the nightside (even when the footpoints lie on the noon meridian). This suggests a plasma motion fundamentally different from the axisymmetric case: since the middle points remain on the nightside while the northern and southern ionospheric footpoints travel around the planet, and the field lines are continuous through this motion, there must be parts of the field line, which (instead of rotating around the planet) sweep over the poles (see Fig. 1). This kind of motion, of course, requires that the field line remain continuous during most of a planetary rotation (or more), and thus the plasmoid-forming tail reconnection rate should be less than 1 per planetary day. As was shown by Cowley et al. (2015), this really is the case for Saturn, where the characteristic time between plasmoids is 35-45 h, which is much longer than the 10.5 h planetary rotation period. (Notice that the observed flow violates another of the above-mentioned common assumptions – namely that in every planetary period the mass-loaded closed field lines are disconnected from the planet.)
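The open-versus-closed dichotomy of the superposed-field argument above can be explored with a crude, illustrative field-line tracer. The sketch below is a toy model, not the configuration analysed in this paper: it superposes a planetary dipole and a uniform external field standing in for the IMF, ignores the Chapman-Ferraro and tail currents (so it captures only the role of the IMF orientation, not the day-night asymmetry), and calls a field line closed when both ends of the trace return to the planetary surface. With the uniform field parallel to the equatorial dipole field ("northward" in the Earth-like polarity assumed here) the high-latitude lines stay closed; reversing the sign opens the lines rooted poleward of a critical latitude. All field strengths and latitudes are arbitrary illustrative choices.

```python
import numpy as np

# Toy model: planetary dipole (moment along -z, Earth-like polarity, so the
# equatorial field points northward) plus a uniform external field bz.
# Distances are in planetary radii, fields in units of the equatorial surface field.

def b_field(r, bz):
    """Dipole plus uniform field at position r (3-vector)."""
    m = np.array([0.0, 0.0, -1.0])            # dipole moment direction
    rr = np.linalg.norm(r)
    rhat = r / rr
    b_dip = (3.0 * np.dot(m, rhat) * rhat - m) / rr**3
    return b_dip + np.array([0.0, 0.0, bz])

def trace(start, bz, sign, step=0.01, rmax=60.0, max_steps=200000):
    """Follow the field direction (sign = +1 or -1) until the trace re-enters
    the planet (r < 1) or leaves the box (r > rmax)."""
    r = np.array(start, dtype=float)
    for _ in range(max_steps):
        b = b_field(r, bz)
        r = r + sign * step * b / np.linalg.norm(b)
        rr = np.linalg.norm(r)
        if rr < 1.0:
            return "planet"
        if rr > rmax:
            return "escaped"
    return "undecided"

def classify(lat_deg, bz):
    """A line is closed if both ends of the trace come back to the planet."""
    lat = np.radians(lat_deg)
    # Footpoint slightly above the surface; the toy field is axisymmetric,
    # so the choice of local time is irrelevant here.
    start = 1.02 * np.array([np.cos(lat), 0.0, np.sin(lat)])
    ends = {trace(start, bz, +1), trace(start, bz, -1)}
    return "closed" if ends == {"planet"} else "open"

for lat in (60.0, 75.0):
    for bz, label in ((+0.002, "northward IMF"), (-0.002, "southward IMF")):
        print(f"footpoint at {lat:.0f} deg, {label}: {classify(lat, bz)}")
```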
The planetary rotation will move the field line from position 1 in Fig. 1 to position 5, and vice versa. This means that some part of field line 5 (the part close to the middle point, which lies inside and near the equatorial plasma sheet) must first move dawnward, move away from the planet after that, travel duskward in the far tail, then move towards the planet near the dusk side of the magnetopause. It is possible that the middle point cannot complete this circle before a tail reconnection severs the field line, since plasma motion in the far tail can be even slower than that indicated by the angular velocity of the ionospheric footpoints. Still, by and large, it seems that the middle points of closed field lines, which originate from high latitudes of the ionosphere, circle around a point in the far tail, and not around the planet. How is that possible?
Figure 2. Schematic of the cross-tail plasma flows carrying the field lines in the closed-field vortices.
First, we should note that the force which balances the centrifugal force that the dense equatorial plasma exerts on the field line is the 𝑗 × 𝐵 force associated with the sharp bend in the field line inside the current sheet (see Fig. 1). When field line 5 moves towards the dayside, the tight horizontal V shape, formed by the loaded field line when its footpoints are near midnight, will open up to be able to move above and below the polar regions. This lowers the curvature force (magnetic tension) exerted by that field line in the plasma sheet region, which upsets the balance of centrifugal and magnetic forces. Thus the plasma will move away from the planet under centrifugal forcing. Similarly, when the footpoints move towards the nightside near dusk, the northern and southern parts of the field line move closer together enhancing the magnetic tension – thus the plasma in the plasma sheet pierced by this field line moves towards the planet. The retrograde motion in the far tail plasma sheet is simply because the Y component of the 𝑗 × 𝐵 force points in the retrograde direction while the footpoints move on the dayside from dawn to dusk.
Although this circular motion in the far tail plasma sheet might not be able to complete in the available time (before reconnection severs the magnetic connection between the footpoints and the far tail plasma sheet), the phenomenon also influences plasma motion in the tail lobes closer to the planet. Field lines that have ionospheric footpoints above the critical latitude and middle points anchored in the far tail cut out elliptical paths from planes in the middle magnetosphere perpendicular to the X-axis (Fig. 2). The totality of these field lines forms two giant vortices, one in the northern and one in the southern magnetospheric lobe. The plasma velocities in these two vortices are more or less independent of each other. They approach the same value in the far tail plasma sheet (near the middle points of the field lines), but closer to the planet, in the magnetospheric lobes, the velocities in the northern lobe are determined by the motion patterns of the northern ionosphere, while the southern lobe is governed by the southern ionosphere. Since for a gas giant there could be significant, measurable differences between the velocities of ionospheric flows of mirror latitudes in the northern and southern hemispheres, the characteristic periodicities in the northern and southern magnetospheric lobes can also be different.
In contrast, +in a quasi-axisymmetric flow pattern, in which the L-shells move as +rigid bodies, the periodicities observable in the northern and southern +magnetospheric lobes should be the same. Otherwise, the differen- +tial rotation of the northern and southern parts of the same L-shell +would stretch its field lines indefinitely in the azimuthal direction +in the vicinity of the plasma sheet. Observations reporting slightly +different periodicities in the two lobes for various magnetospheric +phenomena (Andrews et al. 2008, 2010, 2012; Provan et al. 2012, +2019, 2021; Szego et al. 2012, 2013) suggest that the flow patterns +in the two lobes are independent. Another important aspect to be +considered about these planetary period oscillations (PPOs) is that +the vortex model decouples the observed periodicity and the plasma +speed. Since in the outer magnetosphere the plasma is significantly +slower than the speed required for rigid corotation, plasma rotating +around the planet with this lagging speed would show periodicities +much longer than the planetary period but this is not the case. On the +other hand, in the vortex model, the plasma rotates not around the +planet but around the vortex core. This path is significantly shorter +than that going all the way around the planet, and thus it requires +much lower speeds to keep up with the planetary periodicity. +Investigating the flow in these hypothetical lobe vortices shows +that the plasma exhibits a rapid pro-grade azimuthal motion in the +plasma sheet, which is slower and slower as we move away from the +magnetic equator towards the vortex core. There is a distance where +the azimuthal motion stops outright (in the core of the vortex), and if +we move even farther from the magnetic equator, we should observe +a retrograde motion of the tenuous plasma of the high-Z lobes. Such +a flow pattern should be observable in the plasma measurements. +Nemeth et al. (2015) already identified such a flow pattern in the ion +measurements of the Cassini spacecraft, but attributed the decreasing +azimuthal velocity to sub-corotation intensifying for larger L values, +and did not offer an explanation for the observed retrograde motion. +In the next section, we revisit these observations in more detail, +extending them to latitudinal as well as azimuthal flow patterns and +showing how the experimental data support the existence of giant +closed field line vortices in the tail lobes. +Another aspect of the theory is how the Dungey cycle and the open +field lines fit into this picture. If we simply suppose that the open +field lines are those closest to the poles (as in the terrestrial magneto- +sphere), we find that the field lines of the closed field vortices should +wind over and around the open field. In other words, the spatially +bounded volume of the closed field vortices should encompass the +spatially infinite open field lines, which is impossible. To resolve +this seeming controversy, we should consider the dynamics of the +process: how the Dungey cycle opens up the originally closed field +lines of the dayside magnetosphere. At the moment of the dayside +reconnection, the footpoints of the reconnecting closed field lines in- +tersect the ionosphere at the critical latitude. Once the reconnection +opened up the field line, the convection associated with the Dungey +cycle starts to move the footpoint towards the nightside, which at first +means a poleward motion. 
At the same time, the ionospheric plasma rotates around the planet, which adds an azimuthal component to the velocity. Cowley et al. (2004) estimate the speed of the poleward motion to be 200 m/s in the case of Saturn, while the rotation speed is around 500-700 m/s. Considering these two motions together, we find that the footpoints of the opened-up field line never penetrate the core of the polar cap. Before that could happen, planetary rotation moves the footpoint onto the nightside, where the Dungey cycle flow acts to move the footpoint farther away from the pole. In Fig. 3 the most extreme case of the footpoint motion is shown, where the field line is opened up at 6 a.m. local time, and thus can penetrate a good portion of the polar cap, but still not all the way to the pole. This means that the open field lines lie on (and near) the outer surface of the closed field vortices; it is the open field that winds around the closed field vortex, and not the other way around. Close to the poles reside the undisturbed cores of the open field vortices. This may be related to the decreased corotation lag observed near the poles (Stallard et al. 2019).
Figure 3. Schematic of the ionospheric plasma flow in the polar cap. Due to the fast rotation, the Dungey-cycle flow cannot penetrate the entire polar cap.
Thus the following picture describes the magnetospheric structure of giant planets, provided that they are fast rotators, there are significant plasma sources inside the magnetosphere, and the characteristic time, during which the far tail plasma sheet remains connected to the planet, is longer than the planetary period: Close to the planet, at equatorial and mid-latitudes, there still is a region where the plasma bound to closed field lines rotates around the planet. At a critical latitude, depending on solar wind conditions (most notably on the orientation of the IMF), the field line topology changes (open-closed field boundary). Near this latitude, the footpoints of open field lines circle around the planet. On even higher latitudes we again find closed field lines; here the giant closed field line vortices connect to the planet. The middle points of field lines in these vortices are anchored in the far tail plasma sheet. The rotation of the plasma in the vortices forms a distinctive velocity pattern in the tail lobes, characterized by retrograde plasma motion far away from the magnetic equator. The open field lines wind around the closed field vortices. For periods of rapid plasmoid formation and strong southward-directed IMF, the vortices may disappear, in which case the classic picture describes the magnetospheric structure. Although it is difficult to judge the global structure (magnetic connectedness) from local measurements, it will be shown in the next section that, in the case of Saturn, the measured flow patterns and field directions support the magnetic vortex picture.
3 DATA
Saturn is a fast-rotating gas giant with an extended magnetosphere. The Pioneer and Voyager probes performed the first in situ measurements in the Kronian magnetosphere, as they flew by the planet in 1979, 1980, and 1981. Further data were provided by the Cassini orbiter between 2004 and 2017.
Figure 4. Cross-tail map of the azimuthal plasma speed. The inset shows the pattern expected in a vortex pattern.
The analysis of these measurements revealed the unique and complex structure of Saturn's magnetosphere, the results of which are summarized in several review studies (Gombosi et al. 2009; Mitchell et al. 2009; Mauk et al. 2009).
In this section, we investigate in situ data measured by the Cassini spacecraft in the nightside outer magnetosphere of Saturn, including magnetic field data provided by the Cassini Magnetometer (MAG) (Dougherty et al. 2004) and the azimuthal and latitudinal components of the plasma velocities (H+ and water group ions) from the LANMOM numerical ion moments derived by Thomsen et al. (2010) from the measurements of the Cassini Plasma Spectrometer (CAPS) (Young et al. 2004).
Our analysis is based on data from 2006 and 2009 in the southern summer period, as the spacecraft in these 2 years spent a significant amount of time exploring the nightside outer magnetosphere of Saturn. We analyze orbit segments containing Titan encounters because these segments provide the best latitudinal scans of the tail region together with relatively small radial motion. We use data in which the Kronian local time (LT) is less than 3 hours from midnight and where the distance of Cassini from Saturn is 20±4 Saturn radii (𝑅𝑆).
Fig. 4 shows a cross-tail map of the azimuthal plasma speed, projected onto a plane perpendicular to the Sun-Saturn direction and crossing the tail at the position of Titan. Cold colors (dark blue and purple) represent retrograde plasma motion. The inset shows the velocity pattern expected if two vortices (one in each tail lobe) determine the plasma motion. We do not expect one-to-one correspondence, since the data set covering two complete Earth years suffers from significant time variability, the most prominent of which is the “flapping” of the Kronian magnetodisk (Simon et al. 2010; Arridge et al. 2011; Szego et al. 2012, 2013). Despite this, the overall resemblance is quite prominent: the model describes the measurements much better than the rotating shell picture, in which the map would feature continuous stripes parallel to the equator and no retrograde motion at all.
Fig. 5 shows a similar map of the latitudinal plasma speed, with the expected velocity pattern shown in the inset.
Figure 5. Cross-tail map of the latitudinal plasma speed. The inset shows the pattern expected in a vortex pattern.
It is clearly evident that the plasma moves away from the equatorial plane on the dawn side of the tail in accordance with our expectations. We expect plasma motion towards the equatorial plane on the dusk side of the tail. This is not so readily apparent in the measurements, although the data is compatible with this notion as well.
It is also apparent from both figures that the center of the plasma sheet is offset towards the northern lobe. This is a consequence of the magnetodisk being deformed by solar wind loading as shown by Arridge et al. (2008).
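Cross-tail maps of this kind are, in essence, bin-averaged projections of the measured velocity moments onto the Y-Z plane. The following minimal sketch shows one way such a map could be assembled with NumPy; the arrays y_rs, z_rs and v_phi are hypothetical stand-ins for the spacecraft position (in Saturn radii) and the azimuthal speed samples, not the actual Cassini/LANMOM data pipeline.

```python
import numpy as np

# Hypothetical inputs: spacecraft Y, Z positions (Saturn radii) and the
# azimuthal speed (km/s) of each retained sample; random placeholders here.
rng = np.random.default_rng(0)
n = 5000
y_rs = rng.uniform(-12, 12, n)
z_rs = rng.uniform(-12, 12, n)
v_phi = 60.0 * np.sin(np.arctan2(z_rs - 2.0, y_rs))  # toy vortex-like pattern

# Coarse cross-tail grid.
y_edges = np.linspace(-12, 12, 25)
z_edges = np.linspace(-12, 12, 25)

# Sum of speeds and number of samples per (Y, Z) bin; their ratio is the
# bin-averaged azimuthal speed, i.e. one pixel of the cross-tail map.
v_sum, _, _ = np.histogram2d(y_rs, z_rs, bins=[y_edges, z_edges], weights=v_phi)
counts, _, _ = np.histogram2d(y_rs, z_rs, bins=[y_edges, z_edges])
v_map = np.divide(v_sum, counts, out=np.full_like(v_sum, np.nan), where=counts > 0)

# Negative bins would mark retrograde flow in such a map.
valid = np.isfinite(v_map)
print("fraction of retrograde bins:", float(np.mean(v_map[valid] < 0)))
```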
The last supporting evidence is the distribution of the magnetic field direction during this time period. The root cause of the magnetospheric plasma motion in the rotation-dominated magnetospheres of giant planets is the force that the ionosphere exerts on the plasma through the magnetic field. In other words, the ionospheric motion drags the magnetospheric plasma by means of field-line tension. Since the bulk of the plasma resides in the plasma sheet, the force density – and thus the corresponding field line curvature – is the largest there. Evidently, the sign of the radial magnetic field reverses inside the current sheet, but the curvature force corresponding to this directional variation is not related to azimuthal forcing; it balances the centrifugal force. The force which accelerates the plasma in the azimuthal direction corresponds to a direction change of the azimuthal component of the magnetic field. For rigid corotation, the field lines lie inside a radial-latitudinal plane. If the plasma of the current sheet lags behind the ionospheric plasma, the magnetic field direction outside the magnetic equator deviates from the radial-latitudinal plane. Near the magnetic equator, the plasma is dragged in the pro-grade direction, which means that the field line deviation is also pro-grade with respect to the radial-latitudinal plane. At the magnetic equator, the field has only a latitudinal component. As a first approximation, we expect a linear increase of the azimuthal field component with the distance from the magnetic equator. That must be true for all models in the vicinity of the magnetic equator, but the model predictions deviate for larger distances. For co-rotating shells, there is a monotonic, although diminishing increase with distance, since the ionosphere always precedes the plasma sheet in the pro-grade direction. For the closed field line vortex model, after reaching a maximum, the azimuthal component starts to decrease. At the center of the vortex, where we encounter a field line that magnetically connects the plasma sheet to the noon meridian, the azimuthal field component is zero. Farther away from the magnetic equator, in the region of retrograde plasma motion, the field lines deviate from the radial-latitudinal plane in the retrograde direction, thus the azimuthal component reverses there. In Fig. 6 we show the azimuthal magnetic field as a function of the radial magnetic field component, the latter representing the distance from the magnetic equator (since it is a monotonic function of said distance, and we can eliminate the effects of magnetodisk flapping this way; see Nemeth et al. 2015, 2016). We can see a sinusoidal field behavior, which agrees with the closed field line vortex model – linear increase, decrease, zero, and a directional change as one moves away from the magnetic equator.
Figure 6. Azimuthal magnetic field 𝐵𝜑 as a function of the residual radial magnetic field 𝐵𝑟. As one moves away from the magnetic equator, 𝐵𝑟 increases monotonically, 𝐵𝜑 has a maximum and turns around in a vortex pattern.
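The two field-geometry signatures described above can be turned into simple, testable curve shapes. The sketch below is an illustrative comparison, not the actual analysis of this paper: it generates noisy samples following a sinusoidal Bphi(Br) relation (the vortex-like signature) and fits both a sine and a monotonic saturating model with scipy, showing how the residuals discriminate between the two pictures. The amplitudes, scales and noise level are arbitrary placeholder numbers.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two candidate shapes for the azimuthal field as a function of the residual
# radial field (a proxy for distance from the magnetic equator):
def vortex_model(br, a, k):
    # linear rise near the equator, maximum, return to zero, sign reversal
    return a * np.sin(k * br)

def shell_model(br, a, k):
    # monotonic, gradually saturating increase (sub-corotating rigid shells)
    return a * np.tanh(k * br)

# Synthetic "observations" drawn from the vortex-like shape plus noise
# (placeholder amplitudes in nT; Br spans both lobes).
rng = np.random.default_rng(1)
br = np.linspace(-4.0, 4.0, 200)
bphi_obs = 1.5 * np.sin(0.8 * br) + rng.normal(0.0, 0.2, br.size)

popt_v, _ = curve_fit(vortex_model, br, bphi_obs, p0=[1.0, 1.0])
popt_s, _ = curve_fit(shell_model, br, bphi_obs, p0=[1.0, 1.0])

rms_v = np.sqrt(np.mean((bphi_obs - vortex_model(br, *popt_v)) ** 2))
rms_s = np.sqrt(np.mean((bphi_obs - shell_model(br, *popt_s)) ** 2))
print(f"sinusoidal (vortex) fit rms: {rms_v:.3f}")
print(f"monotonic (shell) fit rms:  {rms_s:.3f}")
```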
It is an important and difficult question whether the plasma and field measurements discussed in this section correspond to closed field lines, or whether the measurements show the whirling of plasma in open field lobes – similar to the phenomenon first detected by Isbell et al. (1984) in the Terrestrial magnetosphere. In-situ magnetic field measurements cannot provide conclusive answers about the global topology of a field line. The plasma content provides important clues about connectedness, although it still remains a difficult question, even for Earth (Chisham et al. 2004), where a multitude of spacecraft provide a wealth of relevant data. On closed field lines, one expects to find the plasma of the internal magnetospheric sources; open field lines, on the other hand, should be depleted of magnetospheric plasma but may contain solar wind particles. The field lines discussed in our analysis contain a significant amount of magnetospheric plasma; even those characterized by retrograde motion carry heavy water-group ions. The plasma content at higher latitudes is more dilute than that of the central plasma sheet, but an exponential decrease is expected due to centrifugal confinement (Sergis et al. 2011; Persoon et al. 2020). The plasma density changes smoothly in the region in question, in accordance with an initially exponential fall-off. There are no detectable boundaries where the magnetospheric plasma abruptly disappears (see Nemeth et al. 2015). Thus the behavior of the thermal plasma supports the notion that these are indeed closed-field vortices.
If one examines, however, the hot electron population (see e.g. Bunce et al. 2008), it turns out that the high latitude field lines are empty of those few hundred eV electrons, which are present on lower latitudes. Due to their low mass and high energy, these hot electrons cannot be confined centrifugally – thus several authors interpret their absence as proof that these field lines are open. One possibility to resolve this conflict would be to take into account the effects of the polarization (ambipolar) electric field. In such a scenario, the centrifugally confined cold heavy ions exert an electric force on the hot electron component, achieving their confinement in the vicinity of the plasma sheet. One can even argue that the measurements show a deceleration of the hot electrons as one moves away from the equatorial regions – their energy distribution shifts towards lower energies as expected from an electrically confined particle population (see e.g. the first panel of fig 4 in Bunce et al. 2008). Unfortunately, models computing the magnitude of the ambipolar electric potential in the Kronian magnetosphere report more than one order of magnitude lower values than that necessary to confine the hot electrons: Maurice et al. (1997) report 30 V, and Persoon et al. (2020) report 10-20 V. It is outside of the scope of this paper to discuss numerical plasma models of the Kronian magnetosphere, but we should note that Maurice et al. (1997) find that the electric field can be as high as 80 V/𝑅𝑆 if there is a cold oxygen component present, but they left out this possibility from their final simulation because the pre-Cassini state of the art (Richardson 1995) did not know about cold water-group ions in the outer magnetosphere. Since there is a cold water component, and the field should be integrated over tens of 𝑅𝑆 there, it is entirely possible that a more accurate simulation would result in a potential drop of several hundred volts. The simulations of Persoon et al. (2020), on the other hand, focused entirely on the thermal plasma; their initial assumptions left out the hot electron component, which is one of the crucial ingredients to having a sizable ambipolar field. Thus the possibility of electric confinement of hot electrons cannot be ruled out.
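The order-of-magnitude tension described above can be spelled out in a few lines. The sketch below is a back-of-the-envelope illustration using only the numbers quoted in the text (a few-hundred-eV electron population, reported model potentials of 10-30 V, and a possible field of up to 80 V per Saturn radius); the integration path length and the decaying field profile are assumptions made purely for illustration.

```python
# Back-of-the-envelope check of electric confinement of hot electrons.
# An electron of kinetic energy W (in eV) is turned around along the field
# line by a potential drop of at least W volts.

electron_energy_ev = 300.0                 # "few hundred eV" population (text)
required_potential_v = electron_energy_ev  # eV -> V for a single charge

reported_model_potentials_v = (10.0, 20.0, 30.0)  # Persoon et al., Maurice et al.

# Assumed profile: a field of 80 V/Rs (the quoted upper limit with a cold
# oxygen/water component) decaying linearly to zero over an assumed 10 Rs.
peak_field_v_per_rs = 80.0
assumed_path_rs = 10.0
integrated_potential_v = 0.5 * peak_field_v_per_rs * assumed_path_rs  # triangle area

print(f"potential needed to confine a {electron_energy_ev:.0f} eV electron: "
      f"~{required_potential_v:.0f} V")
print(f"reported model potentials: {reported_model_potentials_v} V "
      f"(short by a factor of ~{required_potential_v / max(reported_model_potentials_v):.0f})")
print(f"crude integral of the assumed decaying 80 V/Rs field over "
      f"{assumed_path_rs:.0f} Rs: ~{integrated_potential_v:.0f} V")
```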
+In summary, the evidence coming from the hot electrons and that +coming from cold plasma seem to contradict each other, but some +effects can dampen the hot electron population on the high latitude +regions of closed field lines. Thus, based on the evidence of a signif- +icant amount of water-group ions being present on these field lines +and moving according to the vortex pattern (even performing retro- +grade motion), the observed vortices are most probably composed of +closed field lines. +4 CONCLUSIONS +We have presented theoretical considerations showing that the coex- +istence of several conditions eventuates the presence of giant closed +field line vortices in planetary magnetospheres. The first two con- +ditions are the presence of significant plasma sources inside the +magnetosphere and the planet being a fast rotator. Together these +conditions ensure that the magnetosphere possesses a centrifugally +forced dense equatorial plasma sheet. A third (although not com- +pletely independent) condition is that the time scale of the periodic +process, responsible for emptying the mass-loaded closed field lines, +is substantially larger than the planetary rotation period, and thus +the tail field lines remain connected to the planet during (at least) a +full rotation. The fourth and last condition, which is always satisfied +for the solar-wind-loaded magnetospheres of the solar system, is that +some effect breaks the axial symmetry of the system, introducing a +strong day-night asymmetry, and creating field lines, which connect +the dayside ionosphere to the nightside plasma sheet. These con- +ditions ensure that the ionospheric footpoints of certain field lines +perform (at least) a full rotation around the planet, while their middle +(equatorial) point is anchored in the sub-corotating nightside plasma +sheet. Instead of rotating around the planet, the plasma trapped on +these field lines rotates around a vortex line, which connects the pole +with a point in the nightside plasma sheet, thus forming two vortices, +one in each tail lobes. The motion patterns of these two vortices are +more or less independent, they are only connected in the far tail equa- +torial region. Thus the periodicities of the plasma motion in the two +lobes are independent of each other as well. This allows the plasma +properties in the nightside magnetosphere to have dual periodicities, +both close to the planetary period. +It turns out that the magnetosphere of Saturn satisfies the above- +mentioned conditions, and thus it is a valid question whether the Kro- +nian magnetosphere features giant closed field line vortices. Careful +examination of the plasma velocities and the magnetic field direction +reveals vortex patterns in the nightside outer magnetosphere of Sat- +urn. The supporting evidence includes retrograde plasma motion far +from the magnetic equator, flow towards and away from the plasma +sheet on the dusk- and dawnside respectively, independent periodic- +ities in the northern and southern lobes, and the field line geometry +showing vortex-like characteristics. +The newly discovered vortex pattern is either evidence of open- +field vortices (similar to that observed in the Terrestrial magnetotail +(Isbell et al. 1984)), or that of the closed field line vortices. Based on +thermal plasma measurements, we argue that there are closed-field +vortices in the Kronian magnetosphere. +In the vortex model, the Dungey and Vasyliunas cycles act some- +what differently. 
The Dungey flow does not penetrate the central +region of the polar cap due to the fast rotation of the ionosphere. +Thus the open field lines reside in the outer layer of the vortices. +The plasmoid-forming tail reconnection events necessary to close +the Vasyliunas cycle are rare, thus leaving the plasma time enough to +participate in the essentially 3-dimensional vortex-forming plasma +motion. +ACKNOWLEDGEMENTS +The author would like to thank Stan Cowley for the helpful discus- +sions. This work was supported by the ÚNKP-18-4 New National +Excellence Program of the Ministry of Human Capacities and the +János Bolyai Research Scholarship of the Hungarian Academy of +Sciences. +DATA AVAILABILITY +Calibrated magnetic field data and Cassini ion moments from the +Cassini mission are available from the NASA Planetary Data System +(https://pds.nasa.gov/). +REFERENCES +Acuna M. H., Behannon K. W., Connerney J. E. P., 1983, in Dessler A. J., ed., +, Physics of the Jovian magnetosphere. Cambridge Univ. Press, London, +pp 1–50 +Andrews D. J., Bunce E. J., Cowley S. W. H., Dougherty M. K., Provan G., +Southwood D. J., 2008, Journal of Geophysical Research, 113, 9205 +MNRAS 000, 1–7 (2022) + +Closed field line vortices in magnetospheres +7 +Andrews D. J., Cowley S. W. H., Dougherty M. K., Provan G., 2010, Journal +of Geophysical Research, 115, A04212 +Andrews D. J., Cowley S. W. H., Dougherty M. K., Larmy L., Provan G., +Southwood D. J., 2012, Journal of Geophysical Research, 117, 4224 +Arridge C. S., Russell C. T., Khurana K. K., Achilleos N., André N., Rymer +A. M., Dougherty M. K., Coates A. J., 2007, Geophysical Research +Letters, 34, L09108 +Arridge C. S., Khurana K. K., Russell C. T., Southwood D. J., Achilleos +N., Dougherty M. K., Coates A. J., Leinweber H. K., 2008, Journal of +Geophysical Research, 113, 8217 +Arridge C. S., et al., 2011, Journal of Geophysical Research, 116, 11205 +Brice N. M., Ioannidis G., 1970, Icarus, 13, 173 +Bunce E. J., Cowley S. W. H., Wild J. A., 2003, Annales Geophysicae, 21, +1709 +Bunce E. J., et al., 2008, Journal of Geophysical Research (Space Physics), +113, A09209 +Chisham G., Abel G., Milan S., 2004, Astronomy & Geophysics, 45, 3.36 +Cowley S. W. H., Bunce E. J., 2001, Planetary and Space Science, 49, 1067 +Cowley S. W. H., Bunce E. J., 2003, Annales Geophysicae, 21, 1691 +Cowley S. W. H., Bunce E. J., O’Rourke J. M., 2004, Journal of Geophysical +Research, 109, A05212 +Cowley S. W., Nichols J., Jackman C., 2015, Journal of Geophysical Research: +Space Physics, 120, 6347 +Dougherty M. K., et al., 2004, Space Science Reviews, 114, 331 +Dungey J. W., 1961, Physical Review Letters, 6, 47 +Gledhill J., 1967, Nature, 214, 155 +Gombosi T. I., Armstrong C. S., Khurana K. K., Krimigis S., M. K., N. P., A. +M., Thomsen M. F., 2009, in Dougherty M. K., Esposito L. W., Krimigis +S. M., eds, , Saturn from Cassini-Huygens. Springer, Netherlands, pp +203–255 +Hill T. W., 1974, Reviews of Geophysics, 12, 379 +Hill T. W., 1979, Journal of Geophysical Research: Space Physics, 84, 6554 +Hill T. W., 1980, Science, 207, 301 +Hill T. W., Dessler A. J., Michel F. C., 1974, Geophysical Research Letters, +1, 3 +Isbell J., Dessler A. J., Waite J. H., 1984, Journal of Geophysical Research, +89, 10716 +Mauk B. H., et al., 2009, in Dougherty M. K., Esposito L. W., Krimigis S. M., +eds, , Saturn from Cassini-Huygens. Springer, Netherlands, pp 281–331 +Maurice S., Blanc M., Prangé R., Sittler E. C., 1997, Planetary and Space +Science, 45, 1449 +Mead G. D., 1964, J. Geophys. 
Res., 69, 1181
Michel F. C., Sturrock P. A., 1974, Planetary and Space Science, 22, 1501
Mitchell D. G., Carbary J. F., Cowley S. W. H., Hill T. W., Zarka P., 2009, in Dougherty M. K., Esposito L. W., Krimigis S. M., eds, Saturn from Cassini-Huygens. Springer, Netherlands, pp 257–279, doi:10.1007/978-1-4020-9217-6_10
Nemeth Z., et al., 2011, Journal of Geophysical Research, 116, 9212
Nemeth Z., et al., 2015, Annales Geophysicae, 33, 1195
Nemeth Z., Szego K., Foldy L., Cowley S. W. H., Provan G., M. T., 2016, Planetary and Space Science, 130, 54
Persoon A. M., Gurnett D. A., Kurth W. S., Hospodarsky G. B., Groene J. B., Canu P., Dougherty M. K., 2005, Geophysical Research Letters, 32, L23105
Persoon A. M., et al., 2020, Journal of Geophysical Research (Space Physics), 125, e27545
Provan G., Andrews D. J., Arridge C. S., Coates A. J., Cowley S. W. H., Cox G., Dougherty M. K., Jackman C. M., 2012, Journal of Geophysical Research, 117, 1209
Provan G., Lamy L., Cowley S. W. H., Bunce E. J., 2019, Journal of Geophysical Research: Space Physics, 124, 1157
Provan G., Cowley S. W. H., Bunce E. J., Milan S. E., Persoon A. M., Gurnett D. A., 2021, Journal of Geophysical Research: Space Physics, 126, e2021JA029332
Richardson J. D., 1995, Geophysical Research Letters, 22, 1177
Sergis N., et al., 2011, Journal of Geophysical Research, 116, A04203
Simon S., Wennmacher A., Neubauer F. M., Bertucci C. L., Kriegel H., Saur J., Russell C. T., Dougherty M. K., 2010, Planetary and Space Science, 58, 1230
Stallard T. S., Miller S., Trafton L. M., Geballe T. R., Joseph R. D., 2004, Icarus, 167, 204
Stallard T. S., et al., 2019, Philosophical Transactions of the Royal Society of London Series A, 377, 20180405
Szego K., Nemeth Z., Erdos G., Foldy L., Bebesi Z., Thomsen M., Delapp D., 2012, Journal of Geophysical Research, 117, 9225
Szego K., Nemeth Z., Foldy L., Cowley S. W. H., Provan G., 2013, Journal of Geophysical Research, 118, 2883
Thomsen M. F., et al., 2010, Journal of Geophysical Research, 115, A10220
Vasyliunas V. M., 1983, in Dessler A., ed., Physics of the Jovian magnetosphere. Cambridge Univ. Press, London, pp 395–453
Young D. T., Berthelier J. J., Blanc M., et al., 2004, Space Science Reviews, 114, 1
This paper has been typeset from a TEX/LATEX file prepared by the author.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Received YYY;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' in original form ZZZ ABSTRACT In a rotation-dominated magnetosphere, there is a region where closed field lines rotate around the planet, and also a region where the open field lines stretch away from the planet, forming the lobes of the magnetotail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' This paper shows that there could be a third, significantly different region, where the closed field lines form twisted vortex structures anchored in the magnetotail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Such patterns form when there are significant plasma sources inside the magnetosphere and the time scale of the plasmoid formation process is substantially larger than the planetary rotation period.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' In the presence of vortices, the Dungey and Vasyliunas cycles act differently.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The Dungey flow does not penetrate the central region of the polar cap.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Tail reconnection events are rare, thus leaving the plasma time enough to participate in the essentially 3-dimensional vortex-forming plasma motion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The above conditions are fulfilled for Saturn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' We discovered vortex-like patterns in the plasma and magnetic field data measured by the Cassini spacecraft in the nightside magnetosphere of Saturn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The plasma whirling around in these vortices never reaches the dayside, instead, it performs a retrograde motion in the high latitude regions of the magnetotail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Low-energy plasma data suggest that the observed patterns correspond to the closed field line vortices.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Key words: magnetic fields – plasmas – planets and satellites: general – planets and satellites: magnetic fields 1 INTRODUCTION It is generally accepted that the magnetospheres of giant planets consist of two topologically distinct regions: the region of closed field in the equatorial and mid-latitudes, where both ends of the field lines are connected to the planet;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' and the region of open field, where the field lines have only one foot-point anchored in the ionosphere, either in the northern or in the southern polar cap.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The plasma content of the closed field region is thought to co-rotate with the planet, exhibiting some extent of lag (sub-corotation) (Bunce et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' 2003;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Cowley & Bunce 2003;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Cowley et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' 2004).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The field and plasma form more-or-less axisymmetric shells in this region, similar to the L- shells of a (quasi-)dipole field configuration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' According to the current models, these shells move as rigid bodies: their angular velocity is constant along the field lines from the northern hemisphere through the magnetic equator to the southern hemisphere.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The amount of lag (sub-corotation) is a function of the latitude only (or equivalently: a function of the flux-function describing the axisymmetric field (Cowley & Bunce 2003;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Cowley et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' 2004), and the gradient of this function determines the nature of the magnetosphere-ionosphere interaction, for example, the position and properties of the auroral ovals (Cowley & Bunce 2001;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Bunce et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' 2003;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Cowley & Bunce 2003;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Cowley et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' 2004).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The foot-points of the open field lines also (sub)co-rotate with the planet;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' this motion twists the field lines into a spiral pattern (Isbell et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' 1984;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Vasyliunas 1983).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The mechanism of this field line twist and the formation of the Parker spiral can be described in a common framework, as was shown by Vasyliunas (1983).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' ★ E-mail: nemeth.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content='zoltan@wigner.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content='hu Two important properties differentiate giant planet magneto- spheres from that of terrestrial planets: their fast rotation rate and the existence of intense plasma sources inside the magnetospheres.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Due to their fast rotation, these are entirely rotation-dominated mag- netospheres (Brice & Ioannidis 1970);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' they lack the convection- dominated closed field region found outside the plasmasphere in the terrestrial magnetosphere.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The intense plasma sources inflict signifi- cant loading on the field lines, which (together with the fast rotation) leads to field line deformation and the formation of a dense plasma sheet near the magnetic equator (see e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content='g.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Gledhill 1967;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Hill 1974;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Hill et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' 1974;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Acuna et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' 1983;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Persoon et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' 2005;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Gombosi et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' 2009;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Arridge et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' 2007, 2008;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Nemeth et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' 2011).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' It also explains the above-mentioned co-rotation lag, since the ionospheric interaction needs to accelerate the new material continuously intro- duced to the field lines (Hill 1979, 1980).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The loading and stretching of field lines also give rise to a so-called planetary wind (Hill 1974;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Michel & Sturrock 1974), in which the loaded field lines undergo centrifugal instability and move outward from the planet.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' This is the basis of the Vasyliunas cycle (Vasyliunas 1983), in which a plasmoid forming reconnection process removes the excess plasma from the loaded closed field lines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The reconnection also shortens the emp- tied field lines, which then return to the vicinity of the planet thus enabling the cycle to start over.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' In the open field region, the Dungey cycle (Dungey 1961) governs the dynamics, in which closed field lines at the dayside magnetopause are opened up and connected to interplanetary magnetic field lines of the solar wind in a reconnection process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' These opened-up field lines are convected tailward by the solar wind flow and form the northern and southern open lobes of the magnetospheric tail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' A tail- © 2022 The Authors arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content='04930v1 [astro-ph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content='EP] 12 Jan 2023 2 Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Nemeth side reconnection and the subsequent motion of the newly closed filed lines towards the dayside close the cycle.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The above-summarized picture of giant planet magnetospheres rests on several implicit assumptions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The first is that the magne- tosphere is essentially axisymmetric – meaning that the obvious deviations from axial symmetry do not essentially alter the nature of the flow patterns around the planet.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Another assumption is that the flow pattern in the ionosphere alone determines the plasma flow everywhere in the magnetosphere.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' A third important assumption is that (in a steady state) in every planetary period the mass-loaded closed field lines are emptied by some process, and thus can revert to the essentially co-rotating behavior of empty field lines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' In this paper, we will examine the validity of the above assump- tions, and the consequences of deviations from the assumed behavior for the global structure of giant planet magnetospheres.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Since these consequences have experimentally verifiable aspects, we compare those with data measured by the Cassini spacecraft while orbiting planet Saturn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' 2 THEORY Planetary magnetospheres are manifestly not axisymmetric.' 
The vectors of the solar wind velocity and the planetary magnetic dipole represent two nonparallel preferred directions, the existence of which is incompatible with axial symmetry. (In theory, the planetary dipole may lie in the ecliptic plane, but even in such a system the two vectors coincide only at summer and winter solstice.) In other words, the effect of the solar wind deforms the magnetosphere, which manifests as a compression on the dayside and a relative elongation on the nightside. If the nature of the plasma flow in the magnetosphere remains essentially the same as in the axisymmetric case, all the closed field lines still rotate around the planet, although they are somewhat deformed during their round tour – compressed on the dayside and expanded on the nightside. Is this really the only effect of the symmetry breaking on closed field lines? In order to answer this question, we need to investigate the geometry of open and closed field lines in more detail.
It is generally assumed that those lines which are anchored at the dayside poleward of the cusps are open field lines. These are the field lines that initially point towards the Sun, but bend back (tailward) after a while. (In stricter terms, the sign of the X component of their tangent vector changes in the polar region of the magnetosphere in a coordinate system in which the X-axis is parallel to the Sun-planet line.) Are they necessarily open field lines? If we start with a simple planetary dipole and disturb it only with Chapman-Ferraro currents, which constrain the planetary field in a cavity inside a perfectly conducting flow, we find that the asymmetry is already present, but all the field lines are closed (Mead 1964). Such a configuration can be seen in fig. 4 of Mead (1964).
There is a critical latitude, above which field lines originating on the dayside pass over the poles and cross the magnetic equator on the nightside. If we add a current sheet (finite in the X direction, very narrow in the direction (Z) of the dipole moment vector, and infinite in the direction (Y) perpendicular to both), we find that the field lines are still closed (Fig. 1).

Figure 1. Magnetic field of a dipole with Chapman-Ferraro currents and a tail current sheet. All field lines connected to the planet are closed. Planetary rotation moves the field line in position 1 to position 5, and vice versa.

If we add a homogeneous northward field (representing the Interplanetary Magnetic Field (IMF)), the field lines are still closed, and the volume of the magnetosphere is more constrained. Only adding a southward IMF will create open field lines. If we consider the dynamics of this process, we arrive at the Dungey cycle: the southward-directed IMF field lines arriving at the magnetopause reconnect with the closed field lines, and thus open magnetospheric field lines are created. In other words, it is the Dungey cycle which dynamically creates the open field. Without it (if, e.g., the IMF remains northward for a longer time) the magnetosphere is closed, which includes the bent-back field lines originating on the dayside above the critical latitude and closing on the nightside. What happens with these field lines during the rotation of the planet?
Since the footpoints of these field lines are anchored in the ionosphere, these footpoints circle around the planet together with the ionospheric plasma. Although the plasma in and near the polar cap can exhibit significant sub-corotation (Stallard et al. 2004; Stallard et al. 2019), it is impossible for the footpoints not to rotate with the planet. However small the pro-grade plasma motion in the ionosphere, it will carry the footpoints around the planet. On the contrary, the middle point of these field lines anchored in the dense equatorial plasma sheet will always remain on the nightside (even when the footpoints lie on the noon meridian). This suggests a plasma motion fundamentally different from the axisymmetric case: since the middle points remain on the nightside while the northern and southern ionospheric footpoints travel around the planet, and the field lines are continuous through this motion, there must be parts of the field line which (instead of rotating around the planet) sweep over the poles (see Fig. 1). This kind of motion, of course, requires that the field line remain continuous during most of a planetary rotation (or more), and thus the plasmoid-forming tail reconnection rate should be less than 1 per planetary day. As was shown by Cowley et al. (2015), this really is the case for Saturn, where the characteristic time between plasmoids is 35-45 h, which is much longer than the 10.5 h planetary rotation period. (Notice that the observed flow violates another of the above-mentioned common assumptions – namely that in every planetary period the mass-loaded closed field lines are disconnected from the planet.)
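Stated as a simple ratio (our arithmetic restatement of the numbers quoted above, not an additional result), the condition is comfortably met:
\[
\frac{\tau_{\mathrm{plasmoid}}}{T_{\mathrm{rot}}} \approx \frac{35\text{--}45\ \mathrm{h}}{10.5\ \mathrm{h}} \approx 3\text{--}4,
\]
so a typical closed tail field line remains connected to the planet for roughly three to four full rotations before a plasmoid-forming reconnection severs it.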
The planetary rotation will move the field line from position 1 in Fig. 1 to position 5, and vice versa. This means that some part of field line 5 (the part close to the middle point, which lies inside and near the equatorial plasma sheet) must first move dawnward, move away from the planet after that, travel duskward in the far tail, then move towards the planet near the dusk side of the magnetopause. It is possible that the middle point cannot complete this circle before a tail reconnection severs the field line, since plasma motion in the far tail can be even slower than that indicated by the angular velocity of the ionospheric footpoints. Still, by and large, it seems that the middle points of closed field lines, which originate from high latitudes of the ionosphere, circle around a point in the far tail, and not around the planet.

Figure 2. Schematic of the cross-tail plasma flows carrying the field lines in the closed-field vortices.

How is that possible? First, we should note that the force which balances the centrifugal force that the dense equatorial plasma exerts on the field line is the 𝑗 × 𝐵 force associated with the sharp bend in the field line inside the current sheet (see Fig. 1). When field line 5 moves towards the dayside, the tight horizontal V shape, formed by the loaded field line when its footpoints are near midnight, will open up to be able to move above and below the polar regions. This lowers the curvature force (magnetic tension) exerted by that field line in the plasma sheet region, which upsets the balance of centrifugal and magnetic forces.
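In schematic form (our notation; the paper does not write the balance out explicitly), the equilibrium that is being upset can be summarized as
\[
\rho\,\omega^2 r \;\sim\; \frac{B^2}{\mu_0 R_c},
\]
where $\rho$ is the plasma sheet mass density, $\omega$ the local angular velocity of the plasma, $r$ the distance from the rotation axis, $B$ the field strength and $R_c$ the curvature radius of the bend in the current sheet. Opening up the V shape increases $R_c$, weakens the tension term on the right, and lets the centrifugal term drive the plasma outward.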
Thus the plasma will move away from the planet under centrifugal forcing. Similarly, when the footpoints move towards the nightside near dusk, the northern and southern parts of the field line move closer together, enhancing the magnetic tension – thus the plasma in the plasma sheet pierced by this field line moves towards the planet. The retrograde motion in the far tail plasma sheet is simply because the Y component of the 𝑗 × 𝐵 force points in the retrograde direction while the footpoints move on the dayside from dawn to dusk.
Although this circular motion in the far tail plasma sheet might not be able to complete in the available time (before reconnection severs the magnetic connection between the footpoints and the far tail plasma sheet), the phenomenon also influences plasma motion in the tail lobes closer to the planet. Field lines that have ionospheric footpoints above the critical latitude and middle points anchored in the far tail cut out elliptical paths from planes in the middle magnetosphere perpendicular to the X-axis (Fig. 2). The totality of these field lines forms two giant vortices, one in the northern and one in the southern magnetospheric lobe. The plasma velocities in these two vortices are more or less independent of each other. They approach the same value in the far tail plasma sheet (near the middle points of the field lines), but closer to the planet, in the magnetospheric lobes, the velocities in the northern lobe are determined by the motion patterns of the northern ionosphere, while the southern lobe is governed by the southern ionosphere. Since for a gas giant there could be significant, measurable differences between the velocities of ionospheric flows of mirror latitudes in the northern and southern hemispheres, the characteristic periodicities in the northern and southern magnetospheric lobes can also be different.
In contrast, in a quasi-axisymmetric flow pattern, in which the L-shells move as rigid bodies, the periodicities observable in the northern and southern magnetospheric lobes should be the same. Otherwise, the differential rotation of the northern and southern parts of the same L-shell would stretch its field lines indefinitely in the azimuthal direction in the vicinity of the plasma sheet. Observations reporting slightly different periodicities in the two lobes for various magnetospheric phenomena (Andrews et al. 2008, 2010, 2012; Provan et al. 2012, 2019, 2021; Szego et al. 2012, 2013) suggest that the flow patterns in the two lobes are independent.
Another important aspect to be considered about these planetary period oscillations (PPOs) is that the vortex model decouples the observed periodicity and the plasma speed. Since in the outer magnetosphere the plasma is significantly slower than the speed required for rigid corotation, plasma rotating around the planet with this lagging speed would show periodicities much longer than the planetary period, but this is not the case. On the other hand, in the vortex model, the plasma rotates not around the planet but around the vortex core. This path is significantly shorter than that going all the way around the planet, and thus it requires much lower speeds to keep up with the planetary periodicity.
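To put a rough number on this (an illustrative estimate of ours, using a Saturn radius of $R_S \approx 60{,}268$ km and the 10.5 h period quoted above), rigid corotation at a radial distance of 20 $R_S$ would require
\[
v_{\mathrm{rigid}} \approx \frac{2\pi \times 20\,R_S}{10.5\ \mathrm{h}} \approx 200\ \mathrm{km\,s^{-1}},
\]
whereas a plasma element circling a vortex core along a path that is, say, several times shorter keeps up with the same period at a correspondingly smaller speed.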
Investigating the flow in these hypothetical lobe vortices shows that the plasma exhibits a rapid pro-grade azimuthal motion in the plasma sheet, which is slower and slower as we move away from the magnetic equator towards the vortex core. There is a distance where the azimuthal motion stops outright (in the core of the vortex), and if we move even farther from the magnetic equator, we should observe a retrograde motion of the tenuous plasma of the high-Z lobes. Such a flow pattern should be observable in the plasma measurements. Nemeth et al. (2015) already identified such a flow pattern in the ion measurements of the Cassini spacecraft, but attributed the decreasing azimuthal velocity to sub-corotation intensifying for larger L values, and did not offer an explanation for the observed retrograde motion. In the next section, we revisit these observations in more detail, extending them to latitudinal as well as azimuthal flow patterns and showing how the experimental data support the existence of giant closed field line vortices in the tail lobes.
Another aspect of the theory is how the Dungey cycle and the open field lines fit into this picture. If we simply suppose that the open field lines are those closest to the poles (as in the terrestrial magnetosphere), we find that the field lines of the closed field vortices should wind over and around the open field. In other words, the spatially bounded volume of the closed field vortices should encompass the spatially infinite open field lines, which is impossible. To resolve this apparent contradiction, we should consider the dynamics of the process: how the Dungey cycle opens up the originally closed field lines of the dayside magnetosphere. At the moment of the dayside reconnection, the footpoints of the reconnecting closed field lines intersect the ionosphere at the critical latitude. Once the reconnection has opened up the field line, the convection associated with the Dungey cycle starts to move the footpoint towards the nightside, which at first means a poleward motion. At the same time, the ionospheric plasma rotates around the planet, which adds an azimuthal component to the velocity.
Cowley et al. (2004) estimate the speed of the poleward motion to be 200 m/s in the case of Saturn, while the rotation speed is around 500-700 m/s. Considering these two motions together, we find that the footpoints of the opened-up field line never penetrate the core of the polar cap. Before that could happen, planetary rotation moves the footpoint onto the nightside, where the Dungey cycle flow acts to move the footpoint farther away from the pole. In Fig. 3 the most extreme case of the footpoint motion is shown, where the field line is opened up at 6 a.m. local time, and thus can penetrate a good portion of the polar cap, but still not all the way to the pole. This means that the open field lines lie on (and near) the outer surface of the closed field vortices; it is the open field that winds around the closed field vortex, and not the other way around. Close to the poles reside the undisturbed cores of the open field vortices. This may be related to the decreased corotation lag observed near the poles (Stallard et al. 2019).

Figure 3. Schematic of the ionospheric plasma flow in the polar cap. Due to the fast rotation, the Dungey-cycle flow cannot penetrate the entire polar cap.
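As a quick plausibility check of this geometric argument, the following toy integration (ours, not from the paper) combines the two quoted speeds; the 15° starting colatitude for the critical latitude and the purely kinematic, constant-speed treatment are assumptions made only for illustration.

import numpy as np

# Toy estimate of how deep an opened field-line footpoint penetrates the polar cap
# before planetary rotation carries it onto the nightside.
# d(colat)/dt = -v_pole / R  and  d(azimuth)/dt = v_azim / (R * sin(colat)),
# so R cancels and d(azimuth) = -(v_azim / v_pole) * d(colat) / sin(colat),
# which integrates to an expression in tan(colat / 2).

v_pole = 200.0             # poleward footpoint speed [m/s], from Cowley et al. (2004)
v_azim = 600.0             # ionospheric rotation speed [m/s], middle of the quoted 500-700 m/s
colat0 = np.radians(15.0)  # assumed colatitude of the critical latitude (illustrative)
sweep = np.pi              # opened at 6 a.m.: half a turn in azimuth to reach the nightside

colat_min = 2.0 * np.arctan(np.tan(colat0 / 2.0) * np.exp(-sweep * v_pole / v_azim))
print(f"minimum colatitude reached: {np.degrees(colat_min):.1f} deg")
# prints roughly 5 deg: a good portion of the polar cap is crossed, but not the pole itself.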
Thus the following picture describes the magnetospheric structure of giant planets, provided that they are fast rotators, there are significant plasma sources inside the magnetosphere, and the characteristic time, during which the far tail plasma sheet remains connected to the planet, is longer than the planetary period: Close to the planet, at equatorial and mid-latitudes, there still is a region where the plasma bound to closed field lines rotates around the planet. At a critical latitude, depending on solar wind conditions (most notably on the orientation of the IMF), the field line topology changes (open-closed field boundary). Near this latitude, the footpoints of open field lines circle around the planet. At even higher latitudes we again find closed field lines; here the giant closed field line vortices connect to the planet. The middle points of field lines in these vortices are anchored in the far tail plasma sheet. The rotation of the plasma in the vortices forms a distinctive velocity pattern in the tail lobes, characterized by retrograde plasma motion far away from the magnetic equator. The open field lines wind around the closed field vortices. For periods of rapid plasmoid formation and strong southward-directed IMF, the vortices may disappear, in which case the classic picture describes the magnetospheric structure. Although it is difficult to judge the global structure (magnetic connectedness) from local measurements, it will be shown in the next section that, in the case of Saturn, the measured flow patterns and field directions support the magnetic vortex picture.

3 DATA

Saturn is a fast-rotating gas giant with an extended magnetosphere. The Pioneer and Voyager probes performed the first in situ measurements in the Kronian magnetosphere, as they flew by the planet in 1979, 1980, and 1981.
Further data were provided by the Cassini orbiter between 2004 and 2017. The analysis of these measurements revealed the unique and complex structure of Saturn’s magnetosphere, the results of which are summarized in several review studies (Gombosi et al. 2009; Mitchell et al. 2009; Mauk et al. 2009).

Figure 4. Cross-tail map of the azimuthal plasma speed. The inset shows the pattern expected in a vortex pattern.

In this section, we investigate in situ data measured by the Cassini spacecraft in the nightside outer magnetosphere of Saturn, including magnetic field data provided by the Cassini Magnetometer (MAG) (Dougherty et al. 2004) and the azimuthal and latitudinal components of the plasma velocities (H+ and water group ions) from the LANMOM numerical ion moments derived by Thomsen et al. (2010) from the measurements of the Cassini Plasma Spectrometer (CAPS) (Young et al. 2004). Our analysis is based on data from 2006 and 2009 in the southern summer period, as the spacecraft in these 2 years spent a significant amount of time exploring the nightside outer magnetosphere of Saturn. We analyze orbit segments containing Titan encounters because these segments provide the best latitudinal scans of the tail region together with relatively small radial motion.
We use data in which the Kronian local time (LT) is less than 3 hours from midnight and where the distance of Cassini from Saturn is 20±4 Saturn radii (𝑅𝑆). Fig. 4 shows a cross-tail map of the azimuthal plasma speed, projected onto a plane perpendicular to the Sun-Saturn direction and crossing the tail at the position of Titan. Cold colors (dark blue and purple) represent retrograde plasma motion. The inset shows the velocity pattern expected if two vortices (one in each tail lobe) determine the plasma motion. We do not expect one-to-one correspondence, since the data set covering two complete Earth years suffers from significant time variability (the most prominent of which is the “flapping” of the Kronian magnetodisk (Simon et al. 2010; Arridge et al. 2011; Szego et al. 2012, 2013)). Despite this, the overall resemblance is quite prominent; the model describes the measurements much better than the rotating shell picture, in which the map would feature continuous stripes parallel to the equator and no retrograde motion at all.
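The selection and mapping just described can be summarized in a short sketch (ours, not the authors' pipeline); the column names 'lt_hours', 'r_rs', 'y_rs', 'z_rs' and 'v_phi_kms', as well as the grid extents, are hypothetical placeholders for however the ion moments are actually stored.

import numpy as np
import pandas as pd

def select_tail_passes(df: pd.DataFrame) -> pd.DataFrame:
    """Keep samples within 3 h of local midnight and at 20 +/- 4 R_S from Saturn."""
    dlt = np.minimum(df["lt_hours"] % 24.0, 24.0 - df["lt_hours"] % 24.0)  # hours from midnight
    return df[(dlt < 3.0) & (np.abs(df["r_rs"] - 20.0) < 4.0)]

def cross_tail_map(df: pd.DataFrame, nbins: int = 20) -> np.ndarray:
    """Bin the azimuthal speed onto a Y-Z grid, as in the cross-tail maps of Figs 4 and 5."""
    y_edges = np.linspace(-15.0, 15.0, nbins + 1)  # assumed plotting range [R_S]
    z_edges = np.linspace(-10.0, 10.0, nbins + 1)
    sums, _, _ = np.histogram2d(df["y_rs"], df["z_rs"], bins=[y_edges, z_edges],
                                weights=df["v_phi_kms"])
    counts, _, _ = np.histogram2d(df["y_rs"], df["z_rs"], bins=[y_edges, z_edges])
    with np.errstate(invalid="ignore", divide="ignore"):
        return sums / counts   # mean azimuthal speed per bin; NaN where no samples fall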
Figure 5. Cross-tail map of the latitudinal plasma speed. The inset shows the pattern expected in a vortex pattern.

Fig. 5 shows a similar map of the latitudinal plasma speed, with the expected velocity pattern shown in the inset. It is clearly evident that the plasma moves away from the equatorial plane on the dawn side of the tail, in accordance with our expectations. We expect plasma motion towards the equatorial plane on the dusk side of the tail. This is not so readily apparent in the measurements, although the data are compatible with this notion as well. It is also apparent from both figures that the center of the plasma sheet is offset towards the northern lobe. This is a consequence of the magnetodisk being deformed by solar wind loading, as shown by Arridge et al. (2008).
The last supporting evidence is the distribution of the magnetic field direction during this time period. The root cause of the magnetospheric plasma motion in the rotation-dominated magnetospheres of giant planets is the force that the ionosphere exerts on the plasma through the magnetic field. In other words, the ionospheric motion drags the magnetospheric plasma by means of field-line tension. Since the bulk of the plasma resides in the plasma sheet, the force density – and thus the corresponding field line curvature – is the largest there. Evidently, the sign of the radial magnetic field reverses inside the current sheet, but the curvature force corresponding to this directional variation is not related to azimuthal forcing; it balances the centrifugal force.
The force which accelerates the plasma in the azimuthal direction corresponds to a direction change of the azimuthal component of the magnetic field. For rigid corotation, the field lines lie inside a radial-latitudinal plane. If the plasma of the current sheet lags behind the ionospheric plasma, the magnetic field direction outside the magnetic equator deviates from the radial-latitudinal plane. Near the magnetic equator, the plasma is dragged in the pro-grade direction, which means that the field line deviation is also pro-grade with respect to the radial-latitudinal plane. At the magnetic equator, the field has only a latitudinal component. As a first approximation, we expect a linear increase of the azimuthal field component with the distance from the magnetic equator. That must be true for all models in the vicinity of the magnetic equator, but the model predictions deviate for larger distances. For co-rotating shells, there is a monotonic, although diminishing increase with distance, since the ionosphere always precedes the plasma sheet in the pro-grade direction. For the closed field line vortex model, after reaching a maximum, the azimuthal component starts to decrease. At the center of the vortex, where we encounter a field line that magnetically connects the plasma sheet to the noon meridian, the azimuthal field component is zero.

Figure 6. Azimuthal magnetic field 𝐵𝜑 as a function of the residual radial magnetic field 𝐵𝑟. As one moves away from the magnetic equator, 𝐵𝑟 increases monotonically, 𝐵𝜑 has a maximum and turns around in a vortex pattern.
Farther away from the magnetic equator, in the region of retrograde plasma motion, the field lines deviate from the radial-latitudinal plane in the retrograde direction, thus the azimuthal component reverses there. In Fig. 6 we show the azimuthal magnetic field as a function of the radial magnetic field component, the latter representing the distance from the magnetic equator (since it is a monotonic function of said distance, and we can eliminate the effects of magnetodisk flapping this way; see Nemeth et al. 2015, 2016). We can see a sinusoidal field behavior, which agrees with the closed field line vortex model – linear increase, decrease, zero, and a directional change as one moves away from the magnetic equator.
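The two qualitative predictions can be caricatured by simple closed-form curves (a schematic parameterization of ours, not fitted models from the paper):
\[
B_\varphi^{\mathrm{shell}}(B_r) \propto \tanh\!\left(\frac{B_r}{B_0}\right),
\qquad
B_\varphi^{\mathrm{vortex}}(B_r) \propto \sin\!\left(\pi\,\frac{B_r}{B_r^{\mathrm{core}}}\right),
\]
where $B_0$ and $B_r^{\mathrm{core}}$ are free scale parameters, the latter being the residual radial field at the vortex core. The first curve rises monotonically and saturates; the second rises, peaks, returns to zero at the vortex core and reverses sign beyond it, which is the sinusoidal behavior seen in Fig. 6.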
It is an important and difficult question whether the plasma and field measurements discussed in this section correspond to closed field lines, or whether the measurements show the whirling of plasma in open field lobes – similar to the phenomenon first detected by Isbell et al. (1984) in the Terrestrial magnetosphere. In-situ magnetic field measurements cannot provide conclusive answers about the global topology of a field line. The plasma content provides important clues about connectedness, although it still remains a difficult question, even for Earth (Chisham et al. 2004), where a multitude of spacecraft provide a wealth of relevant data. On closed field lines, one expects to find the plasma of the internal magnetospheric sources; open field lines, on the other hand, should be depleted of magnetospheric plasma but may contain solar wind particles. The field lines discussed in our analysis contain a significant amount of magnetospheric plasma; even those characterized by retrograde motion carry heavy water-group ions. The plasma content at higher latitudes is more dilute than that of the central plasma sheet, but an exponential decrease is expected due to centrifugal confinement (Sergis et al. 2011; Persoon et al. 2020). The plasma density changes smoothly in the region in question, in accordance with an initially exponential fall-off. There are no detectable boundaries where the magnetospheric plasma abruptly disappears (see Nemeth et al. 2015). Thus the behavior of the thermal plasma supports the notion that these are indeed closed-field vortices.
If one examines, however, the hot electron population (see e.g. Bunce et al. 2008), it turns out that the high latitude field lines are empty of those few hundred eV electrons, which are present at lower latitudes. Due to their low mass and high energy, these hot electrons cannot be confined centrifugally – thus several authors interpret their absence as proof that these field lines are open. One possibility to resolve this conflict would be to take into account the effects of the polarization (ambipolar) electric field.
In such a scenario, the centrifugally confined cold heavy ions exert an electric force on the hot electron component, achieving their confinement in the vicinity of the plasma sheet. One can even argue that the measurements show a deceleration of the hot electrons as one moves away from the equatorial regions – their energy distribution shifts towards lower energies, as expected from an electrically confined particle population (see e.g. the first panel of fig. 4 in Bunce et al. 2008). Unfortunately, models computing the magnitude of the ambipolar electric potential in the Kronian magnetosphere report more than one order of magnitude lower values than that necessary to confine the hot electrons. (Maurice et al. (1997) report 30 V, and Persoon et al. (2020) report 10-20 V.) It is outside of the scope of this paper to discuss numerical plasma models of the Kronian magnetosphere, but we should note that Maurice et al. (1997) find that the electric field can be as high as 80 V/𝑅𝑆 if there is a cold oxygen component present, but they left out this possibility from their final simulation because the pre-Cassini state of the art (Richardson 1995) did not know about cold water-group ions in the outer magnetosphere. Since there is a cold water component, and the field should be integrated over tens of 𝑅𝑆 there, it is entirely possible that a more accurate simulation would result in a potential drop of several hundred Volts.
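For orientation (our back-of-the-envelope multiplication; the path length is an assumed illustrative value, not one quoted by the models), a field of this strength acting over even a modest fraction of the tail already yields
\[
\Delta\Phi \;\sim\; 80\ \mathrm{V}/R_S \times 5\ R_S \;\approx\; 400\ \mathrm{V},
\]
which is in the several-hundred-volt range suggested above, and proportionally more over tens of $R_S$.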
The simulations of Persoon et al. (2020), on the other hand, focused entirely on the thermal plasma; their initial assumptions left out the hot electron component, which is one of the crucial ingredients to having a sizable ambipolar field. Thus the possibility of electric confinement of hot electrons cannot be ruled out. In summary, the evidence coming from the hot electrons and that coming from cold plasma seem to contradict each other, but some effects can dampen the hot electron population in the high latitude regions of closed field lines. Thus, based on the evidence of a significant amount of water-group ions being present on these field lines and moving according to the vortex pattern (even performing retrograde motion), the observed vortices are most probably composed of closed field lines.

4 CONCLUSIONS

We have presented theoretical considerations showing that the coexistence of several conditions results in the presence of giant closed field line vortices in planetary magnetospheres. The first two conditions are the presence of significant plasma sources inside the magnetosphere and the planet being a fast rotator. Together these conditions ensure that the magnetosphere possesses a centrifugally forced dense equatorial plasma sheet. A third (although not completely independent) condition is that the time scale of the periodic process, responsible for emptying the mass-loaded closed field lines, is substantially larger than the planetary rotation period, and thus the tail field lines remain connected to the planet during (at least) a full rotation. The fourth and last condition, which is always satisfied for the solar-wind-loaded magnetospheres of the solar system, is that some effect breaks the axial symmetry of the system, introducing a strong day-night asymmetry, and creating field lines which connect the dayside ionosphere to the nightside plasma sheet. These conditions ensure that the ionospheric footpoints of certain field lines perform (at least) a full rotation around the planet, while their middle (equatorial) point is anchored in the sub-corotating nightside plasma sheet.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Instead of rotating around the planet, the plasma trapped on these field lines rotates around a vortex line, which connects the pole with a point in the nightside plasma sheet, thus forming two vortices, one in each tail lobes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The motion patterns of these two vortices are more or less independent, they are only connected in the far tail equa- torial region.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Thus the periodicities of the plasma motion in the two lobes are independent of each other as well.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' This allows the plasma properties in the nightside magnetosphere to have dual periodicities, both close to the planetary period.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' It turns out that the magnetosphere of Saturn satisfies the above- mentioned conditions, and thus it is a valid question whether the Kro- nian magnetosphere features giant closed field line vortices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Careful examination of the plasma velocities and the magnetic field direction reveals vortex patterns in the nightside outer magnetosphere of Sat- urn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The supporting evidence includes retrograde plasma motion far from the magnetic equator, flow towards and away from the plasma sheet on the dusk- and dawnside respectively, independent periodic- ities in the northern and southern lobes, and the field line geometry showing vortex-like characteristics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The newly discovered vortex pattern is either evidence of open- field vortices (similar to that observed in the Terrestrial magnetotail (Isbell et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' 1984)), or that of the closed field line vortices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Based on thermal plasma measurements, we argue that there are closed-field vortices in the Kronian magnetosphere.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' In the vortex model, the Dungey and Vasyliunas cycles act some- what differently.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' The Dungey flow does not penetrate the central region of the polar cap due to the fast rotation of the ionosphere.' 
Thus the open field lines reside in the outer layer of the vortices. The plasmoid-forming tail reconnection events necessary to close the Vasyliunas cycle are rare, thus leaving the plasma time enough to participate in the essentially 3-dimensional vortex-forming plasma motion.

ACKNOWLEDGEMENTS

The author would like to thank Stan Cowley for the helpful discussions. This work was supported by the ÚNKP-18-4 New National Excellence Program of the Ministry of Human Capacities and the János Bolyai Research Scholarship of the Hungarian Academy of Sciences.

DATA AVAILABILITY

Calibrated magnetic field data and Cassini ion moments from the Cassini mission are available from the NASA Planetary Data System (https://pds.nasa.gov/).

REFERENCES

Acuna M. H., Behannon K. W., Connerney J. E. P., 1983, in Dessler A. J., ed., Physics of the Jovian magnetosphere. Cambridge Univ.
Press, London, pp 1–50
Andrews D. J., Bunce E. J., Cowley S. W. H., Dougherty M. K., Provan G., Southwood D. J., 2008, Journal of Geophysical Research, 113, 9205
Andrews D. J., Cowley S. W. H., Dougherty M. K., Provan G., 2010, Journal of Geophysical Research, 115, A04212
Andrews D. J., Cowley S. W. H., Dougherty M. K., Larmy L., Provan G., Southwood D. J., 2012, Journal of Geophysical Research, 117, 4224
Arridge C. S., Russell C. T., Khurana K. K., Achilleos N., André N., Rymer A. M., Dougherty M. K., Coates A. J., 2007, Geophysical Research Letters, 34, L09108
Arridge C. S., Khurana K. K., Russell C. T., Southwood D. J., Achilleos N., Dougherty M. K., Coates A. J., Leinweber H. K., 2008, Journal of Geophysical Research, 113, 8217
Arridge C. S., et al., 2011, Journal of Geophysical Research, 116, 11205
Brice N. M., Ioannidis G., 1970, Icarus, 13, 173
Bunce E. J., Cowley S. W. H., Wild J. A., 2003, Annales Geophysicae, 21, 1709
Bunce E. J., et al., 2008, Journal of Geophysical Research (Space Physics), 113, A09209
Chisham G., Abel G., Milan S., 2004, Astronomy & Geophysics, 45, 3.36
Cowley S. W. H., Bunce E. J., 2001, Planetary and Space Science, 49, 1067
Cowley S. W. H., Bunce E. J., 2003, Annales Geophysicae, 21, 1691
Cowley S. W. H., Bunce E. J., O'Rourke J. M., 2004, Journal of Geophysical Research, 109, A05212
Cowley S. W., Nichols J., Jackman C., 2015, Journal of Geophysical Research: Space Physics, 120, 6347
Dougherty M. K., et al., 2004, Space Science Reviews, 114, 331
Dungey J. W., 1961, Physical Review Letters, 6, 47
Gledhill J., 1967, Nature, 214, 155
Gombosi T. I., Armstrong C. S., Khurana K. K., Krimigis S. M., K. N., P. A. M., Thomsen M. F., 2009, in Dougherty M. K., Esposito L. W., Krimigis S. M., eds, Saturn from Cassini-Huygens. Springer, Netherlands, pp 203–255
Hill T. W., 1974, Reviews of Geophysics, 12, 379
Hill T. W., 1979, Journal of Geophysical Research: Space Physics, 84, 6554
Hill T. W., 1980, Science, 207, 301
Hill T. W., Dessler A. J., Michel F. C., 1974, Geophysical Research Letters, 1, 3
Isbell J., Dessler A. J., Waite J. H., 1984, Journal of Geophysical Research, 89, 10716
Mauk B. H., et al., 2009, in Dougherty M. K., Esposito L. W., Krimigis S. M., eds, Saturn from Cassini-Huygens. Springer, Netherlands, pp 281–331
Maurice S., Blanc M., Prangé R., Sittler E. C., 1997, Planetary and Space Science, 45, 1449
Mead G. D., 1964, J. Geophys. Res., 69, 1181
Michel F. C., Sturrock P. A., 1974, Planetary and Space Science, 22, 1501
Mitchell D. G., Carbary J. F., Cowley S. W. H., Hill T. W., Zarka P., 2009, in Dougherty M. K., Esposito L. W., Krimigis S. M., eds, Saturn from Cassini-Huygens. Springer, Netherlands, pp 257–279, doi:10.1007/978-1-4020-9217-6_10
Nemeth Z., et al., 2011, Journal of Geophysical Research, 116, 9212
Nemeth Z., et al., 2015, Annales Geophysicae, 33, 1195
Nemeth Z., Szego K., Foldy L., Cowley S. W. H., Provan G., M. T., 2016, Planetary and Space Science, 130, 54
Persoon A. M., Gurnett D. A., Kurth W. S., Hospodarsky G. B., Groene J. B., Canu P., Dougherty M. K., 2005, Geophysical Research Letters, 32, L23105
Persoon A. M., et al., 2020, Journal of Geophysical Research (Space Physics), 125, e27545
Provan G., Andrews D. J., Arridge C. S., Coates A. J., Cowley S. W. H., Cox G., Dougherty M. K., Jackman C. M., 2012, Journal of Geophysical Research, 117, 1209
Provan G., Lamy L., Cowley S. W. H., Bunce E. J., 2019, Journal of Geophysical Research: Space Physics, 124, 1157
Provan G., Cowley S. W. H., Bunce E. J., Milan S. E., Persoon A. M., Gurnett D. A., 2021, Journal of Geophysical Research: Space Physics, 126, e2021JA029332
Richardson J. D., 1995, Geophysical Research Letters, 22, 1177
Sergis N., et al., 2011, Journal of Geophysical Research, 116, A04203
Simon S., Wennmacher A., Neubauer F. M., Bertucci C. L., Kriegel H., Saur J., Russell C. T., Dougherty M. K., 2010, Planetary and Space Science, 58, 1230
Stallard T. S., Miller S., Trafton L. M., Geballe T. R., Joseph R. D., 2004, Icarus, 167, 204
Stallard T. S., et al., 2019, Philosophical Transactions of the Royal Society of London Series A, 377, 20180405
Szego K., Nemeth Z., Erdos G., Foldy L., Bebesi Z., Thomsen M., Delapp D., 2012, Journal of Geophysical Research, 117, 9225
Szego K., Nemeth Z., Foldy L., Cowley S. W. H., Provan G., 2013, Journal of Geophysical Research, 118, 2883
Thomsen M. F., et al., 2010, Journal of Geophysical Research, 115, A10220
Vasyliunas V. M., 1983, in Dessler A., ed., Physics of the Jovian magnetosphere
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content='. Cambridge Univ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' Press„ London, pp 395–453 Young D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=', Berthelier J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=', Blanc M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=', 2004, Space Science Reviews, 114, 1 This paper has been typeset from a TEX/LATEX file prepared by the author.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} +page_content=' MNRAS 000, 1–7 (2022)' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Q9E4T4oBgHgl3EQfKgxv/content/2301.04930v1.pdf'} diff --git a/QtE1T4oBgHgl3EQfHQOA/content/2301.02924v1.pdf b/QtE1T4oBgHgl3EQfHQOA/content/2301.02924v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c758b7ee3aee1dc46f4af2802589bff74eef20ee --- /dev/null +++ b/QtE1T4oBgHgl3EQfHQOA/content/2301.02924v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c86b19dcfd2e13bb4bfcdab1c464f271dae1d5b53cd3aae0c77d4ef45548b04a +size 694138 diff --git a/QtE1T4oBgHgl3EQfHQOA/vector_store/index.faiss b/QtE1T4oBgHgl3EQfHQOA/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..435b8f618e701aee762dd9ec5e0422a36ffcf6f1 --- /dev/null +++ b/QtE1T4oBgHgl3EQfHQOA/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bbd88f77b3dcf368e71384b0858027ebfb99f67eef74ccd04360d2eeb18cb62e +size 2031661 diff --git a/QtE1T4oBgHgl3EQfHQOA/vector_store/index.pkl b/QtE1T4oBgHgl3EQfHQOA/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..23a8ce54e19f3b31501190cd4cf6172942d42382 --- /dev/null +++ b/QtE1T4oBgHgl3EQfHQOA/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e11091daa57e63a9610b7bdf6a5458648b881ceda85a25c1cb46b9c13980e5a +size 68439 diff --git a/RNFRT4oBgHgl3EQfKzd9/vector_store/index.pkl b/RNFRT4oBgHgl3EQfKzd9/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..99f6ba865792ad2102e80085fb8e978d7c76dae3 --- /dev/null +++ b/RNFRT4oBgHgl3EQfKzd9/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c951247eb1b7af6e5accd428cbb6a0cf6d44fd7962aa3b65d9f0e052d35a24e5 +size 158949 diff --git a/RdFQT4oBgHgl3EQfajZo/content/2301.13320v1.pdf b/RdFQT4oBgHgl3EQfajZo/content/2301.13320v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f5d869ce0b22764720201761b8c36fa65e7f45db --- /dev/null +++ b/RdFQT4oBgHgl3EQfajZo/content/2301.13320v1.pdf @@ 
-0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99f2aac17e36794a963a593f5cb48d626ad5757c7b9b852e04b63c5597ad0e36 +size 1963467 diff --git a/RdFQT4oBgHgl3EQfajZo/vector_store/index.pkl b/RdFQT4oBgHgl3EQfajZo/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..5ee12e4afb68822d9c6d0a1af37716b987daf0f2 --- /dev/null +++ b/RdFQT4oBgHgl3EQfajZo/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06698fd1067c0a5143cf683ba5fbd0b7e4299436960d378f3694da1d657c0ebd +size 126157 diff --git a/SNFJT4oBgHgl3EQfLSwz/content/2301.11468v1.pdf b/SNFJT4oBgHgl3EQfLSwz/content/2301.11468v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..1fd5ae0d587c0e5feaa6d20f6ecd98d3618d4c17 --- /dev/null +++ b/SNFJT4oBgHgl3EQfLSwz/content/2301.11468v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f907b06c1f8daa275e470e1ae4a1731465a1de847f1fee7dd319fbced9ffd592 +size 554369 diff --git a/SNFJT4oBgHgl3EQfLSwz/vector_store/index.faiss b/SNFJT4oBgHgl3EQfLSwz/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..33884ba8f8556d3ca074ed35b2fd8f1c43587c63 --- /dev/null +++ b/SNFJT4oBgHgl3EQfLSwz/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:faaecf3e57ff076600479645529e9207de007e7f6815b25004f7f96677a52d98 +size 1769517 diff --git a/TdE3T4oBgHgl3EQfzguF/content/2301.04729v1.pdf b/TdE3T4oBgHgl3EQfzguF/content/2301.04729v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f03145fe17db0156c24312925228fc9556601224 --- /dev/null +++ b/TdE3T4oBgHgl3EQfzguF/content/2301.04729v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd29783248651a8c246103c9c72229b9209ecbda8e860416ec9352840d6f65b0 +size 241705 diff --git a/TdE3T4oBgHgl3EQfzguF/vector_store/index.faiss b/TdE3T4oBgHgl3EQfzguF/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..51062777e0c111457ba3509d0e444939ed97e20c --- /dev/null +++ b/TdE3T4oBgHgl3EQfzguF/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7fea76dc4351915ad8052eb503828fe899511ad961145c755510b77c1864b5e +size 2097197 diff --git a/TdE3T4oBgHgl3EQfzguF/vector_store/index.pkl b/TdE3T4oBgHgl3EQfzguF/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..201f948df836298e42c3f0215cb4cea7bd18d396 --- /dev/null +++ b/TdE3T4oBgHgl3EQfzguF/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f8b83f1a937fd630b3cb13c281fc7df28401896e5e3e78608686366b7c9c49d +size 91006 diff --git a/TtAzT4oBgHgl3EQf0v7t/content/tmp_files/2301.01790v1.pdf.txt b/TtAzT4oBgHgl3EQf0v7t/content/tmp_files/2301.01790v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..d53c912352cf9d92f481134a5e0ea7a6bf6ec691 --- /dev/null +++ b/TtAzT4oBgHgl3EQf0v7t/content/tmp_files/2301.01790v1.pdf.txt @@ -0,0 +1,1079 @@ +Smooth forecasting with the smooth package in R +Ivan Svetunkova +aCentre for Marketing Analytics and Forecasting, +Management Science Department, Lancaster University, UK +Abstract +There are many forecasting related packages in R with varied popularity, +the most famous of all being forecast, which implements several important +forecasting approaches, such as ARIMA, ETS, TBATS and others. 
However, the main issue with the existing functionality is the lack of flexibility for research purposes when it comes to modifying the implemented models. The R package smooth introduces a new approach to univariate forecasting, implementing ETS and ARIMA models in the Single Source of Error (SSOE) state space form and providing advanced functionality for experiments and time series analysis. It builds upon the SSOE model and extends it by including explanatory variables and multiple frequencies and by introducing advanced forecasting instruments. In this paper, we explain the philosophy behind the package and show how the main functions work.

Keywords: forecasting, exponential smoothing, ets, arima, adam, R

∗Correspondence: I. Svetunkov, Centre for Marketing Analytics and Forecasting, Lancaster University Management School, Lancaster, Lancashire, LA1 4YX, UK. Email address: i.svetunkov@lancaster.ac.uk (Ivan Svetunkov). Published online at www.openforecast.org, January 6, 2023. arXiv:2301.01790v1 [stat.ME] 4 Jan 2023.

1. Introduction

R (R Core Team, 2022), being one of the most popular programming languages in academia, has many forecasting-related packages implementing a variety of approaches. Among the well-known ones is the forecast package (Hyndman and Khandakar, 2008), which implements classical statistical forecasting models, such as ETS (the Error, Trend, Seasonality model based on the single source of error state space framework underlying exponential smoothing; Hyndman et al., 2002, 2008), Theta (Assimakopoulos and Nikolopoulos, 2000), TBATS (De Livera et al., 2011) and others. Some of these functions have also been implemented in the fable package (O'Hara-Wild et al., 2021). forecast also implements the auto.arima() function for automatic selection of ARIMA orders. There are other packages implementing ARIMA, including stats (R Core Team, 2022), robustarima (Kaluzny and TIBCO Software Inc., 2021), tfarima (Gallego, 2021) and fable (O'Hara-Wild et al., 2021). All these packages provide ready-to-use functions for specific situations and have been proven to work very well, but they do not have the flexibility necessary for research purposes in the area of dynamic models and do not present a holistic approach to univariate forecasting models.

In order to address these issues, back in 2016, I developed the smooth package, which implemented models in the Single Source of Error framework and introduced flexibility allowing researchers to conduct advanced experiments in the area of univariate forecasting models (e.g. using advanced losses, introducing explanatory variables, changing structures of models, etc.). This paper explains the main idea behind the smooth functions, summarises what they are created for and shows how to use them in forecasting and analytics.

2. Single Source of Error framework

The main model underlying the smooth functions is explained in detail in the Svetunkov (2022) monograph. It builds upon the Hyndman et al. (2008) model. Here we summarise only the main ideas. We start with the most popular pure additive model, underlying the majority of functions of the smooth package (Svetunkov, 2023). It is formulated as:

\begin{aligned}
y_t &= \mathbf{w}' \mathbf{v}_{t-l} + \epsilon_t \\
\mathbf{v}_t &= \mathbf{F} \mathbf{v}_{t-l} + \mathbf{g} \epsilon_t
\end{aligned} \quad (1)

where w is the measurement vector, F is the transition matrix, g is the persistence vector, v_{t-l} is the vector of lagged components and l is the vector of lags, defining how each of the components of v_t needs to be shifted in time. Unlike the conventional state space model of Hyndman et al.
(2008), the one implemented in smooth relies on lagged components rather than their transition. In the conventional case, the vector of states v_t always depends on the value of the vector on the previous observation, and the transition of the signal from one component to another happens according to the matrix F. Both the conventional and the proposed frameworks underlie exactly the same ETS and ARIMA models, but our approach simplifies calculations. For example, consider the ETS(A,A,A) model, which is written as (Hyndman et al., 2008):

\begin{aligned}
y_t &= l_{t-1} + b_{t-1} + s_{t-m} + \epsilon_t \\
l_t &= l_{t-1} + b_{t-1} + \alpha\epsilon_t \\
b_t &= b_{t-1} + \beta\epsilon_t \\
s_t &= s_{t-m} + \gamma\epsilon_t
\end{aligned} \quad (2)

where y_t is the actual value, l_{t-1} is the level, b_{t-1} is the trend, s_{t-m} is the seasonal component with periodicity m (e.g. 12 for months of year data, implying that something is repeated every 12 months), α, β and γ are the smoothing parameters and ϵ_t is an i.i.d. error term. According to (1), the model (2) can be written as:

\begin{aligned}
y_t &= \begin{pmatrix} 1 & 1 & 1 \end{pmatrix}
\begin{pmatrix} l_{t-1} \\ b_{t-1} \\ s_{t-m} \end{pmatrix} + \epsilon_t \\
\begin{pmatrix} l_t \\ b_t \\ s_t \end{pmatrix} &=
\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} l_{t-1} \\ b_{t-1} \\ s_{t-m} \end{pmatrix} +
\begin{pmatrix} \alpha \\ \beta \\ \gamma \end{pmatrix} \epsilon_t,
\end{aligned} \quad (3)

while in the Hyndman et al. (2008) form it would be:

\begin{aligned}
y_t &= \begin{pmatrix} 1 & 1 & 1 & 0 & \dots & 0 \end{pmatrix}
\begin{pmatrix} l_{t-1} \\ b_{t-1} \\ s_{1,t-1} \\ s_{2,t-1} \\ \vdots \\ s_{m,t-1} \end{pmatrix} + \epsilon_t \\
\begin{pmatrix} l_t \\ b_t \\ s_{1,t} \\ \vdots \\ s_{m,t} \end{pmatrix} &=
\begin{pmatrix}
1 & 1 & \mathbf{0}'_{m-1} & 0 \\
0 & 1 & \mathbf{0}'_{m-1} & 0 \\
0 & 0 & \mathbf{0}'_{m-1} & 1 \\
\mathbf{0}_{m-1} & \mathbf{0}_{m-1} & \mathbf{I}_{m-1} & \mathbf{0}_{m-1}
\end{pmatrix}
\begin{pmatrix} l_{t-1} \\ b_{t-1} \\ s_{1,t-1} \\ \vdots \\ s_{m,t-1} \end{pmatrix} +
\begin{pmatrix} \alpha \\ \beta \\ \gamma \\ 0 \\ \vdots \\ 0 \end{pmatrix} \epsilon_t.
\end{aligned} \quad (4)

Because of the size of matrices in (4) and the recursive nature of the model, applying it to seasonal data with m higher than 24 becomes computationally expensive due to the multiplication of large matrices. The problem becomes even more serious when a model with multiple seasonal components is needed (see, for example, Taylor, 2003), because it then introduces several seasonal indices, increasing the size of matrices in (4) even further. This issue is resolved in (1), because the introduction of additional components leads to an increase of dimensionality proportional to the number of added components. Note that any extension of the conventional state space model results in an increase of its dimensionality, inevitably leading to computational difficulties. This is why the proposed model (1) is more viable and flexible, and this is why it was used in the development of the smooth functions. A small illustrative sketch of the recursion in (1) for the ETS(A,A,A) case is given below.

There is also a more general state space form, covering not only pure additive, but also pure multiplicative and mixed models. This is discussed in Chapter 4 of Hyndman et al. (2008) and in Section 7.1 of Svetunkov (2022). We do not discuss this model here but only point out that the multiplicative and mixed ETS models implemented in smooth are based on it.

Furthermore, Snyder (1985) showed that the ARIMA model can also be written in the SSOE state space form; this was then discussed in more detail in Chapter 11 of Hyndman et al. (2008) and afterwards used by Svetunkov and Boylan (2020) to implement ARIMA (in the ssarima() and auto.ssarima() functions from the smooth package) and apply it in a supply chain context. Building upon that, Svetunkov (2022) implemented ARIMA (Chapter 9) in the state space form (1).
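To make the recursion in (1) more tangible, the following is a minimal illustrative sketch of the ETS(A,A,A) filter in the compact form (3). It is not the implementation used inside smooth: the function name etsAAAFilter and the example parameter values are ours and serve only to show that, however large the periodicity m is, the state update touches just the three components.

etsAAAFilter <- function(y, alpha, beta, gamma, level0, trend0, seasonal0, m=12){
  # seasonal0 is a vector of m initial seasonal indices
  n <- length(y)
  fitted <- numeric(n)
  level <- level0
  trend <- trend0
  seasonal <- seasonal0
  for(t in 1:n){
    s <- seasonal[1]                          # the component lagged by m steps
    fitted[t] <- level + trend + s            # y_t = w' v_{t-l}
    e <- y[t] - fitted[t]
    level <- level + trend + alpha * e        # v_t = F v_{t-l} + g epsilon_t
    trend <- trend + beta * e
    seasonal <- c(seasonal[-1], s + gamma * e)  # rotate the seasonal lag
  }
  return(fitted)
}

For example, etsAAAFilter(log(AirPassengers), 0.3, 0.05, 0.1, level0=4.8, trend0=0.01, seasonal0=rep(0, 12)) returns in-sample one-step-ahead fitted values; per step, only the level, the trend and one seasonal index are updated, in line with (3).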
So, the proposed framework does not stop at ETS models; it can be extended, for example, to include a combination of ETS+ARIMA.

The final piece of the puzzle is the regression model. It can also be represented in the SSOE state space form (1), as discussed, for example, in Koehler et al. (2012), in Chapter 9 of Hyndman et al. (2008) and in Chapter 10 of Svetunkov (2022).

All of this means that the state space model (1) presents a unified framework for working with ETS, ARIMA, regression and any of their combinations. This is implemented in the adam() function of the smooth package (Svetunkov, 2023), supporting the following functionality:
1. ETS;
2. ARIMA;
3. Regression;
4. Time-varying parameters regression;
5. Combination of (1), (2) and either (3), or (4);
6. Automatic selection/combination of states of ETS;
7. Automatic order selection for ARIMA;
8. Variables selection for regression;
9. Normal and non-normal distributions of the error term;
10. Automatic selection of the most suitable distribution;
11. Multiple seasonality;
12. Occurrence part of the model to handle zeroes in data (in case of intermittent demand);
13. Modelling scale of distribution (GARCH-style models, see for example, Engle, 1982);
14. Handling uncertainty of estimates of parameters;
15. Forecasting using any of the elements above.
All these topics are covered in Svetunkov (2022), so we will not focus on the adam() function here. However, there are special cases of the model (1), implementing specific functionality in the smooth package. We discuss the most important of them in this paper.

3. Time series decomposition

While the stats package already implements the classical decomposition function, I have created a new one, which can handle multiple seasonal data. This is called msdecompose(). It has exactly the same logic as the classical decomposition (Warren M. Persons, 1919), but can be applied to data with multiple seasonal cycles. A user can define what cycles there are in the data by setting the parameter lags, and can choose the type of seasonality via the parameter type. For example, msdecompose() can be applied to the half-hourly electricity demand data taylor from the forecast package in the following way:

taylorDecomp <- msdecompose(taylor, lags=c(48,336), type="m")

which will result in an object that can be used for further analysis. Producing a plot from it would generate several figures (see the documentation for plot.smooth() for details), the most interesting of which, the plot of time series components, is shown in Figure 1.

Figure 1: Decomposition of a multiple seasonal time series according to the msdecompose() function (panels: Actuals, Trend, Seasonal 1, Seasonal 2 and Residuals of the taylor series).

The classical decomposition does not typically produce clear components – the residuals in Figure 1 demonstrate the presence of seasonality because the approach assumes that the seasonal components are constant. Nonetheless, it can be a starting point for time series analysis. In smooth, it is used for the initialisation of states of ETS in the case of seasonal data and is mainly needed when working with multiple seasonal data. However, if a researcher is interested in forecasting with seasonal decomposition, the function produces an object of the class smooth that supports the forecast() method, producing forecasts for the trend component of the decomposed data and then reconstructing the series based on the estimated seasonal components.
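For instance, assuming that the standard h argument of the forecast() methods applies here as well (this should be checked against the forecast.smooth() documentation), a week-ahead forecast from the decomposition above could be produced as follows:

# h=336 corresponds to one week of half-hourly observations; the trend is
# forecast and the estimated seasonal indices are reapplied to reconstruct the series
taylorForecast <- forecast(taylorDecomp, h=336)
plot(taylorForecast)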
4. Exponential smoothing

Exponential smoothing is one of the most popular forecasting methods used in demand planning (Weller and Crone, 2012). As mentioned earlier, ETS underlies all exponential smoothing methods, and it is considered an academic standard in forecasting (it has been used in all the major forecasting competitions over the years, including Makridakis and Hibon, 2000; Athanasopoulos et al., 2011; Makridakis et al., 2020, 2022). The conventional ETS model, developed by Hyndman et al. (2008), assumes that the error term follows the normal distribution. It is implemented in the functions ets() from the forecast package and ETS() from fable. Their counterpart in the smooth package is called es(), but it is based on the state space model (1) rather than the conventional one. Furthermore, while ets() supports only 15 ETS models, es() implements all the theoretically possible 30 ETS models. The function also supports fine-tuning of the parameters of the model, allowing setting the smoothing parameter values via the persistence variable, the initial values via initial, the seasonal indices via initialSeason, and pre-defining the values of parameters for the optimisation via the B parameter. Furthermore, the function supports explanatory variables via the xreg parameter, similar to how it is done in the arima() function from the stats package, allowing tuning of the coefficients for regressors via initialX and selecting the most appropriate ones based on information criteria via the Sagaert and Svetunkov (2022) algorithm applied to the residuals of the ETS model, using the regressors parameter.

In terms of ETS components selection, the mechanism used by default in es() can be summarised in the following steps:
1. Apply ETS(A,N,N) to the data, calculate an information criterion (IC);
2. Apply ETS(A,N,A) to the data, calculate IC. If it is lower than in step (1), then this means that there is some seasonal component in the data; move to step (3). Otherwise, go to (4);
3. Apply the ETS(M,N,M) model and calculate IC. If it is lower than the previous one, then the data exhibits multiplicative seasonality. Go to (4);
4. Fit the model with the additive trend component and the seasonal component selected from the previous steps, which can be either "N", "A", or "M". Calculate IC for the new model and compare it with the best IC so far. If it is lower than the criteria of the previously applied models, then there is a trend component in the data. If it is not, then the trend component is not needed;
5. Form the pool of models based on steps (1)-(4), apply the models and select the one with the lowest IC.
This approach to components selection can be called branch-and-bound because, instead of going through all possible models, it considers branches of models. For example, if there is no seasonality, then the respective component can be set to "N", thus removing the branch of seasonal models and reducing the pool of models to test from 30 to only 10 (including the models already tested in the four steps); a schematic sketch of this logic is given below.

Similarly to any other smooth function, es() supports several methods, including plot() for visual diagnostics of the model and forecast() for forecasting.
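The sketch below is a simplified rendition of that branch-and-bound selection, not the actual internals of es(). The helper fitIC() is hypothetical: it stands for "fit the specified ETS model to y and return its information criterion", which es() performs internally for each candidate.

selectETS <- function(y, fitIC){
  icBest <- fitIC(y, "ANN")                   # step 1: level-only model
  seasonal <- "N"
  icANA <- fitIC(y, "ANA")                    # step 2: is there seasonality?
  if(icANA < icBest){
    seasonal <- "A"
    icBest <- icANA
    icMNM <- fitIC(y, "MNM")                  # step 3: multiplicative seasonality?
    if(icMNM < icBest){
      seasonal <- "M"
      icBest <- icMNM
    }
  }
  trend <- "N"                                # step 4: does an additive trend help?
  if(fitIC(y, paste0("AA", seasonal)) < icBest){
    trend <- "A"
  }
  # step 5: form a reduced pool around the detected components
  # and return the candidate with the lowest IC
  pool <- as.vector(outer(c("A", "M"), paste0(trend, seasonal), paste0))
  ICs <- sapply(pool, function(model) fitIC(y, model))
  return(pool[which.min(ICs)])
}

The real pool in step 5 is richer (the full set of 30 models includes damped trends and mixed error types), but the idea of pruning whole branches after a few pilot fits is the same. In practice es() performs this selection internally, so a user only needs to call the function itself.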
To demonstrate how they work, we apply es() to the AirPassengers data from the datasets package.

AirPassengersETS <- es(AirPassengers, h=12, holdout=TRUE)

In the code above, we have specified the forecast horizon of 12 steps ahead and asked to exclude the last 12 observations from the training of the model, thus creating a test set (holdout) to see how the model performs on that part of the data. We can do diagnostics of the model in order to see if it has any obvious issues that could be resolved:

par(mfcol=c(2,2))
plot(AirPassengersETS, c(1,2,4,6))

Figure 2: Diagnostics plots for the ETS(A,M,M) model selected automatically on the AirPassengers data by the es() function (panels: Actuals vs Fitted, Standardised Residuals vs Fitted, |Residuals| vs Fitted, and a QQ plot of the normal distribution).

The resulting plot is shown in Figure 2. We do not aim to resolve the issues of the model in this paper; we merely demonstrate what can be done using smooth functions. The plots allow analysing the residuals for possible issues related to heteroscedasticity, autocorrelation, outliers, wrong specification, etc. Fixing the issues can be done by including explanatory variables and/or changing the transformations used in the model. After fixing the potential issues, a researcher can produce forecasts from the estimated model, which is done using the forecast() method from the generics package (Wickham et al., 2022). But unlike forecasting with ets() and ETS(), the method from smooth supports several options, allowing a choice between a variety of prediction intervals (see the documentation of the forecast.smooth() method), allowing the production of a one-sided interval (which is useful in the case of pure multiplicative models on low-volume data, where the lower bound is typically equal to zero) and generating cumulative forecasts (which is useful in the case of safety stock calculation in inventory management). We will use the default values of parameters, producing the parametric prediction interval:

plot(forecast(AirPassengersETS, h=12))

The code above will result in the plot in Figure 3.

Figure 3: Forecast for the AirPassengers data produced by the es() function (series, fitted values, point forecast, 95% prediction intervals and the forecast origin).

Figure 3 shows how the selected model fits the data, what point forecast it produces (solid bold blue line in the holdout part) and what prediction intervals it generated (a grey area in the holdout).

Continuing the theme of exponential smoothing, smooth also implements the Complex Exponential Smoothing of Svetunkov et al. (2022) via the ces() function, which has functionality similar to es() and supports the same set of methods.

Finally, as mentioned earlier, adam() implements the ETS model as well and supports much more functionality. The main difference between the default ETS in adam() and es() is that the former supports distributions other than the normal and, by default, uses the Gamma distribution in the case of multiplicative error models.
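As a rough illustration of the forecasting options mentioned above, the calls below sketch how a one-sided interval and a cumulative forecast could be requested. The argument names side and cumulative are assumptions on our part and should be checked against the forecast.smooth() documentation referenced earlier.

# Upper one-sided 95% interval: useful for pure multiplicative models on
# low-volume data, where the lower bound is effectively zero
forecast(AirPassengersETS, h=12, side="upper", level=0.95)

# Cumulative forecast over the next 12 observations: useful for safety
# stock calculations, where the total demand over the lead time matters
forecast(AirPassengersETS, h=12, cumulative=TRUE)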
5. ARIMA

Another important model, which is used often in forecasting, is ARIMA (Box and Jenkins, 1976). There are several functions implementing ARIMA in the SSOE state space form in the smooth package.

The ssarima() (State Space ARIMA) function implements a state space ARIMA in the form discussed in Chapter 11 of Hyndman et al. (2008). The function that implements the order selection for State Space ARIMA is called auto.ssarima(). It does not rely on any statistical tests and selects orders based on information criteria. Both the model and the selection mechanism are explained in Svetunkov and Boylan (2020).

The msarima() (Multiple Seasonal ARIMA) function relies on the state space model (1), introducing lagged components and thus substantially reducing the size of the transition matrix. This allows applying large multiple seasonal ARIMA models to the data. A thing to note is that, because of this, the transition matrix, measurement, and state vectors of this model are formed differently than in Hyndman et al. (2008). In a general case, they are (Svetunkov, 2022, Chapter 9):

\mathbf{F} = \begin{pmatrix} \eta_1 & \eta_1 & \dots & \eta_1 \\ \eta_2 & \eta_2 & \dots & \eta_2 \\ \vdots & \vdots & \ddots & \vdots \\ \eta_K & \eta_K & \dots & \eta_K \end{pmatrix}, \quad
\mathbf{w} = \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}, \quad
\mathbf{g} = \begin{pmatrix} \eta_1 + \theta_1 \\ \eta_2 + \theta_2 \\ \vdots \\ \eta_K + \theta_K \end{pmatrix}, \quad
\mathbf{v}_t = \begin{pmatrix} v_{1,t} \\ v_{2,t} \\ \vdots \\ v_{K,t} \end{pmatrix}, \quad
\mathbf{l} = \begin{pmatrix} 1 \\ 2 \\ \vdots \\ K \end{pmatrix}, \quad (5)

where η_j is the jth polynomial for the ARI part of the model, θ_j is the jth MA parameter and K is the number of ARI/MA polynomials (whichever is the highest). To better understand how this model is formulated, consider an example of ARIMA(1,1,2), which can be written as:

(1 - \phi_1 B)(1 - B) y_t = (1 + \theta_1 B + \theta_2 B^2)\epsilon_t, \quad (6)

where B is the backshift operator. Expanding the ARI polynomial gives (1 - \phi_1 B)(1 - B) = 1 - (1+\phi_1)B + \phi_1 B^2, so that, in the notation of (5), \eta_1 = 1+\phi_1 and \eta_2 = -\phi_1. This model can be written in the state space form (see Chapter 9 of Svetunkov, 2022, for derivations):

\begin{aligned}
y_t &= v_{1,t-1} + v_{2,t-2} + \epsilon_t \\
v_{1,t} &= (1+\phi_1)(v_{1,t-1} + v_{2,t-2}) + (1+\phi_1+\theta_1)\epsilon_t \\
v_{2,t} &= -\phi_1(v_{1,t-1} + v_{2,t-2}) + (-\phi_1+\theta_2)\epsilon_t
\end{aligned} \quad (7)

In order to see that the model (7) can be represented in the form (1), we need to set the following matrices and vectors:

\mathbf{F} = \begin{pmatrix} 1+\phi_1 & 1+\phi_1 \\ -\phi_1 & -\phi_1 \end{pmatrix}, \quad
\mathbf{w} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad
\mathbf{g} = \begin{pmatrix} 1+\phi_1+\theta_1 \\ -\phi_1+\theta_2 \end{pmatrix}, \quad
\mathbf{v}_t = \begin{pmatrix} v_{1,t} \\ v_{2,t} \end{pmatrix}, \quad
\mathbf{l} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}. \quad (8)

Finally, as mentioned earlier, the adam() function supports ARIMA as well, in the same form as msarima() (note that, in order to switch off the ETS part of the model in adam(), a user needs to specify model="NNN"). All three functions have similar syntax for ARIMA, where a user needs to define the seasonal lags of the model via the lags vector, listing all seasonal frequencies that a model should have, and the orders of the model via the orders variable, which in general accepts a named list of the style orders=list(ar=c(1,2,3), i=c(1,2,3), ma=c(1,2,3)), defining the order of the AR, I and MA parts of the model for the respective lags. The ARIMA orders are designed this way to allow researchers to introduce as many lags as they need, supporting, for example, double and triple seasonal ARIMA. Note that, due to its formulation, ssarima() cannot handle high-frequency data and will slow down with the increase of the seasonal lag m.

Here is an example of a user-defined SARIMA(0,2,2)(0,2,2)12 model applied to the same AirPassengers data:
+SARIMA(0,2,2)[1](0,2,2)[12] +1950 +1952 +1954 +1956 +1958 +1960 +100 +200 +300 +400 +500 +600 +Figure 4: Forecast for AirPassengers data produced by msarima() function. +Furthermore, all the smooth functions support one of the three mecha- +nisms of initialisation: +1. Optimisation – the initial values of the state vector are estimated during +the optimisation stage; +2. Backcasting – the initial values are produced via applying the model +with optimised parameters to the reversed data, going recursively from +the last observation to the first one; +3. Manual – the initials are provided by a user. +These are regulated via the initial parameter in the functions. +In the +case of ARIMA, given the complexity of the task, initial="backcasting" +typically works faster and more efficiently than the other two approaches. +12 + +If a researcher needs to have an ARIMA model with automatically se- +lected orders, they can use auto.ssarima(), auto.msarima(), which will +do that minimising the selected information criterion using the procedure +described in Svetunkov and Boylan (2020) and in Section 15.2 of Svetunkov +(2022). In case of adam(), the automatic selection mechanism is switched on +via addition of variable select=TRUE in the list for the orders parameter. +ARIMA models produced using the three functions above supports all the +methods available for other smooth functions, including plot(), actuals(), +fitted(), residuals() and forecast(). +6. Simulation functions +Another important set of functions supported by the smooth package +is the simulation functions. They allow generating data from an assumed +model. There are several functions in the package: +1. sim.es() allows generating data from the selected ETS model with +defined persistence, initial and initialSeason parameters; +2. sim.ces() generates data from Complex Exponential Smoothing DGP; +3. sim.ssarima() generates data from ARIMA model, allowing defining +order of the model, AR, MA parameters and the value of the constant +term (either intercept or drift, depending on the order of differences). +If the parameters are not specified, they will be picked at random. All the +functions above support a variety of distributions for the error term, allowing +also to apply manually created ones. Here is an example of how to do the +latter: +customFunction <- function(n, mu, sd){ +return(log(abs(rnorm(n, mu, sd)))); +} +x <- sim.es("ANN", obs=100, +randomizer="customFunction", mu=0, sd=1) +The simulation functions allow generating as many series as needed, which +is regulated via nsim parameter. +Finally, the package also implements simulate() method, which ex- +tracts the parameters from the already estimated model to generate sim- +ulated data from it. In order to see how it works, we generate data from the +AirPassengersETS model, estimated in Section 4: +13 + +x <- simulate(AirPassengersETS, obs=120, nsim=5) +plot(x) +The code above will generate five time series, and each one of them would +look similar to the one shown in Figure 5. +ETS(AMM) +Time +Series N4 +2 +4 +6 +8 +10 +100 +200 +300 +400 +500 +Figure 5: Simulated data from the AirPassengersETS model. +As can be seen from the plot in Figure 5, the generated time series exhibits +behaviour similar to the original time series. It even has a similar seasonal +shape, but it has a different trend, increasing slower than in the original data. +7. Other functions +There are several other functions implemented in the package that are +outside of the scope of this paper. 
+Nonetheless, two of them are worth +mentioning. +There is a Simple Moving Averages (SMA) function, sma(), implemented +in state space model (1). +This is based on the paper of Svetunkov and +Petropoulos (2018) who showed that SMA(p) has an underlying AR(p) pro- +cess with parameters restricted to φj = 1 +p for all j = 1, . . . , p. The function +also supports automatic order selection via information criteria as discussed +in the original paper. +Another important function is oes(), which implements the occurrence +part of model in case of intermittent demand. This is discussed in detail +in Chapter 13 of Svetunkov (2022) and is based on Svetunkov and Boylan +(2019). +14 + +Last but not least, smooth package has extensive vignettes with examples +of application of almost all functions. It is available, for instance, on CRAN: +https://cran.r-project.org/package=smooth. +8. Benchmarking of smooth functions +Finally, to demonstrate how the smooth functions work, we conduct an +experiment on M1 (Makridakis et al., 1982), M3 (Makridakis and Hibon, +2000) and Tourism (Athanasopoulos et al., 2011) competition data, where +we evaluate seven models: +1. ADAM ETS – ETS model estimated via adam() function; +2. ADAM ARIMA – ARIMA model estimated via adam() function. We +set model="NNN" to switch off ETS part of the model and use the +following command to set the maximum ARIMA order to check: +order=list(ar=c(3,2), i=c(2,1), ma=c(3,2), select=TRUE); +3. ES – ETS model implemented in es() function, which is just a wrapper +of adam(); +4. SSARIMA – State Space ARIMA model estimated via +auto.ssarima(); +5. CES – Complex Exponential Smoothing implemented in auto.ces() +function from smooth package; +6. ETS – ETS model implemented in ets() function from forecast pack- +age; +7. ARIMA – ARIMA selected using auto.arima() function from +forecast package. +We do not include msarima() in the experiment because the datasets un- +der consideration do not have multiple seasonal time series. We have used +the default values of parameters in all the functions. +The forecasts were +produced for each time series in the datasets for the horizons used in the +original competitions to the part of the data not visible to the models. We +produced point forecasts and 95% prediction intervals to that part of series +and evaluated the performance of models using the following measures: +• MASE – Mean Absolute Scaled Error by Hyndman and Koehler (2006); +• RMSSE – Root Mean Scaled Squared Error introduced in Makridakis +et al. (2022); +15 + +• Coverage – percentage of observations in the holdout lying in the pro- +duced 95% prediction interval; +• sMIS – scaled Mean Interval Score from Makridakis et al. (2022); +• Time – computational time in seconds spent for estimation and fore- +casts generation for each series. +For MASE, RMSSE, sMIS and time, the lower the value is, the better it is. +For the coverage, the closer the value is to the nominal 95%, the better it is. +The results of this experiment are summarised in Table 1. Note that +they might vary from one run to another because forecasts from some of the +functions rely on simulations. 
+MASE +RMSSE +Coverage +sMIS +Time +ADAM ETS +2.222 +1.935 +0.885 +2.122 +0.386 +ES +2.224 +1.939 +0.898 +2.196 +0.477 +CES +2.271 +1.958 +0.812 +3.465 +0.236 +ETS +2.263 +1.970 +0.882 +2.258 +0.409 +ARIMA +2.300 +1.987 +0.834 +3.007 +1.425 +ADAM ARIMA +2.371 +2.048 +0.843 +3.126 +1.376 +SSARIMA +2.480 +2.133 +0.802 +3.356 +1.811 +Table 1: Error measures for each of the model evaluated on M1, M3 and Tourism competi- +tions, aggregated using mean values. The boldface indicates the best performing models, +while the italic indicates the second best ones. +As can be seen from Table 1, ADAM ETS outperforms all other mod- +els in terms of MASE, RMSSE and sMIS, although the difference between +it and other ETS implementations does not look substantial. Note that it +works slightly slower than CES. The ETS from forecast package performs +slightly worse than the smooth implementations on these datasets. Compar- +ing ARIMA implementations, the one from auto.arima() is more accurate +and faster than ADAM ARIMA and SSARIMA, although it was not able to +beat the ETS models. Note however that ARIMA produces lower coverage +than ADAM ARIMA does and works slower. +This example demonstrates that the developed functions work efficiently +and can be applied to a wide variety of time series. Table 1 summarises +an overall aggregate performance, which does not mean that the winning +16 + +models always perform the best. Their performance will vary from one series +to another, and in some instances, the models that performed poorly in this +experiment would perform much better (for example, SSARIMA performed +very well on supply chain data with a short history as discussed in Svetunkov +and Boylan, 2020). +9. Conclusions +In this paper, I have discussed the philosophy behind the models imple- +mented in the smooth package for R. The state space model used in the +functions differs from the conventional one, allowing to introduce more com- +ponents and using more complex models efficiently. We have discussed how +ETS and ARIMA are implemented in this framework and what an analyst +can achieve with them. Finally, we have demonstrated how the models im- +plemented in the smooth functions perform on an example of M1, M3 and +Tourism competitions data. +This paper merely introduced the framework, the models and the func- +tions. +As mentioned earlier, the main idea of the smooth functions is to +give a researcher flexibility. A reader interested in learning more about the +framework is advised to read the online monograph of Svetunkov (2022) and +to study examples in the vignettes of the smooth package in R (Svetunkov, +2023). +References +Assimakopoulos, V., Nikolopoulos, K., 2000. The theta model: a decom- +position approach to forecasting. International Journal of Forecasting 16, +521–530. +Athanasopoulos, G., Hyndman, R. J., Song, H., Wu, D. C., 2011. The tourism +forecasting competition. International Journal of Forecasting 27 (3), 822– +844. +Box, G., Jenkins, G., 1976. Time series analysis: forecasting and control. +Holden-day, Oakland, California. +De Livera, A. M., Hyndman, R. J., Snyder, R. D., 2011. Forecasting Time +Series With Complex Seasonal Patterns Using Exponential Smoothing. +Journal of the American Statistical Association 106 (496), 1513–1527. +17 + +Engle, R. F., jul 1982. Autoregressive Conditional Heteroscedasticity with Es- +timates of the Variance of United Kingdom Inflation. Econometrica 50 (4), +987. +Gallego, J. L., 2021. tfarima: Transfer Function and ARIMA Models. 
R +package version 0.2.1. +URL https://CRAN.R-project.org/package=tfarima +Hyndman, R. J., Khandakar, Y., 2008. Automatic time series forecasting: +the forecast package for R. Journal of Statistical Software 26 (3), 1–22. +Hyndman, R. J., Koehler, A. B., 2006. Another look at measures of forecast +accuracy. International Journal of Forecasting 22 (4), 679–688. +Hyndman, R. J., Koehler, A. B., Ord, J. K., Snyder, R. D., 2008. Forecasting +with Exponential Smoothing. Springer Berlin Heidelberg. +Hyndman, R. J., Koehler, A. B., Snyder, R. D., Grose, S., 2002. A state +space framework for automatic forecasting using exponential smoothing +methods. International Journal of Forecasting 18 (3), 439–454. +Kaluzny, S., TIBCO Software Inc., 2021. robustarima: Robust ARIMA Mod- +eling. R package version 0.2.6. +URL https://CRAN.R-project.org/package=robustarima +Koehler, A. B., Snyder, R. D., Ord, J. K., Beaumont, A., 2012. A study +of outliers in the exponential smoothing approach to forecasting. Interna- +tional Journal of Forecasting 28 (2), 477–484. +Makridakis, S., Andersen, A. P., Carbone, R., Fildes, R., Hibon, M., +Lewandowski, R., Newton, J., Parzen, E., Winkler, R. L., 1982. The ac- +curacy of extrapolation (time series) methods: Results of a forecasting +competition. Journal of Forecasting 1 (2), 111–153. +Makridakis, S., Hibon, M., 2000. The M3-Competition: results, conclusions +and implications. International Journal of Forecasting 16, 451–476. +Makridakis, S., Spiliotis, E., Assimakopoulos, V., 2020. The M4 Competition: +100,000 time series and 61 forecasting methods. International Journal of +Forecasting 36 (1), 54–74. +18 + +Makridakis, S., Spiliotis, E., Assimakopoulos, V., oct 2022. M5 accuracy +competition: Results, findings, and conclusions. International Journal of +Forecasting 38 (4), 1346–1364. +O’Hara-Wild, M., Hyndman, R., Wang, E., 2021. fable: Forecasting Models +for Tidy Time Series. R package version 0.3.1. +URL https://CRAN.R-project.org/package=fable +R Core Team, 2022. R: A Language and Environment for Statistical Com- +puting. R Foundation for Statistical Computing, Vienna, Austria. +URL https://www.R-project.org/ +Sagaert, Y., Svetunkov, I., 2022. Trace Forward Stepwise: Automatic Selec- +tion of Variables in No Time. +Snyder, R. D., 1985. Recursive Estimation of Dynamic Linear Models. Jour- +nal of the Royal Statistical Society, Series B (Methodological) 47 (2), 272– +276. +Svetunkov, I., 2022. Forecasting and analytics with adam. Monograph. Open- +Forecast, (version: 2022-04-18). +URL https://openforecast.org/adam/ +Svetunkov, I., 2023. smooth: Forecasting Using State Space Models. R pack- +age version 3.2.0. +URL https://github.com/config-i1/smooth +Svetunkov, I., Boylan, J., 2019. Multiplicative state-space models for inter- +mittent time series. +Svetunkov, I., Boylan, J. E., 2020. State-space ARIMA for supply-chain fore- +casting. International Journal of Production Research 58 (3), 818–827. +Svetunkov, I., Kourentzes, N., Ord, J. K., 8 2022. Complex exponential +smoothing. Naval Research Logistics (NRL), 31. +Svetunkov, I., Petropoulos, F., 2018. Old dog, new tricks: a modelling view +of simple moving averages. International Journal of Production Research +56 (18), 6034–6047. +19 + +Taylor, J. W., 2003. Short-term electricity demand forecasting using dou- +ble seasonal exponential smoothing. Journal of the Operational Research +Society 54 (8), 799–805. +Warren M. Persons, 1919. General Considerations and Assumptions. 
The +Review of Economics and Statistics 1 (1), 5–107. +Weller, M., Crone, S. F., November 2012. Supply chain forecasting: Best +practices & benchmarking study. Tech. rep., Lancaster Centre for Fore- +casting. +Wickham, H., Kuhn, M., Vaughan, D., 2022. generics: Common S3 Generics +not Provided by Base R Methods Related to Model Fitting. R package +version 0.1.2. +URL https://CRAN.R-project.org/package=generics +20 + diff --git a/TtAzT4oBgHgl3EQf0v7t/content/tmp_files/load_file.txt b/TtAzT4oBgHgl3EQf0v7t/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..3a5f9b5abcd6353b2f47e328a05ffd822d127e45 --- /dev/null +++ b/TtAzT4oBgHgl3EQf0v7t/content/tmp_files/load_file.txt @@ -0,0 +1,606 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf,len=605 +page_content='Smooth forecasting with the smooth package in R Ivan Svetunkova aCentre for Marketing Analytics and Forecasting, Management Science Department, Lancaster University, UK Abstract There are many forecasting related packages in R with varied popularity, the most famous of all being forecast, which implements several important forecasting approaches, such as ARIMA, ETS, TBATS and others.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' How- ever, the main issue with the existing functionality is the lack of flexibility for research purposes, when it comes to modifying the implemented models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' The R package smooth introduces a new approach to univariate forecasting, implementing ETS and ARIMA models in Single Source of Error (SSOE) state space form and implementing an advanced functionality for experi- ments and time series analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' It builds upon the SSOE model and extends it by including explanatory variables, multiple frequencies, and introducing advanced forecasting instruments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' In this paper, we explain the philosophy behind the package and show how the main functions work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Keywords: forecasting, exponential smoothing, ets, arima, adam, R 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Introduction R (R Core Team, 2022), being one of the most popular programming lan- guages in academia, has many forecasting-related packages, implementing a variety of approaches.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Among the well known ones, is the forecast package (Hyndman and Khandakar, 2008), which implements classical statistical fore- casting models, such as ETS (Error, Trend, Seasonality model based on the single source of error state space framework underlying exponential smooth- ing, Hyndman et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=', 2002, 2008), Theta (Assimakopoulos and Nikolopoulos, ∗Correspondance: I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Svetunkov, Centre for Marketing Analytics and Forecasting, Lan- caster University Management School, Lancaster, Lancashire, LA1 4YX, UK.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Email address: i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='svetunkov@lancaster.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='uk (Ivan Svetunkov) Published online at www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='openforecast.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='org January 6, 2023 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='01790v1 [stat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='ME] 4 Jan 2023 2000), TBATS (De Livera et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=', 2011) and others.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Some of these functions have also been implemented in fable package (O’Hara-Wild et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=', 2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' It also implements the auto.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='arima() function for automatic selection of ARIMA orders.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' There are other packages implementing ARIMA, including stats (R Core Team, 2022), robustarima (Kaluzny and TIBCO Software Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=', 2021), tfarima (Gallego, 2021) and fable (O’Hara-Wild et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=', 2021).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' All these packages implement ready-to-use functions for specific situations and have been proven to work very well, but they do not have flexibility necessary for research purposes in the area of dynamic models and do not present a holistic approach to univariate forecasting models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' In order to address these issues, back in 2016, I developed a smooth package, which implemented models in the Single Source of Error framework and introduced flexibility allowing to conduct advanced experiments in the area of univariate forecasting models (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' using advanced losses, introduc- ing explanatory variables, changing structures of models etc).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' This paper explains the main idea behind the smooth functions, summarises what they are created for and how to use them in forecasting and analytics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Single Source of Error framework The main model underlying the smooth functions is explained in detail in Svetunkov (2022) monograph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' It builds upon Hyndman et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' (2008) model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Here we summarise only the main ideas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' We start with the most popular pure additive model, underlying the majority of functions of smooth package (Svetunkov, 2023).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' This is formulated as: yt =w′vt−l + ϵt vt =Fvt−l + gϵt , (1) where w is the measurement vector, F is the transition matrix, g is the persistence vector, vt−l is the vector of lagged components and l is the vector of lags, defining how each of the components of vt needs to be shifted in time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Unlike the conventional state space model of Hyndman et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' (2008), the one implemented in smooth relies on lagged components rather than their transition.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' In the conventional case, the vector of states vt always depends on the value of the vector on the previous observation, where the transition of signal happens from one component to another according to the matrix F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Both the conventional and the proposed frameworks underlie exactly 2 the same ETS and ARIMA models, but our approach simplifies calculations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' For example, consider the ETS(A,A,A) model, which is written as (Hyndman et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=', 2008): yt =lt−1 +bt−1 +st−m +ϵt lt =lt−1 +bt−1 +αϵt bt = bt−1 +βϵt st = st−m +γϵt .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' (2) where yt is the actual value, lt−1 is the level, bt−1 is the trend, st−m is the seasonal component with periodicity m (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 12 for months of year data, implying that something is repeated every 12 months), α, β and γ are the smoothing parameters and ϵt is an i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' error term.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' According to (1), the model (2) can be written as: yt = � 1 1 1 � � � lt−1 bt−1 st−m � � + ϵt � � lt bt st � � = � � 1 1 0 0 1 0 0 0 1 � � � � lt−1 bt−1 st−m � � + � � α β γ � � ϵt, (3) while in the (Hyndman et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=', 2008) form it would be: yt = � 1 1 1 0 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 0 � � � � � � � � � � lt−1 bt−1 s1,t−1 s2,t−1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' sm,t−1 � � � � � � � � � + ϵt � � � � � � � lt bt s1,t .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' sm,t � � � � � � � = � � � � 1 1 0′ m−1 0 0 1 0′ m−1 0 0 0 0′ m−1 1 0m−1 0m−1 Im−1 0m−1 � � � � � � � � � � � lt−1 bt−1 s1,t−1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' sm,t−1 � � � � � � � + � � � � � � � � � α β γ 0 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 0 � � � � � � � � � ϵt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' (4) Because of the size of matrices in (4) and the recursive nature of the model, applying it on seasonal data with m higher than 24 becomes computationally 3 expensive due to the multiplication of large matrices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' The problem becomes even more serious when a model with multiple seasonal components is needed (see, for example, Taylor, 2003), because it then introduces several seasonal indices, increasing to the size of matrices in (4) even further.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' This issue is resolved in (1), because the introduction of additional components leads to increase of dimensionality proportional to the number of added components.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Note that any extension of the conventional state space model results in the increase of its dimensionality, inevitably leading to computational difficulties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' This is why the proposed model (1) is more viable, flexible and this is why it was used in the development of smooth functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' There is also a more general state space form, covering not only pure additive, but also pure multiplicative and mixed models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' This is discussed in Chapter 4 of Hyndman et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' (2008) and in Section 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='1 of Svetunkov (2022).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' We do not discuss this model here but only point out that the multiplicative and mixed ETS models implemented in smooth are based on it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Furthermore, Snyder (1985) showed that ARIMA model can also be writ- ten in the SSOE state space form, this was then discussed in more detail in Chapter 11 of Hyndman et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' (2008) and afterwards used by Svetunkov and Boylan (2020) to implement ARIMA in (ssarima() and auto.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content='ssarima() functions from the smooth package) and apply it in supply chain context.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Building upon that, Svetunkov (2022) implemented ARIMA (Chapter 9) in the state space form (1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' So, the proposed framework does not stop on ETS models, it can be extended, for example, to include a combination of ETS+ARIMA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' The final piece of the puzzle is the regression model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' It can also be represented in the SSOE state space form (1), as, for example, discussed in Koehler et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' (2012).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' This is discussed in Chapter 9 of Hyndman et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' (2008) and Chapter 10 of Svetunkov (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' All of this means that the state space model (1) presents a unified frame- work for working with ETS, ARIMA, regression and any of their combina- tions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' This is implemented in adam() function of smooth package (Svetunkov, 2023), supporting the following functionality: 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' ETS;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' ARIMA;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Regression;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Time-varying parameters regression;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Combination of (1), (2) and either (3), or (4);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 4 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Automatic selection/combination of states of ETS;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Automatic order selection for ARIMA;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Variables selection for regression;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Normal and non-normal distributions of the error term;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Automatic selection of the most suitable distribution;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Multiple seasonality;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Occurrence part of the model to handle zeroes in data (in case of in- termittent demand);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' Modelling scale of distribution (GARCH-style models, see for example, Engle, 1982);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtAzT4oBgHgl3EQf0v7t/content/2301.01790v1.pdf'} +page_content=' 14.' 
14. Handling uncertainty of estimates of parameters;
15. Forecasting using any of the elements above.

All these topics are covered in Svetunkov (2022), so we will not focus on the adam() function here. However, there are special cases of the model (1), implementing specific functionality in the smooth package. We discuss the most important of them in this paper.

3. Time series decomposition

While the stats package already implements the classical decomposition function, I have created a new one, which can handle multiple seasonal data. It is called msdecompose(). It has exactly the same logic as the classical decomposition (Warren M. Persons, 1919), but can be applied to data with multiple seasonal cycles. A user can define what cycles there are in the data by setting the parameter lags, and choose the type of seasonality via the parameter type. For example, msdecompose() can be applied to the half-hourly electricity demand data taylor from the forecast package in the following way:

taylorDecomp <- msdecompose(taylor, lags=c(48,336), type="m")

which will result in an object that can be used for further analysis. Producing a plot from it would generate several figures (see the documentation of plot.smooth() for details), the most interesting of which, the plot of time series components, is shown in Figure 1.
[Figure 1: Decomposition of multiple seasonal time series according to the msdecompose() function (panels: Actuals, Trend, Seasonal 1, Seasonal 2, Residuals).]

The classical decomposition does not typically produce clear components: the residuals in Figure 1 demonstrate the presence of seasonality because the approach assumes that the seasonal components are constant. Nonetheless, it can be a starting point for time series analysis. In smooth, it is used for the initialisation of states of ETS in the case of seasonal data and is mainly needed when working with multiple seasonal data. However, if a researcher is interested in forecasting with seasonal decomposition, the function produces an object of the class smooth that supports the forecast() method, producing forecasts for the trend component of the decomposed data and then reconstructing the series based on the estimated seasonal components.
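For instance, forecasting from the decomposition can be done by calling forecast() directly on the returned object. The following is a minimal sketch under the assumption that forecast.smooth() accepts the horizon via the h parameter, as it does for the other smooth functions shown later; the horizon of one week (336 half-hour periods) is an arbitrary choice for illustration:

library(smooth)
library(forecast)   # provides the taylor series used above
taylorDecomp <- msdecompose(taylor, lags=c(48,336), type="m")
# Forecast the trend and reconstruct the series with the estimated seasonal indices
taylorForecast <- forecast(taylorDecomp, h=336)
plot(taylorForecast)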
4. Exponential smoothing

Exponential smoothing is one of the most popular forecasting methods used in demand planning (Weller and Crone, 2012). As mentioned earlier, ETS underlies all exponential smoothing methods, and it is considered an academic standard in forecasting (it has been used in all the major forecasting competitions over the years, including Makridakis and Hibon, 2000; Athanasopoulos et al., 2011; Makridakis et al., 2020, 2022).

The conventional ETS model, developed by Hyndman et al. (2008), assumes that the error term follows the normal distribution. It is implemented in the functions ets() from the forecast package and ETS() from fable. Their counterpart in the smooth package is called es(), but it is based on the state space model (1) rather than the conventional one. Furthermore, while ets() supports only 15 ETS models, es() implements all the theoretically possible 30 ETS models. The function also supports fine tuning of the parameters of the model, allowing the user to set the smoothing parameter values via the persistence variable, the initial values via initial, the seasonal indices via initialSeason, and to pre-define the values of parameters for the optimisation via the B parameter. Furthermore, the function supports explanatory variables via the xreg parameter, similar to how it is done in the arima function from the stats package, allowing the user to tune the coefficients for regressors via initialX and to select the most appropriate ones based on information criteria via the Sagaert and Svetunkov (2022) algorithm applied to the residuals of the ETS model, using the regressors parameter.
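As an illustration of the fine tuning described above, the sketch below fixes the smoothing parameters of a specific ETS model via persistence. It assumes that the ETS model code can be passed to es() via a model parameter; the model and the parameter values are chosen arbitrarily for demonstration, not as a recommendation:

library(smooth)
# Fix the level, trend and seasonal smoothing parameters instead of estimating them
esFixed <- es(AirPassengers, model="AAA", persistence=c(0.3, 0.1, 0.1),
              h=12, holdout=TRUE)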
In terms of ETS components selection, the mechanism used by default in es() can be summarised in the following steps (a minimal sketch of these steps in R is given after the list):

1. Apply ETS(A,N,N) to the data, calculate an information criterion (IC);
2. Apply ETS(A,N,A) to the data, calculate IC. If it is lower than in (1), then this means that there is some seasonal component in the data, move to step (3). Otherwise, go to (4);
3. Apply the ETS(M,N,M) model and calculate IC. If it is lower than the previous one, then the data exhibits multiplicative seasonality. Go to (4);
4. Fit the model with the additive trend component and the seasonal component selected in the previous steps, which can be either "N", "A", or "M". Calculate IC for the new model and compare it with the best IC so far. If it is lower than the criteria of the previously applied models, then there is a trend component in the data. If it is not, then the trend component is not needed;
5. Form the pool of models based on steps (1)-(4), apply those models and select the one with the lowest IC.
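The sketch below mimics the first steps of this procedure manually. It assumes that es() accepts the ETS model code via the model parameter and that the standard AIC() method is available for the returned objects; it only illustrates the logic of the comparison, not the internal implementation of es():

library(smooth)
fitANN <- es(AirPassengers, model="ANN", h=12, holdout=TRUE)   # step (1)
fitANA <- es(AirPassengers, model="ANA", h=12, holdout=TRUE)   # step (2)
fitMNM <- es(AirPassengers, model="MNM", h=12, holdout=TRUE)   # step (3)
# Compare information criteria to decide on the seasonal component
c(ANN=AIC(fitANN), ANA=AIC(fitANA), MNM=AIC(fitMNM))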
This approach to components selection can be called branch-and-bound because, instead of going through all possible models, it considers branches of models. For example, if there is no seasonality, then the respective component can be set to "N", thus removing the branch of seasonal models and reducing the pool of models to test from 30 to only 10 (including the models already tested in the four steps).

Similarly to any other smooth function, es() supports several methods, including plot() for visual diagnostics of the model and forecast() for forecasting. To demonstrate how they work, we apply es() to the AirPassengers data from the datasets package:

AirPassengersETS <- es(AirPassengers, h=12, holdout=TRUE)

In the code above, we have specified the forecast horizon of 12 steps ahead and asked to exclude the last 12 observations from the training of the model, thus creating a test set (holdout) to see how the model performs in that part of the data. We can do diagnostics of the model in order to see if it has any obvious issues that could be resolved:

par(mfcol=c(2,2))
plot(AirPassengersETS, c(1,2,4,6))

The resulting plot is shown in Figure 2.

[Figure 2: Diagnostics plots for the ETS(A,M,M) model selected automatically on the AirPassengers data by the es() function (panels: Actuals vs Fitted, Standardised Residuals vs Fitted, |Residuals| vs Fitted, QQ plot of normal distribution).]
We do not aim to resolve the issues of the model in this paper; we merely demonstrate what can be done using smooth functions. The plots allow analysing the residuals for possible issues related to heteroscedasticity, autocorrelation, outliers, wrong specification etc. Fixing the issues can be done by including explanatory variables and/or changing the transformations used in the model. After fixing the potential issues, a researcher can produce forecasts from the estimated model, which is done using the forecast() method from the generics package (Wickham et al., 2022). But unlike the forecasting for ets() and ETS(), the one from smooth supports several options, allowing the user to choose between a variety of prediction intervals (see the documentation of the forecast.smooth() method), to produce a one-sided interval (which is useful in the case of pure multiplicative models on low-volume data, where the lower bound is typically equal to zero) and to generate cumulative forecasts (which is useful in the case of safety stock calculation in inventory management). We will use the default values of the parameters, producing the parametric prediction interval:

plot(forecast(AirPassengersETS, h=12))

The code above will result in the plot in Figure 3.

[Figure 3: Forecast for the AirPassengers data produced by the es() function, showing the series, fitted values, point forecast, 95% prediction intervals and the forecast origin.]

Figure 3 shows how the selected model fits the data, what point forecast it produces (solid bold blue line in the holdout part) and what prediction intervals it generated (a grey area in the holdout).

Continuing the theme of exponential smoothing, smooth also implements the Complex Exponential Smoothing of Svetunkov et al. (2022) via the ces() function, which has functionality similar to es() and supports the same set of methods.
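For completeness, here is a brief sketch of how CES could be applied to the same data; it assumes that auto.ces() (used later in the benchmarking section) accepts the same h and holdout parameters as the other smooth functions:

library(smooth)
AirPassengersCES <- auto.ces(AirPassengers, h=12, holdout=TRUE)
plot(forecast(AirPassengersCES, h=12))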
Finally, as mentioned earlier, adam() implements the ETS model as well and supports much more functionality. The main difference between the default ETS in adam() and es() is that the former supports distributions other than the normal and, by default, uses the Gamma distribution in the case of multiplicative error models.

5. ARIMA

Another important model, which is often used in forecasting, is ARIMA (Box and Jenkins, 1976). There are several functions implementing ARIMA in the SSOE state space form in the smooth package. The ssarima() (State Space ARIMA) function implements a state space ARIMA in the form discussed in Chapter 11 of Hyndman et al. (2008). The function that implements the order selection for State Space ARIMA is called auto.ssarima(). It does not rely on any statistical tests and selects orders based on information criteria. Both the model and the selection mechanism are explained in Svetunkov and Boylan (2020).

The msarima() (Multiple Seasonal ARIMA) function relies on the state space model (1), introducing lagged components and thus substantially reducing the size of the transition matrix. This allows applying large multiple seasonal ARIMA models to the data. A thing to note is that, because of this, the transition matrix, measurement, and state vectors of this model are formed differently than in Hyndman et al. (2008).
In the general case, they are (Svetunkov, 2022, Chapter 9):

\mathbf{F} = \begin{pmatrix} \eta_1 & \eta_1 & \dots & \eta_1 \\ \eta_2 & \eta_2 & \dots & \eta_2 \\ \vdots & \vdots & \ddots & \vdots \\ \eta_K & \eta_K & \dots & \eta_K \end{pmatrix}, \quad
\mathbf{w} = \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}, \quad
\mathbf{g} = \begin{pmatrix} \eta_1 + \theta_1 \\ \eta_2 + \theta_2 \\ \vdots \\ \eta_K + \theta_K \end{pmatrix}, \quad
\mathbf{v}_t = \begin{pmatrix} v_{1,t} \\ v_{2,t} \\ \vdots \\ v_{K,t} \end{pmatrix}, \quad
\mathbf{l} = \begin{pmatrix} 1 \\ 2 \\ \vdots \\ K \end{pmatrix},    (5)

where \eta_j is the jth polynomial for the ARI part of the model, \theta_j is the jth MA parameter and K is the number of ARI/MA polynomials (whichever is the highest). To better understand how this model is formulated, consider the example of ARIMA(1,1,2), which can be written as:

(1 - \phi_1 B)(1 - B) y_t = (1 + \theta_1 B + \theta_2 B^2) \epsilon_t,    (6)

where B is the backshift operator. This model can be written in the state space form (see Chapter 9 of Svetunkov, 2022, for derivations):

y_t = v_{1,t-1} + v_{2,t-2} + \epsilon_t
v_{1,t} = (1 + \phi_1)(v_{1,t-1} + v_{2,t-2}) + (1 + \phi_1 + \theta_1) \epsilon_t
v_{2,t} = -\phi_1 (v_{1,t-1} + v_{2,t-2}) + (-\phi_1 + \theta_2) \epsilon_t.    (7)

In order to see that the model (7) can be represented in the form (1), we need to set the following matrices and vectors:

\mathbf{F} = \begin{pmatrix} 1+\phi_1 & 1+\phi_1 \\ -\phi_1 & -\phi_1 \end{pmatrix}, \quad
\mathbf{w} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad
\mathbf{g} = \begin{pmatrix} 1+\phi_1+\theta_1 \\ -\phi_1+\theta_2 \end{pmatrix}, \quad
\mathbf{v}_t = \begin{pmatrix} v_{1,t} \\ v_{2,t} \end{pmatrix}, \quad
\mathbf{l} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.    (8)

Finally, as mentioned earlier, the adam() function supports ARIMA as well, in the same form as msarima() (note that in order to switch off the ETS part of the model in adam(), a user needs to specify model="NNN"). All three functions have a similar syntax for ARIMA, where a user needs to define the seasonal lags of the model via the lags vector, listing all seasonal frequencies that the model should have, and the orders of the model via the orders variable, which in general accepts a named list of the style orders=list(ar=c(1,2,3), i=c(1,2,3), ma=c(1,2,3)), defining the order of the AR, I and MA parts of the model for the respective lags. The ARIMA orders are designed this way to allow researchers to introduce as many lags as they need, supporting, for example, double and triple seasonal ARIMA. Note that, due to its formulation, ssarima() cannot handle high-frequency data and will slow down with the increase of the seasonal lag m. Here is an example of a user-defined SARIMA(0,2,2)(0,2,2)12 model applied to the same AirPassengers data:
AirPassengersARIMA <- msarima(AirPassengers, lags=c(1,12),
                              orders=list(i=c(2,2), ma=c(2,2)),
                              h=12, holdout=TRUE)

In order to see how the model fits the data, we can use the plot function, specifying which=7:

plot(AirPassengersARIMA, 7)

after which we will get the plot shown in Figure 4.

[Figure 4: Forecast for the AirPassengers data produced by the msarima() function, fitted as SARIMA(0,2,2)(0,2,2)[12].]

Furthermore, all the smooth functions support one of three mechanisms of initialisation:

1. Optimisation – the initial values of the state vector are estimated during the optimisation stage;
2. Backcasting – the initial values are produced by applying the model with the optimised parameters to the reversed data, going recursively from the last observation to the first one;
3. Manual – the initial values are provided by the user.

These are regulated via the initial parameter in the functions. In the case of ARIMA, given the complexity of the task, initial="backcasting" typically works faster and more efficiently than the other two approaches.

If a researcher needs an ARIMA model with automatically selected orders, they can use auto.ssarima() or auto.msarima(), which will do that by minimising the selected information criterion using the procedure described in Svetunkov and Boylan (2020) and in Section 15.2 of Svetunkov (2022). In the case of adam(), the automatic selection mechanism is switched on via the addition of select=TRUE in the list for the orders parameter.
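As a sketch of the automatic order selection just described, both routes could look as follows; the maximum orders passed to adam() simply reuse the values quoted later in the benchmarking section and are not a recommendation:

library(smooth)
# Automatic order selection for State Space ARIMA via information criteria
AirPassengersAutoARIMA <- auto.msarima(AirPassengers, lags=c(1,12), h=12, holdout=TRUE)
# The same idea via adam(): switch off the ETS part and let it select the ARIMA orders
AirPassengersADAMARIMA <- adam(AirPassengers, model="NNN", lags=c(1,12),
                               orders=list(ar=c(3,2), i=c(2,1), ma=c(3,2), select=TRUE),
                               h=12, holdout=TRUE)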
ARIMA models produced using the three functions above support all the methods available for the other smooth functions, including plot(), actuals(), fitted(), residuals() and forecast().

6. Simulation functions

Another important set of functions supported by the smooth package is the simulation functions. They allow generating data from an assumed model. There are several functions in the package:

1. sim.es() allows generating data from the selected ETS model with defined persistence, initial and initialSeason parameters;
2. sim.ces() generates data from the Complex Exponential Smoothing DGP;
3. sim.ssarima() generates data from an ARIMA model, allowing the user to define the order of the model, the AR and MA parameters and the value of the constant term (either intercept or drift, depending on the order of differences).

If the parameters are not specified, they will be picked at random. All the functions above support a variety of distributions for the error term, also allowing manually created ones to be applied. Here is an example of how to do the latter:

customFunction <- function(n, mu, sd){
  return(log(abs(rnorm(n, mu, sd))))
}
x <- sim.es("ANN", obs=100, randomizer="customFunction", mu=0, sd=1)
The simulation functions allow generating as many series as needed, which is regulated via the nsim parameter.

Finally, the package also implements the simulate() method, which extracts the parameters from an already estimated model to generate simulated data from it. In order to see how it works, we generate data from the AirPassengersETS model, estimated in Section 4:

x <- simulate(AirPassengersETS, obs=120, nsim=5)
plot(x)

The code above will generate five time series, and each one of them would look similar to the one shown in Figure 5.

[Figure 5: Simulated data from the AirPassengersETS model (panel title: ETS(A,M,M)).]

As can be seen from the plot in Figure 5, the generated time series exhibits behaviour similar to the original time series. It even has a similar seasonal shape, but it has a different trend, increasing more slowly than in the original data.

7. Other functions

There are several other functions implemented in the package that are outside of the scope of this paper. Nonetheless, two of them are worth mentioning. There is a Simple Moving Averages (SMA) function, sma(), implemented in the state space model (1). This is based on the paper of Svetunkov and Petropoulos (2018), who showed that SMA(p) has an underlying AR(p) process with parameters restricted to φ_j = 1/p for all j = 1, ..., p. The function also supports automatic order selection via information criteria, as discussed in the original paper.
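A minimal sketch of using this function is given below; it assumes that sma() accepts the same h and holdout arguments as the other smooth functions and selects the order automatically when none is supplied:

library(smooth)
AirPassengersSMA <- sma(AirPassengers, h=12, holdout=TRUE)
plot(forecast(AirPassengersSMA, h=12))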
Another important function is oes(), which implements the occurrence part of the model in the case of intermittent demand. This is discussed in detail in Chapter 13 of Svetunkov (2022) and is based on Svetunkov and Boylan (2019).

Last but not least, the smooth package has extensive vignettes with examples of the application of almost all functions. They are available, for instance, on CRAN: https://cran.r-project.org/package=smooth.

8. Benchmarking of smooth functions

Finally, to demonstrate how the smooth functions work, we conduct an experiment on the M1 (Makridakis et al., 1982), M3 (Makridakis and Hibon, 2000) and Tourism (Athanasopoulos et al., 2011) competition data, where we evaluate seven models:

1. ADAM ETS – ETS model estimated via the adam() function;
2. ADAM ARIMA – ARIMA model estimated via the adam() function. We set model="NNN" to switch off the ETS part of the model and use the following command to set the maximum ARIMA orders to check: orders=list(ar=c(3,2), i=c(2,1), ma=c(3,2), select=TRUE);
3. ES – ETS model implemented in the es() function, which is just a wrapper of adam();
4. SSARIMA – State Space ARIMA model estimated via auto.ssarima();
5. CES – Complex Exponential Smoothing implemented in the auto.ces() function from the smooth package;
6. ETS – ETS model implemented in the ets() function from the forecast package;
7. ARIMA – ARIMA selected using the auto.arima() function from the forecast package.

We do not include msarima() in the experiment because the datasets under consideration do not contain multiple seasonal time series. We have used the default values of the parameters in all the functions. The forecasts were produced for each time series in the datasets for the horizons used in the original competitions, for the part of the data not visible to the models. We produced point forecasts and 95% prediction intervals for that part of the series and evaluated the performance of the models using the following measures:

MASE – Mean Absolute Scaled Error by Hyndman and Koehler (2006);
RMSSE – Root Mean Scaled Squared Error introduced in Makridakis et al. (2022);
Coverage – percentage of observations in the holdout lying in the produced 95% prediction interval;
sMIS – scaled Mean Interval Score from Makridakis et al. (2022);
Time – computational time in seconds spent on estimation and forecast generation for each series.

For MASE, RMSSE, sMIS and Time, the lower the value, the better. For the Coverage, the closer the value is to the nominal 95%, the better. The results of this experiment are summarised in Table 1. Note that they might vary from one run to another because the forecasts from some of the functions rely on simulations.
              MASE   RMSSE  Coverage  sMIS   Time
ADAM ETS      2.222  1.935  0.885     2.122  0.386
ES            2.224  1.939  0.898     2.196  0.477
CES           2.271  1.958  0.812     3.465  0.236
ETS           2.263  1.970  0.882     2.258  0.409
ARIMA         2.300  1.987  0.834     3.007  1.425
ADAM ARIMA    2.371  2.048  0.843     3.126  1.376
SSARIMA       2.480  2.133  0.802     3.356  1.811

Table 1: Error measures for each of the models evaluated on the M1, M3 and Tourism competitions, aggregated using mean values. The boldface (in the original typeset table) indicates the best performing models, while the italic indicates the second best ones.

As can be seen from Table 1, ADAM ETS outperforms all the other models in terms of MASE, RMSSE and sMIS, although the difference between it and the other ETS implementations does not look substantial.
Note that it works slightly slower than CES. The ETS from the forecast package performs slightly worse than the smooth implementations on these datasets. Comparing the ARIMA implementations, the one from auto.arima() is more accurate than ADAM ARIMA and SSARIMA and faster than SSARIMA, although it was not able to beat the ETS models. Note, however, that ARIMA produces lower coverage than ADAM ARIMA does and works slower than it.

This example demonstrates that the developed functions work efficiently and can be applied to a wide variety of time series. Table 1 summarises overall aggregate performance, which does not mean that the winning models always perform the best. Their performance will vary from one series to another, and in some instances the models that performed poorly in this experiment would perform much better (for example, SSARIMA performed very well on supply chain data with a short history, as discussed in Svetunkov and Boylan, 2020).

9. Conclusions

In this paper, I have discussed the philosophy behind the models implemented in the smooth package for R. The state space model used in the functions differs from the conventional one, allowing more components to be introduced and more complex models to be used efficiently. We have discussed how ETS and ARIMA are implemented in this framework and what an analyst can achieve with them. Finally, we have demonstrated how the models implemented in the smooth functions perform on the example of the M1, M3 and Tourism competitions data.

This paper merely introduced the framework, the models and the functions.
As mentioned earlier, the main idea of the smooth functions is to give a researcher flexibility. A reader interested in learning more about the framework is advised to read the online monograph of Svetunkov (2022) and to study the examples in the vignettes of the smooth package in R (Svetunkov, 2023).
References
Assimakopoulos, V., Nikolopoulos, K., 2000. The theta model: a decomposition approach to forecasting. International Journal of Forecasting 16, 521–530.
Athanasopoulos, G., Hyndman, R. J., Song, H., Wu, D. C., 2011. The tourism forecasting competition. International Journal of Forecasting 27 (3), 822–844.
Box, G., Jenkins, G., 1976. Time Series Analysis: Forecasting and Control. Holden-Day, Oakland, California.
De Livera, A. M., Hyndman, R. J., Snyder, R. D., 2011. Forecasting time series with complex seasonal patterns using exponential smoothing. Journal of the American Statistical Association 106 (496), 1513–1527.
Engle, R. F., 1982. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica 50 (4), 987.
Gallego, J. L., 2021. tfarima: Transfer Function and ARIMA Models. R package version 0.2.1. URL https://CRAN.R-project.org/package=tfarima
Hyndman, R. J., Khandakar, Y., 2008. Automatic time series forecasting: the forecast package for R. Journal of Statistical Software 26 (3), 1–22.
Hyndman, R. J., Koehler, A. B., 2006. Another look at measures of forecast accuracy. International Journal of Forecasting 22 (4), 679–688.
Hyndman, R. J., Koehler, A. B., Ord, J. K., Snyder, R. D., 2008. Forecasting with Exponential Smoothing. Springer Berlin Heidelberg.
Hyndman, R. J., Koehler, A. B., Snyder, R. D., Grose, S., 2002. A state space framework for automatic forecasting using exponential smoothing methods. International Journal of Forecasting 18 (3), 439–454.
Kaluzny, S., TIBCO Software Inc., 2021. robustarima: Robust ARIMA Modeling. R package version 0.2.6. URL https://CRAN.R-project.org/package=robustarima
Koehler, A. B., Snyder, R. D., Ord, J. K., Beaumont, A., 2012. A study of outliers in the exponential smoothing approach to forecasting. International Journal of Forecasting 28 (2), 477–484.
Makridakis, S., Andersen, A. P., Carbone, R., Fildes, R., Hibon, M., Lewandowski, R., Newton, J., Parzen, E., Winkler, R. L., 1982. The accuracy of extrapolation (time series) methods: results of a forecasting competition. Journal of Forecasting 1 (2), 111–153.
Makridakis, S., Hibon, M., 2000. The M3-Competition: results, conclusions and implications. International Journal of Forecasting 16, 451–476.
Makridakis, S., Spiliotis, E., Assimakopoulos, V., 2020. The M4 Competition: 100,000 time series and 61 forecasting methods. International Journal of Forecasting 36 (1), 54–74.
Makridakis, S., Spiliotis, E., Assimakopoulos, V., 2022. M5 accuracy competition: results, findings, and conclusions. International Journal of Forecasting 38 (4), 1346–1364.
O'Hara-Wild, M., Hyndman, R., Wang, E., 2021. fable: Forecasting Models for Tidy Time Series. R package version 0.3.1. URL https://CRAN.R-project.org/package=fable
R Core Team, 2022. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/
Sagaert, Y., Svetunkov, I., 2022. Trace forward stepwise: automatic selection of variables in no time.
Snyder, R. D., 1985. Recursive estimation of dynamic linear models. Journal of the Royal Statistical Society, Series B (Methodological) 47 (2), 272–276.
Svetunkov, I., 2022. Forecasting and Analytics with ADAM. Monograph. OpenForecast (version: 2022-04-18). URL https://openforecast.org/adam/
Svetunkov, I., 2023. smooth: Forecasting Using State Space Models. R package version 3.2.0. URL https://github.com/config-i1/smooth
Svetunkov, I., Boylan, J., 2019. Multiplicative state-space models for intermittent time series.
Svetunkov, I., Boylan, J. E., 2020. State-space ARIMA for supply-chain forecasting. International Journal of Production Research 58 (3), 818–827.
Svetunkov, I., Kourentzes, N., Ord, J. K., 2022. Complex exponential smoothing. Naval Research Logistics (NRL), 31.
Svetunkov, I., Petropoulos, F., 2018. Old dog, new tricks: a modelling view of simple moving averages. International Journal of Production Research 56 (18), 6034–6047.
Taylor, J. W., 2003. Short-term electricity demand forecasting using double seasonal exponential smoothing. Journal of the Operational Research Society 54 (8), 799–805.
Persons, W. M., 1919. General considerations and assumptions. The Review of Economics and Statistics 1 (1), 5–107.
Weller, M., Crone, S. F., 2012. Supply chain forecasting: best practices & benchmarking study. Tech. rep., Lancaster Centre for Forecasting.
Wickham, H., Kuhn, M., Vaughan, D., 2022. generics: Common S3 Generics not Provided by Base R Methods Related to Model Fitting. R package version 0.1.2. URL https://CRAN.R-project.org/package=generics
diff --git a/TtAzT4oBgHgl3EQfXvxz/content/2301.01323v1.pdf b/TtAzT4oBgHgl3EQfXvxz/content/2301.01323v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..225a9a3cfcc7e18c6e36cf7da96a1b7df8d9deb5
--- /dev/null
+++ b/TtAzT4oBgHgl3EQfXvxz/content/2301.01323v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ede8edaf432395b08ca91fb0618713686304a76e8d4f3ed34319b426d53c917d
+size 464851
diff --git a/TtAzT4oBgHgl3EQfXvxz/vector_store/index.faiss b/TtAzT4oBgHgl3EQfXvxz/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..66f414f4b0cd00672d738e033455547350dada1f
--- /dev/null
+++ b/TtAzT4oBgHgl3EQfXvxz/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c39a7afd690f5dd5f8a9c31f15d924bc9d8ef4840d503d01e1066edf1a827b69
+size 5701677
diff --git a/UNAyT4oBgHgl3EQf8fqv/content/tmp_files/2301.00858v1.pdf.txt b/UNAyT4oBgHgl3EQf8fqv/content/tmp_files/2301.00858v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..103eccfe95e3cd2c9d09e38f455e79c9adb180f0
--- /dev/null
+++ b/UNAyT4oBgHgl3EQf8fqv/content/tmp_files/2301.00858v1.pdf.txt
@@ -0,0 +1,3246 @@
Robust Average-Reward Markov Decision Processes
Yue Wang,1 Alvaro Velasquez,2 George Atia,3 Ashley Prater-Bennette,4 Shaofeng Zou1
1 University at Buffalo, The State University of New York
2 Information Innovation Office, Defense Advanced Research Projects Agency
3 University of Central Florida
4 Air Force Research Laboratory
ywang294@buffalo.edu, alvaro.velasquez@darpa.mil, george.atia@ucf.edu, ashley.prater-bennette@us.af.mil, szou3@buffalo.edu
Abstract
In robust Markov decision processes (MDPs), the uncertainty in the transition kernel is addressed by finding a policy that optimizes the worst-case performance over an uncertainty set of MDPs. While much of the literature has focused on discounted MDPs, robust average-reward MDPs remain largely unexplored. In this paper, we focus on robust average-reward MDPs, where the goal is to find a policy that optimizes the worst-case average reward over an uncertainty set. We first take an approach that approximates average-reward MDPs using discounted MDPs. We prove that the robust discounted value function converges to the robust average-reward as the discount factor γ goes to 1, and moreover, when γ is large, any optimal policy of the robust discounted MDP is also an optimal policy of the robust average-reward MDP. We further design a robust dynamic programming approach, and theoretically characterize its convergence to the optimum. We then investigate robust average-reward MDPs directly, without using discounted MDPs as an intermediate step. We derive the robust Bellman equation for robust average-reward MDPs, prove that the optimal policy can be derived from its solution, and further design a robust relative value iteration algorithm that provably finds its solution, or equivalently, the optimal robust policy.
Introduction
A Markov decision process (MDP) is an effective mathematical tool for sequential decision-making in stochastic environments (Derman 1970; Puterman 1994).
Solving an MDP problem entails finding an optimal policy that maximizes a cumulative reward according to a given criterion. However, in practice there could exist a mismatch between the assumed MDP model and the underlying environment due to various factors, such as non-stationarity of the environment, modeling error, exogenous perturbation, partial observability, and adversarial attacks. The ensuing model mismatch could result in solution policies with poor performance.
This challenge spurred noteworthy efforts on developing and analyzing a framework of robust MDPs, e.g., (Bagnell, Ng, and Schneider 2001; Nilim and El Ghaoui 2004; Iyengar 2005). Rather than adopting a fixed MDP model, in the robust MDP setting one seeks to optimize the worst-case performance over an uncertainty set of possible MDP models. The solution to the robust MDP problem provides a performance guarantee for all uncertain MDP models, and is thus robust to the model mismatch.
Robust MDP problems falling under different reward optimality criteria are fundamentally different. In robust discounted MDPs, the goal is to find a policy that maximizes the discounted cumulative reward in the worst case. In this setting, as the agent interacts with the environment, the reward received diminishes exponentially over time. Much of the prior work in the robust setting has focused on the discounted reward formulation. The model-based method, e.g., (Iyengar 2005; Nilim and El Ghaoui 2004; Bagnell, Ng, and Schneider 2001; Satia and Lave Jr 1973; Wiesemann, Kuhn, and Rustem 2013; Tamar, Mannor, and Xu 2014; Lim and Autef 2019; Xu and Mannor 2010; Yu and Xu 2015; Lim, Xu, and Mannor 2013), where information about the uncertainty set is assumed to be known to the learner, unveiled several fundamental characterizations of robust discounted MDPs. This was further extended to the more practical model-free setting, in which only samples from a simulator (the centroid of the uncertainty set) are available to the learner. For example, the value-based method (Roy, Xu, and Pokutta 2017; Badrinath and Kalathil 2021; Wang and Zou 2021; Tessler, Efroni, and Mannor 2019; Zhou et al. 2021; Yang, Zhang, and Zhang 2021; Panaganti and Kalathil 2021; Goyal and Grand-Clement 2018; Kaufman and Schaefer 2013; Ho, Petrik, and Wiesemann 2018, 2021; Si et al. 2020) optimizes the worst-case performance using the robust value function as an intermediate step; on the other hand, the model-free policy-based method (Russel, Benosman, and Van Baar 2020; Derman, Geist, and Mannor 2021; Eysenbach and Levine 2021; Wang and Zou 2022) directly optimizes the policy and is thus scalable to large/continuous state and action spaces.
Although discounted MDPs induce an elegant Bellman operator that is a contraction and have been studied extensively, the policy obtained usually has poor long-term performance when a system operates for an extended period of time. When the discount factor is very close to 1, the agent may prefer to compare policies on the basis of their average expected reward instead of their expected total discounted reward, e.g., in queueing control, inventory management in supply chains, scheduling automatic guided vehicles, and applications in communication networks (Kober, Bagnell, and Peters 2013).
Therefore, it is also important to optimize the long-term average performance of a system.
However, robust MDPs under the average-reward criterion are largely understudied. Compared to the discounted setting, the average-reward setting depends on the limiting behavior of the underlying stochastic process, and hence is markedly more intricate. A recognized instance of such intricacy concerns the one-to-one correspondence between the stationary policies and the limit points of state-action frequencies, which, while true for discounted MDPs, breaks down under the average-reward criterion even in the non-robust setting, except in some very special cases (Puterman 1994; Atia et al. 2021). This is largely due to the dependence of the necessary conditions for establishing a contraction in average-reward settings on the graph structure of the MDP, versus the discounted-reward setting, where it simply suffices to have a discount factor that is strictly less than one. Heretofore, only a handful of studies have considered average-reward MDPs in the robust setting. The first work, by (Tewari and Bartlett 2007), considers robust average-reward MDPs under a specific finite-interval uncertainty set, but their method is not easily applicable to other uncertainty sets. More recently, (Lim, Xu, and Mannor 2013) proposed an algorithm for robust average-reward MDPs under the ℓ1 uncertainty set. However, obtaining fundamental characterizations of the problem and convergence guarantees remains elusive.
Challenges and Contributions
In this paper, we derive characterizations of robust average-reward MDPs with general uncertainty sets, and develop model-based approaches with provable theoretical guarantees. Our approach is fundamentally different from previous work on robust discounted MDPs and on robust and non-robust average-reward MDPs. In particular, the key challenges and the main contributions are summarized below.
• We characterize the limiting behavior of the robust discounted value function as the discount factor γ → 1. For the standard non-robust setting and a specific transition kernel, the discounted non-robust value function converges to the average-reward non-robust value function as γ → 1 (Puterman 1994). However, in the robust setting, we need to consider the worst-case limiting behavior under all possible transition kernels in the uncertainty set. Hence, the previous point-wise convergence result (Puterman 1994) cannot be directly applied. In (Tewari and Bartlett 2007), a finite-interval uncertainty set is studied, where, due to its special structure, the number of possible worst-case transition kernels of robust discounted MDPs is finite; hence the order of the min (over the transition kernel) and lim_{γ→1} can be exchanged, and therefore the robust discounted value function converges to the robust average-reward value function. This result, however, does not hold for the general uncertainty sets investigated in this paper. We first prove the uniform convergence of the discounted non-robust value function to the average-reward w.r.t. the transition kernels and policies. Based on this uniform convergence, we show the convergence of the robust discounted value function to the robust average-reward. This uniform convergence result is the first in the literature and is of key importance to motivate our algorithm design and to guarantee convergence to the optimal robust policy in the average-reward setting.
• We design algorithms for robust policy evaluation and optimal control based on the limit method. Based on the uniform convergence, we then use robust discounted MDPs to approximate robust average-reward MDPs. We show that when γ is large, any optimal policy of the robust discounted MDP is also an optimal policy of the robust average-reward MDP, and hence solves the robust optimal control problem in the average-reward setting. This result is similar to Blackwell optimality (Blackwell 1962; Hordijk and Yushkevich 2002) for the non-robust setting; however, our proof is fundamentally different. Technically, the proof in (Blackwell 1962; Hordijk and Yushkevich 2002) is based on the fact that the difference between the discounted value functions of two policies is a rational function of the discount factor, which has a finite number of zeros. However, in the robust setting with a general uncertainty set, the difference is no longer a rational function, due to the min over the transition kernel. We construct a novel proof based on the limiting behavior of robust discounted MDPs, and show that the (optimal) robust discounted value function converges to the (optimal) robust average-reward as γ → 1. Motivated by these insights, we then design our algorithms by applying a sequence of robust discounted Bellman operators while increasing the discount factor at a certain rate. We prove that our method can (i) evaluate the robust average-reward for a given policy; and (ii) find the optimal robust value function and, in turn, the optimal robust policy for general uncertainty sets.
• We design a robust relative value iteration method that does not use discounted MDPs as an intermediate step. We further pursue a direct approach that solves robust average-reward MDPs without using the limit method, i.e., without using discounted MDPs as an intermediate step. We derive a robust Bellman equation for robust average-reward MDPs, and show that the pair of the robust relative value function and the robust average-reward is a solution to the robust Bellman equation under the average-reward setting. We further prove that if we can find any solution to the robust Bellman equation, then the optimal policy can be derived by a greedy approach. The problem can hence be equivalently solved by solving the robust Bellman equation. We then design a robust value iteration method that provably converges to the solution of the robust Bellman equation, i.e., finds the optimal policy for the robust average-reward MDP problem.
Related Work
Robust discounted MDPs. Model-based methods for robust discounted MDPs were studied in (Iyengar 2005; Nilim and El Ghaoui 2004; Bagnell, Ng, and Schneider 2001; Satia and Lave Jr 1973; Wiesemann, Kuhn, and Rustem 2013; Lim and Autef 2019; Xu and Mannor 2010; Yu and Xu 2015; Lim, Xu, and Mannor 2013; Tamar, Mannor, and Xu 2014), where the uncertainty set is assumed to be known, and the problem can be solved using robust dynamic programming. Later, these studies were generalized to the model-free setting, where stochastic samples from the centroid MDP of the uncertainty set are available in an online fashion (Roy, Xu, and Pokutta 2017; Badrinath and Kalathil 2021; Wang and Zou 2021, 2022; Tessler, Efroni, and Mannor 2019) and in an offline fashion (Zhou et al. 2021; Yang, Zhang, and Zhang 2021; Panaganti and Kalathil 2021; Goyal and Grand-Clement 2018; Kaufman and Schaefer 2013; Ho, Petrik, and Wiesemann 2018, 2021; Si et al. 2020).
There are also empirical studies on robust RL, e.g., (Vinitsky et al. 2020; Pinto et al. 2017; Abdullah et al. 2019; Hou et al. 2020; Rajeswaran et al. 2017; Huang et al. 2017; Kos and Song 2017; Lin et al. 2017; Pattanaik et al. 2018; Mandlekar et al. 2017). For discounted MDPs, the robust Bellman operator is a contraction, based on which robust dynamic programming and value-based methods can be designed. In this paper, we focus on robust average-reward MDPs. However, the robust Bellman operator for average-reward MDPs is not a contraction, and its fixed point may not be unique. Moreover, the average-reward setting depends on the limiting behavior of the underlying stochastic process, and is thus more intricate.
Robust average-reward MDPs. Studies on robust average-reward MDPs are quite limited in the literature. Robust average-reward MDPs under a specific finite-interval uncertainty set were studied in (Tewari and Bartlett 2007), where the authors showed the existence of a Blackwell optimal policy, i.e., there exists some δ ∈ [0, 1) such that the optimal robust policy exists and remains unchanged for any discount factor γ ∈ [δ, 1). However, this result depends on the structure of the uncertainty set. For general uncertainty sets, the existence of a Blackwell optimal policy may not be guaranteed. More recently, (Lim, Xu, and Mannor 2013) designed a model-free algorithm for a specific ℓ1-norm uncertainty set and characterized its regret bound. However, their method also relies on the structure of the ℓ1-norm uncertainty set, and may not be generalizable to other types of uncertainty sets. In this paper, our results can be applied to various types of uncertainty sets, and are thus more general.
Preliminaries and Problem Model
In this section, we introduce some preliminaries on discounted MDPs, average-reward MDPs, and robust MDPs.
Discounted MDPs. A discounted MDP (S, A, P, r, γ) is specified by a state space S, an action space A, a transition kernel P = {p^a_s ∈ ∆(S) : a ∈ A, s ∈ S}, where p^a_s is the distribution of the next state over S upon taking action a in state s (with p^a_{s,s'} denoting the probability of transitioning to s'), a reward function r : S × A → [0, 1], and a discount factor γ ∈ [0, 1). Here ∆(S) denotes the (|S| − 1)-dimensional probability simplex on S. At each time step t, the agent at state s_t takes an action a_t, the environment then transitions to the next state s_{t+1} according to p^{a_t}_{s_t}, and produces a reward signal r(s_t, a_t) ∈ [0, 1] to the agent. In this paper, we also write r_t = r(s_t, a_t) for convenience.
A stationary policy π : S → ∆(A) is a distribution over A for any given state s, and the agent takes action a at state s with probability π(a|s). The discounted value function of a stationary policy π starting from s ∈ S is defined as the expected discounted cumulative reward obtained by following policy π:
V^π_{P,γ}(s) ≜ E_{π,P}[ Σ_{t=0}^{∞} γ^t r_t | S_0 = s ].
Average-Reward MDPs. Different from discounted MDPs, average-reward MDPs do not discount the reward over time, and consider the behavior of the underlying Markov process under the steady-state distribution. More specifically, under a specific transition kernel P, the average reward of a policy π starting from s ∈ S is defined as
g^π_P(s) ≜ lim_{n→∞} E_{π,P}[ (1/n) Σ_{t=0}^{n−1} r_t | S_0 = s ],    (1)
which we also refer to in this paper as the average-reward value function for convenience.
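To make eq. (1) concrete, the following short Python sketch (our own illustration; the MDP numbers are made up) approximates the average reward of a fixed policy under a fixed kernel by averaging the expected rewards E_{π,P}[r_t | S_0 = s] over a long horizon, i.e., by truncating the limit in eq. (1).

```python
import numpy as np

# Small 2-state, 2-action MDP with a fixed (non-robust) transition kernel.
P = np.array([  # P[s, a, s']: nominal transition kernel
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.5, 0.5], [0.3, 0.7]],
])
r = np.array([[1.0, 0.0],    # r[s, a] in [0, 1]
              [0.2, 0.6]])
pi = np.array([[0.5, 0.5],   # pi[s, a]: stationary policy
               [1.0, 0.0]])

P_pi = np.einsum('sa,sax->sx', pi, P)   # policy-induced transition matrix
r_pi = np.einsum('sa,sa->s', pi, r)     # policy-induced expected reward

n = 10_000                               # truncation horizon for the limit in eq. (1)
g = np.zeros(2)
v = r_pi.copy()                          # E[r_t | S_0 = s] at t = 0
for _ in range(n):
    g += v / n                           # accumulate the running average
    v = P_pi @ v                         # expected reward one step further ahead
print("estimated average reward g^pi_P per start state:", g)
```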
The average-reward value function can also be equivalently written as g^π_P = lim_{n→∞} (1/n) Σ_{t=0}^{n−1} (P_π)^t r_π ≜ P*_π r_π, where (P_π)_{s,s'} ≜ Σ_a π(a|s) p^a_{s,s'} and r_π(s) ≜ Σ_a π(a|s) r(s, a) are the transition matrix and reward function induced by π, and P*_π ≜ lim_{n→∞} (1/n) Σ_{t=0}^{n−1} (P_π)^t is the limit matrix of P_π.
In the average-reward setting, we also define the following relative value function:
V^π_P(s) ≜ E_{π,P}[ Σ_{t=0}^{∞} (r_t − g^π_P) | S_0 = s ],    (2)
which is the cumulative difference over time between the reward and the average value g^π_P. It has been shown (Puterman 1994) that V^π_P = H^π_P r_π, where H^π_P ≜ (I − P_π + P*_π)^{−1}(I − P*_π) is the deviation matrix of P_π.
The relationship between the average-reward and the relative value functions can be characterized by the following Bellman equation (Puterman 1994):
V^π_P(s) = E_π[ r(s, A) − g^π_P(s) + Σ_{s'∈S} p^A_{s,s'} V^π_P(s') ].    (3)
Robust discounted and average-reward MDPs. For robust MDPs, the transition kernel is not fixed but belongs to some uncertainty set 𝒫. After the agent takes an action, the environment transitions to the next state according to an arbitrary transition kernel P ∈ 𝒫. In this paper, we focus on the (s, a)-rectangular uncertainty set (Nilim and El Ghaoui 2004; Iyengar 2005), i.e., 𝒫 = ⊗_{s,a} 𝒫^a_s, where 𝒫^a_s ⊆ ∆(S). We note that there are also studies on relaxing the (s, a)-rectangular uncertainty set to the s-rectangular uncertainty set, which is not the focus of this paper.
Under the robust setting, we consider the worst-case performance over the uncertainty set of MDPs. More specifically, the robust discounted value function of a policy π for a discounted MDP is defined as
V^π_{𝒫,γ}(s) ≜ min_{κ ∈ ⊗_{t≥0} 𝒫} E_{π,κ}[ Σ_{t=0}^{∞} γ^t r_t | S_0 = s ],    (4)
where κ = (P_0, P_1, ...) ∈ ⊗_{t≥0} 𝒫.
In this paper, we focus on the following worst-case average reward of a policy π:
g^π_𝒫(s) ≜ min_{κ ∈ ⊗_{t≥0} 𝒫} lim_{n→∞} E_{π,κ}[ (1/n) Σ_{t=0}^{n−1} r_t | S_0 = s ],    (5)
to which, for convenience, we refer as the robust average-reward value function.
For robust discounted MDPs, it has been shown that the robust discounted value function is the unique fixed point of the robust discounted Bellman operator (Nilim and El Ghaoui 2004; Iyengar 2005; Puterman 1994):
T_π V(s) ≜ Σ_{a∈A} π(a|s) [ r(s, a) + γ σ_{𝒫^a_s}(V) ],    (6)
where σ_{𝒫^a_s}(V) ≜ min_{p ∈ 𝒫^a_s} p^⊤ V is the support function of V on 𝒫^a_s. Based on the contraction of T_π, robust dynamic programming approaches, e.g., robust value iteration, can be designed (Nilim and El Ghaoui 2004; Iyengar 2005) (see the Appendix for a review of these methods). However, there is no such contraction result for robust average-reward MDPs.
In this paper, our goal is to find a policy that optimizes the robust average-reward value function:
max_{π∈Π} g^π_𝒫(s), for any s ∈ S,    (7)
where Π is the set of all stationary policies, and we denote by g*_𝒫(s) ≜ max_π g^π_𝒫(s) the optimal robust average-reward.
Limit Approach for Robust Average-Reward MDPs
We first take a limit approach to solve the problem of robust average-reward MDPs in eq. (7). It is known that, under the non-robust setting, for any fixed π and P, the discounted value function converges to the average-reward value function as the discount factor γ approaches 1 (Puterman 1994), i.e.,
lim_{γ→1} (1 − γ) V^π_{P,γ} = g^π_P.    (8)
We take a similar idea, and show that the same result holds in the robust case: lim_{γ→1} (1 − γ) V^π_{𝒫,γ} = g^π_𝒫.
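To illustrate the support function σ_{𝒫^a_s}(V) and the robust Bellman operator T_π of eq. (6), here is a small Python sketch of our own (not the authors' code). It takes each 𝒫^a_s to be a total-variation ball of radius ρ around a nominal distribution, which is just one concrete compact choice of uncertainty set; for this set the inner minimization has a simple greedy solution that moves up to ρ/2 probability mass from the highest-value next states onto the lowest-value one.

```python
import numpy as np

def support_tv(p, V, rho):
    """sigma(V) = min q^T V over q in the simplex with ||q - p||_1 <= rho.
    Greedy solution: shift up to rho/2 mass from the largest-V states
    onto the single smallest-V state."""
    q = p.astype(float).copy()
    i_min = int(np.argmin(V))
    budget = min(rho / 2.0, 1.0 - q[i_min])
    for i in np.argsort(V)[::-1]:          # take mass from largest V first
        if budget <= 0 or i == i_min:
            continue
        move = min(q[i], budget)
        q[i] -= move
        q[i_min] += move
        budget -= move
    return float(q @ V)

def robust_bellman_pi(V, P, r, pi, gamma, rho):
    """One application of the robust discounted Bellman operator T_pi in eq. (6),
    with (s,a)-rectangular TV balls of radius rho around the nominal kernel P."""
    n_states, n_actions = r.shape
    TV = np.zeros(n_states)
    for s in range(n_states):
        for a in range(n_actions):
            TV[s] += pi[s, a] * (r[s, a] + gamma * support_tv(P[s, a], V, rho))
    return TV

# Example: one operator application on a random 3-state, 2-action MDP.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(3), size=(3, 2))   # nominal kernel P[s, a, :]
r = rng.uniform(size=(3, 2))
pi = np.full((3, 2), 0.5)
print(robust_bellman_pi(np.zeros(3), P, r, pi, gamma=0.9, rho=0.2))
```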
Based on this result, we further design algorithms (Algorithms 1 and 2) that apply a sequence of robust discounted Bellman operators while increasing the discount factor at a certain rate. We then theoretically prove that our algorithms converge to the optimal solutions.
In the following, we first show that the convergence lim_{γ→1} (1 − γ) V^π_{P,γ} = g^π_P is uniform on the set Π × 𝒫. We make a mild assumption as follows.
Assumption 1. For any s ∈ S, a ∈ A, the uncertainty set 𝒫^a_s is a compact subset of ∆(S).
The set 𝒫^a_s is compact if and only if it is bounded and closed. Since 𝒫^a_s ⊆ ∆(S), it is clearly bounded. Hence, Assumption 1 amounts to assuming that the uncertainty set is closed. We remark that many standard uncertainty sets satisfy this assumption, e.g., those defined by ϵ-contamination (Huber 1965), finite intervals (Tewari and Bartlett 2007), total variation (Rahimian, Bayraksan, and De-Mello 2022), and KL-divergence (Hu and Hong 2013).
In (Puterman 1994), the convergence lim_{γ→1} (1 − γ) V^π_{P,γ} = g^π_P for a fixed policy π and a fixed transition kernel P (non-robust setting) is point-wise. However, such point-wise convergence does not provide any convergence guarantee on the robust discounted value function, as the robust value function measures the worst-case performance over the uncertainty set, and the order of lim and min may not be exchangeable in general. In the following theorem, we prove the uniform convergence of the discounted value function under the foregoing assumption.
Theorem 1 (Uniform convergence). Under Assumption 1, the discounted value function converges uniformly to the average-reward value function on Π × 𝒫 as γ → 1, i.e.,
lim_{γ→1} (1 − γ) V^π_{P,γ} = g^π_P, uniformly.    (9)
With the uniform convergence in Theorem 1, the order of the limit γ → 1 and the min over 𝒫 can be interchanged, and the following convergence of the robust discounted value function can then be established.
Theorem 2. The robust discounted value function in eq. (4) converges to the robust average-reward uniformly on Π:
lim_{γ→1} (1 − γ) V^π_{𝒫,γ} = g^π_𝒫, uniformly.    (10)
We note that a similar convergence result is shown in (Tewari and Bartlett 2007), but only for the special uncertainty set of finite intervals. Our Theorem 2 holds for general compact uncertainty sets. Moreover, it is worth highlighting that our proof technique is fundamentally different from the one in (Tewari and Bartlett 2007). Specifically, under the finite-interval uncertainty set, the worst-case transition kernels are from a finite set, i.e., V^π_{𝒫,γ} = min_{P∈M} V^π_{P,γ} for a finite set M ⊆ 𝒫. This hence implies the interchangeability of lim and min. However, for general uncertainty sets, the number of worst-case transition kernels may not be finite. We demonstrate the interchangeability via our uniform convergence result in Theorem 1.
The convergence result in Theorem 2 is of key importance to motivate the design of the following two algorithms, the basic idea of which is to apply a sequence of robust discounted Bellman operators on an arbitrary initialization while increasing the discount factor at a certain rate.
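Before turning to the algorithms, the non-robust limit in eq. (8), which Theorems 1 and 2 extend uniformly to the robust case, is easy to check numerically. The following Python sketch (our own toy example, not from the paper) solves the discounted value function exactly from V = (I − γ P_π)^{-1} r_π and compares (1 − γ)V with a truncated average reward as γ approaches 1.

```python
import numpy as np

P_pi = np.array([[0.9, 0.1],
                 [0.3, 0.7]])        # policy-induced transition matrix (made up)
r_pi = np.array([1.0, 0.2])          # policy-induced reward vector (made up)
I = np.eye(2)

# Average reward via a long truncation of the limit in eq. (1).
n, g, v = 10_000, np.zeros(2), r_pi.copy()
for _ in range(n):
    g += v / n
    v = P_pi @ v

# (1 - gamma) * V^pi_{P,gamma} should approach g as gamma -> 1 (eq. (8)).
for gamma in [0.9, 0.99, 0.999, 0.9999]:
    V = np.linalg.solve(I - gamma * P_pi, r_pi)
    print(gamma, (1 - gamma) * V, "vs g =", g)
```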
This problem for robust discounted MDPs +is well studied in the literature, however, results for robust +average-reward MDPs are quite limited except for the one +in (Tewari and Bartlett 2007) for a specific finite interval +uncertainty set. We present the a robust value iteration (robust +VI) algorithm for evaluating the robust average-reward with +general compact uncertainty sets in Algorithm 1. +At each time step t, the discount factor γt is set to t+1 +t+2, +which converges to 1 as t → ∞. Subsequently, a robust +Bellman operator w.r.t discount factor γt is applied on the +current estimate Vt of the robust discounted value function +(1 − γt)V π +P,γt. As the discount factor approaches 1, the es- +timated robust discounted value function converges to the +robust average-reward gπ +P by Theorem 2. + +Algorithm 1: Robust VI: Policy Evaluation +Input: π, V0(s) = 0, ∀s, T +1: for t = 0, 1, ..., T − 1 do +2: +γt ← t+1 +t+2 +3: +for all s ∈ S do +4: +Vt+1(s) ← Eπ[(1 − γt)r(s, A) + γtσPA +s (Vt)] +5: +end for +6: end for +7: return VT +Theorem 3. Algorithm 1 converges to robust average reward, +i.e., limT →∞ VT → gπ +P. +Theorem 3 shows that the output of Algorithm 1 converges +to the robust average-reward. +Besides the robust policy evaluation problem, it is also of +great practical importance to find an optimal policy that max- +imizes the worst-case average-reward, i.e., to solve eq. (7). +Based on a similar idea as the one of Algorithm 1, we ex- +tend our limit approach to solve the robust optimal control +problem in Algorithm 2. +Algorithm 2: Robust VI: Optimal Control +Input: V0(s) = 0, ∀s, T +1: for t = 0, 1, ..., T − 1 do +2: +γt ← t+1 +t+2 +3: +for all s ∈ S do +4: +Vt+1(s) ← max +a∈A +� +(1 − γt)r(s, a) + γtσPas (Vt) +� +5: +end for +6: end for +7: for s ∈ S do +8: +πT (s) ← arg maxa∈A +� +(1 − γt)r(s, a) + γtσPas (VT ) +� +9: end for +10: return VT , πT +Similar to Algorithm 1, at each time step, the discount fac- +tor γt is set to be closer to 1, and a one-step robust discounted +Bellman operator (for optimal control) w.r.t. γt is applied to +the current estimate Vt. The following theorem establishes +that VT in Algorithm 2 converges to the optimal robust value +function, hence can find the optimal robust policy. +Theorem 4. The output VT in Algorithm 2 converges to the +optimal robust average-reward g∗ +P: VT → g∗ +P as T → ∞. +As discussed in (Blackwell 1962; Hordijk and Yushkevich +2002), the average-reward criterion is insensitive and under +selective since it is only interested in the performance un- +der the steady-state distribution. For example, two policies +providing rewards: 100 + 0 + 0 + · · · and 0 + 0 + 0 + · · · +are equally good/bad. Towards this issue, for the non-robust +setting, a more sensitive term of optimality was introduced +by Blackwell (Blackwell 1962). More specifically, a policy +is said to be Blackwell optimal if it optimizes the discounted +value function for all discount factor γ ∈ (δ, 1) for some +δ ∈ (0, 1). Together with eq. (8), the optimal policy obtained +by taking γ → 1 is optimal not only for the average-reward +criterion, but also for the discounted criterion with large γ. +Intuitively, it is optimal under the average-reward setting, and +is sensitive to early rewards. +Following a similar idea, we justify that the obtained policy +from Algorithm 2 is not only optimal in the robust average- +reward setting, but also sensitive to early rewards. +Denote by Π∗ the set of all the optimal policies for robust +average-reward, i.e. 
Π∗ = {π : gπ +P = g∗ +P} . +Theorem 5 (Blackwell optimality). There exists 0 < δ < +1, such that for any γ > δ, the optimal robust policy for +robust discounted value function V ∗ +P,γ belongs to Π∗, i.e., +for any δ < γ < 1, ∃π∗ ∈ Π∗, s.t. V ∗ +P,γ = V π∗ +P,γ. Moreover, +when arg maxπ∈ΠD gπ +P is a singleton, there exists a unique +Blackwell optimal policy. +This result implies that using the limit method in this sec- +tion to find the optimal robust policy for average-reward +MDPs has an additional advantage that the policy it finds not +only optimizes the average reward in steady state, but also is +sensitive to early rewards. +It is worth highlighting the distinction of our results from +the technique used in the proof of Blackwell optimality +(Blackwell 1962). In the non-robust setting, the existence +of a stationary Blackwell optimal policy is proved via contra- +diction, where a difference function of two policies π and ν: +fπ,ν(γ) ≜ V π +P,γ − V µ +P,γ is used in the proof. It was shown by +contradiction that f has infinitely many zeros, which however +contradicts with the fact that f is a rational function of γ with +a finite number of zeros. A similar technique was also used in +(Tewari and Bartlett 2007) for the finite interval uncertainty +set. Specifically, in (Tewari and Bartlett 2007), it was shown +that the worst-case transition kernels for any π, γ are from a +finite set M, hence fπ,ν(γ) ≜ minP∈M V π +P,γ −minP∈M V µ +P,γ +can also be shown to be a rational function with a finite num- +ber of zeroes. For a general uncertainty set P, the difference +function fπ,ν(γ), however, may not be rational. This makes +the method in (Blackwell 1962; Tewari and Bartlett 2007) +inapplicable to our problem. +Direct Approach for Robust Average-Reward +MDPs +The limit approach in Section is based on the uniform conver- +gence of the discounted value function, and uses discounted +MDPs to approximate average-reward MDPs. In this section, +we develop a direct approach to solving the robust average- +reward MDPs that does not adopt discounted MDPs as inter- +mediate steps. +For average-reward MDPs, the relative value iteration +(RVI) approach (Puterman 1994) is commonly used since +it is numerically stable and has convergence guarantee. In +the following, we generalize the RVI algorithm to the robust +setting, and design the robust RVI algorithm in Algorithm 3. +We first generalize the relative value function in eq. (2) to +the robust relative value function. The robust relative value +function measures the difference between the worst-case +cumulative reward and the worst-case average-reward for a +policy π. + +Definition 1. The robust relative value function is defined as +V π +P (s) ≜ +min +κ∈� +t≥0 P Eκ,π +� ∞ +� +t=0 +(rt − gπ +P)|S0 = s +� +, +(11) +where gπ +P is the worst-case average-reward defined in eq. (5). +The following theorem presents a robust Bellman equation +for robust average-reward MDPs. +Theorem 6. For any s and π, (V π +P , gπ +P) is a solution to the +following robust Bellman equation: +V (s) + g = +� +a +π(a|s) +� +r(s, a) + σPas (V ) +� +. +(12) +It can be seen that the robust Bellman equation for average- +reward MDPs has a similar structure to the one for discounted +MDPs in eq. (6) except for a discount factor. This actually +reveals a fundamental difference between the robust Bellman +operator of the discounted MDPs and the average-reward +ones. 
For a discounted MDP, its robust Bellman operator is +a contraction with constant γ (Nilim and El Ghaoui 2004; +Iyengar 2005), and hence the fixed point is unique. Based on +this, the robust value function can be found by recursively ap- +plying the robust Bellman operator (see Appendix ). In sharp +contrast, in the average-reward setting, the robust Bellman +is not necessarily a contraction, and the fixed point may not +be unique. Therefore, repeatedly applying the robust Bell- +man operator in the average-reward setting may not even +converge, which underscores that the two problem settings +are fundamentally different. +Using the robust Bellman equation in Theorem 6, we de- +rive the following equivalent optimality condition for robust +average-reward MDPs. +Theorem 7. For any (g, V ) that is a solution to +max +a +� +r(s, a) − g + σPas (V ) − V (s) +� += 0, ∀s, +(13) +g = g∗ +P. If we further set +π∗(s) = arg max +a +� +r(s, a) + σPas (V ) +� +(14) +for any s ∈ S, then π∗ is an optimal robust policy. +Theorem 7 suggests that as long as we find a solution +(g, V ) to eq. (13), which though may not be unique, then +g is the optimal robust average-reward g∗ +P, and the greedy +policy π∗ is the optimal policy to our robust average-reward +MDP problem in eq. (7). Based on Theorem 7, our problem +in eq. (7) can be equivalently solved by finding a solution +to eq. (13). We note that eq. (12) holds for any π and if we +let the π in eq. (12) be the greedy policy, then eq. (12) and +eq. (13) are equivalent. +In the following, we generalize the RVI approach to the +robust setting, and design a robust RVI algorithm in Algo- +rithm 3. We will further show that the output of this algo- +rithm converges to a solution to eq. (13), and further the +optimal policy could be obtained by eq. (14). Here 1 de- +notes the all-ones vector, and sp denotes the span semi-norm: +sp(w) = maxs w(s)−mins w(s). Different from Algorithm +2, in Algorithm 3, we do not need to apply the robust dis- +counted Bellman operator. The method directly solves the +Algorithm 3: Robust RVI +Input: V0, ϵ and arbitrary s∗ ∈ S +1: w0 ← V0 − V0(s∗)1 +2: while sp(wt − wt+1) ≥ ϵ do +3: +for all s ∈ S do +4: +Vt+1(s) ← maxa(r(s, a) + σPas (wt)) +5: +wt+1(s) ← Vt+1(s) − Vt+1(s∗) +6: +end for +7: end while +8: return wt, Vt +robust optimal control problem for average-reward robust +MDPs. +In studies of average-reward MDPs, it is usually the case +that a certain class of MDPs are considered, e.g., unichain +and communicating (Wei et al. 2020; Zhang and Ross 2021; +Chen, Jain, and Luo 2022; Wan, Naik, and Sutton 2021). In +this paper, we focus on the unichain setting to highlight the +major technical novelty to achieve robustness. +Assumption 2. For any P = {pa +s ∈ ∆(S)} ∈ P and any +a ∈ A, s, s′ ∈ S, pa +s,s′ > 0, and the induced Markov process +is a unichain. +In the following theorem, we show that our Algorithm 3 +converges to a solution of eq. (13), hence according to The- +orem 7 if we set π according to (14), then π is the optimal +robust policy. +Theorem 8. (wt, Vt) converges to a solution (w, V ) to +eq. (13) as ϵ → 0, which satisfies +w(s) + max +a {r(s∗, a) + σPa +s∗ (w)} += max +a {r(s, a) + σPas (w)}. (15) +Remark 1. In this section, we mainly present the robust RVI +algorithm for the robust optimal control problem, and its con- +vergence and optimality guarantee. A robust RVI algorithm +for robust policy evaluation can be similarly designed by +replacing the max in line 4, Algorithm 3 with an expectation +w.r.t. π. 
The convergence results in Theorem 8 can also be +similarly derived. +Assumption 2 can be also replaced using some weaker +ones, e.g., Proposition 4.3.2 of (Bertsekas 2011), or be re- +moved by designing a variant of RVI, e.g., Proposition 4.3.4 +of (Bertsekas 2011). +Examples and Numerical Results +In this section, we study several commonly used uncertainty +set models, including contamination model, Kullback-Lerbler +(KL) divergence defined model and total-variation defined +model. +As can be observed from Algorithms 1 to 3, for different +uncertainty sets, the only difference lies in how the support +function σPas (V ) is calculated. In the sequel, we discuss +how to efficiently calculate the support function for various +uncertainty sets. +We numerically compare our robust (relative) value itera- +tion methods v.s. non-robust (relative) value iteration method + +on different uncertainty sets. Our experiments are based on +the Garnet problem G(20, 40) (Archibald, McKinnon, and +Thomas 1995). More specifically, there are 20 states and +30 actions; the nominal transition kernel P = {pa +s ∈ ∆(S)} +is randomly generated according to the uniform distribu- +tion, and the reward functions r(s, a) ∼ N(0, σs,a), where +σs,a ∼ Uniform[0, 1]. In our experiments, the uncertainty +sets are designed to be centered at the nominal transition ker- +nel. We run different algorithms, i.e., (robust) value iteration +and (robust) relative value iteration, and obtain the greedy +policies at each time step. Then, we use robust average- +reward policy evaluation (Algorithm 1) to evaluate the robust +average-reward of these policies. We plot the robust average- +reward against the number of iterations. +Contamination model. For any (s, a) the uncertainty set Pa +s +is defined as Pa +s = {q : q = (1 − R)pa +s + Rp′, p′ ∈ ∆(S)}, +where pa +s is the nominal transition kernel. It can be viewed +as an adversarial model, where at each time-step, the envi- +ronment transits according to the nominal transition kernel p +with probability 1 − R, and according to an arbitrary kernel +p′ with probability R. +It can be easily shown that the value of the problem +σPas (V ) = (1−R)(pa +s)⊤V +R mins V (s). Our experimental +results under the contamination model are shown in Figure 1. +(a) Robust VI. +(b) Robust RVI. +Figure 1: Comparison on contamination model with R = 0.4. +Total variation. The total variation distance is another +commonly used distance metric to measure the dif- +ference between two distributions. Specifically, the to- +tal variation between two distributions p and q +is +defined as DT V (p, q) += +1 +2∥p − q∥1. Consider an +uncertainty +set +defined +via +total +variation: +Pa +s += +{q : DT V (q||pa +s) ≤ R}. Then, its support function can be ef- +ficiently solved as follows (Iyengar 2005): σPas (V ) = p⊤V − +R minµ≥0 {maxs(V (s) − µ(s)) − mins(V (s) − µ(s))} . +Our experimental results under the total variation model +are shown in Figure 2. +Kullback-Lerbler (KL) divergence. The Kullback–Leibler +divergence is widely used to measure the distance between +two probability distributions. The KL-divergence of two dis- +tributions p, q is defined as DKL(q||p) = � +s q(s) log q(s) +p(s). +Consider an uncertainty set defined via KL divergence: +Pa +s = {q : DKL(q||pa +s) ≤ R}. Then, its support function can +be efficiently solved using the duality result in (Hu and Hong +2013): σPas (V ) = − minα≥0 +� +Rα + α log +� +p⊤e +−V +α +�� +. 
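To make the support-function formulas above concrete, the following is a minimal sketch (Python with numpy/scipy; the small random MDP, the radius R, and every function and variable name are illustrative assumptions, not the authors' implementation) that evaluates σ_{P^a_s}(V) for the contamination and KL-divergence sets and plugs it into the increasing-discount robust value iteration of Algorithm 2.

import numpy as np
from scipy.special import logsumexp
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
nS, nA, R = 5, 3, 0.2                          # illustrative sizes and radius, not from the paper
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # nominal kernel p^a_s, shape (S, A, S)
r = rng.random((nS, nA))                       # rewards in [0, 1]

def sigma_contamination(p, V):
    # closed form for the contamination set: (1 - R) p^T V + R min_s V(s)
    return (1 - R) * (p @ V) + R * V.min()

def sigma_kl(p, V):
    # dual form of min_{D_KL(q||p) <= R} q^T V (Hu and Hong 2013)
    def dual(alpha):
        return R * alpha + alpha * logsumexp(-V / alpha, b=p)
    return -minimize_scalar(dual, bounds=(1e-6, 1e2), method='bounded').fun

def robust_vi_control(sigma, T=300):
    # Algorithm 2: robust discounted Bellman updates with gamma_t = (t+1)/(t+2) -> 1
    V = np.zeros(nS)
    for t in range(T):
        gamma = (t + 1) / (t + 2)
        V = np.array([max((1 - gamma) * r[s, a] + gamma * sigma(P[s, a], V)
                          for a in range(nA)) for s in range(nS)])
    # greedy policy w.r.t. the final iterate (gamma keeps its last value here)
    pi = np.array([int(np.argmax([(1 - gamma) * r[s, a] + gamma * sigma(P[s, a], V)
                                  for a in range(nA)])) for s in range(nS)])
    return V, pi

V_rob, pi_rob = robust_vi_control(sigma_contamination)   # or sigma_kl
print("estimated optimal robust average reward per state:", V_rob)
print("greedy robust policy:", pi_rob)

The total-variation support function quoted above can be handled in the same loop, e.g., by solving the small linear program min over q in Δ(S) with D_TV(q, p) ≤ R of q^T V directly, so the same iteration covers all three uncertainty models used in the experiments.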
+Our experimental results under the KL-divergence model +are shown in Figure 3. +(a) Robust VI. +(b) Robust RVI. +Figure 2: Comparison on total variation model with R = 0.6. +(a) Robust VI. +(b) Robust RVI. +Figure 3: Comparison on KL-divergence model with R = 0.8. +It can be seen that our robust methods can obtain policies +that achieve higher worst-case reward. Also, both our limit- +based robust value iteration and our direct method of robust +relative value iteration converge to the optimal robust policies, +which validates our theoretical results. +Conclusion +In this paper, we investigated the problem of robust MDPs +under the average-reward setting. We established uniform +convergence of the discounted value function to average- +reward, which further implies the uniform convergence of the +robust discounted value function to robust average-reward. +Based on this insight, we designed a robust dynamic pro- +gramming approach using the robust discounted MDPs as an +approximation (the limit method). We theoretically proved +their convergence and optimality and proved a robust version +of the Blackwell optimality (Blackwell 1962), i.e., any op- +timal policy of the robust discounted MDP when γ is large +enough is also an optimal policy of the robust average-reward. +We then designed a direct approach for robust average-reward +MDPs, where we derived the robust Bellman equation for +robust average-reward MDPs. We further designed a robust +RVI method, which was proven to converge to the optimal +robust solution. Technically, our proof techniques are funda- +mentally different from existing studies on average-reward +robust MDPs, e.g., those in (Blackwell 1962; Tewari and +Bartlett 2007). +Acknowledgment +This work was supported by the National Science Foundation +under Grants CCF-2106560, CCF-2007783, CCF-2106339 +and CCF-1552497. + +0.6 +0.5 +0.4 +0.3 +0.2 +0.1 +0.0 +0.1 +non-robustvalueiteration +robustvalueiteration +0.2 +0 +50 +100 +150 +200 +250 +300 +350 +400 +Number of iteration0.6 +0.5 +0.4 +0.3 +0.2 +0.1 +0.0 +non-robustrelativevalueiteration +-0.1 +robustrelativevalueiteration +0 +50 +100 +150 +200 +250 +300 +350 +400 +Number of iteration0.3 +0.2 +0.1 +0.0 +non-robust value iteration +0.1 +robustvalueiteration +0 +50 +100 +150 +200 +250 +300 +350 +400 +Number of iteration0.3 +0.2 +0.1 +0.0 +0.1 +0.2 +non-robustrelativevalueiteration +0.3 +robust relative valueiteration +0 +50 +100 +150 +200 +250 +300 +350 +400 +Number of iteration0.7 +0.6 +0.5 +non-robustvalueiteration +0.3 +robustvalueiteration +0 +50 +100 +150 +200 +250 +300 +350 +400 +Number of iteration0.70 +0.65 +0.60 +0.55 +0.50 +0.45 +0.40 +0.35 +non-robustrelativevalueiteration +0.30 +robustrelativevalueiteration +0 +50 +100 +150 +200 +250 +300 +350 +400 +Number of iterationReferences +Abdullah, M. A.; Ren, H.; Ammar, H. B.; Milenkovic, V.; +Luo, R.; Zhang, M.; and Wang, J. 2019. Wasserstein robust +reinforcement learning. arXiv preprint arXiv:1907.13196. +Archibald, T.; McKinnon, K.; and Thomas, L. 1995. On +the generation of Markov decision processes. Journal of the +Operational Research Society, 46(3): 354–361. +Atia, G. K.; Beckus, A.; Alkhouri, I.; and Velasquez, A. 2021. +Steady-State Planning in Expected Reward Multichain MDPs. +Journal of Artificial Intelligence Research, 72: 1029–1082. +Badrinath, K. P.; and Kalathil, D. 2021. Robust Reinforce- +ment Learning using Least Squares Policy Iteration with +Provable Performance Guarantees. In Proc. International +Conference on Machine Learning (ICML), 511–520. PMLR. 
+Bagnell, J. A.; Ng, A. Y.; and Schneider, J. G. 2001. Solving +uncertain Markov decision processes. +Bertsekas, D. P. 2011. Dynamic Programming and Opti- +mal Control 3rd edition, volume II. Belmont, MA: Athena +Scientific. +Blackwell, D. 1962. Discrete dynamic programming. The +Annals of Mathematical Statistics, 719–726. +Chen, L.; Jain, R.; and Luo, H. 2022. Learning Infinite- +Horizon Average-Reward Markov Decision Processes with +Constraints. arXiv preprint arXiv:2202.00150. +Derman, C. 1970. Finite state Markovian decision processes. +Academic Press, Inc. +Derman, E.; Geist, M.; and Mannor, S. 2021. Twice regu- +larized MDPs and the equivalence between robustness and +regularization. In Proc. Advances in Neural Information +Processing Systems (NeurIPS). +Eysenbach, B.; and Levine, S. 2021. Maximum entropy RL +(provably) solves some robust RL problems. arXiv preprint +arXiv:2103.06257. +Goyal, V.; and Grand-Clement, J. 2018. Robust Markov +decision process: Beyond rectangularity. +arXiv preprint +arXiv:1811.00215. +Ho, C. P.; Petrik, M.; and Wiesemann, W. 2018. Fast Bellman +updates for robust MDPs. In Proc. International Conference +on Machine Learning (ICML), 1979–1988. PMLR. +Ho, C. P.; Petrik, M.; and Wiesemann, W. 2021. Partial policy +iteration for L1-robust Markov decision processes. Journal +of Machine Learning Research, 22(275): 1–46. +Hordijk, A.; and Yushkevich, A. A. 2002. Blackwell opti- +mality. In Handbook of Markov decision processes, 231–267. +Springer. +Hou, L.; Pang, L.; Hong, X.; Lan, Y.; Ma, Z.; and Yin, D. +2020. Robust Reinforcement Learning with Wasserstein +Constraint. arXiv preprint arXiv:2006.00945. +Hu, Z.; and Hong, L. J. 2013. Kullback-Leibler divergence +constrained distributionally robust optimization. Available at +Optimization Online, 1695–1724. +Huang, S.; Papernot, N.; Goodfellow, I.; Duan, Y.; and +Abbeel, P. 2017. +Adversarial attacks on neural network +policies. In Proc. International Conference on Learning +Representations (ICLR). +Huber, P. J. 1965. A Robust Version of the Probability Ratio +Test. Ann. Math. Statist., 36: 1753–1758. +Iyengar, G. N. 2005. Robust dynamic programming. Mathe- +matics of Operations Research, 30(2): 257–280. +Kaufman, D. L.; and Schaefer, A. J. 2013. Robust modified +policy iteration. INFORMS Journal on Computing, 25(3): +396–410. +Kober, J.; Bagnell, J. A.; and Peters, J. 2013. Reinforcement +Learning in Robotics: A Survey. The International Journal +of Robotics Research, 32(11): 1238–1274. +Kos, J.; and Song, D. 2017. Delving into adversarial at- +tacks on deep policies. In Proc. International Conference on +Learning Representations (ICLR). +Lim, S. H.; and Autef, A. 2019. Kernel-based reinforcement +learning in robust Markov decision processes. In Proc. In- +ternational Conference on Machine Learning (ICML), 3973– +3981. PMLR. +Lim, S. H.; Xu, H.; and Mannor, S. 2013. Reinforcement +learning in robust Markov decision processes. In Proc. Ad- +vances in Neural Information Processing Systems (NIPS), +701–709. +Lin, Y.-C.; Hong, Z.-W.; Liao, Y.-H.; Shih, M.-L.; Liu, M.- +Y.; and Sun, M. 2017. Tactics of adversarial attack on deep +reinforcement learning agents. In Proc. International Joint +Conferences on Artificial Intelligence (IJCAI), 3756–3762. +Mandlekar, A.; Zhu, Y.; Garg, A.; Fei-Fei, L.; and Savarese, S. +2017. Adversarially robust policy learning: Active construc- +tion of physically-plausible perturbations. In 2017 IEEE/RSJ +International Conference on Intelligent Robots and Systems +(IROS), 3932–3939. IEEE. 
+Nilim, A.; and El Ghaoui, L. 2004. Robustness in Markov +decision problems with uncertain transition matrices. In Proc. +Advances in Neural Information Processing Systems (NIPS), +839–846. +Panaganti, K.; and Kalathil, D. 2021. Sample Complexity +of Robust Reinforcement Learning with a Generative Model. +arXiv preprint arXiv:2112.01506. +Pattanaik, A.; Tang, Z.; Liu, S.; Bommannan, G.; and Chowd- +hary, G. 2018. Robust Deep Reinforcement Learning with +Adversarial Attacks. In Proc. International Conference on +Autonomous Agents and MultiAgent Systems, 2040–2042. +Pinto, L.; Davidson, J.; Sukthankar, R.; and Gupta, A. 2017. +Robust adversarial reinforcement learning. In Proc. Interna- +tional Conference on Machine Learning (ICML), 2817–2826. +PMLR. +Puterman, M. L. 1994. Markov Decision Processes: Discrete +Stochastic Dynamic Programming. +Rahimian, H.; Bayraksan, G.; and De-Mello, T. H. 2022. +Effective scenarios in multistage distributionally robust op- +timization with a focus on total variation distance. SIAM +Journal on Optimization, 32(3): 1698–1727. +Rajeswaran, A.; Ghotra, S.; Ravindran, B.; and Levine, S. +2017. Epopt: Learning robust neural network policies using +model ensembles. +In Proc. International Conference on +Learning Representations (ICLR). + +Roy, A.; Xu, H.; and Pokutta, S. 2017. Reinforcement learn- +ing under model mismatch. In Proc. Advances in Neural +Information Processing Systems (NIPS), 3046–3055. +Rudin, W. 2022. Functional Analysis. McGraw-Hill Science +&Engineering &Math, 2nd edition. +Russel, R. H.; Benosman, M.; and Van Baar, J. 2020. Ro- +bust Constrained-MDPs: Soft-Constrained Robust Policy +Optimization under Model Uncertainty. +arXiv preprint +arXiv:2010.04870. +Satia, J. K.; and Lave Jr, R. E. 1973. Markovian decision +processes with uncertain transition probabilities. Operations +Research, 21(3): 728–740. +Si, N.; Zhang, F.; Zhou, Z.; and Blanchet, J. 2020. Distri- +butionally robust policy evaluation and learning in offline +contextual bandits. In Proc. International Conference on +Machine Learning (ICML), 8884–8894. PMLR. +Sigaud, O.; and Buffet, O. 2013. Markov decision processes +in artificial intelligence. John Wiley & Sons. +Sutton, R. S.; and Barto, A. G. 2018. Reinforcement Learning: +An Introduction. Cambridge, Massachusetts: The MIT Press. +Tamar, A.; Mannor, S.; and Xu, H. 2014. Scaling up robust +MDPs using function approximation. In Proc. International +Conference on Machine Learning (ICML), 181–189. PMLR. +Tessler, C.; Efroni, Y.; and Mannor, S. 2019. Action robust +reinforcement learning and applications in continuous control. +In International Conference on Machine Learning, 6215– +6224. PMLR. +Tewari, A.; and Bartlett, P. L. 2007. Bounded parameter +Markov decision processes with average reward criterion. In +International Conference on Computational Learning Theory, +263–277. Springer. +Vinitsky, E.; Du, Y.; Parvate, K.; Jang, K.; Abbeel, P.; and +Bayen, A. 2020. Robust Reinforcement Learning using Ad- +versarial Populations. arXiv preprint arXiv:2008.01825. +Wan, Y.; Naik, A.; and Sutton, R. S. 2021. Learning and +planning in average-reward markov decision processes. In In- +ternational Conference on Machine Learning, 10653–10662. +PMLR. +Wang, Y.; and Zou, S. 2021. Online Robust Reinforcement +Learning with Model Uncertainty. In Proc. Advances in +Neural Information Processing Systems (NeurIPS). +Wang, Y.; and Zou, S. 2022. Policy Gradient Method For +Robust Reinforcement Learning. In Proc. 
International Con- +ference on Machine Learning (ICML), volume 162, 23484– +23526. PMLR. +Wei, C.-Y.; Jahromi, M. J.; Luo, H.; Sharma, H.; and Jain, R. +2020. Model-free reinforcement learning in infinite-horizon +average-reward markov decision processes. In International +conference on machine learning, 10170–10180. PMLR. +Wiesemann, W.; Kuhn, D.; and Rustem, B. 2013. Robust +Markov decision processes. Mathematics of Operations Re- +search, 38(1): 153–183. +Xu, H.; and Mannor, S. 2010. +Distributionally Robust +Markov Decision Processes. In Proc. Advances in Neural +Information Processing Systems (NIPS), 2505–2513. +Yang, W.; Zhang, L.; and Zhang, Z. 2021. Towards The- +oretical Understandings of Robust Markov Decision Pro- +cesses: Sample Complexity and Asymptotics. arXiv preprint +arXiv:2105.03863. +Yu, P.; and Xu, H. 2015. Distributionally robust counterpart in +Markov decision processes. IEEE Transactions on Automatic +Control, 61(9): 2538–2543. +Zhang, Y.; and Ross, K. W. 2021. On-policy deep reinforce- +ment learning for the average-reward criterion. In Proc. Inter- +national Conference on Machine Learning (ICML), 12535– +12545. PMLR. +Zhou, Z.; Bai, Q.; Zhou, Z.; Qiu, L.; Blanchet, J.; and Glynn, +P. 2021. Finite-Sample Regret Bound for Distributionally +Robust Offline Tabular Reinforcement Learning. In Proc. In- +ternational Conference on Artifical Intelligence and Statistics +(AISTATS), 3331–3339. PMLR. + +Review of Robust Discounted MDPs +In this section, we provide a brief review on the existing methods and results for robust discounted MDPs. +Robust Policy Evaluation +We first consider the robust policy evaluation problem, where we aim to estimate the robust value function V π +P,γ for any policy +π. It has been shown that the robust Bellman operator Tπ is a γ-contraction, and the robust value function V π +P,γ is its unique +fixed-point. Hence by recursively applying the robust Bellman operator, we can find the robust discounted value function (Nilim +and El Ghaoui 2004; Iyengar 2005). +Algorithm 4: Policy evaluation for robust discounted MDPs +Input: π, V0, T +1: for t = 0, 1, ..., T − 1 do +2: +for all s ∈ S do +3: +Vt+1(s) ← Eπ[r(s, A) + γσPA +s (Vt)] +4: +end for +5: end for +6: return VT +Robust Optimal Control +Another important problem in robust MDP is to find the optimal policy which maximizes the robust discounted value function: +π∗ = arg max +π +V π +P,γ. +(16) +A robust value iteration approach is developed in (Nilim and El Ghaoui 2004; Iyengar 2005) as follows. +Algorithm 5: Optimal Control for robust discounted MDPs +Input: V0, T +1: for t = 0, 1, ..., T − 1 do +2: +for all s ∈ S do +3: +Vt+1(s) ← maxa +� +r(s, a) + γσPas (Vt) +� +4: +end for +5: end for +6: π∗(s) ← arg maxa +� +r(s, a) + γσPas (VT ) +� +, ∀s +7: return π∗ +Equivalence between Time-Varying and Stationary Models +We first provide an equivalence result between time-varying and stationary transition kernel models under stationary policies, +which is an analog result to the one for robust discounted MDPs (Iyengar 2005; Nilim and El Ghaoui 2004). This result will be +used in our following proofs. +Recall the definitions of robust discounted value function and worst-case average reward in eqs. (4) and (5), the worst-case +is taken w.r.t. κ = (P0, P1...) ∈ � +t≥0 P, therefore, the transition kernel at each time step could be different. This model is +referred to as time-varying transition kernel model (as in (Iyengar 2005; Nilim and El Ghaoui 2004)). 
Another commonly used +setting is that the transition kernels at different time step are the same, which is referred to as the stationary model (Iyengar +2005; Nilim and El Ghaoui 2004). In this paper, we use the following notations to distinguish the two models. By EP[·], we +denote the expectation when the transition kernels at all time steps are the same, P, i.e., the stationary model. We also denote by +gπ +P(s) ≜ limn→∞ EP,π +� +1 +n +�n−1 +t=0 rt +��S0 = s +� +and V π +P.γ(s) ≜ EP,π +��∞ +t=0 γtrt +��S0 = s +� +being the expected average-reward and +expected discounted value function under the stationary model P. By Eκ[·], we denote the expectation when the transition kernel +at time t is Pt, i.e., the time-varying model. +For the discounted setting, it has been shown in (Nilim and El Ghaoui 2004) that for a stationary policy π, any γ ∈ [0, 1), and +any s ∈ S, +V π +P,γ(s) = +min +κ∈� +t≥0 P Eπ,κ +� ∞ +� +t=0 +γtrt|S0 = s +� + += min +P∈P Eπ,P +� ∞ +� +t=0 +γtrt|S0 = s +� +. +(17) +In the following theorem, we prove an analog of eq. (17) for robust-average reward MDPs that if we consider stationary policies, +then the robust average-reward problem with the time-varying model can be equivalently solved by a stationary model. +Specifically, we define the worst-case average reward for the stationary transition kernel model as follows: +min +P∈P lim +n→∞ Eπ,P +� +1 +n +n−1 +� +t=0 +rt +��S0 = s +� +. +(18) +Recall the worst-case average reward for the time-varying model in eq. (5). We will show that for any stationary policy, eq. (5) +can be equivalently solved by solving eq. (18). +Theorem 9. Consider an arbitrary stationary policy π. Then, the worst-case average-reward under the time-varying model is +the same as the one under the stationary model: +gπ +P(s) ≜ +min +κ∈� +t≥0 P lim +n→∞ Eκ,π +� +1 +n +n−1 +� +t=0 +rt|S0 = s +� += min +P∈P lim +n→∞ EP,π +� +1 +n +n−1 +� +t=0 +rt +��S0 = s +� +. +(19) +Similar result also holds for the robust relative value function: +V π +P (s) ≜ +min +κ∈� +t≥0 P Eκ,π +� ∞ +� +t=0 +(rt − gπ +P)|S0 = s +� += min +P∈P EP,π +� ∞ +� +t=0 +(rt − gπ +P)|S0 = s +� +. +(20) +Proof. From the robust Bellman equation in Theorem 6 2, we have that +V π +P (s) + gπ +P = +� +a +π(a|s) +� +r(s, a) + σPas (V π +P ) +� +. +(21) +Denote by arg minp∈Pas (p)⊤V π +P ≜ pa +s +3, and denote by Pπ ≜ {pa +s : s ∈ S, a ∈ A}. It then follows that +V π +P (s) = +� +a +π(a|s) +� +r(s, a) − gπ +P + σPas (V π +P ) +� += +� +a +π(a|s)(r(s, a) − gπ +P) + +� +a +π(a|s)EPπ[V π +P (S1)|S0 = s, A0 = a] += +� +a +π(a|s)(r(s, a) − gπ +P) + EPπ,π[V π +P (S1)|S0 = s] += +� +a +π(a|s)(r(s, a) − gπ +P) + EPπ,π +� � +a +π(a|S1)(r(S1, a) − gπ +P)|S0 = s +� ++ EPπ,π +� � +a +π(a|S1)σPa +S1 (V π +P )|S0 = s +� += +� +a +π(a|s)(r(s, a) − gπ +P) + EPπ,π [r1 − gπ +P|S0 = s] + EPπ,π +� +σPA1 +S1 (V π +P )|S0 = s +� += +� +a +π(a|s)(r(s, a) − gπ +P) + EPπ,π +� +r1 − gπ +P +��S0 = s +� ++ EPπ,π +� +(pA1 +S1 )⊤V π +P |S0 = s +� += EPπ,π +� +r0 − gπ +P + r1 − gπ +P|S0 = s +� ++ EPπ,π[V π +P (S2)|S0 = s] +...... +2The proof of Theorem 6 is independent of theorem 9 and does not relay on the results to be showed here. +3We pick one arbitrarily, if there are multiple minimizers. + += EPπ,π +� ∞ +� +t=0 +(rt − gπ +P)|s +� +. +(22) +By the definition, the following always hold: +min +κ∈� +t≥0 P Eκ,π +� ∞ +� +t=0 +(rt − gπ +P)|S0 = s +� +≤ min +P∈P EP,π +� ∞ +� +t=0 +(rt − gπ +P)|S0 = s +� +. +(23) +This hence implies that a stationary transition kernel sequence κ = (Pπ, Pπ, ...) 
is one of the worst-case transition kernels for +V π +P . Therefore, eq. (20) can be proved. +Consider the transition kernel Pπ. We denote its non-robust average-reward and the non-robust relative value function by gπ +Pπ +and V π +Pπ. By the non-robust Bellman equation (Sutton and Barto 2018), we have that +V π +Pπ(s) = +� +a +π(a|s)(r(s, a) − gπ +Pπ) + EPπ,π[V π +Pπ(S1)|s]. +(24) +On the other hand, the robust Bellman equation shows that +V π +P (s) = V π +Pπ(s) = +� +a +π(a|s)(r(s, a) − gπ +P) + EPπ,π[V π +Pπ(S1)|s]. +(25) +These two equations hence implies that gπ +P = gπ +Pπ, and hence the stationary kernel (Pπ, Pπ, ...) is also a worst-case kernel of +robust average-reward in the time-varying setting. This proves eq. (19). +Proof of Theorem 1 +In the proof, unless otherwise specified, we denote by ∥v∥ the l∞ norm of a vector v, and for a matrix A, we denote by ∥A∥ its +matrix norm induced by l∞ norm, i.e., ∥A∥ = supx∈Rd ∥Ax∥∞ +∥x∥∞ . +Lemma 1. [Theorem 8.2.3 in (Puterman 1994)] For any P, γ, π, +V π +P,γ = +1 +1 − γ gπ +P + hπ +P + f π +P (γ), +(26) +where hπ +P = Hπ +Prπ, and f π +P (γ) = 1 +γ +�∞ +n=1(−1)n � +1−γ +γ +�n +(Hπ +P)n+1rπ. +Following Proposition 8.4.6 in (Puterman 1994), we can show the following lemma. +Lemma 2. Hπ +P is continuous on Π × P. If Π and P are compact, ∥Hπ +P∥ is uniformly bounded on Π × P, i.e., there exists a +constant h, such that ∥Hπ +P∥ ≤ h for any π, P. +For simplicity, denote by +Sπ +∞(P, γ) ≜ 1 +γ +∞ +� +n=1 +(−1)n +�1 − γ +γ +�n +(Hπ +P)n+1rπ, +Sπ +N(P, γ) ≜ 1 +γ +N +� +n=1 +(−1)n +�1 − γ +γ +�n +(Hπ +P)n+1rπ. +(27) +Clearly Sπ +∞(P, γ) = f π +P (γ) and limN→∞ Sπ +N(P, γ) = Sπ +∞(P, γ) for any specific π, P. +Lemma 3. There exists δ ∈ (0, 1), such that +lim +N→∞ Sπ +N(P, γ) = Sπ +∞(P, γ) +(28) +uniformly on Π × P × [δ, 1]. +Proof. Note that ∥Hπ +P∥ ≤ h, hence there exists δ, s.t. +1 − δ +δ +h ≤ k < 1 +(29) +for some constant k. Then for any γ ≥ δ, +1 − γ +γ +h ≤ 1 − δ +δ +h ≤ k. +(30) + +Moreover, note that +���� +1 +γ (−1)n +�1 − γ +γ +�n +(Hπ +P)n+1r +���� ≤ 1 +γ +�1 − γ +γ +�n +hn+1 ≤ hkn +δ +≜ Mn, +(31) +which is because ∥A + B∥ ≤ ∥A∥ + ∥B∥ for induced l∞ norm, ∥Ax∥ ≤ ∥A∥∥x∥ and ∥rπ∥∞ ≤ 1. +Note that +∞ +� +n=1 +Mn = h +δ +k +1 − k , +(32) +hence by Weierstrass M-test (Rudin 2022), Sπ +N(P, γ) uniformly converges to Sπ +∞(P, γ) on Π × P × [δ, 1]. +Lemma 4. There exists a uniform constant L, such that +∥Sπ +N(P, γ1) − Sπ +N(P, γ2)∥ ≤ L|γ1 − γ2|, +(33) +for any N, π, P, γ1, γ2 ∈ [δ, 1]. +Proof. We first show that γSπ +N(P, γ) = �N +n=1(−1)n � +1−γ +γ +�n +(Hπ +P)n+1rπ ≜ T π +N(P, γ) is uniformly Lipschitz w.r.t. the l∞ +norm, i.e., +∥T π +N(P, γ1) − T π +N(P, γ2)∥ ≤ l|γ1 − γ2|, +(34) +for any N, π, P, γ1, γ2 ∈ [δ, 1] and some constant l. +Clearly, it can be shown by verifying ∇T π +N(P, γ) is uniformly bounded for any π, N, P or γ. +First, it can be shown that +∇T π +N(P, γ) = +N +� +n=1 +(−1)nn +�1 − γ +γ +�n−1 −1 +γ2 (Hπ +P)n+1rπ, +(35) +and moreover +∥∇T π +N(P, γ)∥ ≤ +N +� +n=1 +n +�1 − γ +γ +�n−1 1 +γ2 hn+1 ≜ lN(γ). +(36) +Note that +h1 − γ +γ +lN(γ) = +N +� +n=1 +n +�1 − γ +γ +�n 1 +γ2 hn+2, +(37) +then, we can show that +� +1 − h1 − γ +γ +� +lN(γ) += +N +� +n=1 +n +�1 − γ +γ +�n−1 1 +γ2 hn+1 − +N +� +n=1 +n +�1 − γ +γ +�n 1 +γ2 hn+2 += 1 +γ2 h2 − N +�1 − γ +γ +�N 1 +γ2 hN+2 + +N +� +n=2 +�1 − γ +γ +�n−1 1 +γ2 hn+1 +≤ 1 +γ2 h2 + h2 +γ2 +1 − γ +γ +h +1 +1 − 1−γ +γ h += h2 +γ2 + h2 +γ2 +1 − γ +γ +h +1 +1 − 1−γ +γ h. 
+(38) +Hence, we have that +∥∇T π +N(P, γ)∥ ≤ lN(γ) ≤ +1 +1 − h 1−γ +γ +� +h2 +γ2 + h2 +γ2 +1 − γ +γ +h +1 +1 − 1−γ +γ h +� + +≤ +1 +1 − k +�h2 +δ2 + h2 +δ2 +k +1 − k +� +, +(39) +which implies a uniform bound on ∥∇T π +N(P, γ)∥. +Now, we have that +|Sπ +N(P, γ1) − Sπ +N(P, γ2)| +≤ |γ2 − γ1| +γ1γ2 +∥T π +N(P, γ1)∥ + ∥T π +N(P, γ1) − T π +N(P, γ2)∥ +γ2 +. +(40) +To show ∥T π +N(P, γ)∥ is uniformly bounded, we have that +∥T π +N(P, γ)∥ ≤ +N +� +n=1 +���� +�1 − γ +γ +�n +(Hπ +P)n+1r +���� +≤ +N +� +n=1 +�1 − γ +γ +�n +hn+1 +≤ +N +� +n=1 +knh +≤ h +k +1 − k . +(41) +Then, it follows that +∥Sπ +N(P, γ1) − Sπ +N(P, γ2)∥ += +���� +γ2 − γ1 +γ1γ2 +T π +N(P, γ1) + T π +N(P, γ1) − T π +N(P, γ2) +γ2 +���� +≤ +� 1 +δ2 h +k +1 − k + 1 +δ +1 +1 − k +�h2 +δ2 + h2 +δ2 +k +1 − k +�� +|γ1 − γ2| +≜ L|γ1 − γ2|, +(42) +where L = +� +1 +δ2 h +k +1−k + 1 +δ +1 +1−k +� +h2 +δ2 + h2 +δ2 +k +1−k +�� +is a universal constant that does not depend on N, P, π or γ. +Lemma 5. Sπ +∞(P, γ) uniformly converges as γ → 1 on Π × P. Also, Sπ +∞(P, γ) is L-Lipschitz for any γ > δ: for any π, P and +any γ1, γ2 ∈ (δ, 1]. +∥Sπ +∞(P, γ1) − Sπ +∞(P, γ2)∥ ≤ L|γ1 − γ2|. +(43) +Proof. From Lemma 3, for any ϵ, there exists Nϵ, such that for any n ≥ Nϵ, π, P, γ > δ, +∥Sπ +∞(P, γ) − Sπ +n(P, γ)∥ < ϵ. +(44) +Thus for any γ1, γ2 ∈ (δ, 1], +∥Sπ +∞(P, γ1) − Sπ +∞(P, γ2)∥ +≤ ∥Sπ +∞(P, γ1) − Sπ +n(P, γ1)∥ + ∥Sπ +n(P, γ1) − Sπ +n(P, γ2)∥ + ∥Sπ +n(P, γ2) − Sπ +∞(P, γ2)∥ +≤ 2ϵ + ∥Sπ +n(P, γ1) − Sπ +n(P, γ2)∥ +≤ 2ϵ + L|γ1 − γ2|, +(45) +where the last step is from Lemma 4. +Thus, for any ϵ, there exists ω = max {δ, 1 − ϵ}, such that for any γ1, γ2 > ω, +∥Sπ +∞(P, γ1) − Sπ +∞(P, γ2)∥ < (2 + L)ϵ, +(46) +and hence by Cauchy’s criterion we conclude that Sπ +∞(P, γ) converges uniformly on Π × P. +On the other hand, since eq. (45) holds for any ϵ, it implies that +∥Sπ +∞(P, γ1) − Sπ +∞(P, γ2)∥ ≤ L|γ1 − γ2|, +(47) +which completes the proof. + +We now prove Theorem 1. For any P, π, we have that +V π +P,γ = +1 +1 − γ gπ +P + hπ +P + f π +P (γ). +(48) +It then follows that +(1 − γ)V π +P,γ = gπ +P + (1 − γ)hπ +P + (1 − γ)f π +P (γ). +(49) +Clearly (1 − γ)hπ +P → 0 uniformly on Π × P because ∥hπ +P∥ = ∥Hπ +Prπ∥ ≤ h is uniformly bounded. Then, +∥(1 − γ1)f π +P (γ1) − (1 − γ2)f π +P (γ2)∥ +≤ ∥(1 − γ1)f π +P (γ1) − (1 − γ1)f π +P (γ2)∥ + ∥(1 − γ1)f π +P (γ2) − (1 − γ2)f π +P (γ2)∥ +≤ (1 − γ1)L|γ1 − γ2| + ∥f π +P (γ2)∥|γ1 − γ2|. +(50) +For any π, P, γ > δ, +∥f π +P (γ)∥ = +���� +1 +γ +∞ +� +n=1 +(−1)n +�1 − γ +γ +�n +(Hπ +P)n+1rπ +���� +≤ +���� +1 +γ +∞ +� +n=1 +�1 − γ +γ +�n +hn+1 +���� +≤ h +δ +1 − γ +γ +h +1 +1 − 1−γ +γ h +≤ h +δ +k +1 − k +≜ cf. +(51) +Hence, (1 − γ)f π +P (γ) → 0 uniformly on Π × P due to the fact that ∥f π +P (γ)∥ is uniformly bounded for any π, γ > δ, P. +Then we have that limγ→1(1 − γ)V π +P,γ = gπ +P uniformly on P × Π. This completes the proof of Theorem 1. +Proof of Theorem 2 +We first show a lemma which allows us to interchange the order of lim and max. +Lemma 6. If a function f(x, y) converges uniformly to F(x) on X as y → y0, then +max +x +lim +y→y0 f(x, y) = lim +y→y0 max +x +f(x, y). +(52) +Proof. For each f(x, y), denote by arg maxx f(x, y) = xy, and hence f(xy, y) ≥ f(x, y) for any x, y. Also denote by +arg maxx F(x) = x′. Now because f(x, y) uniformly converges to F(x), then for any ϵ, there exists δ′, such that ∀|y −y0| < δ′, +|f(x, y) − F(x)| ≤ ϵ +(53) +for any x. Now consider |f(xy, y) − F(x′)| for |y − y0| < δ′. 
If f(xy, y) − F(x′) > 0, then +|f(xy, y) − F(x′)| = f(xy, y) − F(x′) = f(xy, y) − F(xy) + F(xy) − F(x′) ≤ ϵ; +(54) +On the other hand if f(xy, y) − F(x′) < 0, then +|f(xy, y) − F(x′)| = F(x′) − f(xy, y) = F(x′) − f(x′, y) + f(x′, y) − f(xy, y) ≤ ϵ. +(55) +Hence, we showed that for any ϵ, there exists δ′, such that ∀|y − y0| < δ′, +|f(xy, y) − F(x′)| = | max +x +f(x, y) − max +x +F(x)| ≤ ϵ, +(56) +and hence +lim +y→y0 max +x +f(x, y) = max +x +F(x) = max +x +lim +y→y0 f(x, y), +(57) +and this completes the proof. +Then, we show that the robust discounted value function converges uniformly to the robust average-reward as the discounted +factor approaches 1. + +Theorem 10 (Restatement of Theorem 2). The robust discounted value function converges uniformly to the robust average-reward +on Π: +lim +γ→1(1 − γ)V π +P,γ = gπ +P. +(58) +Proof. Due to Theorem 9, for any stationary policy π, gπ +P(s) = minP∈P gπ +P(s) under the stationary model. Hence from the +uniform convergence in Theorem 1, we first show the following: +gπ +P = min +P∈P gπ +P += min +P∈P lim +γ→1(1 − γ)V π +P,γ +(a) += lim +γ→1 min +P∈P(1 − γ)V π +P,γ += lim +γ→1(1 − γ)V π +P,γ, +(59) +where (a) is because Lemma 6. Moreover, note that limγ→1(1 − γ)V π +P,γ = gπ +P uniformly on Π × P, hence the convergence in +(59) is also uniform on Π. Thus, we complete the proof. +Proof of Theorem 3 +Theorem 11 (Restatement of Theorem 3). VT generated by Algorithm 1 converges to the robust average-reward gπ +P as T → ∞. +Proof. From discounted robust Bellman equation (Nilim and El Ghaoui 2004), it can be shown that +(1 − γt)V π +P,γt = (1 − γt) +� +a +π(a|s)(r(s, a) + γtσPas (V π +P,γt)). +(60) +Then we can show that for any s ∈ S, +|Vt+1(s) − (1 − γt+1)V π +P,γt+1(s)| += |Vt+1(s) − (1 − γt)V π +P,γt(s) + (1 − γt)V π +P,γt(s) − (1 − γt+1)V π +P,γt+1(s)| +(61) +≤ |(1 − γt)V π +P,γt(s) − (1 − γt+1)V π +P,γt+1(s)| + |Vt+1(s) − (1 − γt)V π +P,γt(s)| += |(1 − γt)V π +P,γt(s) − (1 − γt+1)V π +P,γt+1(s)| ++ +���� +� +a +π(a|s) +� +(1 − γt)r(s, a) + γtσPas (Vt) − ((1 − γt)r(s, a) + γtσPas ((1 − γt)V π +P,γt)) +����� += |(1 − γt)V π +P,γt(s) − (1 − γt+1)V π +P,γt+1(s)| + +���� +� +a +π(a|s) +� +γtσPas (Vt) − γtσPas ((1 − γt)V π +P,γt) +����� += |(1 − γt)V π +P,γt(s) − (1 − γt+1)V π +P,γt+1(s)| + γt +���� +� +a +π(a|s) +� +σPas (Vt) − σPas ((1 − γt)V π +P,γt) +�����. +(62) +If we denote by ∆t ≜ ∥Vt − (1 − γt)V π +P,γt∥∞, then +∆t+1 ≤ ∥(1 − γt)V π +P,γt − (1 − γt+1)V π +P,γt+1∥∞ + γt max +s +� � +a +π(a|s) +����σPas (Vt) − σPas ((1 − γt)V π +P,γt) +���� +� +. +(63) +It can be easily verified that σPa +s (V ) is a 1-Lipschitz function, thus the second term in (63) can be further bounded as +� +a +π(a|s) +����σPa +s (Vt) − σPa +s ((1 − γt)V π +P,γt) +���� +≤ +� +a +π(a|s)∥Vt − (1 − γt)V π +P,γt∥∞ += ∥Vt − (1 − γt)V π +P,γt∥∞, +(64) +and hence +∆t+1 ≤ ∥(1 − γt)V π +P,γt − (1 − γt+1)V π +P,γt+1∥∞ + γt∆t. +(65) + +Recall that +(1 − γt)V π +P,γt = (1 − γt) min +P V π +P,γt. +(66) +Let s∗ +t ≜ arg maxs |(1 − γt)V π +P,γt(s) − (1 − γt+1)V π +P,γt+1(s)|. Then it follows that +∥(1 − γt)V π +P,γt − (1 − γt+1)V π +P,γt+1∥∞ = |(1 − γt)V π +P,γt(s∗ +t ) − (1 − γt+1)V π +P,γt+1(s∗ +t )|. +(67) +Note that from (Nilim and El Ghaoui 2004; Iyengar 2005), for any stationary policy π, there exists a stationary model P such +that V π +P,γ(s) = EP,π +� �∞ +t=0 γtrt|S0 = s +� +≜ V π +P,γ. Hence in the following, for each γt, we denote the worst-case transition +kernel of V π +P,γt by Pt. 
+If (1 − γt)V π +P,γt(s∗ +t ) ≥ (1 − γt+1)V π +P,γt+1(s∗ +t ), then +|(1 − γt)V π +P,γt(s∗ +t ) − (1 − γt+1)V π +P,γt+1(s∗ +t )| += min +P (1 − γt)V π +P,γt(s∗ +t ) − min +P (1 − γt+1)V π +P,γt+1(s∗ +t ) += (1 − γt)V π +Pt,γt(s∗ +t ) − (1 − γt+1)V π +Pt+1,γt+1(s∗ +t ) += (1 − γt)V π +Pt,γt(s∗ +t ) − (1 − γt)V π +Pt+1,γt(s∗ +t ) + (1 − γt)V π +Pt+1,γt(s∗ +t ) − (1 − γt+1)V π +Pt+1,γt+1(s∗ +t ) +(a) +≤ (1 − γt)V π +Pt+1,γt(s∗ +t ) − (1 − γt+1)V π +Pt+1,γt+1(s∗ +t ) +≤ ∥(1 − γt)V π +Pt+1,γt − (1 − γt+1)V π +Pt+1,γt+1∥∞, +(68) +where (a) is due to (1 − γt)V π +Pt,γt(s∗ +t ) = minP(1 − γt)V π +P,γt(s∗ +t ) ≤ (1 − γt)V π +Pt+1,γt(s∗ +t ). +Now, according to Lemma 1, +(1 − γt)V π +Pt+1,γt = gπ +Pt+1 + (1 − γt)hπ +Pt+1 + (1 − γt)f π +Pt+1(γt), +(69) +(1 − γt+1)V π +Pt+1,γt+1 = gπ +Pt+1 + (1 − γt+1)hπ +Pt+1 + (1 − γt+1)f π +Pt+1(γt+1). +(70) +Hence, for any γt > δ, eq. (68) can be further bounded as +∥(1 − γt)V π +Pt+1,γt − (1 − γt+1)V π +Pt+1,γt+1∥∞ += ∥(γt+1 − γt)hπ +Pt+1 + (1 − γt)f π +Pt+1(γt) − (1 − γt+1)f π +Pt+1(γt+1)∥∞ +≤ (γt+1 − γt)∥hπ +Pt+1∥∞ + ∥f π +Pt+1(γt) − f π +Pt+1(γt+1)∥∞ + ∥γt+1f π +Pt+1(γt+1) − γtf π +Pt+1(γt)∥∞ +(a) +≤ h(γt+1 − γt) + L(γt+1 − γt) + ∥γt+1f π +Pt+1(γt+1) − γtf π +Pt+1(γt)∥∞ +≤ h(γt+1 − γt) + L(γt+1 − γt) + ∥γt+1f π +Pt+1(γt+1) − γt+1f π +Pt+1(γt)∥∞ + ∥γt+1f π +Pt+1(γt) − γtf π +Pt+1(γt)∥∞ +≤ h(γt+1 − γt) + L(γt+1 − γt) + γt+1∥f π +Pt+1(γt+1) − f π +Pt+1(γt)∥∞ + ∥f π +Pt+1(γt)∥∞(γt+1 − γt) +(b) +≤ (h + L + γt+1L + sup +π,P,γ +∥f π +P (γ)∥∞)(γt+1 − γt) +≤ K(γt+1 − γt), +(71) +where (a) is from Lemma 5 for any γt > δ, cf is defined in (51) and K ≜ h + 2L + cf is a uniform constant; And (b) is from +Lemma 5. +Similarly, the inequality also holds for the case when (1 − γt)V π +P,γt(s∗ +t ) ≤ (1 − γt+1)V π +P,γt+1(s∗ +t ). Thus we have that for any +t such that γt > δ, +∆t+1 ≤ K(γt+1 − γt) + γt∆t, +(72) +where K is a uniform constant. +Following Lemma 8 from (Tewari and Bartlett 2007), we have that ∆t → 0. Note that +∥Vt − gπ +P∥∞ ≤ ∥Vt − (1 − γt)V π +P,γt∥∞ + ∥(1 − γt)V π +P,γt − gπ +P∥∞ = ∆t + ∥(1 − γt)V π +P,γt − gπ +P∥∞. +(73) +Together with Theorem 2, we further have that +lim +t→∞ ∥Vt − gπ +P∥∞ = 0, +(74) +which completes the proof. + +Proof of Theorem 4 +Note that the optimal robust average-reward is defined as +g∗ +P(s) ≜ max +π +gπ +P(s). +(75) +We further define +V ∗ +P,γ(s) ≜ max +π +V π +P,γ(s). +(76) +Theorem 12 (Restatement of Theorem 4). VT generated by Algorithm 2 converges to the optimal robust average-reward g∗ +P as +T → ∞. +Proof. Firstly, from the uniform convergence in Theorem 2, it can be shown that +lim +t→∞(1 − γt)V ∗ +P,γt = g∗ +P. +(77) +We then show that for any s ∈ S, +|Vt+1(s) − (1 − γt+1)V ∗ +P,γt+1(s)| +≤ |Vt+1(s) − (1 − γt)V ∗ +P,γt(s)| + |(1 − γt)V ∗ +P,γt(s) − (1 − γt+1)V ∗ +P,γt+1(s)| +(a) += |(1 − γt)V ∗ +P,γt(s) − (1 − γt+1)V ∗ +P,γt+1(s)| ++ +���� max +a +� +(1 − γt)r(s, a) + γtσPas (Vt) +� +− max +a +� +((1 − γt)r(s, a) + γtσPas ((1 − γt)V ∗ +P,γt)) +����� +≤ |(1 − γt)V ∗ +P,γt(s) − (1 − γt+1)V ∗ +P,γt+1(s)| ++ max +a +����(1 − γt)r(s, a) + γtσPas (Vt) − ((1 − γt)r(s, a) + γtσPas ((1 − γt)V ∗ +P,γt)) +����, +(78) +where (a) is because the optimal robust Bellman equation, and the last inequality is from the fact that | maxx f(x)−maxx g(x)| ≤ +maxx |f(x) − g(x)|. +Hence eq. (78) can be further bounded as +|Vt+1(s) − (1 − γt+1)V ∗ +P,γt+1(s)| +≤ |(1 − γt)V ∗ +P,γt(s) − (1 − γt+1)V ∗ +P,γt+1(s)| + γt max +a +����σPas (Vt) − σPas ((1 − γt)V ∗ +P,γt) +����. 
+(79) +If we denote by ∆t ≜ ∥Vt − (1 − γt)V ∗ +P,γt∥∞, then +∆t+1 ≤ ∥(1 − γt)V ∗ +P,γt − (1 − γt+1)V ∗ +P,γt+1∥∞ + γt max +s.a +����σPas (Vt) − σPas ((1 − γt)V ∗ +P,γt) +����. +(80) +Since the support function σPas (V ) is 1-Lipschitz, then it can be shown that for any s, a, +����σPas (Vt) − σPas ((1 − γt)V ∗ +P,γt) +���� ≤ ∥Vt − (1 − γt)V ∗ +P,γt∥∞. +(81) +Hence +∆t+1 ≤ ∥(1 − γt)V ∗ +P,γt − (1 − γt+1)V ∗ +P,γt+1∥∞ + γt∆t. +(82) +Similar to (71) in Theorem 3, we can show that +∥(1 − γt)V ∗ +P,γt − (1 − γt+1)V ∗ +P,γt+1∥∞ ≤ K|γt − γt+1|, +(83) +and similar to Lemma 8 from (Tewari and Bartlett 2007), +lim +t→∞ ∆t = 0. +(84) +Moreover, note that +∥Vt − g∗ +P∥∞ ≤ ∥Vt − (1 − γt)V ∗ +P,γt∥∞ + ∥(1 − γt)V ∗ +P,γt − g∗ +P∥∞ = ∆t + ∥(1 − γt)V ∗ +P,γt − g∗ +P∥∞, +(85) +which together with eq. (77) implies that +∥Vt − g∗ +P∥∞ → 0, +(86) +and hence it completes the proof. + +Proof of Theorem 5 +We denote the set of all stationary deterministic polices by ΠD in this section. +Theorem 13 (Restatement of Theorem 5). There exists 0 < δ < 1, such that for any γ > δ, a deterministic optimal robust +policy for robust discounted value function V ∗ +P,γ is also an optimal policy for robust average-reward, i.e., +V π∗ +P,γ = V ∗ +P,γ. +(87) +Moreover, when arg maxπ∈ΠD gπ +P is a singleton, there exists a unique Blackwell optimal policy. +Proof. According to Theorem 74, there exists π∗ ∈ ΠD such that +g∗ +P = gπ∗ +P . +(88) +Assume the robust average-reward of all deterministic policies are sorted in a descending order: +g∗ +P = gπ∗ +1 +P = gπ∗ +2 +P = ... = gπ∗ +m +P +> gπ1 +P ≥ ... ≥ gπn +P +(89) +for all π∗ +i , πi ∈ ΠD, and we define Π∗ = {π∗ +i : i = 1, ..., m}. Denote by d = gπ∗ +i +P − gπ1 +P . +From Theorem 2, we know that for any π ∈ ΠD, +lim +γ→1(1 − γ)V π +P,γ = gπ +P. +(90) +Because the set ΠD is finite, for any ϵ < d +2, there exists δ′ < 1, such that for any γ > δ′, π∗ +i and πj, +|(1 − γ)V π∗ +i +P,γ − g∗ +P| < ϵ, +(91) +|(1 − γ)V πj +P,γ − gπj +P | < ϵ. +(92) +It hence implies that +(1 − γ)V π∗ +i +P,γ ≥ (d − 2ϵ) + (1 − γ)V πj +P,γ > (1 − γ)V πj +P,γ, +(93) +and +V π∗ +i +P,γ > V πj +P,γ. +(94) +Note that from Theorem 3.1 in (Iyengar 2005), i.e., maxπ∈ΠD V π +P,γ = V ∗ +P,γ, we have that for any γ, there exists a deterministic +policy π ∈ ΠD, such that V ∗ +P,γ = V π +P,γ. Together with (94), it implies that all the possible optimal robust polices of V π +P,γ belong +to {π∗ +1, ...π∗ +m}, i.e., the set Π∗. Hence, there exists π∗ +j ∈ Π∗, such that +V +π∗ +j +P,γ = max +π∈ΠD V π +P,γ = V ∗ +P,γ. +(95) +For the second part, when the optimal robust policy of robust average-reward is unique, i.e., Π∗ = {π∗}. Then from the +results above, there exists δ′, such that for any γ > δ′, V π∗ +P,γ > V π +P,γ for any π∗ ̸= π ∈ ΠD, and hence π∗ is the optimal policy +for discounted robust MDPs, which is the unique Blackwell optimal policy. +Proof of Results for Direct Approach +Recall that +V π +P (s) ≜ +min +κ∈� +t≥0 P Eκ,π +� ∞ +� +t=0 +(rt − gπ +P) +��S0 = s +� +, +(96) +where +gπ +P = +min +κ∈� +t≥0 P lim +n→∞ Eκ,π +� +1 +n +n−1 +� +t=0 +rt|S0 = s +� +. +(97) +We first show that the robust relative function is always finite. +Lemma 7. For any π, V π +P is finite. +4The proof of Theorem 7 is independent of theorem 5 and does not relay on the results to be showed here. + +Proof. According to Theorem 9, V π +P = minP∈P V π +P = minP∈P EP,π +� �∞ +t=0(rt − gπ +P) +� +. 
Note that V π +P can be rewritten as +V π +P = min +P∈P EP,π +� ∞ +� +t=0 +(rt − gπ +P) +� += min +P∈P EP,π +� +lim +n→∞ +n +� +t=0 +(rt − gπ +P) +� += min +P∈P EP,π +� +lim +n→∞ +n +� +t=0 +(rt − gπ +P + gπ +P − gπ +P) +� += min +P∈P EP,π +� +lim +n→∞(Rn − ngπ +P + ngπ +P − ngπ +P) +� +, +(98) +where Rn = �n +t=0 rt. Note that for any P ∈ P and n, ngπ +P ≥ ngπ +P, hence +lim +n→∞(Rn − ngπ +P + ngπ +P − ngπ +P) ≥ lim +n→∞(Rn − ngπ +P), +(99) +and thus the lower bound of V π +P can be derived as follows, +V π +P ≥ min +P∈P EP,π +� ∞ +� +t=0 +(rt − gπ +P) +� += min +P∈P V π +P += min +P∈P Hπ +Prπ. +(100) +which is finite due to the fact that Hπ +P is continuous on the compact set P. +From Theorem 9, we denote the stationary worst-case transition kernel of gπ +P by Pg. Then the upper bound of V π +P can be +bounded by noting that +V π +P = min +P∈P EP,π +� ∞ +� +t=0 +(rt − gπ +Pg) +� +≤ EPg,π +� ∞ +� +t=0 +(rt − gπ +Pg) +� += V π +Pg, +(101) +which is also finite and Pg denotes the worst-case transition kernel of gπ +P. Hence we show that V π +P is finite for any π and hence +complete the proof. +After showing that the robust relative value function is well-defined, we show the following robust Bellman equation for +average-reward robust MDPs. +Theorem 14 (Restatement of Theorem 6). For any s and π, (V π +P , gπ +P) is a solution to the following robust Bellman equation: +V (s) + g = +� +a +π(a|s) +� +r(s, a) + σPa +s (V ) +� +. +(102) +Proof. From the definition, +V π +P (s) = +min +κ∈� +t≥0 P Eκ,π +� ∞ +� +t=0 +(rt − gπ +P) +��S0 = s +� +, +(103) +hence +V π +P (s) = +min +κ∈� +t≥0 P Eκ,π +� ∞ +� +t=0 +(rt − gπ +P) +��S0 = s +� += +min +κ∈� +t≥0 P Eκ,π +� +(r0 − gπ +P) + +∞ +� +t=1 +(rt − gπ +P) +��S0 = s +� + += +min +κ∈� +t≥0 P +�� +a +π(a|s)r(s, a) − gπ +P + Eκ,π +� ∞ +� +t=1 +(rt − gπ +P) +��S0 = s +�� += +� +a +π(a|s) (r(s, a) − gπ +P) + +min +κ∈� +t≥0 P +� +� +� +� +a,s′ +π(a|s)Pa +s,s′Eκ,π +� ∞ +� +t=1 +(rt − gπ +P)|S1 = s′ +�� +� +� += +� +a +π(a|s) (r(s, a) − gπ +P) + min +P0∈P +min +κ=(P1,...)∈� +t≥1 P +� +� +� +� +a,s′ +π(a|s)(P0)a +s,s′Eκ,π +� ∞ +� +t=1 +(rt − gπ +P)|S1 = s′ +�� +� +� += +� +a +π(a|s) (r(s, a) − gπ +P) + min +P0∈P +� +� +� +� +a,s′ +π(a|s)(P0)a +s,s′ +min +κ=(P1,...)∈� +t≥1 P +� +Eκ,π +� ∞ +� +t=1 +(rt − gπ +P)|S1 = s′ +��� +� +� += +� +a +π(a|s) (r(s, a) − gπ +P) + +� +a +π(a|s) +� +s′ +min +pa +s,s′∈Pas +pa +s,s′V π +P (s′) += +� +a +π(a|s) (r(s, a) − gπ +P) + +� +a +π(a|s)σPas (V π +P ) += +� +a +π(a|s) +� +r(s, a) − gπ +P + σPas (V π +P ) +� +. +(104) +This hence completes the proof. +Theorem 15. [Restatement of Theorem 7, Part 1] For any (g, V ) that is a solution to maxa +� +r(s, a) − g + σPas (V ) − V (s) +� += +0, ∀s, then g = g∗ +P. +Proof. In this proof, for two vectors v, w ∈ Rn, v ≥ w denotes that v(s) ≥ w(s) entry-wise. +Let B(g, V )(s) ≜ maxa +� +r(s, a) − g + σPas (V ) − V (s) +� +. Since (g, V ) is a solution to (13), hence for any a ∈ A and any +s ∈ S, +r(s, a) − g + σPas (V ) − V (s) ≤ 0, +(105) +from which it follows that for any policy π, +g(s) ≥ rπ(s) + +� +a +π(a|s)σPas (V ) − V (s) ≜ rπ(s) + +� +a +π(a|s)(pa +s)⊤V − V (s), +(106) +where rπ(s) ≜ � +a π(a|s)r(s, a), pa +s ≜ arg minp∈Pas p⊤V , and PV = {pa +s : s ∈ S, a ∈ A}. We also denotes the state transition +matrix induced by π and PV by Pπ +V . +Using these notations, and rewrite eq. (106), we have that +g1 ≥ rπ + (Pπ +V − I)V. +(107) +Since the inequality in eq. (107) holds entry-wise, all entries of Pπ +V are positive, then by multiplying both sides of eq. 
(107) by +Pπ +V , we have that +g1 = gPπ +V 1 ≥ Pπ +V rπ + Pπ +V (Pπ +V − I)V. +(108) +Multiplying the both sides of eq. (108) by Pπ +V , and repeatedly doing that, we have that +g1 ≥ (Pπ +V )2rπ + (Pπ +V )2(Pπ +V − I)V, +(109) +... +... +(110) +g1 ≥ (Pπ +V )n−1rπ + (Pπ +V )n−1(Pπ +V − I)V. +(111) +Summing up these inequalities from eq. (107) to eq. (111), we have that +ng1 ≥ (I + Pπ +V + ... + (Pπ +V )n−1)rπ + (I + Pπ +V + ... + (Pπ +V )n−1)(Pπ +V − I)V, +(112) +and from which, it follows that +g1 ≥ 1 +n(I + Pπ +V + ... + (Pπ +V )n−1)rπ + 1 +n(I + Pπ +V + ... + (Pπ +V )n−1)(Pπ +V − I)V + += 1 +n(I + Pπ +V + ... + (Pπ +V )n−1)rπ + 1 +n((Pπ +V )n − I)V. +(113) +It can be easily verified that limn→∞ 1 +n((Pπ +V )n − I)V = 0, and hence it implies that +g1 ≥ lim +n→∞ +1 +n(I + Pπ +V + ... + (Pπ +V )n−1)rπ += lim +n→∞ +1 +nEPπ +V ,π +� +n +� +t=0 +rt +� += gπ +Pπ +V 1 +≥ gπ +P1. +(114) +Since eq. (114) holds for any policy π, it follows that g ≥ g∗ +P. On the other hand, since B(g, V ) = 0, there exists a policy τ such +that +g1 = rτ + (Pτ +V − I)V, +(115) +where rτ, Pτ +V are similarly defined as for π. From Theorem 9, there exists a stationary transition kernel Pτ +ave such that gτ +P = gτ +Pτave. +We denote the state transition matrix induced by τ and Pτ +ave by Pτ. Then because Pτ +V is the worst-case transition of V , it follows +that +Pτ +V V ≤ PτV. +(116) +Thus +g1 ≤ rτ + (Pτ − I)V. +(117) +Similarly, we have that +g1 ≤ (Pτ)j−1rτ + (Pτ)j−1(Pτ − I)V, +(118) +for j = 2, ..., n. Summing these inequalities together we have that +ng1 ≤ (I + Pτ + ... + (Pτ)n−1)rτ + (I + Pτ + ... + (Pτ)n−1)(Pτ)n−1(Pτ − I)V += (I + Pτ + ... + (Pτ)n−1)rτ + ((Pτ)n − I)V. +(119) +Hence +g1 ≤ lim +n→∞ +1 +nEPτave,τ +� +n +� +t=0 +rt +� += gτ +Pτave1 = gτ +P1 ≤ g∗ +P1. +(120) +Thus g = g∗ +P, and this concludes the proof. +Theorem 16 (Restatement of Theorem 7, Part 2). For any (g, V ) that is a solution to +max +a +� +r(s, a) − g + σPas (V ) − V (s) +� += 0, ∀s, +(121) +if we set +π∗(s) = arg max +a +� +r(s, a) + σPa +s (V ) +� +(122) +for any s ∈ S, then π∗ is an optimal robust policy. +Proof. Note that for any stationary policy π, we denote by σPπ(V ) ≜ (� +a π(a|s1)σPas1 (V ), ..., � +a π(a|s|S|)σPas|S| (V )) being +a vector in R|S|. Then eq. (14) is equivalent to +rπ∗ + σPπ∗ (V ) = max +π +{rπ + σPπ(V )} . +(123) +Hence, +rπ∗ − g + σPπ∗ (V ) − V = max +π +{rπ − g + σPπ(V ) − V } . +(124) +Since (g, V ) is a solution to (13), it follows that +rπ∗ − g + σPπ∗ (V ) − V = 0. +(125) +According to the robust Bellman equation eq. (12), (gπ∗ +P , V π∗ +P ) is a solution to eq. (125). Thus from Theorem 15, gπ∗ +P = g∗ +P, and +hence π∗ is an optimal robust policy. + +Theorem 17 (Restatement of Theorem 8). (wT , Vt) in Algorithm 3 converges to a solution of eq. (13). +Proof. We first denote the update operator as +Lv(s) ≜ max +a (r(s, a) + σPas (v)). +(126) +Now, consider sp(Lv − Lu). Denote by ´s ≜ arg maxs(Lv(s) − Lu(s)) and `s ≜ arg mins(Lv(s) − Lu(s)). Also denote by +av ≜ arg maxa(r(´s, a) + σPa +´s (v)) and au ≜ arg maxa(r(´s, a) + σPa +´s (u)) Then +Lv(´s) − Lu(´s) = max +a (r(´s, a) + σPa +´s (v)) − max +a (r(´s, a) + σPa +´s (u)) +≜ r(´s, av) + σPav +´s (v) − (r(´s, au) + σPau +´s (u)) +≤ r(´s, av) + σPav +´s (v) − (r(´s, av) + σPav +´s (u)) += σPav +´s (v) − σPav +´s (u) +≜ (pav,v +´s +)⊤v − (pav,u +´s +)⊤u, +(127) +where pav,v +´s += arg minp∈Pav +´s p⊤v and pav,u +´s += arg minp∈Pav +´s p⊤u. Thus eq. (127) can be further bounded as +Lv(´s) − Lu(´s) +≤ (pav,v +´s +)⊤v − (pav,u +´s +)⊤u +≤ (pav,u +´s +)⊤(v − u). 
+(128) +Similarly, +Lv(`s) − Lu(`s) ≥ (pau,v +`s +)⊤(v − u). +(129) +Thus +sp(Lv − Lu) ≤ (pav,u +´s +)⊤(v − u) − (pau,v +`s +)⊤(v − u). +(130) +Now denote by v −u ≜ (x1, x2, ..., xn), pav,u +´s += (p1, ..., pn) and pau,v +`s += (q1, ..., qn). Further denote by bi ≜ min{pi, qi} Then +n +� +i=1 +pixi − +n +� +i=1 +qixi += +n +� +i=1 +(pi − bi)xi − +n +� +i=1 +(qi − bi)xi +≤ +n +� +i=1 +(pi − bi) max{xi} − +n +� +i=1 +(qi − bi) min{xi} += +n +� +i=1 +(pi − bi)sp(x) + +� +n +� +i=1 +(pi − bi) − +n +� +i=1 +(qi − bi) +� +min{xi} += +� +1 − +n +� +i=1 +bi +� +sp(x). +(131) +Thus we showed that +sp(Lv − Lu) ≤ +� +1 − +n +� +i=1 +bi +� +sp(v − u). +(132) +Now from Assumption 2, and following Theorem 8.5.3 from (Puterman 1994), it can be shown that there exists 1 > λ > 0, such +that for any a, u, v, +n +� +i=1 +bi ≥ λ. +(133) +Further, following Theorem 8.5.2 in (Puterman 1994), it can be shown that L is a J-step contraction operator for some integer J, +i.e., +sp(LJv − LJu) ≤ (1 − λ)sp(v − u). +(134) +Then, it can be shown that the relative value iteration converges to a solution of the optimal equation similar to the relative +value iteration for non-robust MDPs under the average-reward criterion (Theorem 8.5.7 in (Puterman 1994), Section 1.6.4 +in(Sigaud and Buffet 2013)), and hence (wt, Vt) converges to a solution to eq. (13) as ϵ → 0. + diff --git a/UNAyT4oBgHgl3EQf8fqv/content/tmp_files/load_file.txt b/UNAyT4oBgHgl3EQf8fqv/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..6fefea4f3be11b924d59cb8459552ba562cea38e --- /dev/null +++ b/UNAyT4oBgHgl3EQf8fqv/content/tmp_files/load_file.txt @@ -0,0 +1,1466 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf,len=1465 +page_content='Robust Average-Reward Markov Decision Processes Yue Wang,1 Alvaro Velasquez, 2 George Atia, 3 Ashley Prater-Bennette, 4 Shaofeng Zou 1 1 University at Buffalo, The State University of New York 2 Information Innovation Office, Defense Advanced Research Projects Agency 3 University of Central Florida 4 Air Force Research Laboratory ywang294@buffalo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='com, alvaro.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='velasquez@darpa.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='mil, george.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='atia@ucf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='edu, ashley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='prater-bennette@us.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='af.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='mil, szou3@buffalo.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='edu Abstract In robust Markov decision processes (MDPs), the uncertainty in the transition kernel is addressed by finding a policy that optimizes the worst-case performance over an uncertainty set of MDPs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' While much of the literature has focused on dis- counted MDPs, robust average-reward MDPs remain largely unexplored.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In this paper, we focus on robust average-reward MDPs, where the goal is to find a policy that optimizes the worst-case average reward over an uncertainty set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We first take an approach that approximates average-reward MDPs using discounted MDPs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We prove that the robust discounted value function converges to the robust average-reward as the discount factor γ goes to 1, and moreover, when γ is large, any optimal policy of the robust discounted MDP is also an optimal policy of the robust average-reward.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We further design a robust dynamic programming approach, and theoretically characterize its convergence to the optimum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Then, we in- vestigate robust average-reward MDPs directly without using discounted MDPs as an intermediate step.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We derive the robust Bellman equation for robust average-reward MDPs, prove that the optimal policy can be derived from its solution, and further design a robust relative value iteration algorithm that provably find its solution, or equivalently, the optimal robust policy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Introduction A Markov decision process (MDP) is an effective mathemat- ical tool for sequential decision-making in stochastic envi- ronments (Derman 1970;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Puterman 1994).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Solving an MDP problem entails finding an optimal policy that maximizes a cumulative reward according to a given criterion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' However, in practice there could exist a mismatch between the assumed MDP model and the underlying environment due to various factors, such as non-stationarity of the environment, model- ing error, exogenous perturbation, partial observability, and adversarial attacks.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' The ensuing model mismatch could result in solution policies with poor performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' This challenge spurred noteworthy efforts on developing and analyzing a framework of robust MDPs e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', (Bagnell, Ng, and Schneider 2001;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Nilim and El Ghaoui 2004;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Iyengar 2005).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Rather than adopting a fixed MDP model, in the robust MDP setting, one seeks to optimize the worst-case perfor- mance over an uncertainty set of possible MDP models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' The Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='aaai.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='org).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' All rights reserved.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' solution to the robust MDP problem provides performance guarantee for all uncertain MDP models, and is thus robust to the model mismatch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Robust MDP problems falling under different reward op- timality criteria are fundamentally different.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In robust dis- counted MDPs, the goal is to find a policy that maximizes the discounted cumulative reward in the worst case.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In this setting, as the agent interacts with the environment, the re- ward received diminishes exponentially over time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Much of the prior work in the robust setting has focused on the dis- counted reward formulation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' The model-based method, e.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', (Iyengar 2005;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Nilim and El Ghaoui 2004;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Bagnell, Ng, and Schneider 2001;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Satia and Lave Jr 1973;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Wiesemann, Kuhn, and Rustem 2013;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Tamar, Mannor, and Xu 2014;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Lim and Autef 2019;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Xu and Mannor 2010;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Yu and Xu 2015;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Lim, Xu, and Mannor 2013), where information about the uncertainty set is assumed to be known to the learner, unveiled several fundamental characterizations of robust discounted MDPs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' This was further extended to the more practical model-free setting in which only samples from a simulator (the cen- troid of the uncertainty set) are available to the learner.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' For example, the value-based method (Roy, Xu, and Pokutta 2017;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Badrinath and Kalathil 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Wang and Zou 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Tessler, Efroni, and Mannor 2019;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Zhou et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Yang, Zhang, and Zhang 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Panaganti and Kalathil 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Goyal and Grand-Clement 2018;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Kaufman and Schaefer 2013;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Ho, Petrik, and Wiesemann 2018, 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Si et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2020) optimizes the worst-case performance using the robust value function as an intermediate step;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' on the other hand, the model-free policy-based method (Russel, Benosman, and Van Baar 2020;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Derman, Geist, and Mannor 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Eysenbach and Levine 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Wang and Zou 2022) directly optimizes the policy and is thus scalable to large/continuous state and action spaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Although discounted MDPs induce an elegant Bellman op- erator that is a contraction, and have been studied extensively, the policy obtained usually has poor long-term performance when a system operates for an extended period of time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' When the discount factor is very close to 1, the agent may prefer to compare policies on the basis of their average expected reward instead of their expected total discounted reward, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', queueing control, inventory management in supply chains, scheduling automatic guided vehicles and applications in communication networks (Kober, Bagnell, and Peters 2013).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Therefore, it is also important to optimize the long-term aver- arXiv:2301.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='00858v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='LG] 2 Jan 2023 age performance of a system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' However, robust MDPs under the average-reward crite- rion are largely understudied.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Compared to the discounted setting, the average-reward setting depends on the limiting behavior of the underlying stochastic process, and hence is markedly more intricate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' A recognized instance of such in- tricacy concerns the one-to-one correspondence between the stationary policies and the limit points of state-action frequen- cies, which while true for discounted MDPs, breaks down under the average-reward criterion even in the non-robust setting except in some very special cases (Puterman 1994;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Atia et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' This is largely due to dependence of the necessary conditions for establishing a contraction in average- reward settings on the graph structure of the MDP, versus the discounted-reward setting where it simply suffices to have a discount factor that is strictly less than one.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Heretofore, only a handful of studies have considered average-reward MDPs in the robust setting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' The first work by (Tewari and Bartlett 2007) considers robust average-reward MDPs un- der a specific finite interval uncertainty set, but their method is not easily applicable to other uncertainty sets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' More re- cently, (Lim, Xu, and Mannor 2013) proposed an algorithm for robust average-reward MDPs under the ℓ1 uncertainty set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' However, obtaining fundamental characterizations of the problem and convergence guarantee remains elusive.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Challenges and Contributions In this paper, we derive characterizations of robust average- reward MDPs with general uncertainty sets, and develop model-based approaches with provable theoretical guarantee.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Our approach is fundamentally different from previous work on robust discounted MDPs, robust and non-robust average- reward MDPs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In particular, the key challenges and the main contributions are summarized below.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We characterize the limiting behavior of robust dis- counted value function as the discount factor γ → 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' For the standard non-robust setting and for a specific tran- sition kernel, the discounted non-robust value function con- verges to the average-reward non-robust value function as γ → 1 (Puterman 1994).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' However, in the robust setting, we need to consider the worst-case limiting behavior under all possible transition kernels in the uncertainty set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Hence, the previous point-wise convergence result (Puterman 1994) cannot be directly applied.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In (Tewari and Bartlett 2007), a finite interval uncertainty set is studied, where due to its special structure, the number of possible worst-case transi- tion kernels of robust discounted MDPs is finite, and hence the order of min (over transition kernel) and limγ→1 can be exchanged, and therefore, the robust discounted value func- tion converges to the robust average-reward value function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' This result, however, does not hold for general uncertainty sets investigated in this paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We first prove the uniform convergence of discounted non-robust value function to average-reward w.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' the transition kernels and policies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Based on this uniform convergence, we show the conver- gence of the robust discounted value function to the robust average-reward.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' This uniform convergence result is the first in the literature and is of key importance to motivate our algorithm design and to guarantee convergence to the optimal robust policy in the average-reward setting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We design algorithms for robust policy evaluation and optimal control based on the limit method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Based on the uniform convergence, we then use robust discounted MDPs to approximate robust average-reward MDPs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We show that when γ is large, any optimal policy of the robust discounted MDP is also an optimal policy of the robust average-reward, and hence solves the robust optimal control problem in the average reward setting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' This result is similar to the Black- well optimality (Blackwell 1962;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Hordijk and Yushkevich 2002) for the non-robust setting, however, our proof is fun- damentally different.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Technically, the proof in (Blackwell 1962;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Hordijk and Yushkevich 2002) is based on the fact that the difference between the discounted value functions of two policies is a rational function of the discount factor, which has a finite number of zeros.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' However, in the robust setting with a general uncertainty set, the difference is no longer a rational function due to the min over the transition kernel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We construct a novel proof based on the limiting behavior of robust discounted MDPs, and show that the (optimal) robust discounted value function converges to the (optimal) robust average-reward as γ → 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Motivated by these insights, we then design our algorithms by applying a sequence of robust discounted Bellman operators while increasing the discount factor at a certain rate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We prove that our method can (i) evaluate the robust average-reward for a given policy and;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (ii) find the optimal robust value function and, in turn, the optimal robust policy for general uncertainty sets.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We design a robust relative value iteration method without using the discounted MDPs as an intermedi- ate step.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We further pursue a direct approach that solves the robust average-reward MDPs without using the limit method, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', without using discounted MDPs as an interme- diate step.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We derive a robust Bellman equation for robust average-reward MDPs, and show that the pair of robust rel- ative value function and robust average-reward is a solution to the robust Bellman equation under the average-reward setting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We further prove that if we can find any solution to the robust Bellman equation, then the optimal policy can be derived by a greedy approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' The problem hence can be equivalently solved by solving the robust Bellman equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We then design a robust value iteration method which provably converges to the solution of the robust Bell- man equation, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', solve the optimal policy for the robust average-reward MDP problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Related Work Robust discounted MDPs.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Model-based methods for robust discounted MDPs were studied in (Iyengar 2005;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Nilim and El Ghaoui 2004;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Bagnell, Ng, and Schneider 2001;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Satia and Lave Jr 1973;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Wiesemann, Kuhn, and Rustem 2013;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Lim and Autef 2019;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Xu and Mannor 2010;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Yu and Xu 2015;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Lim, Xu, and Mannor 2013;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Tamar, Mannor, and Xu 2014), where the uncertainty set is assumed to be known, and the problem can be solved using robust dynamic programming.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Later, the stud- ies were generalized to the model-free setting where stochas- tic samples from the centroid MDP of the uncertainty set are available in an online fashion (Roy, Xu, and Pokutta 2017;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Badrinath and Kalathil 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Wang and Zou 2021, 2022;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Tessler, Efroni, and Mannor 2019) and an offline fashion (Zhou et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Yang, Zhang, and Zhang 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Panaganti and Kalathil 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Goyal and Grand-Clement 2018;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Kaufman and Schaefer 2013;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Ho, Petrik, and Wiesemann 2018, 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Si et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' There are also empirical studies on robust RL, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', (Vinitsky et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2020;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Pinto et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2017;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Abdullah et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2019;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Hou et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2020;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Rajeswaran et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2017;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Huang et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2017;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Kos and Song 2017;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Lin et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2017;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Pattanaik et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2018;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Mandlekar et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' For discounted MDPs, the robust Bellman operator is a contraction, based on which robust dynamic programming and value-based methods can be designed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In this paper, we focus on robust average-reward MDPs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' However, the robust Bellman operator for average- reward MDPs is not a contraction, and its fixed point may not be unique.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Moreover, the average-reward setting depends on the limiting behavior of the underlying stochastic process, which is thus more intricate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Robust average-reward MDPs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Studies on robust average- reward MDPs are quite limited in the literature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Robust average-reward MDPs under a specific finite interval uncer- tainty set was studied in (Tewari and Bartlett 2007), where the authors showed the existence of a Blackwell optimal policy, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', there exists some δ ∈ [0, 1), such that the optimal robust policy exists and remains unchanged for any discount factor γ ∈ [δ, 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' However, this result depends on the structure of the uncertainty set.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' For general uncertainty sets, the existence of a Blackwell optimal policy may not be guaranteed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' More recently, (Lim, Xu, and Mannor 2013) designed a model-free algorithm for a specific ℓ1-norm uncertainty set and charac- terized its regret bound.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' However, their method also relies on the structure of the ℓ1-norm uncertainty set, and may not be generalizable to other types of uncertainty sets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In this paper, our results can be applied to various types of uncertainty sets, and thus is more general.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Preliminaries and Problem Model In this section, we introduce some preliminaries on dis- counted MDPs, average-reward MDPs, and robust MDPs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Discounted MDPs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' A discounted MDP (S, A, P, r, γ) is specified by: a state space S, an action space A, a transi- tion kernel P = {pa s ∈ ∆(S), a ∈ A, s ∈ S}1, where pa s is the distribution of the next state over S upon taking action a in state s (with pa s,s′ denoting the probability of transitioning to s′), a reward function r : S × A → [0, 1], and a discount factor γ ∈ [0, 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' At each time step t, the agent at state st takes an action at, the environment then transitions to the next state st+1 according to pat st , and produces a reward sig- nal r(st, at) ∈ [0, 1] to the agent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In this paper, we also write rt = r(st, at) for convenience.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' A stationary policy π : S → ∆(A) is a distribution over A for any given state s, and the agent takes action a at state s with probability π(a|s).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' The discounted value function of a stationary policy π starting from s ∈ S is defined as the 1∆(S): the (|S| − 1)-dimensional probability simplex on S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' expected discounted cumulative reward by following policy π: V π P,γ(s) ≜ Eπ,P [�∞ t=0 γtrt|S0 = s].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Average-Reward MDPs.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Different from discounted MDPs, average-reward MDPs do not discount the reward over time, and consider the behavior of the underlying Markov process under the steady-state distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' More specifically, under a specific transition kernel P, the average-reward of a policy π starting from s ∈ S is defined as gπ P(s) ≜ lim n→∞ Eπ,P � 1 n n−1 � t=0 rt|S0 = s � , (1) which we also refer to in this paper as the average-reward value function for convenience.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' The average-reward value function can also be equiva- lently written as follows: gπ P = limn→∞ 1 n �n−1 t=0 (Pπ)trπ ≜ Pπ ∗rπ, where (Pπ)s,s′ ≜ � a π(a|s)pa s,s′ and rπ(s) ≜ � a π(a|s)r(s, a) are the transition matrix and reward func- tion induced by π, and Pπ ∗ ≜ limn→∞ 1 n �n−1 t=0 (Pπ)t is the limit matrix of Pπ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In the average-reward setting, we also define the following relative value function V π P (s) ≜ Eπ,P � ∞ � t=0 (rt − gπ P)|S0 = s � , (2) which is the cumulative difference over time between the reward and the average value gπ P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' It has been shown that (Puterman 1994): V π P = Hπ Prπ, where Hπ P ≜ (I − Pπ + Pπ ∗)−1(I − Pπ ∗) is defined as the deviation matrix of Pπ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' The relationship between the average-reward and the rel- ative value functions can be characterized by the following Bellman equation (Puterman 1994): V π P (s) = Eπ � r(s, A) − gπ P(s) + � s′∈S pA s,s′V π P (s′) � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (3) Robust discounted and average-reward MDPs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' For robust MDPs, the transition kernel is not fixed but belongs to some uncertainty set P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' After the agent takes an action, the envi- ronment transits to the next state according to an arbitrary transition kernel P ∈ P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In this paper, we focus on the (s, a)- rectangular uncertainty set (Nilim and El Ghaoui 2004;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Iyen- gar 2005), i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', P = � s,a Pa s, where Pa s ⊆ ∆(S).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We note that there are also studies on relaxing the (s, a)-rectangular uncertainty set to s-rectangular uncertainty set, which is not the focus of this paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Under the robust setting, we consider the worst-case perfor- mance over the uncertainty set of MDPs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' More specifically, the robust discounted value function of a policy π for a dis- counted MDP is defined as V π P,γ(s) ≜ min κ∈� t≥0 P Eπ,κ � ∞ � t=0 γtrt|S0 = s � , (4) where κ = (P0, P1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=') ∈ � t≥0 P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In this paper, we focus on the following worst-case average- reward for a policy π: gπ P(s) ≜ min κ∈� t≥0 P lim n→∞ Eπ,κ � 1 n n−1 � t=0 rt|S0 = s � , (5) to which, for convenience, we refer as the robust average- reward value function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' For robust discounted MDPs, it has been shown that the robust discounted value function is the unique fixed-point of the robust discounted Bellman operator (Nilim and El Ghaoui 2004;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Iyengar 2005;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Puterman 1994): TπV (s) ≜ � a∈A π(a|s) � r(s, a) + γσPas (V ) � , (6) where σPas (V ) ≜ minp∈Pas p⊤V is the support function of V on Pa s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Based on the contraction of Tπ, robust dynamic programming approaches, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', robust value iteration, can be designed (Nilim and El Ghaoui 2004;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Iyengar 2005) (see Appendix for a review of these methods).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' However, there is no such contraction result for robust average-reward MDPs.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In this paper, our goal is to find a policy that optimizes the robust average-reward value function: max π∈Π gπ P(s), for any s ∈ S, (7) where Π is the set of all stationary policies, and we denote by g∗ P(s) ≜ maxπ gπ P(s) the optimal robust average-reward.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Limit Approach for Robust Average-Reward MDPs We first take a limit approach to solve the problem of robust average-reward MDPs in eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (7).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' It is known that under the non-robust setting, for any fixed π and P, the discounted value function converges to the average-reward value function as the discount factor γ approaches 1 (Puterman 1994), i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', lim γ→1(1 − γ)V π P,γ = gπ P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (8) We take a similar idea, and show that the same result holds in the robust case: limγ→1(1 − γ)V π P,γ = gπ P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Based on this result, we further design algorithms (Algorithms 1 and 2) that apply a sequence of robust discounted Bellman operators while increasing the discount factor at a certain rate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We then theoretically prove that our algorithms converge to the optimal solutions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In the following, we first show that the convergence limγ→1(1 − γ)V π P,γ = gπ P is uniform on the set Π × P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We make a mild assumption as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Assumption 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' For any s ∈ S, a ∈ A, the uncertainty set Pa s is a compact subset of ∆(S).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' The set Pa s is compact if and only if it is bounded and closed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Since Pa s ⊆ ∆(S), it is clearly bounded.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Hence, As- sumption 1 amounts to assuming that the uncertainty set is closed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We remark that many standard uncertainty sets sat- isfy this assumption, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', those defined by ϵ-contamination (Huber 1965), finite interval (Tewari and Bartlett 2007), total- variation (Rahimian, Bayraksan, and De-Mello 2022) and KL-divergence (Hu and Hong 2013).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In (Puterman 1994), the convergence limγ→1(1 − γ)V π P,γ = gπ P for a fixed policy π and a fixed transition kernel P (non-robust setting) is point-wise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' However, such point-wise convergence does not provide any convergence guarantee on the robust discounted value function, as the robust value function measures the worst-case performance over the uncertainty set and the order of lim and min may not be exchanged in general.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In the following theorem, we prove the uniform convergence of the discounted value function under the foregoing assumption.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Theorem 1 (Uniform convergence).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Under Assumption 1, the discounted value function converges uniformly to the average-reward value function on Π × P as γ → 1, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', lim γ→1(1 − γ)V π P,γ = gπ P, uniformly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (9) With uniform convergence in Theorem 1, the order of the limit γ → 1 and min over P can be interchanged, then the following convergence of the robust discounted value function can be established.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Theorem 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' The robust discounted value function in eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (4) converges to the robust average-reward uniformly on Π: lim γ→1(1 − γ)V π P,γ = gπ P, uniformly.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (10) We note that a similar convergence result is shown in (Tewari and Bartlett 2007), but only for a special uncertainty set of finite interval.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Our Theorem 2 holds for general com- pact uncertainty sets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Moreover, it is worth highlighting that our proof technique is fundamentally different from the one in (Tewari and Bartlett 2007).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Specifically, under the finite interval uncertainty set, the worst-case transition kernels are from a finite set, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', V π P,γ = minP∈M V π P,γ for a finite set M ⊆ P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' This hence implies the interchangeability of lim and min.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' However, for general uncertainty sets, the number of worst-case transition kernels may not be finite.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We demon- strate the interchangeability via our uniform convergence result in Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' The convergence result in Theorem 2 is of key importance to motivate the design of the following two algorithms, the ba- sic idea of which is to apply a sequence of robust discounted Bellman operators on an arbitrary initialization while increas- ing the discount factor at a certain rate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We first consider the robust policy evaluation problem, which aims to estimate the robust average-reward gπ P for a fxied policy π.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' This problem for robust discounted MDPs is well studied in the literature, however, results for robust average-reward MDPs are quite limited except for the one in (Tewari and Bartlett 2007) for a specific finite interval uncertainty set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We present the a robust value iteration (robust VI) algorithm for evaluating the robust average-reward with general compact uncertainty sets in Algorithm 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' At each time step t, the discount factor γt is set to t+1 t+2, which converges to 1 as t → ∞.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Subsequently, a robust Bellman operator w.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='t discount factor γt is applied on the current estimate Vt of the robust discounted value function (1 − γt)V π P,γt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' As the discount factor approaches 1, the es- timated robust discounted value function converges to the robust average-reward gπ P by Theorem 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Algorithm 1: Robust VI: Policy Evaluation Input: π, V0(s) = 0, ∀s, T 1: for t = 0, 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', T − 1 do 2: γt ← t+1 t+2 3: for all s ∈ S do 4: Vt+1(s) ← Eπ[(1 − γt)r(s, A) + γtσPA s (Vt)] 5: end for 6: end for 7: return VT Theorem 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Algorithm 1 converges to robust average reward, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', limT →∞ VT → gπ P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Theorem 3 shows that the output of Algorithm 1 converges to the robust average-reward.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Besides the robust policy evaluation problem, it is also of great practical importance to find an optimal policy that max- imizes the worst-case average-reward, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', to solve eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (7).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Based on a similar idea as the one of Algorithm 1, we ex- tend our limit approach to solve the robust optimal control problem in Algorithm 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Algorithm 2: Robust VI: Optimal Control Input: V0(s) = 0, ∀s, T 1: for t = 0, 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' 
Besides the robust policy evaluation problem, it is also of great practical importance to find an optimal policy that maximizes the worst-case average-reward, i.e., to solve eq. (7). Based on a similar idea to that of Algorithm 1, we extend our limit approach to solve the robust optimal control problem in Algorithm 2.

Algorithm 2: Robust VI: Optimal Control
Input: $V_0(s) = 0, \forall s$, $T$
1: for $t = 0, 1, \ldots, T-1$ do
2:   $\gamma_t \leftarrow \frac{t+1}{t+2}$
3:   for all $s \in \mathcal{S}$ do
4:     $V_{t+1}(s) \leftarrow \max_{a \in \mathcal{A}} \big\{ (1-\gamma_t)\, r(s, a) + \gamma_t\, \sigma_{\mathcal{P}^a_s}(V_t) \big\}$
5:   end for
6: end for
7: for $s \in \mathcal{S}$ do
8:   $\pi_T(s) \leftarrow \arg\max_{a \in \mathcal{A}} \big\{ (1-\gamma_t)\, r(s, a) + \gamma_t\, \sigma_{\mathcal{P}^a_s}(V_T) \big\}$
9: end for
10: return $V_T$, $\pi_T$

Similar to Algorithm 1, at each time step the discount factor $\gamma_t$ is set closer to 1, and a one-step robust discounted Bellman operator (for optimal control) with respect to $\gamma_t$ is applied to the current estimate $V_t$. The following theorem establishes that $V_T$ in Algorithm 2 converges to the optimal robust value function, and hence the optimal robust policy can be found.

Theorem 4. The output $V_T$ of Algorithm 2 converges to the optimal robust average-reward $g^*_{\mathcal{P}}$: $V_T \to g^*_{\mathcal{P}}$ as $T \to \infty$.
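A matching sketch of the control variant follows, under the same assumed inputs as before (reward table r and support_fn); the final loop mirrors the greedy-policy extraction in lines 7–9 of Algorithm 2.

```python
import numpy as np

def robust_vi_optimal_control(r, support_fn, T):
    """Sketch of Algorithm 2: robust VI for optimal control."""
    S, A = r.shape
    V = np.zeros(S)
    for t in range(T):
        gamma_t = (t + 1) / (t + 2)
        # robust Bellman backup with a max over actions
        V = np.array([
            max((1 - gamma_t) * r[s, a] + gamma_t * support_fn(s, a, V)
                for a in range(A))
            for s in range(S)
        ])
    # greedy policy extraction, mirroring lines 7-9 of Algorithm 2
    gamma_last = T / (T + 1)                 # value of gamma_{T-1}
    policy = np.array([
        int(np.argmax([(1 - gamma_last) * r[s, a]
                       + gamma_last * support_fn(s, a, V) for a in range(A)]))
        for s in range(S)
    ])
    return V, policy
```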
As discussed in (Blackwell 1962; Hordijk and Yushkevich 2002), the average-reward criterion is insensitive and underselective, since it only captures performance under the steady-state distribution. For example, two policies providing rewards $100 + 0 + 0 + \cdots$ and $0 + 0 + 0 + \cdots$ are equally good (or bad). To address this issue, in the non-robust setting a more sensitive notion of optimality was introduced by Blackwell (Blackwell 1962). More specifically, a policy is said to be Blackwell optimal if it optimizes the discounted value function for every discount factor $\gamma \in (\delta, 1)$ for some $\delta \in (0, 1)$. Together with eq. (8), the optimal policy obtained by taking $\gamma \to 1$ is optimal not only for the average-reward criterion, but also for the discounted criterion with large $\gamma$. Intuitively, such a policy is optimal under the average-reward setting and is also sensitive to early rewards.

Following a similar idea, we justify that the policy obtained from Algorithm 2 is not only optimal in the robust average-reward setting, but also sensitive to early rewards. Denote by $\Pi^*$ the set of all optimal policies for the robust average-reward, i.e., $\Pi^* = \{\pi : g^\pi_{\mathcal{P}} = g^*_{\mathcal{P}}\}$.

Theorem 5 (Blackwell optimality). There exists $0 < \delta < 1$ such that for any $\gamma > \delta$, the optimal robust policy for the robust discounted value function $V^*_{\mathcal{P},\gamma}$ belongs to $\Pi^*$, i.e., for any $\delta < \gamma < 1$, $\exists \pi^* \in \Pi^*$ s.t. $V^*_{\mathcal{P},\gamma} = V^{\pi^*}_{\mathcal{P},\gamma}$. Moreover, when $\arg\max_{\pi \in \Pi_D} g^\pi_{\mathcal{P}}$ is a singleton, there exists a unique Blackwell optimal policy.

This result implies that using the limit method in this section to find the optimal robust policy for average-reward MDPs has the additional advantage that the policy it finds not only optimizes the average reward in steady state, but is also sensitive to early rewards. It is worth highlighting the distinction between our results and the technique used in the proof of Blackwell optimality (Blackwell 1962). In the non-robust setting, the existence of a stationary Blackwell optimal policy is proved via contradiction, where a difference function of two policies $\pi$ and $\nu$, $f_{\pi,\nu}(\gamma) \triangleq V^\pi_{P,\gamma} - V^\nu_{P,\gamma}$, is used in the proof.
It was shown by contradiction that $f$ has infinitely many zeros, which contradicts the fact that $f$ is a rational function of $\gamma$ with a finite number of zeros. A similar technique was also used in (Tewari and Bartlett 2007) for the finite interval uncertainty set. Specifically, in (Tewari and Bartlett 2007) it was shown that the worst-case transition kernels for any $\pi, \gamma$ come from a finite set $\mathcal{M}$, hence $f_{\pi,\nu}(\gamma) \triangleq \min_{P \in \mathcal{M}} V^\pi_{P,\gamma} - \min_{P \in \mathcal{M}} V^\nu_{P,\gamma}$ can also be shown to be a rational function with a finite number of zeros. For a general uncertainty set $\mathcal{P}$, however, the difference function $f_{\pi,\nu}(\gamma)$ may not be rational. This makes the method in (Blackwell 1962; Tewari and Bartlett 2007) inapplicable to our problem.

Direct Approach for Robust Average-Reward MDPs

The limit approach in the preceding section is based on the uniform convergence of the discounted value function, and uses discounted MDPs to approximate average-reward MDPs. In this section, we develop a direct approach to solving robust average-reward MDPs that does not use discounted MDPs as intermediate steps. For average-reward MDPs, the relative value iteration (RVI) approach (Puterman 1994) is commonly used since it is numerically stable and has a convergence guarantee. In the following, we generalize the RVI algorithm to the robust setting, and design the robust RVI algorithm in Algorithm 3. We first generalize the relative value function in eq. (2) to the robust relative value function, which measures the difference between the worst-case cumulative reward and the worst-case average-reward for a policy $\pi$.
Definition 1. The robust relative value function is defined as
$$V^\pi_{\mathcal{P}}(s) \triangleq \min_{\kappa \in \bigotimes_{t \ge 0} \mathcal{P}} \mathbb{E}_{\kappa,\pi}\Big[ \sum_{t=0}^{\infty} \big(r_t - g^\pi_{\mathcal{P}}\big) \,\Big|\, S_0 = s \Big], \qquad (11)$$
where $g^\pi_{\mathcal{P}}$ is the worst-case average-reward defined in eq. (5).

The following theorem presents a robust Bellman equation for robust average-reward MDPs.

Theorem 6. For any $s$ and $\pi$, $(V^\pi_{\mathcal{P}}, g^\pi_{\mathcal{P}})$ is a solution to the following robust Bellman equation:
$$V(s) + g = \sum_a \pi(a|s) \big( r(s, a) + \sigma_{\mathcal{P}^a_s}(V) \big). \qquad (12)$$

The robust Bellman equation for average-reward MDPs has a structure similar to the one for discounted MDPs in eq. (6), except for the discount factor. This reveals a fundamental difference between the robust Bellman operator of discounted MDPs and that of average-reward MDPs. For a discounted MDP, the robust Bellman operator is a contraction with constant $\gamma$ (Nilim and El Ghaoui 2004; Iyengar 2005), and hence its fixed point is unique. Based on this, the robust value function can be found by recursively applying the robust Bellman operator (see Appendix). In sharp contrast, in the average-reward setting the robust Bellman operator is not necessarily a contraction, and the fixed point may not be unique. Therefore, repeatedly applying the robust Bellman operator in the average-reward setting may not even converge, which underscores that the two problem settings are fundamentally different. Using the robust Bellman equation in Theorem 6, we derive the following equivalent optimality condition for robust average-reward MDPs.
Theorem 7. For any $(g, V)$ that is a solution to
$$\max_a \big\{ r(s, a) - g + \sigma_{\mathcal{P}^a_s}(V) - V(s) \big\} = 0, \quad \forall s, \qquad (13)$$
we have $g = g^*_{\mathcal{P}}$. If we further set
$$\pi^*(s) = \arg\max_a \big\{ r(s, a) + \sigma_{\mathcal{P}^a_s}(V) \big\} \qquad (14)$$
for every $s \in \mathcal{S}$, then $\pi^*$ is an optimal robust policy.

Theorem 7 suggests that as long as we find a solution $(g, V)$ to eq. (13), which may not be unique, then $g$ is the optimal robust average-reward $g^*_{\mathcal{P}}$ and the greedy policy $\pi^*$ is the optimal policy for our robust average-reward MDP problem in eq. (7). Based on Theorem 7, the problem in eq. (7) can be equivalently solved by finding a solution to eq. (13). We note that eq. (12) holds for any $\pi$, and if we let the $\pi$ in eq. (12) be the greedy policy, then eq. (12) and eq. (13) are equivalent.

In the following, we generalize the RVI approach to the robust setting, and design a robust RVI algorithm in Algorithm 3. We will further show that the output of this algorithm converges to a solution of eq. (13), so that the optimal policy can be obtained via eq. (14). Here $\mathbf{1}$ denotes the all-ones vector, and $\mathrm{sp}$ denotes the span semi-norm: $\mathrm{sp}(w) = \max_s w(s) - \min_s w(s)$.
Different from Algorithm 2, in Algorithm 3 we do not need to apply the robust discounted Bellman operator: the method directly solves the robust optimal control problem for average-reward robust MDPs.

Algorithm 3: Robust RVI
Input: $V_0$, $\epsilon$, and an arbitrary $s^* \in \mathcal{S}$
1: $w_0 \leftarrow V_0 - V_0(s^*)\mathbf{1}$
2: while $\mathrm{sp}(w_t - w_{t+1}) \ge \epsilon$ do
3:   for all $s \in \mathcal{S}$ do
4:     $V_{t+1}(s) \leftarrow \max_a \big( r(s, a) + \sigma_{\mathcal{P}^a_s}(w_t) \big)$
5:     $w_{t+1}(s) \leftarrow V_{t+1}(s) - V_{t+1}(s^*)$
6:   end for
7: end while
8: return $w_t$, $V_t$
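The loop below is a minimal sketch of Algorithm 3 (ours, under the same assumed inputs as the earlier sketches); the stopping test uses the span semi-norm defined above, and the check is performed after each sweep since the new iterate is needed to evaluate it.

```python
import numpy as np

def robust_rvi(r, support_fn, eps, s_star=0, max_iter=10000):
    """Sketch of Algorithm 3: robust relative value iteration (optimal control)."""
    S, A = r.shape
    V = np.zeros(S)
    w = V - V[s_star]
    for _ in range(max_iter):
        # robust Bellman backup applied to the relative value w
        V_next = np.array([
            max(r[s, a] + support_fn(s, a, w) for a in range(A))
            for s in range(S)
        ])
        w_next = V_next - V_next[s_star]     # re-center at the reference state
        diff = w - w_next
        span = diff.max() - diff.min()       # sp(w_t - w_{t+1})
        V, w = V_next, w_next
        if span < eps:
            break
    return w, V
```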
In studies of average-reward MDPs, it is common to restrict attention to a certain class of MDPs, e.g., unichain or communicating MDPs (Wei et al. 2020; Zhang and Ross 2021; Chen, Jain, and Luo 2022; Wan, Naik, and Sutton 2021). In this paper, we focus on the unichain setting to highlight the major technical novelty required to achieve robustness.

Assumption 2. For any $P = \{p^a_s \in \Delta(\mathcal{S})\} \in \mathcal{P}$ and any $a \in \mathcal{A}$, $s, s' \in \mathcal{S}$, $p^a_{s,s'} > 0$, and the induced Markov process is a unichain.

In the following theorem, we show that Algorithm 3 converges to a solution of eq. (13); hence, by Theorem 7, if we set $\pi$ according to eq. (14), then $\pi$ is the optimal robust policy.

Theorem 8. $(w_t, V_t)$ converges to a solution $(w, V)$ of eq. (13) as $\epsilon \to 0$, which satisfies
$$w(s) + \max_a \big\{ r(s^*, a) + \sigma_{\mathcal{P}^a_{s^*}}(w) \big\} = \max_a \big\{ r(s, a) + \sigma_{\mathcal{P}^a_s}(w) \big\}. \qquad (15)$$

Remark 1. In this section, we mainly present the robust RVI algorithm for the robust optimal control problem, together with its convergence and optimality guarantees. A robust RVI algorithm for robust policy evaluation can be designed similarly by replacing the max in line 4 of Algorithm 3 with an expectation with respect to $\pi$. The convergence results in Theorem 8 can also be derived similarly. Assumption 2 can be replaced by weaker conditions, e.g., Proposition 4.3.2 of (Bertsekas 2011), or removed by designing a variant of RVI, e.g., Proposition 4.3.4 of (Bertsekas 2011).

Examples and Numerical Results

In this section, we study several commonly used uncertainty set models, including the contamination model, the Kullback-Leibler (KL) divergence model, and the total-variation model. As can be observed from Algorithms 1 to 3, for different uncertainty sets the only difference lies in how the support function $\sigma_{\mathcal{P}^a_s}(V)$ is calculated.
In the sequel, we discuss how to efficiently calculate the support function for various uncertainty sets. We numerically compare our robust (relative) value iteration methods with the non-robust (relative) value iteration method on different uncertainty sets. Our experiments are based on the Garnet problem G(20, 40) (Archibald, McKinnon, and Thomas 1995). More specifically, there are 20 states and 30 actions; the nominal transition kernel $P = \{p^a_s \in \Delta(\mathcal{S})\}$ is randomly generated according to the uniform distribution, and the rewards are $r(s, a) \sim \mathcal{N}(0, \sigma_{s,a})$ with $\sigma_{s,a} \sim \mathrm{Uniform}[0, 1]$. In our experiments, the uncertainty sets are designed to be centered at the nominal transition kernel. We run the different algorithms, i.e., (robust) value iteration and (robust) relative value iteration, and obtain the greedy policies at each time step. Then, we use robust average-reward policy evaluation (Algorithm 1) to evaluate the robust average-reward of these policies. We plot the robust average-reward against the number of iterations.
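As a rough illustration of this setup, the snippet below generates a nominal kernel and reward table with the sizes and distributions quoted above; the variable names and the random seed are ours, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 20, 30                                # 20 states and 30 actions, as stated in the text

# nominal transition kernel p_s^a, drawn uniformly and normalised over next states
P_nominal = rng.random((S, A, S))
P_nominal /= P_nominal.sum(axis=-1, keepdims=True)

# rewards r(s, a) ~ N(0, sigma_{s,a}) with sigma_{s,a} ~ Uniform[0, 1]
sigma = rng.uniform(0.0, 1.0, size=(S, A))
r = rng.normal(loc=0.0, scale=sigma)
```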
Contamination model. For any $(s, a)$, the uncertainty set $\mathcal{P}^a_s$ is defined as $\mathcal{P}^a_s = \{q : q = (1-R)p^a_s + Rp', \; p' \in \Delta(\mathcal{S})\}$, where $p^a_s$ is the nominal transition kernel. It can be viewed as an adversarial model, where at each time step the environment transits according to the nominal transition kernel $p$ with probability $1-R$, and according to an arbitrary kernel $p'$ with probability $R$. It can be easily shown that the support function is $\sigma_{\mathcal{P}^a_s}(V) = (1-R)(p^a_s)^\top V + R \min_s V(s)$. Our experimental results under the contamination model are shown in Figure 1.

[Figure 1: Comparison on the contamination model with $R = 0.4$. (a) Robust VI. (b) Robust RVI.]

Total variation. The total variation distance is another commonly used metric to measure the difference between two distributions. Specifically, the total variation distance between two distributions $p$ and $q$ is defined as $D_{TV}(p, q) = \frac{1}{2}\|p - q\|_1$. Consider an uncertainty set defined via total variation: $\mathcal{P}^a_s = \{q : D_{TV}(q, p^a_s) \le R\}$. Then its support function can be efficiently computed as follows (Iyengar 2005): $\sigma_{\mathcal{P}^a_s}(V) = p^\top V - R \min_{\mu \ge 0} \{\max_s (V(s) - \mu(s)) - \min_s (V(s) - \mu(s))\}$. Our experimental results under the total variation model are shown in Figure 2.

[Figure 2: Comparison on the total variation model with $R = 0.6$. (a) Robust VI. (b) Robust RVI.]
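The two support functions just described can be computed as in the sketch below. The contamination case uses the closed form above; for total variation we solve the equivalent primal linear program with SciPy instead of the dual expression quoted from (Iyengar 2005), which is an implementation choice of ours. Inputs p and V are assumed to be NumPy arrays.

```python
import numpy as np
from scipy.optimize import linprog

def sigma_contamination(p, V, R):
    """Closed-form support function for the contamination set
    {(1 - R) * p + R * p' : p' in Delta(S)}."""
    return (1.0 - R) * p @ V + R * V.min()

def sigma_total_variation(p, V, R):
    """Support function for the total-variation ball {q : D_TV(q, p) <= R},
    computed here by solving the primal linear program
        min_q  q^T V   s.t.  q in Delta(S),  0.5 * ||q - p||_1 <= R,
    with slack variables u >= |q - p|."""
    S = len(p)
    c = np.concatenate([V, np.zeros(S)])                 # objective: V^T q
    A_ub = np.block([
        [np.eye(S), -np.eye(S)],                         #  q - u <= p
        [-np.eye(S), -np.eye(S)],                        # -q - u <= -p
        [np.zeros((1, S)), 0.5 * np.ones((1, S))],       #  0.5 * sum(u) <= R
    ])
    b_ub = np.concatenate([p, -p, [R]])
    A_eq = np.concatenate([np.ones((1, S)), np.zeros((1, S))], axis=1)   # sum(q) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (2 * S))
    return res.fun
```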
Kullback-Leibler (KL) divergence. The Kullback-Leibler divergence is widely used to measure the distance between two probability distributions. The KL-divergence of two distributions $p, q$ is defined as $D_{KL}(q \| p) = \sum_s q(s) \log \frac{q(s)}{p(s)}$. Consider an uncertainty set defined via the KL divergence: $\mathcal{P}^a_s = \{q : D_{KL}(q \| p^a_s) \le R\}$. Then its support function can be efficiently computed using the duality result in (Hu and Hong 2013): $\sigma_{\mathcal{P}^a_s}(V) = -\min_{\alpha \ge 0} \big\{ R\alpha + \alpha \log\big(p^\top e^{-V/\alpha}\big) \big\}$.
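A sketch of the corresponding KL support function follows, minimizing the one-dimensional dual above with SciPy; the bounded search interval is a heuristic of ours, not part of the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sigma_kl(p, V, R):
    """Support function for the KL ball {q : D_KL(q || p) <= R}, using the
    one-dimensional dual quoted above:
        sigma(V) = -min_{alpha >= 0} [ R*alpha + alpha * log(p^T exp(-V / alpha)) ]."""
    V = np.asarray(V, dtype=float)

    def dual(alpha):
        z = -V / alpha
        zmax = z.max()                       # stabilised log-sum-exp
        return R * alpha + alpha * (zmax + np.log(p @ np.exp(z - zmax)))

    # the search interval is a heuristic choice for this sketch
    res = minimize_scalar(dual, bounds=(1e-6, 1e3), method='bounded')
    return -res.fun
```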
Our experimental results under the KL-divergence model are shown in Figure 3.

[Figure 3: Comparison on the KL-divergence model with $R = 0.8$. (a) Robust VI. (b) Robust RVI.]

It can be seen that our robust methods obtain policies that achieve a higher worst-case reward. Also, both our limit-based robust value iteration and our direct method of robust relative value iteration converge to the optimal robust policies, which validates our theoretical results.

Conclusion

In this paper, we investigated the problem of robust MDPs under the average-reward setting. We established uniform convergence of the discounted value function to the average-reward, which further implies uniform convergence of the robust discounted value function to the robust average-reward. Based on this insight, we designed a robust dynamic programming approach that uses robust discounted MDPs as an approximation (the limit method). We theoretically proved the convergence and optimality of the resulting algorithms, and proved a robust version of Blackwell optimality (Blackwell 1962), i.e., any optimal policy of the robust discounted MDP with $\gamma$ large enough is also an optimal policy of the robust average-reward MDP. We then designed a direct approach for robust average-reward MDPs, where we derived the robust Bellman equation for robust average-reward MDPs. We further designed a robust RVI method, which was proven to converge to the optimal robust solution. Technically, our proof techniques are fundamentally different from those in existing studies on average-reward robust MDPs, e.g., (Blackwell 1962; Tewari and Bartlett 2007).

Acknowledgment

This work was supported by the National Science Foundation under Grants CCF-2106560, CCF-2007783, CCF-2106339 and CCF-1552497.
References

Abdullah, M. A.; Ren, H.; Ammar, H. B.; Milenkovic, V.; Luo, R.; Zhang, M.; and Wang, J. 2019. Wasserstein robust reinforcement learning. arXiv preprint arXiv:1907.13196.

Archibald, T.; McKinnon, K.; and Thomas, L. 1995. On the generation of Markov decision processes. Journal of the Operational Research Society, 46(3): 354–361.

Atia, G. K.; Beckus, A.; Alkhouri, I.; and Velasquez, A. 2021. Steady-State Planning in Expected Reward Multichain MDPs. Journal of Artificial Intelligence Research, 72: 1029–1082.

Badrinath, K. P.; and Kalathil, D. 2021. Robust Reinforcement Learning using Least Squares Policy Iteration with Provable Performance Guarantees. In Proc. International Conference on Machine Learning (ICML), 511–520. PMLR.

Bagnell, J. A.; Ng, A. Y.; and Schneider, J. G. 2001. Solving uncertain Markov decision processes.

Bertsekas, D. P. 2011. Dynamic Programming and Optimal Control, 3rd edition, volume II. Belmont, MA: Athena Scientific.

Blackwell, D. 1962. Discrete dynamic programming. The Annals of Mathematical Statistics, 719–726.

Chen, L.; Jain, R.; and Luo, H. 2022. Learning Infinite-Horizon Average-Reward Markov Decision Processes with Constraints. arXiv preprint arXiv:2202.00150.

Derman, C. 1970. Finite state Markovian decision processes. Academic Press, Inc.

Derman, E.; Geist, M.; and Mannor, S. 2021. Twice regularized MDPs and the equivalence between robustness and regularization. In Proc. Advances in Neural Information Processing Systems (NeurIPS).

Eysenbach, B.; and Levine, S. 2021. Maximum entropy RL (provably) solves some robust RL problems. arXiv preprint arXiv:2103.06257.

Goyal, V.; and Grand-Clement, J. 2018. Robust Markov decision process: Beyond rectangularity. arXiv preprint arXiv:1811.00215.

Ho, C. P.; Petrik, M.; and Wiesemann, W. 2018. Fast Bellman updates for robust MDPs. In Proc. International Conference on Machine Learning (ICML), 1979–1988. PMLR.

Ho, C. P.; Petrik, M.; and Wiesemann, W. 2021. Partial policy iteration for L1-robust Markov decision processes. Journal of Machine Learning Research, 22(275): 1–46.

Hordijk, A.; and Yushkevich, A. A. 2002. Blackwell optimality. In Handbook of Markov decision processes, 231–267. Springer.

Hou, L.; Pang, L.; Hong, X.; Lan, Y.; Ma, Z.; and Yin, D. 2020. Robust Reinforcement Learning with Wasserstein Constraint. arXiv preprint arXiv:2006.00945.

Hu, Z.; and Hong, L. J. 2013.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Kullback-Leibler divergence constrained distributionally robust optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Available at Optimization Online, 1695–1724.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Huang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Papernot, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Goodfellow, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Duan, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Abbeel, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Adversarial attacks on neural network policies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' International Conference on Learning Representations (ICLR).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Huber, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 1965.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' A Robust Version of the Probability Ratio Test.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Ann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Statist.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', 36: 1753–1758.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Iyengar, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Robust dynamic programming.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Mathe- matics of Operations Research, 30(2): 257–280.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Kaufman, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Schaefer, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Robust modified policy iteration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' INFORMS Journal on Computing, 25(3): 396–410.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Kober, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Bagnell, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Peters, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Reinforcement Learning in Robotics: A Survey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' The International Journal of Robotics Research, 32(11): 1238–1274.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Kos, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Song, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Delving into adversarial at- tacks on deep policies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' International Conference on Learning Representations (ICLR).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Lim, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Autef, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Kernel-based reinforcement learning in robust Markov decision processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In- ternational Conference on Machine Learning (ICML), 3973– 3981.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Lim, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Xu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Mannor, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Reinforcement learning in robust Markov decision processes.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Ad- vances in Neural Information Processing Systems (NIPS), 701–709.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Lin, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Hong, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='-W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Liao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Shih, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='-L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Liu, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='- Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Sun, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Tactics of adversarial attack on deep reinforcement learning agents.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' International Joint Conferences on Artificial Intelligence (IJCAI), 3756–3762.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Mandlekar, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Zhu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Garg, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Fei-Fei, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Savarese, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Adversarially robust policy learning: Active construc- tion of physically-plausible perturbations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 3932–3939.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' IEEE.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Nilim, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and El Ghaoui, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2004.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Robustness in Markov decision problems with uncertain transition matrices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Advances in Neural Information Processing Systems (NIPS), 839–846.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Panaganti, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Kalathil, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Sample Complexity of Robust Reinforcement Learning with a Generative Model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' arXiv preprint arXiv:2112.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='01506.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Pattanaik, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Tang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Liu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Bommannan, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Chowd- hary, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Robust Deep Reinforcement Learning with Adversarial Attacks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' International Conference on Autonomous Agents and MultiAgent Systems, 2040–2042.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Pinto, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Davidson, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Sukthankar, R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Gupta, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Robust adversarial reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Interna- tional Conference on Machine Learning (ICML), 2817–2826.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Puterman, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 1994.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Markov Decision Processes: Discrete Stochastic Dynamic Programming.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Rahimian, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Bayraksan, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and De-Mello, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Effective scenarios in multistage distributionally robust op- timization with a focus on total variation distance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' SIAM Journal on Optimization, 32(3): 1698–1727.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Rajeswaran, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Ghotra, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Ravindran, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Levine, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Epopt: Learning robust neural network policies using model ensembles.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' International Conference on Learning Representations (ICLR).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Roy, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Xu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Pokutta, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Reinforcement learn- ing under model mismatch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Advances in Neural Information Processing Systems (NIPS), 3046–3055.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Rudin, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Functional Analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' McGraw-Hill Science &Engineering &Math, 2nd edition.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Russel, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Benosman, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Van Baar, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Ro- bust Constrained-MDPs: Soft-Constrained Robust Policy Optimization under Model Uncertainty.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' arXiv preprint arXiv:2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='04870.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Satia, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Lave Jr, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 1973.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Markovian decision processes with uncertain transition probabilities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Operations Research, 21(3): 728–740.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Si, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Zhang, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Zhou, Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Blanchet, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Distri- butionally robust policy evaluation and learning in offline contextual bandits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' International Conference on Machine Learning (ICML), 8884–8894.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Sigaud, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Buffet, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Markov decision processes in artificial intelligence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' John Wiley & Sons.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Sutton, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Barto, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Reinforcement Learning: An Introduction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Cambridge, Massachusetts: The MIT Press.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Tamar, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Mannor, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Xu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Scaling up robust MDPs using function approximation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' International Conference on Machine Learning (ICML), 181–189.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Tessler, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Efroni, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Mannor, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Action robust reinforcement learning and applications in continuous control.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In International Conference on Machine Learning, 6215– 6224.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Tewari, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Bartlett, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2007.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Bounded parameter Markov decision processes with average reward criterion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In International Conference on Computational Learning Theory, 263–277.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Springer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Vinitsky, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Du, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Parvate, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Jang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Abbeel, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Bayen, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Robust Reinforcement Learning using Ad- versarial Populations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' arXiv preprint arXiv:2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='01825.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Wan, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Naik, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Sutton, R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Learning and planning in average-reward markov decision processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In In- ternational Conference on Machine Learning, 10653–10662.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Wang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Zou, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Online Robust Reinforcement Learning with Model Uncertainty.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Advances in Neural Information Processing Systems (NeurIPS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Wang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' and Zou, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Policy Gradient Method For Robust Reinforcement Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' International Con- ference on Machine Learning (ICML), volume 162, 23484– 23526.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Wei, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='-Y.' 
Review of Robust Discounted MDPs

In this section, we provide a brief review of existing methods and results for robust discounted MDPs.

Robust Policy Evaluation. We first consider the robust policy evaluation problem, where we aim to estimate the robust value function $V^\pi_{\mathcal{P},\gamma}$ for any policy $\pi$. It has been shown that the robust Bellman operator $\mathbf{T}_\pi$ is a $\gamma$-contraction, and the robust value function $V^\pi_{\mathcal{P},\gamma}$ is its unique fixed point. Hence, by recursively applying the robust Bellman operator, we can find the robust discounted value function (Nilim and El Ghaoui 2004; Iyengar 2005).

Algorithm 4: Policy evaluation for robust discounted MDPs
Input: $\pi$, $V_0$, $T$
1: for $t = 0, 1, \ldots, T-1$ do
2:   for all $s \in \mathcal{S}$ do
3:     $V_{t+1}(s) \leftarrow \mathbb{E}_\pi\big[r(s, A) + \gamma\,\sigma_{\mathcal{P}^A_s}(V_t)\big]$
4:   end for
5: end for
6: return $V_T$

Robust Optimal Control. Another important problem in robust MDPs is to find the optimal policy which maximizes the robust discounted value function:
$$\pi^* = \arg\max_\pi V^\pi_{\mathcal{P},\gamma}. \qquad (16)$$
A robust value iteration approach is developed in (Nilim and El Ghaoui 2004; Iyengar 2005) as follows.

Algorithm 5: Optimal control for robust discounted MDPs
Input: $V_0$, $T$
1: for $t = 0, 1, \ldots, T-1$ do
2:   for all $s \in \mathcal{S}$ do
3:     $V_{t+1}(s) \leftarrow \max_a \big(r(s, a) + \gamma\,\sigma_{\mathcal{P}^a_s}(V_t)\big)$
4:   end for
5: end for
6: $\pi^*(s) \leftarrow \arg\max_a \big(r(s, a) + \gamma\,\sigma_{\mathcal{P}^a_s}(V_T)\big)$, $\forall s$
7: return $\pi^*$
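For concreteness, the following Python sketch implements Algorithms 4 and 5 for a finite tabular MDP. It treats the support function as the worst-case expectation $\sigma_{\mathcal{P}^a_s}(V) = \min_{p\in\mathcal{P}^a_s} p^\top V$ and, purely as an illustrative assumption, uses an $R$-contamination uncertainty set $\mathcal{P}^a_s = \{(1-R)\hat{p}^a_s + Rq : q \in \Delta(\mathcal{S})\}$, for which this minimum has a closed form; the paper's results cover general compact uncertainty sets, and all function names below are ours, not the authors'.

import numpy as np

def sigma_contamination(p_hat, v, R):
    # Worst-case expectation sigma_{P^a_s}(v) = min_{p in P^a_s} p^T v over the
    # illustrative R-contamination set P^a_s = {(1-R) p_hat + R q : q in Delta(S)}.
    return (1.0 - R) * p_hat @ v + R * np.min(v)

def robust_q(P_hat, r, V, gamma, R):
    # Robust state-action values: Q(s,a) = r(s,a) + gamma * sigma_{P^a_s}(V).
    n_s, n_a = r.shape
    return np.array([[r[s, a] + gamma * sigma_contamination(P_hat[s, a], V, R)
                      for a in range(n_a)] for s in range(n_s)])

def robust_policy_evaluation(P_hat, r, pi, gamma, R, T):
    # Algorithm 4: V_{t+1}(s) = E_pi[ r(s,A) + gamma * sigma_{P^A_s}(V_t) ].
    V = np.zeros(r.shape[0])
    for _ in range(T):
        V = (pi * robust_q(P_hat, r, V, gamma, R)).sum(axis=1)
    return V

def robust_value_iteration(P_hat, r, gamma, R, T):
    # Algorithm 5: V_{t+1}(s) = max_a [ r(s,a) + gamma * sigma_{P^a_s}(V_t) ],
    # then extract a greedy policy from V_T.
    V = np.zeros(r.shape[0])
    for _ in range(T):
        V = robust_q(P_hat, r, V, gamma, R).max(axis=1)
    pi_star = robust_q(P_hat, r, V, gamma, R).argmax(axis=1)
    return V, pi_star

With $|\mathcal{S}|$ states and $|\mathcal{A}|$ actions, P_hat has shape (S, A, S), r has shape (S, A), and pi has shape (S, A) with rows summing to one.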
Equivalence between Time-Varying and Stationary Models

We first provide an equivalence result between the time-varying and stationary transition kernel models under stationary policies, which is an analog of the corresponding result for robust discounted MDPs (Iyengar 2005; Nilim and El Ghaoui 2004). This result will be used in the proofs that follow. Recall that in the definitions of the robust discounted value function and the worst-case average reward in eqs. (4) and (5), the worst case is taken w.r.t. $\kappa = (P_0, P_1, \ldots) \in \bigotimes_{t \ge 0} \mathcal{P}$; therefore, the transition kernel at each time step could be different. This model is referred to as the time-varying transition kernel model (as in (Iyengar 2005; Nilim and El Ghaoui 2004)). Another commonly used setting is the one in which the transition kernels at different time steps are the same, which is referred to as the stationary model (Iyengar 2005; Nilim and El Ghaoui 2004). In this paper, we use the following notation to distinguish the two models. By $\mathbb{E}_P[\cdot]$, we denote the expectation when the transition kernels at all time steps equal the same $P$, i.e., the stationary model.
We also denote by $g^\pi_P(s) \triangleq \lim_{n\to\infty} \mathbb{E}_{P,\pi}\big[\frac{1}{n}\sum_{t=0}^{n-1} r_t \mid S_0 = s\big]$ and $V^\pi_{P,\gamma}(s) \triangleq \mathbb{E}_{P,\pi}\big[\sum_{t=0}^{\infty} \gamma^t r_t \mid S_0 = s\big]$ the expected average reward and the expected discounted value function under the stationary model $P$. By $\mathbb{E}_\kappa[\cdot]$, we denote the expectation when the transition kernel at time $t$ is $P_t$, i.e., the time-varying model. For the discounted setting, it has been shown in (Nilim and El Ghaoui 2004) that for a stationary policy $\pi$, any $\gamma \in [0,1)$, and any $s \in \mathcal{S}$,
$$V^\pi_{\mathcal{P},\gamma}(s) = \min_{\kappa \in \bigotimes_{t\ge 0}\mathcal{P}} \mathbb{E}_{\pi,\kappa}\Big[\sum_{t=0}^{\infty} \gamma^t r_t \,\Big|\, S_0 = s\Big] = \min_{P \in \mathcal{P}} \mathbb{E}_{\pi,P}\Big[\sum_{t=0}^{\infty} \gamma^t r_t \,\Big|\, S_0 = s\Big]. \qquad (17)$$
In the following theorem, we prove an analog of eq. (17) for robust average-reward MDPs: if we consider stationary policies, then the robust average-reward problem with the time-varying model can be equivalently solved under a stationary model. Specifically, we define the worst-case average reward for the stationary transition kernel model as
$$\min_{P \in \mathcal{P}} \lim_{n\to\infty} \mathbb{E}_{\pi,P}\Big[\frac{1}{n}\sum_{t=0}^{n-1} r_t \,\Big|\, S_0 = s\Big]. \qquad (18)$$
Recall the worst-case average reward for the time-varying model in eq. (5). We will show that for any stationary policy, eq. (5) can be equivalently solved by solving eq. (18).

Theorem 9. Consider an arbitrary stationary policy $\pi$.
Then the worst-case average reward under the time-varying model is the same as the one under the stationary model:
$$g^\pi_{\mathcal{P}}(s) \triangleq \min_{\kappa\in\bigotimes_{t\ge0}\mathcal{P}} \lim_{n\to\infty} \mathbb{E}_{\kappa,\pi}\Big[\frac{1}{n}\sum_{t=0}^{n-1} r_t \,\Big|\, S_0=s\Big] = \min_{P\in\mathcal{P}} \lim_{n\to\infty} \mathbb{E}_{P,\pi}\Big[\frac{1}{n}\sum_{t=0}^{n-1} r_t \,\Big|\, S_0=s\Big]. \qquad (19)$$
A similar result also holds for the robust relative value function:
$$V^\pi_{\mathcal{P}}(s) \triangleq \min_{\kappa\in\bigotimes_{t\ge0}\mathcal{P}} \mathbb{E}_{\kappa,\pi}\Big[\sum_{t=0}^{\infty} (r_t - g^\pi_{\mathcal{P}}) \,\Big|\, S_0=s\Big] = \min_{P\in\mathcal{P}} \mathbb{E}_{P,\pi}\Big[\sum_{t=0}^{\infty} (r_t - g^\pi_{\mathcal{P}}) \,\Big|\, S_0=s\Big]. \qquad (20)$$
Proof. From the robust Bellman equation in Theorem 6 (whose proof is independent of Theorem 9 and does not rely on the results shown here), we have
$$V^\pi_{\mathcal{P}}(s) + g^\pi_{\mathcal{P}} = \sum_a \pi(a|s)\big(r(s,a) + \sigma_{\mathcal{P}^a_s}(V^\pi_{\mathcal{P}})\big). \qquad (21)$$
Denote $p^a_s \triangleq \arg\min_{p\in\mathcal{P}^a_s} p^\top V^\pi_{\mathcal{P}}$ (picking one arbitrarily if there are multiple minimizers), and denote $P_\pi \triangleq \{p^a_s : s\in\mathcal{S}, a\in\mathcal{A}\}$. It then follows that
$$V^\pi_{\mathcal{P}}(s) = \sum_a \pi(a|s)\big(r(s,a) - g^\pi_{\mathcal{P}} + \sigma_{\mathcal{P}^a_s}(V^\pi_{\mathcal{P}})\big) = \sum_a \pi(a|s)\big(r(s,a) - g^\pi_{\mathcal{P}}\big) + \mathbb{E}_{P_\pi,\pi}\big[V^\pi_{\mathcal{P}}(S_1) \mid S_0 = s\big].$$
Unrolling this recursion step by step, each time replacing $\sigma_{\mathcal{P}^a_s}(V^\pi_{\mathcal{P}})$ by the expectation under the minimizing kernel $p^a_s \in P_\pi$, yields
$$V^\pi_{\mathcal{P}}(s) = \mathbb{E}_{P_\pi,\pi}\Big[\sum_{t=0}^{\infty} (r_t - g^\pi_{\mathcal{P}}) \,\Big|\, S_0 = s\Big]. \qquad (22)$$
By definition, the following always holds:
$$\min_{\kappa\in\bigotimes_{t\ge0}\mathcal{P}} \mathbb{E}_{\kappa,\pi}\Big[\sum_{t=0}^{\infty} (r_t - g^\pi_{\mathcal{P}}) \,\Big|\, S_0=s\Big] \le \min_{P\in\mathcal{P}} \mathbb{E}_{P,\pi}\Big[\sum_{t=0}^{\infty} (r_t - g^\pi_{\mathcal{P}}) \,\Big|\, S_0=s\Big]. \qquad (23)$$
This hence implies that the stationary transition kernel sequence $\kappa = (P_\pi, P_\pi, \ldots)$ is one of the worst-case transition kernels for $V^\pi_{\mathcal{P}}$. Therefore, eq. (20) is proved. Now consider the transition kernel $P_\pi$. We denote its non-robust average reward and non-robust relative value function by $g^\pi_{P_\pi}$ and $V^\pi_{P_\pi}$. By the non-robust Bellman equation (Sutton and Barto 2018), we have
$$V^\pi_{P_\pi}(s) = \sum_a \pi(a|s)\big(r(s,a) - g^\pi_{P_\pi}\big) + \mathbb{E}_{P_\pi,\pi}\big[V^\pi_{P_\pi}(S_1) \mid S_0 = s\big]. \qquad (24)$$
On the other hand, the robust Bellman equation shows that
$$V^\pi_{\mathcal{P}}(s) = V^\pi_{P_\pi}(s) = \sum_a \pi(a|s)\big(r(s,a) - g^\pi_{\mathcal{P}}\big) + \mathbb{E}_{P_\pi,\pi}\big[V^\pi_{P_\pi}(S_1) \mid S_0 = s\big]. \qquad (25)$$
These two equations hence imply that $g^\pi_{\mathcal{P}} = g^\pi_{P_\pi}$, and hence the stationary kernel $(P_\pi, P_\pi, \ldots)$ is also a worst-case kernel for the robust average reward in the time-varying setting. This proves eq. (19).

Proof of Theorem 1

In this proof, unless otherwise specified, we denote by $\|v\|$ the $\ell_\infty$ norm of a vector $v$, and for a matrix $A$ we denote by $\|A\|$ its matrix norm induced by the $\ell_\infty$ norm, i.e., $\|A\| = \sup_{x\in\mathbb{R}^d} \frac{\|Ax\|_\infty}{\|x\|_\infty}$.

Lemma 1. [Theorem 8.2.3 in (Puterman 1994)] For any $P$, $\gamma$, $\pi$,
$$V^\pi_{P,\gamma} = \frac{1}{1-\gamma}\, g^\pi_P + h^\pi_P + f^\pi_P(\gamma), \qquad (26)$$
where $h^\pi_P = H^\pi_P r_\pi$ and $f^\pi_P(\gamma) = \frac{1}{\gamma}\sum_{n=1}^{\infty} (-1)^n \big(\frac{1-\gamma}{\gamma}\big)^n (H^\pi_P)^{n+1} r_\pi$.

Following Proposition 8.4.6 in (Puterman 1994), we can show the following lemma.

Lemma 2. $H^\pi_P$ is continuous on $\Pi \times \mathcal{P}$. If $\Pi$ and $\mathcal{P}$ are compact, then $\|H^\pi_P\|$ is uniformly bounded on $\Pi \times \mathcal{P}$, i.e., there exists a constant $h$ such that $\|H^\pi_P\| \le h$ for any $\pi, P$.
For simplicity, denote
$$S^\pi_\infty(P,\gamma) \triangleq \frac{1}{\gamma}\sum_{n=1}^{\infty} (-1)^n \Big(\frac{1-\gamma}{\gamma}\Big)^n (H^\pi_P)^{n+1} r_\pi, \qquad S^\pi_N(P,\gamma) \triangleq \frac{1}{\gamma}\sum_{n=1}^{N} (-1)^n \Big(\frac{1-\gamma}{\gamma}\Big)^n (H^\pi_P)^{n+1} r_\pi. \qquad (27)$$
Clearly, $S^\pi_\infty(P,\gamma) = f^\pi_P(\gamma)$ and $\lim_{N\to\infty} S^\pi_N(P,\gamma) = S^\pi_\infty(P,\gamma)$ for any specific $\pi, P$.

Lemma 3. There exists $\delta \in (0,1)$ such that
$$\lim_{N\to\infty} S^\pi_N(P,\gamma) = S^\pi_\infty(P,\gamma) \qquad (28)$$
uniformly on $\Pi \times \mathcal{P} \times [\delta, 1]$.

Proof. Note that $\|H^\pi_P\| \le h$, hence there exists $\delta$ such that
$$\frac{1-\delta}{\delta}\, h \le k < 1 \qquad (29)$$
for some constant $k$. Then for any $\gamma \ge \delta$,
$$\frac{1-\gamma}{\gamma}\, h \le \frac{1-\delta}{\delta}\, h \le k. \qquad (30)$$
Moreover, note that
$$\Big\|\frac{1}{\gamma} (-1)^n \Big(\frac{1-\gamma}{\gamma}\Big)^n (H^\pi_P)^{n+1} r_\pi\Big\| \le \frac{1}{\gamma}\Big(\frac{1-\gamma}{\gamma}\Big)^n h^{n+1} \le \frac{h k^n}{\delta} \triangleq M_n, \qquad (31)$$
which follows because $\|A+B\| \le \|A\| + \|B\|$ for the induced $\ell_\infty$ norm, $\|Ax\| \le \|A\|\|x\|$, and $\|r_\pi\|_\infty \le 1$. Note that
$$\sum_{n=1}^{\infty} M_n = \frac{h}{\delta}\,\frac{k}{1-k}, \qquad (32)$$
hence by the Weierstrass M-test (Rudin 2022), $S^\pi_N(P,\gamma)$ converges uniformly to $S^\pi_\infty(P,\gamma)$ on $\Pi \times \mathcal{P} \times [\delta,1]$.

Lemma 4. There exists a uniform constant $L$ such that
$$\|S^\pi_N(P,\gamma_1) - S^\pi_N(P,\gamma_2)\| \le L|\gamma_1 - \gamma_2| \qquad (33)$$
for any $N, \pi, P$ and $\gamma_1, \gamma_2 \in [\delta,1]$.

Proof. We first show that $\gamma S^\pi_N(P,\gamma) = \sum_{n=1}^{N}(-1)^n \big(\frac{1-\gamma}{\gamma}\big)^n (H^\pi_P)^{n+1} r_\pi \triangleq T^\pi_N(P,\gamma)$ is uniformly Lipschitz w.r.t. the $\ell_\infty$ norm, i.e.,
$$\|T^\pi_N(P,\gamma_1) - T^\pi_N(P,\gamma_2)\| \le l\,|\gamma_1 - \gamma_2| \qquad (34)$$
for any $N, \pi, P$, $\gamma_1, \gamma_2 \in [\delta,1]$ and some constant $l$. Clearly, this can be shown by verifying that $\nabla T^\pi_N(P,\gamma)$ is uniformly bounded for any $\pi, N, P$ or $\gamma$. First, it can be shown that
$$\nabla T^\pi_N(P,\gamma) = \sum_{n=1}^{N} (-1)^n n \Big(\frac{1-\gamma}{\gamma}\Big)^{n-1} \frac{-1}{\gamma^2} (H^\pi_P)^{n+1} r_\pi, \qquad (35)$$
and moreover
$$\|\nabla T^\pi_N(P,\gamma)\| \le \sum_{n=1}^{N} n \Big(\frac{1-\gamma}{\gamma}\Big)^{n-1} \frac{1}{\gamma^2}\, h^{n+1} \triangleq l_N(\gamma). \qquad (36)$$
Note that
$$h\,\frac{1-\gamma}{\gamma}\, l_N(\gamma) = \sum_{n=1}^{N} n \Big(\frac{1-\gamma}{\gamma}\Big)^{n} \frac{1}{\gamma^2}\, h^{n+2}, \qquad (37)$$
then we can show that
$$\Big(1 - h\,\frac{1-\gamma}{\gamma}\Big) l_N(\gamma) = \frac{h^2}{\gamma^2} - N\Big(\frac{1-\gamma}{\gamma}\Big)^N \frac{h^{N+2}}{\gamma^2} + \sum_{n=2}^{N} \Big(\frac{1-\gamma}{\gamma}\Big)^{n-1} \frac{h^{n+1}}{\gamma^2} \le \frac{h^2}{\gamma^2} + \frac{h^2}{\gamma^2}\cdot\frac{1-\gamma}{\gamma}h\cdot\frac{1}{1-\frac{1-\gamma}{\gamma}h}. \qquad (38)$$
Hence, we have that
$$\|\nabla T^\pi_N(P,\gamma)\| \le l_N(\gamma) \le \frac{1}{1-h\frac{1-\gamma}{\gamma}}\Big(\frac{h^2}{\gamma^2} + \frac{h^2}{\gamma^2}\cdot\frac{1-\gamma}{\gamma}h\cdot\frac{1}{1-\frac{1-\gamma}{\gamma}h}\Big) \le \frac{1}{1-k}\Big(\frac{h^2}{\delta^2} + \frac{h^2}{\delta^2}\cdot\frac{k}{1-k}\Big), \qquad (39)$$
which implies a uniform bound on $\|\nabla T^\pi_N(P,\gamma)\|$. Now, we have that
$$\|S^\pi_N(P,\gamma_1) - S^\pi_N(P,\gamma_2)\| \le \frac{|\gamma_2-\gamma_1|}{\gamma_1\gamma_2}\|T^\pi_N(P,\gamma_1)\| + \frac{\|T^\pi_N(P,\gamma_1) - T^\pi_N(P,\gamma_2)\|}{\gamma_2}. \qquad (40)$$
To show that $\|T^\pi_N(P,\gamma)\|$ is uniformly bounded, we have
$$\|T^\pi_N(P,\gamma)\| \le \sum_{n=1}^{N} \Big\|\Big(\frac{1-\gamma}{\gamma}\Big)^n (H^\pi_P)^{n+1} r_\pi\Big\| \le \sum_{n=1}^{N} \Big(\frac{1-\gamma}{\gamma}\Big)^n h^{n+1} \le \sum_{n=1}^{N} k^n h \le \frac{hk}{1-k}. \qquad (41)$$
Then, it follows that
$$\|S^\pi_N(P,\gamma_1) - S^\pi_N(P,\gamma_2)\| = \Big\|\frac{\gamma_2-\gamma_1}{\gamma_1\gamma_2}\, T^\pi_N(P,\gamma_1) + \frac{T^\pi_N(P,\gamma_1) - T^\pi_N(P,\gamma_2)}{\gamma_2}\Big\| \le \Big(\frac{1}{\delta^2}\frac{hk}{1-k} + \frac{1}{\delta}\frac{1}{1-k}\Big(\frac{h^2}{\delta^2} + \frac{h^2}{\delta^2}\frac{k}{1-k}\Big)\Big)|\gamma_1-\gamma_2| \triangleq L|\gamma_1-\gamma_2|, \qquad (42)$$
where $L = \frac{1}{\delta^2}\frac{hk}{1-k} + \frac{1}{\delta}\frac{1}{1-k}\big(\frac{h^2}{\delta^2} + \frac{h^2}{\delta^2}\frac{k}{1-k}\big)$ is a universal constant that does not depend on $N, P, \pi$ or $\gamma$.

Lemma 5. $S^\pi_\infty(P,\gamma)$ converges uniformly on $\Pi \times \mathcal{P}$ as $\gamma \to 1$.
Also, $S^\pi_\infty(P,\gamma)$ is $L$-Lipschitz for any $\gamma > \delta$: for any $\pi, P$ and any $\gamma_1, \gamma_2 \in (\delta, 1]$,
$$\|S^\pi_\infty(P,\gamma_1) - S^\pi_\infty(P,\gamma_2)\| \le L|\gamma_1 - \gamma_2|. \qquad (43)$$
Proof. From Lemma 3, for any $\epsilon$ there exists $N_\epsilon$ such that for any $n \ge N_\epsilon$, $\pi$, $P$, $\gamma > \delta$,
$$\|S^\pi_\infty(P,\gamma) - S^\pi_n(P,\gamma)\| < \epsilon. \qquad (44)$$
Thus, for any $\gamma_1, \gamma_2 \in (\delta,1]$,
$$\|S^\pi_\infty(P,\gamma_1) - S^\pi_\infty(P,\gamma_2)\| \le \|S^\pi_\infty(P,\gamma_1) - S^\pi_n(P,\gamma_1)\| + \|S^\pi_n(P,\gamma_1) - S^\pi_n(P,\gamma_2)\| + \|S^\pi_n(P,\gamma_2) - S^\pi_\infty(P,\gamma_2)\| \le 2\epsilon + L|\gamma_1 - \gamma_2|, \qquad (45)$$
where the last step is from Lemma 4. Thus, for any $\epsilon$, there exists $\omega = \max\{\delta, 1-\epsilon\}$ such that for any $\gamma_1, \gamma_2 > \omega$,
$$\|S^\pi_\infty(P,\gamma_1) - S^\pi_\infty(P,\gamma_2)\| < (2+L)\epsilon, \qquad (46)$$
and hence by Cauchy's criterion we conclude that $S^\pi_\infty(P,\gamma)$ converges uniformly on $\Pi \times \mathcal{P}$. On the other hand, since eq. (45) holds for any $\epsilon$, it implies that
$$\|S^\pi_\infty(P,\gamma_1) - S^\pi_\infty(P,\gamma_2)\| \le L|\gamma_1 - \gamma_2|, \qquad (47)$$
which completes the proof.

We now prove Theorem 1. For any $P, \pi$, we have that
$$V^\pi_{P,\gamma} = \frac{1}{1-\gamma}\, g^\pi_P + h^\pi_P + f^\pi_P(\gamma). \qquad (48)$$
It then follows that
$$(1-\gamma)V^\pi_{P,\gamma} = g^\pi_P + (1-\gamma)h^\pi_P + (1-\gamma)f^\pi_P(\gamma). \qquad (49)$$
Clearly, $(1-\gamma)h^\pi_P \to 0$ uniformly on $\Pi \times \mathcal{P}$ because $\|h^\pi_P\| = \|H^\pi_P r_\pi\| \le h$ is uniformly bounded. Then,
$$\|(1-\gamma_1)f^\pi_P(\gamma_1) - (1-\gamma_2)f^\pi_P(\gamma_2)\| \le \|(1-\gamma_1)f^\pi_P(\gamma_1) - (1-\gamma_1)f^\pi_P(\gamma_2)\| + \|(1-\gamma_1)f^\pi_P(\gamma_2) - (1-\gamma_2)f^\pi_P(\gamma_2)\| \le (1-\gamma_1)L|\gamma_1-\gamma_2| + \|f^\pi_P(\gamma_2)\|\,|\gamma_1-\gamma_2|. \qquad (50)$$
For any $\pi, P, \gamma > \delta$,
$$\|f^\pi_P(\gamma)\| = \Big\|\frac{1}{\gamma}\sum_{n=1}^{\infty}(-1)^n\Big(\frac{1-\gamma}{\gamma}\Big)^n (H^\pi_P)^{n+1} r_\pi\Big\| \le \frac{1}{\gamma}\sum_{n=1}^{\infty}\Big(\frac{1-\gamma}{\gamma}\Big)^n h^{n+1} \le \frac{h}{\delta}\cdot\frac{\frac{1-\gamma}{\gamma}h}{1-\frac{1-\gamma}{\gamma}h} \le \frac{h}{\delta}\cdot\frac{k}{1-k} \triangleq c_f. \qquad (51)$$
Hence, $(1-\gamma)f^\pi_P(\gamma) \to 0$ uniformly on $\Pi \times \mathcal{P}$, due to the fact that $\|f^\pi_P(\gamma)\|$ is uniformly bounded for any $\pi$, $\gamma > \delta$, $P$. Then we have that $\lim_{\gamma\to1}(1-\gamma)V^\pi_{P,\gamma} = g^\pi_P$ uniformly on $\mathcal{P} \times \Pi$. This completes the proof of Theorem 1.

Proof of Theorem 2

We first show a lemma which allows us to interchange the order of lim and max.

Lemma 6. If a function $f(x,y)$ converges uniformly to $F(x)$ on $X$ as $y \to y_0$, then
$$\max_x \lim_{y\to y_0} f(x,y) = \lim_{y\to y_0} \max_x f(x,y). \qquad (52)$$
Proof. For each $y$, denote $x_y \triangleq \arg\max_x f(x,y)$, so that $f(x_y, y) \ge f(x,y)$ for any $x, y$. Also denote $x' \triangleq \arg\max_x F(x)$. Now, because $f(x,y)$ converges uniformly to $F(x)$, for any $\epsilon$ there exists $\delta'$ such that for all $|y - y_0| < \delta'$,
$$|f(x,y) - F(x)| \le \epsilon \qquad (53)$$
for any $x$. Now consider $|f(x_y,y) - F(x')|$ for $|y - y_0| < \delta'$. If $f(x_y,y) - F(x') > 0$, then
$$|f(x_y,y) - F(x')| = f(x_y,y) - F(x') = f(x_y,y) - F(x_y) + F(x_y) - F(x') \le \epsilon; \qquad (54)$$
on the other hand, if $f(x_y,y) - F(x') < 0$, then
$$|f(x_y,y) - F(x')| = F(x') - f(x_y,y) = F(x') - f(x',y) + f(x',y) - f(x_y,y) \le \epsilon. \qquad (55)$$
Hence, we have shown that for any $\epsilon$ there exists $\delta'$ such that for all $|y - y_0| < \delta'$,
$$|f(x_y,y) - F(x')| = \big|\max_x f(x,y) - \max_x F(x)\big| \le \epsilon, \qquad (56)$$
and hence
$$\lim_{y\to y_0}\max_x f(x,y) = \max_x F(x) = \max_x \lim_{y\to y_0} f(x,y), \qquad (57)$$
which completes the proof.

Then, we show that the robust discounted value function converges uniformly to the robust average reward as the discount factor approaches 1.
Theorem 10 (Restatement of Theorem 2). The robust discounted value function converges uniformly to the robust average reward on $\Pi$:
$$\lim_{\gamma\to1}(1-\gamma)V^\pi_{\mathcal{P},\gamma} = g^\pi_{\mathcal{P}}. \qquad (58)$$
Proof. Due to Theorem 9, for any stationary policy $\pi$, $g^\pi_{\mathcal{P}}(s) = \min_{P\in\mathcal{P}} g^\pi_P(s)$ under the stationary model. Hence, from the uniform convergence in Theorem 1, we first show the following:
$$g^\pi_{\mathcal{P}} = \min_{P\in\mathcal{P}} g^\pi_P = \min_{P\in\mathcal{P}} \lim_{\gamma\to1}(1-\gamma)V^\pi_{P,\gamma} \overset{(a)}{=} \lim_{\gamma\to1} \min_{P\in\mathcal{P}}(1-\gamma)V^\pi_{P,\gamma} = \lim_{\gamma\to1}(1-\gamma)V^\pi_{\mathcal{P},\gamma}, \qquad (59)$$
where (a) is because of Lemma 6. Moreover, note that $\lim_{\gamma\to1}(1-\gamma)V^\pi_{P,\gamma} = g^\pi_P$ uniformly on $\Pi \times \mathcal{P}$, hence the convergence in (59) is also uniform on $\Pi$. Thus, we complete the proof.

Proof of Theorem 3

Theorem 11 (Restatement of Theorem 3). $V_T$ generated by Algorithm 1 converges to the robust average reward $g^\pi_{\mathcal{P}}$ as $T \to \infty$.

Proof. From the discounted robust Bellman equation (Nilim and El Ghaoui 2004), it can be shown that
$$(1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t}(s) = (1-\gamma_t)\sum_a \pi(a|s)\big(r(s,a) + \gamma_t\,\sigma_{\mathcal{P}^a_s}(V^\pi_{\mathcal{P},\gamma_t})\big). \qquad (60)$$
Then we can show that for any $s \in \mathcal{S}$,
$$\begin{aligned}
|V_{t+1}(s) - (1-\gamma_{t+1})V^\pi_{\mathcal{P},\gamma_{t+1}}(s)| &= |V_{t+1}(s) - (1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t}(s) + (1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t}(s) - (1-\gamma_{t+1})V^\pi_{\mathcal{P},\gamma_{t+1}}(s)| \qquad (61)\\
&\le |(1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t}(s) - (1-\gamma_{t+1})V^\pi_{\mathcal{P},\gamma_{t+1}}(s)| + |V_{t+1}(s) - (1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t}(s)|\\
&= |(1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t}(s) - (1-\gamma_{t+1})V^\pi_{\mathcal{P},\gamma_{t+1}}(s)|\\
&\quad + \Big|\sum_a \pi(a|s)\Big((1-\gamma_t)r(s,a) + \gamma_t\sigma_{\mathcal{P}^a_s}(V_t) - \big((1-\gamma_t)r(s,a) + \gamma_t\sigma_{\mathcal{P}^a_s}((1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t})\big)\Big)\Big|\\
&= |(1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t}(s) - (1-\gamma_{t+1})V^\pi_{\mathcal{P},\gamma_{t+1}}(s)| + \gamma_t\Big|\sum_a \pi(a|s)\big(\sigma_{\mathcal{P}^a_s}(V_t) - \sigma_{\mathcal{P}^a_s}((1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t})\big)\Big|. \qquad (62)
\end{aligned}$$
If we denote $\Delta_t \triangleq \|V_t - (1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t}\|_\infty$, then
$$\Delta_{t+1} \le \|(1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t} - (1-\gamma_{t+1})V^\pi_{\mathcal{P},\gamma_{t+1}}\|_\infty + \gamma_t \max_s \Big(\sum_a \pi(a|s)\big|\sigma_{\mathcal{P}^a_s}(V_t) - \sigma_{\mathcal{P}^a_s}((1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t})\big|\Big). \qquad (63)$$
It can be easily verified that $\sigma_{\mathcal{P}^a_s}(V)$ is a 1-Lipschitz function, thus the second term in (63) can be further bounded as
$$\sum_a \pi(a|s)\big|\sigma_{\mathcal{P}^a_s}(V_t) - \sigma_{\mathcal{P}^a_s}((1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t})\big| \le \sum_a \pi(a|s)\,\|V_t - (1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t}\|_\infty = \|V_t - (1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t}\|_\infty, \qquad (64)$$
and hence
$$\Delta_{t+1} \le \|(1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t} - (1-\gamma_{t+1})V^\pi_{\mathcal{P},\gamma_{t+1}}\|_\infty + \gamma_t\Delta_t. \qquad (65)$$
Recall that
$$(1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t} = (1-\gamma_t)\min_P V^\pi_{P,\gamma_t}. \qquad (66)$$
Let $s^*_t \triangleq \arg\max_s |(1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t}(s) - (1-\gamma_{t+1})V^\pi_{\mathcal{P},\gamma_{t+1}}(s)|$. Then it follows that
$$\|(1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t} - (1-\gamma_{t+1})V^\pi_{\mathcal{P},\gamma_{t+1}}\|_\infty = |(1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t}(s^*_t) - (1-\gamma_{t+1})V^\pi_{\mathcal{P},\gamma_{t+1}}(s^*_t)|. \qquad (67)$$
Note that from (Nilim and El Ghaoui 2004; Iyengar 2005), for any stationary policy $\pi$ there exists a stationary model $P$ such that $V^\pi_{\mathcal{P},\gamma}(s) = \mathbb{E}_{P,\pi}\big[\sum_{t=0}^{\infty}\gamma^t r_t \mid S_0 = s\big] \triangleq V^\pi_{P,\gamma}(s)$. Hence, in the following, for each $\gamma_t$ we denote the worst-case transition kernel of $V^\pi_{\mathcal{P},\gamma_t}$ by $P_t$.
If $(1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t}(s^*_t) \ge (1-\gamma_{t+1})V^\pi_{\mathcal{P},\gamma_{t+1}}(s^*_t)$, then
$$\begin{aligned}
|(1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t}(s^*_t) - (1-\gamma_{t+1})V^\pi_{\mathcal{P},\gamma_{t+1}}(s^*_t)| &= \min_P (1-\gamma_t)V^\pi_{P,\gamma_t}(s^*_t) - \min_P (1-\gamma_{t+1})V^\pi_{P,\gamma_{t+1}}(s^*_t)\\
&= (1-\gamma_t)V^\pi_{P_t,\gamma_t}(s^*_t) - (1-\gamma_{t+1})V^\pi_{P_{t+1},\gamma_{t+1}}(s^*_t)\\
&= (1-\gamma_t)V^\pi_{P_t,\gamma_t}(s^*_t) - (1-\gamma_t)V^\pi_{P_{t+1},\gamma_t}(s^*_t) + (1-\gamma_t)V^\pi_{P_{t+1},\gamma_t}(s^*_t) - (1-\gamma_{t+1})V^\pi_{P_{t+1},\gamma_{t+1}}(s^*_t)\\
&\overset{(a)}{\le} (1-\gamma_t)V^\pi_{P_{t+1},\gamma_t}(s^*_t) - (1-\gamma_{t+1})V^\pi_{P_{t+1},\gamma_{t+1}}(s^*_t)\\
&\le \|(1-\gamma_t)V^\pi_{P_{t+1},\gamma_t} - (1-\gamma_{t+1})V^\pi_{P_{t+1},\gamma_{t+1}}\|_\infty, \qquad (68)
\end{aligned}$$
where (a) is due to $(1-\gamma_t)V^\pi_{P_t,\gamma_t}(s^*_t) = \min_P (1-\gamma_t)V^\pi_{P,\gamma_t}(s^*_t) \le (1-\gamma_t)V^\pi_{P_{t+1},\gamma_t}(s^*_t)$. Now, according to Lemma 1,
$$(1-\gamma_t)V^\pi_{P_{t+1},\gamma_t} = g^\pi_{P_{t+1}} + (1-\gamma_t)h^\pi_{P_{t+1}} + (1-\gamma_t)f^\pi_{P_{t+1}}(\gamma_t), \qquad (69)$$
$$(1-\gamma_{t+1})V^\pi_{P_{t+1},\gamma_{t+1}} = g^\pi_{P_{t+1}} + (1-\gamma_{t+1})h^\pi_{P_{t+1}} + (1-\gamma_{t+1})f^\pi_{P_{t+1}}(\gamma_{t+1}). \qquad (70)$$
Hence, for any $\gamma_t > \delta$, eq. (68) can be further bounded as
$$\begin{aligned}
\|(1-\gamma_t)&V^\pi_{P_{t+1},\gamma_t} - (1-\gamma_{t+1})V^\pi_{P_{t+1},\gamma_{t+1}}\|_\infty\\
&= \|(\gamma_{t+1}-\gamma_t)h^\pi_{P_{t+1}} + (1-\gamma_t)f^\pi_{P_{t+1}}(\gamma_t) - (1-\gamma_{t+1})f^\pi_{P_{t+1}}(\gamma_{t+1})\|_\infty\\
&\le (\gamma_{t+1}-\gamma_t)\|h^\pi_{P_{t+1}}\|_\infty + \|f^\pi_{P_{t+1}}(\gamma_t) - f^\pi_{P_{t+1}}(\gamma_{t+1})\|_\infty + \|\gamma_{t+1}f^\pi_{P_{t+1}}(\gamma_{t+1}) - \gamma_t f^\pi_{P_{t+1}}(\gamma_t)\|_\infty\\
&\overset{(a)}{\le} h(\gamma_{t+1}-\gamma_t) + L(\gamma_{t+1}-\gamma_t) + \|\gamma_{t+1}f^\pi_{P_{t+1}}(\gamma_{t+1}) - \gamma_t f^\pi_{P_{t+1}}(\gamma_t)\|_\infty\\
&\le h(\gamma_{t+1}-\gamma_t) + L(\gamma_{t+1}-\gamma_t) + \|\gamma_{t+1}f^\pi_{P_{t+1}}(\gamma_{t+1}) - \gamma_{t+1}f^\pi_{P_{t+1}}(\gamma_t)\|_\infty + \|\gamma_{t+1}f^\pi_{P_{t+1}}(\gamma_t) - \gamma_t f^\pi_{P_{t+1}}(\gamma_t)\|_\infty\\
&\le h(\gamma_{t+1}-\gamma_t) + L(\gamma_{t+1}-\gamma_t) + \gamma_{t+1}\|f^\pi_{P_{t+1}}(\gamma_{t+1}) - f^\pi_{P_{t+1}}(\gamma_t)\|_\infty + \|f^\pi_{P_{t+1}}(\gamma_t)\|_\infty(\gamma_{t+1}-\gamma_t)\\
&\overset{(b)}{\le} \Big(h + L + \gamma_{t+1}L + \sup_{\pi,P,\gamma}\|f^\pi_P(\gamma)\|_\infty\Big)(\gamma_{t+1}-\gamma_t) \le K(\gamma_{t+1}-\gamma_t), \qquad (71)
\end{aligned}$$
where (a) is from Lemma 5 for any $\gamma_t > \delta$, $c_f$ is defined in (51), $K \triangleq h + 2L + c_f$ is a uniform constant, and (b) is from Lemma 5. Similarly, the inequality also holds for the case when $(1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t}(s^*_t) \le (1-\gamma_{t+1})V^\pi_{\mathcal{P},\gamma_{t+1}}(s^*_t)$. Thus, we have that for any $t$ such that $\gamma_t > \delta$,
$$\Delta_{t+1} \le K(\gamma_{t+1}-\gamma_t) + \gamma_t\Delta_t, \qquad (72)$$
where $K$ is a uniform constant. Following Lemma 8 from (Tewari and Bartlett 2007), we have that $\Delta_t \to 0$. Note that
$$\|V_t - g^\pi_{\mathcal{P}}\|_\infty \le \|V_t - (1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t}\|_\infty + \|(1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t} - g^\pi_{\mathcal{P}}\|_\infty = \Delta_t + \|(1-\gamma_t)V^\pi_{\mathcal{P},\gamma_t} - g^\pi_{\mathcal{P}}\|_\infty. \qquad (73)$$
Together with Theorem 2, we further have that
$$\lim_{t\to\infty}\|V_t - g^\pi_{\mathcal{P}}\|_\infty = 0, \qquad (74)$$
which completes the proof.
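Algorithm 1 itself is not reproduced in this excerpt, but its update can be read off from eqs. (60)–(62): $V_{t+1}(s) = \sum_a \pi(a|s)\big((1-\gamma_t)r(s,a) + \gamma_t\,\sigma_{\mathcal{P}^a_s}(V_t)\big)$, with a discount-factor sequence $\gamma_t \uparrow 1$. The Python sketch below is only our reconstruction of that update; the $R$-contamination form of $\sigma$, the schedule, and the function names are illustrative assumptions, as in the earlier sketch.

import numpy as np

def sigma_contamination(p_hat, v, R):
    # Illustrative worst-case expectation over an R-contamination set:
    # sigma_{P^a_s}(v) = (1 - R) * p_hat^T v + R * min_s v(s).
    return (1.0 - R) * p_hat @ v + R * np.min(v)

def robust_average_reward_evaluation(P_hat, r, pi, gamma_schedule, R):
    # Reconstructed Algorithm 1 update from eqs. (60)-(62):
    #   V_{t+1}(s) = sum_a pi(a|s) * ((1 - gamma_t) * r(s,a)
    #                                 + gamma_t * sigma_{P^a_s}(V_t)),
    # with gamma_t increasing to 1, so that V_t tracks (1 - gamma_t) V^pi_{P,gamma_t}
    # and, by Theorem 3, converges to the robust average reward g^pi_P.
    n_s, n_a = r.shape
    V = np.zeros(n_s)
    for gamma_t in gamma_schedule:
        Q = np.array([[(1.0 - gamma_t) * r[s, a]
                       + gamma_t * sigma_contamination(P_hat[s, a], V, R)
                       for a in range(n_a)] for s in range(n_s)])
        V = (pi * Q).sum(axis=1)
    return V

# Example schedule increasing to 1, e.g. gamma_t = t / (t + 1):
# gammas = [t / (t + 1.0) for t in range(1, 1000)]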
Proof of Theorem 4

Note that the optimal robust average reward is defined as
$$g^*_{\mathcal{P}}(s) \triangleq \max_\pi g^\pi_{\mathcal{P}}(s). \qquad (75)$$
We further define
$$V^*_{\mathcal{P},\gamma}(s) \triangleq \max_\pi V^\pi_{\mathcal{P},\gamma}(s). \qquad (76)$$
Theorem 12 (Restatement of Theorem 4). $V_T$ generated by Algorithm 2 converges to the optimal robust average reward $g^*_{\mathcal{P}}$ as $T \to \infty$.

Proof. Firstly, from the uniform convergence in Theorem 2, it can be shown that
$$\lim_{t\to\infty}(1-\gamma_t)V^*_{\mathcal{P},\gamma_t} = g^*_{\mathcal{P}}. \qquad (77)$$
We then show that for any $s \in \mathcal{S}$,
$$\begin{aligned}
|V_{t+1}(s) - (1-\gamma_{t+1})V^*_{\mathcal{P},\gamma_{t+1}}(s)| &\le |V_{t+1}(s) - (1-\gamma_t)V^*_{\mathcal{P},\gamma_t}(s)| + |(1-\gamma_t)V^*_{\mathcal{P},\gamma_t}(s) - (1-\gamma_{t+1})V^*_{\mathcal{P},\gamma_{t+1}}(s)|\\
&\overset{(a)}{=} |(1-\gamma_t)V^*_{\mathcal{P},\gamma_t}(s) - (1-\gamma_{t+1})V^*_{\mathcal{P},\gamma_{t+1}}(s)|\\
&\quad + \Big|\max_a\big((1-\gamma_t)r(s,a) + \gamma_t\sigma_{\mathcal{P}^a_s}(V_t)\big) - \max_a\big((1-\gamma_t)r(s,a) + \gamma_t\sigma_{\mathcal{P}^a_s}((1-\gamma_t)V^*_{\mathcal{P},\gamma_t})\big)\Big|\\
&\le |(1-\gamma_t)V^*_{\mathcal{P},\gamma_t}(s) - (1-\gamma_{t+1})V^*_{\mathcal{P},\gamma_{t+1}}(s)|\\
&\quad + \max_a\Big|(1-\gamma_t)r(s,a) + \gamma_t\sigma_{\mathcal{P}^a_s}(V_t) - \big((1-\gamma_t)r(s,a) + \gamma_t\sigma_{\mathcal{P}^a_s}((1-\gamma_t)V^*_{\mathcal{P},\gamma_t})\big)\Big|, \qquad (78)
\end{aligned}$$
where (a) is because of the optimal robust Bellman equation, and the last inequality follows from the fact that $|\max_x f(x) - \max_x g(x)| \le \max_x |f(x) - g(x)|$. Hence, eq. (78) can be further bounded as
$$|V_{t+1}(s) - (1-\gamma_{t+1})V^*_{\mathcal{P},\gamma_{t+1}}(s)| \le |(1-\gamma_t)V^*_{\mathcal{P},\gamma_t}(s) - (1-\gamma_{t+1})V^*_{\mathcal{P},\gamma_{t+1}}(s)| + \gamma_t\max_a\big|\sigma_{\mathcal{P}^a_s}(V_t) - \sigma_{\mathcal{P}^a_s}((1-\gamma_t)V^*_{\mathcal{P},\gamma_t})\big|. \qquad (79)$$
If we denote $\Delta_t \triangleq \|V_t - (1-\gamma_t)V^*_{\mathcal{P},\gamma_t}\|_\infty$, then
$$\Delta_{t+1} \le \|(1-\gamma_t)V^*_{\mathcal{P},\gamma_t} - (1-\gamma_{t+1})V^*_{\mathcal{P},\gamma_{t+1}}\|_\infty + \gamma_t\max_{s,a}\big|\sigma_{\mathcal{P}^a_s}(V_t) - \sigma_{\mathcal{P}^a_s}((1-\gamma_t)V^*_{\mathcal{P},\gamma_t})\big|. \qquad (80)$$
Since the support function $\sigma_{\mathcal{P}^a_s}(V)$ is 1-Lipschitz, it can be shown that for any $s, a$,
$$\big|\sigma_{\mathcal{P}^a_s}(V_t) - \sigma_{\mathcal{P}^a_s}((1-\gamma_t)V^*_{\mathcal{P},\gamma_t})\big| \le \|V_t - (1-\gamma_t)V^*_{\mathcal{P},\gamma_t}\|_\infty. \qquad (81)$$
Hence,
$$\Delta_{t+1} \le \|(1-\gamma_t)V^*_{\mathcal{P},\gamma_t} - (1-\gamma_{t+1})V^*_{\mathcal{P},\gamma_{t+1}}\|_\infty + \gamma_t\Delta_t. \qquad (82)$$
Similar to (71) in the proof of Theorem 3, we can show that
$$\|(1-\gamma_t)V^*_{\mathcal{P},\gamma_t} - (1-\gamma_{t+1})V^*_{\mathcal{P},\gamma_{t+1}}\|_\infty \le K|\gamma_t - \gamma_{t+1}|, \qquad (83)$$
and, similar to Lemma 8 from (Tewari and Bartlett 2007),
$$\lim_{t\to\infty}\Delta_t = 0. \qquad (84)$$
Moreover, note that
$$\|V_t - g^*_{\mathcal{P}}\|_\infty \le \|V_t - (1-\gamma_t)V^*_{\mathcal{P},\gamma_t}\|_\infty + \|(1-\gamma_t)V^*_{\mathcal{P},\gamma_t} - g^*_{\mathcal{P}}\|_\infty = \Delta_t + \|(1-\gamma_t)V^*_{\mathcal{P},\gamma_t} - g^*_{\mathcal{P}}\|_\infty, \qquad (85)$$
which together with eq. (77) implies that
$$\|V_t - g^*_{\mathcal{P}}\|_\infty \to 0, \qquad (86)$$
and hence the proof is complete.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' There exists 0 < δ < 1, such that for any γ > δ, a deterministic optimal robust policy for robust discounted value function V ∗ P,γ is also an optimal policy for robust average-reward, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', V π∗ P,γ = V ∗ P,γ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (87) Moreover, when arg maxπ∈ΠD gπ P is a singleton, there exists a unique Blackwell optimal policy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' According to Theorem 74, there exists π∗ ∈ ΠD such that g∗ P = gπ∗ P .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (88) Assume the robust average-reward of all deterministic policies are sorted in a descending order: g∗ P = gπ∗ 1 P = gπ∗ 2 P = .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' = gπ∗ m P > gπ1 P ≥ .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' ≥ gπn P (89) for all π∗ i , πi ∈ ΠD, and we define Π∗ = {π∗ i : i = 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', m}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Denote by d = gπ∗ i P − gπ1 P .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' From Theorem 2, we know that for any π ∈ ΠD, lim γ→1(1 − γ)V π P,γ = gπ P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (90) Because the set ΠD is finite, for any ϵ < d 2, there exists δ′ < 1, such that for any γ > δ′, π∗ i and πj, |(1 − γ)V π∗ i P,γ − g∗ P| < ϵ, (91) |(1 − γ)V πj P,γ − gπj P | < ϵ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (92) It hence implies that (1 − γ)V π∗ i P,γ ≥ (d − 2ϵ) + (1 − γ)V πj P,γ > (1 − γ)V πj P,γ, (93) and V π∗ i P,γ > V πj P,γ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (94) Note that from Theorem 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='1 in (Iyengar 2005), i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', maxπ∈ΠD V π P,γ = V ∗ P,γ, we have that for any γ, there exists a deterministic policy π ∈ ΠD, such that V ∗ P,γ = V π P,γ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Together with (94), it implies that all the possible optimal robust polices of V π P,γ belong to {π∗ 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='π∗ m}, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', the set Π∗.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Hence, there exists π∗ j ∈ Π∗, such that V π∗ j P,γ = max π∈ΠD V π P,γ = V ∗ P,γ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (95) For the second part, when the optimal robust policy of robust average-reward is unique, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', Π∗ = {π∗}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Then from the results above, there exists δ′, such that for any γ > δ′, V π∗ P,γ > V π P,γ for any π∗ ̸= π ∈ ΠD, and hence π∗ is the optimal policy for discounted robust MDPs, which is the unique Blackwell optimal policy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Proof of Results for Direct Approach Recall that V π P (s) ≜ min κ∈� t≥0 P Eκ,π � ∞ � t=0 (rt − gπ P) ��S0 = s � , (96) where gπ P = min κ∈� t≥0 P lim n→∞ Eκ,π � 1 n n−1 � t=0 rt|S0 = s � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (97) We first show that the robust relative function is always finite.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Lemma 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' For any π, V π P is finite.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' 4The proof of Theorem 7 is independent of theorem 5 and does not relay on the results to be showed here.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' According to Theorem 9, V π P = minP∈P V π P = minP∈P EP,π � �∞ t=0(rt − gπ P) � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Note that V π P can be rewritten as V π P = min P∈P EP,π � ∞ � t=0 (rt − gπ P) � = min P∈P EP,π � lim n→∞ n � t=0 (rt − gπ P) � = min P∈P EP,π � lim n→∞ n � t=0 (rt − gπ P + gπ P − gπ P) � = min P∈P EP,π � lim n→∞(Rn − ngπ P + ngπ P − ngπ P) � , (98) where Rn = �n t=0 rt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Note that for any P ∈ P and n, ngπ P ≥ ngπ P, hence lim n→∞(Rn − ngπ P + ngπ P − ngπ P) ≥ lim n→∞(Rn − ngπ P), (99) and thus the lower bound of V π P can be derived as follows, V π P ≥ min P∈P EP,π � ∞ � t=0 (rt − gπ P) � = min P∈P V π P = min P∈P Hπ Prπ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (100) which is finite due to the fact that Hπ P is continuous on the compact set P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' From Theorem 9, we denote the stationary worst-case transition kernel of gπ P by Pg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Then the upper bound of V π P can be bounded by noting that V π P = min P∈P EP,π � ∞ � t=0 (rt − gπ Pg) � ≤ EPg,π � ∞ � t=0 (rt − gπ Pg) � = V π Pg, (101) which is also finite and Pg denotes the worst-case transition kernel of gπ P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Hence we show that V π P is finite for any π and hence complete the proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' After showing that the robust relative value function is well-defined, we show the following robust Bellman equation for average-reward robust MDPs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Theorem 14 (Restatement of Theorem 6).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' For any s and π, (V π P , gπ P) is a solution to the following robust Bellman equation: V (s) + g = � a π(a|s) � r(s, a) + σPa s (V ) � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (102) Proof.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' From the definition,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' V π P (s) = min κ∈� t≥0 P Eκ,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='π � ∞ � t=0 (rt − gπ P) ��S0 = s � ,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (103) hence V π P (s) = min κ∈� t≥0 P Eκ,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='π � ∞ � t=0 (rt − gπ P) ��S0 = s � = min κ∈� t≥0 P Eκ,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='π � (r0 − gπ P) + ∞ � t=1 (rt − gπ P) ��S0 = s � = min κ∈� t≥0 P �� a π(a|s)r(s,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' a) − gπ P + Eκ,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='π � ∞ � t=1 (rt − gπ P) ��S0 = s �� = � a π(a|s) (r(s,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' a) − gπ P) + min κ∈� t≥0 P � � � � a,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='s′ π(a|s)Pa s,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='s′Eκ,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='π � ∞ � t=1 (rt − gπ P)|S1 = s′ �� � � = � a π(a|s) (r(s,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' a) − gπ P) + min P0∈P min κ=(P1,.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=')∈� t≥1 P � � � � a,s′ π(a|s)(P0)a s,s′Eκ,π � ∞ � t=1 (rt − gπ P)|S1 = s′ �� � � = � a π(a|s) (r(s, a) − gπ P) + min P0∈P � � � � a,s′ π(a|s)(P0)a s,s′ min κ=(P1,.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=')∈� t≥1 P � Eκ,π � ∞ � t=1 (rt − gπ P)|S1 = s′ ��� � � = � a π(a|s) (r(s, a) − gπ P) + � a π(a|s) � s′ min pa s,s′∈Pas pa s,s′V π P (s′) = � a π(a|s) (r(s, a) − gπ P) + � a π(a|s)σPas (V π P ) = � a π(a|s) � r(s, a) − gπ P + σPas (V π P ) � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (104) This hence completes the proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Theorem 15.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' [Restatement of Theorem 7, Part 1] For any (g, V ) that is a solution to maxa � r(s, a) − g + σPas (V ) − V (s) � = 0, ∀s, then g = g∗ P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' In this proof, for two vectors v, w ∈ Rn, v ≥ w denotes that v(s) ≥ w(s) entry-wise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Let B(g, V )(s) ≜ maxa � r(s, a) − g + σPas (V ) − V (s) � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Since (g, V ) is a solution to (13), hence for any a ∈ A and any s ∈ S, r(s, a) − g + σPas (V ) − V (s) ≤ 0, (105) from which it follows that for any policy π, g(s) ≥ rπ(s) + � a π(a|s)σPas (V ) − V (s) ≜ rπ(s) + � a π(a|s)(pa s)⊤V − V (s), (106) where rπ(s) ≜ � a π(a|s)r(s, a), pa s ≜ arg minp∈Pas p⊤V , and PV = {pa s : s ∈ S, a ∈ A}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We also denotes the state transition matrix induced by π and PV by Pπ V .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Using these notations, and rewrite eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (106), we have that g1 ≥ rπ + (Pπ V − I)V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (107) Since the inequality in eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (107) holds entry-wise, all entries of Pπ V are positive, then by multiplying both sides of eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (107) by Pπ V , we have that g1 = gPπ V 1 ≥ Pπ V rπ + Pπ V (Pπ V − I)V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (108) Multiplying the both sides of eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (108) by Pπ V , and repeatedly doing that, we have that g1 ≥ (Pπ V )2rπ + (Pπ V )2(Pπ V − I)V, (109) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (110) g1 ≥ (Pπ V )n−1rπ + (Pπ V )n−1(Pπ V − I)V.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (111) Summing up these inequalities from eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (107) to eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (111), we have that ng1 ≥ (I + Pπ V + .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' + (Pπ V )n−1)rπ + (I + Pπ V + .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' + (Pπ V )n−1)(Pπ V − I)V, (112) and from which, it follows that g1 ≥ 1 n(I + Pπ V + .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' + (Pπ V )n−1)rπ + 1 n(I + Pπ V + .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' + (Pπ V )n−1)(Pπ V − I)V = 1 n(I + Pπ V + .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' + (Pπ V )n−1)rπ + 1 n((Pπ V )n − I)V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (113) It can be easily verified that limn→∞ 1 n((Pπ V )n − I)V = 0, and hence it implies that g1 ≥ lim n→∞ 1 n(I + Pπ V + .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' + (Pπ V )n−1)rπ = lim n→∞ 1 nEPπ V ,π � n � t=0 rt � = gπ Pπ V 1 ≥ gπ P1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (114) Since eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (114) holds for any policy π, it follows that g ≥ g∗ P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' On the other hand, since B(g, V ) = 0, there exists a policy τ such that g1 = rτ + (Pτ V − I)V, (115) where rτ, Pτ V are similarly defined as for π.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' From Theorem 9, there exists a stationary transition kernel Pτ ave such that gτ P = gτ Pτave.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We denote the state transition matrix induced by τ and Pτ ave by Pτ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Then because Pτ V is the worst-case transition of V , it follows that Pτ V V ≤ PτV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (116) Thus g1 ≤ rτ + (Pτ − I)V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (117) Similarly, we have that g1 ≤ (Pτ)j−1rτ + (Pτ)j−1(Pτ − I)V, (118) for j = 2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Summing these inequalities together we have that ng1 ≤ (I + Pτ + .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' + (Pτ)n−1)rτ + (I + Pτ + .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' + (Pτ)n−1)(Pτ)n−1(Pτ − I)V = (I + Pτ + .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' + (Pτ)n−1)rτ + ((Pτ)n − I)V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (119) Hence g1 ≤ lim n→∞ 1 nEPτave,τ � n � t=0 rt � = gτ Pτave1 = gτ P1 ≤ g∗ P1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (120) Thus g = g∗ P, and this concludes the proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Theorem 16 (Restatement of Theorem 7, Part 2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' For any (g, V ) that is a solution to max a � r(s, a) − g + σPas (V ) − V (s) � = 0, ∀s, (121) if we set π∗(s) = arg max a � r(s, a) + σPa s (V ) � (122) for any s ∈ S, then π∗ is an optimal robust policy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Note that for any stationary policy π, we denote by σPπ(V ) ≜ (� a π(a|s1)σPas1 (V ), .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', � a π(a|s|S|)σPas|S| (V )) being a vector in R|S|.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Then eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (14) is equivalent to rπ∗ + σPπ∗ (V ) = max π {rπ + σPπ(V )} .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (123) Hence, rπ∗ − g + σPπ∗ (V ) − V = max π {rπ − g + σPπ(V ) − V } .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (124) Since (g, V ) is a solution to (13), it follows that rπ∗ − g + σPπ∗ (V ) − V = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (125) According to the robust Bellman equation eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (12), (gπ∗ P , V π∗ P ) is a solution to eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (125).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Thus from Theorem 15, gπ∗ P = g∗ P, and hence π∗ is an optimal robust policy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Theorem 17 (Restatement of Theorem 8).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (wT , Vt) in Algorithm 3 converges to a solution of eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (13).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' We first denote the update operator as Lv(s) ≜ max a (r(s, a) + σPas (v)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (126) Now, consider sp(Lv − Lu).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Denote by ´s ≜ arg maxs(Lv(s) − Lu(s)) and `s ≜ arg mins(Lv(s) − Lu(s)).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Also denote by av ≜ arg maxa(r(´s, a) + σPa ´s (v)) and au ≜ arg maxa(r(´s, a) + σPa ´s (u)) Then Lv(´s) − Lu(´s) = max a (r(´s, a) + σPa ´s (v)) − max a (r(´s, a) + σPa ´s (u)) ≜ r(´s, av) + σPav ´s (v) − (r(´s, au) + σPau ´s (u)) ≤ r(´s, av) + σPav ´s (v) − (r(´s, av) + σPav ´s (u)) = σPav ´s (v) − σPav ´s (u) ≜ (pav,v ´s )⊤v − (pav,u ´s )⊤u, (127) where pav,v ´s = arg minp∈Pav ´s p⊤v and pav,u ´s = arg minp∈Pav ´s p⊤u.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Thus eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (127) can be further bounded as Lv(´s) − Lu(´s) ≤ (pav,v ´s )⊤v − (pav,u ´s )⊤u ≤ (pav,u ´s )⊤(v − u).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (128) Similarly, Lv(`s) − Lu(`s) ≥ (pau,v `s )⊤(v − u).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (129) Thus sp(Lv − Lu) ≤ (pav,u ´s )⊤(v − u) − (pau,v `s )⊤(v − u).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (130) Now denote by v −u ≜ (x1, x2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', xn), pav,u ´s = (p1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', pn) and pau,v `s = (q1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=', qn).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' Further denote by bi ≜ min{pi, qi} Then n � i=1 pixi − n � i=1 qixi = n � i=1 (pi − bi)xi − n � i=1 (qi − bi)xi ≤ n � i=1 (pi − bi) max{xi} − n � i=1 (qi − bi) min{xi} = n � i=1 (pi − bi)sp(x) + � n � i=1 (pi − bi) − n � i=1 (qi − bi) � min{xi} = � 1 − n � i=1 bi � sp(x).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (131) Thus we showed that sp(Lv − Lu) ≤ � 1 − n � i=1 bi � sp(v − u).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content=' (132) Now from Assumption 2, and following Theorem 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNAyT4oBgHgl3EQf8fqv/content/2301.00858v1.pdf'} +page_content='3 from (Puterman 1994), it can be shown that there exists 1 > λ > 0, such that for any a, u, v, n � i=1 bi ≥ λ.' 
+Further, following Theorem 8.5.2 in (Puterman 1994), it can be shown that L is a J-step contraction operator for some integer J, i.e.,
+sp(LJ v − LJ u) ≤ (1 − λ) sp(v − u). (134)
+Then, it can be shown that the relative value iteration converges to a solution of the optimality equation, similar to the relative value iteration for non-robust MDPs under the average-reward criterion (Theorem 8.5.7 in (Puterman 1994), Section 1.6.4 in (Sigaud and Buffet 2013)), and hence (wt, Vt) converges to a solution to eq. (13) as ϵ → 0.
diff --git a/UdAzT4oBgHgl3EQf0_6m/content/tmp_files/2301.01793v1.pdf.txt b/UdAzT4oBgHgl3EQf0_6m/content/tmp_files/2301.01793v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..4d8c40f05a6eac66532bca2c22206ec4935ba48c
--- /dev/null
+++ b/UdAzT4oBgHgl3EQf0_6m/content/tmp_files/2301.01793v1.pdf.txt
@@ -0,0 +1,2486 @@
+arXiv:2301.01793v1 [math.CV] 4 Jan 2023
+INTERIOR HÖLDER ESTIMATE FOR THE LINEARIZED COMPLEX MONGE-AMPERE EQUATION
+YULUN XU
+Abstract. Let w0 be a bounded, C3, strictly plurisubharmonic function defined on B1 ⊂ Cn. Then w0 has a neighborhood in L∞(B1). Suppose that we have a function φ in this neighborhood with 1 − ε ≤ MA(φ) ≤ 1 + ε and that there exists a function u solving the linearized complex Monge-Ampere equation: det(φk¯l)φi¯jui¯j = 0. Then one has an estimate on |u|Cα(B1/2) for some α > 0 depending on n, as long as ε is small depending on n. This partially generalizes Caffarelli's estimate for the linearized real Monge-Ampere equation to the complex version.
+1. introduction
+Monge-Ampere equations are second-order partial differential equations whose leading term is the determinant of the Hessian of a real unknown function. The Hessian is required to be positive or at least nonnegative, so that the equations are elliptic or degenerate elliptic.
+Monge-Ampere +equations can be divided into real or complex, depending on whether one is +considering real Hessian or complex Hessian. In the real case, the Hessian +is φij, so that the positivity of the Hessian is a convexity condition. In the +complex case, the Hessian is φi¯j, and its positivity is a plurisubharmonicity +condition. +Let φ be a convex solution to a real Monge-Ampere equation: +(1.1) +detD2φ = g. +Definition 1.2. Let E ⊂ Cn be a set and x0 ∈ E. We will sometimes +denote E to be E(x0) to indicate it is a “pointed set”. Let c > 0, we define: +cE(x0) = {x0 + c(y − x0) : y ∈ E(x0)}. +Namely cE(x0) is the image of the dilation map centered at x0 by a factor +c. +Definition 1.3. Let µ be the Monge-Ampere measure of φ, (In the case of +φ ∈ C2, µ(A) = +� +A det(D2φ) for any set A) We say µ satisfies the doubling +property if there exist constants C > 0 and 0 < α < 1 such that: +µ(St(x)) ≤ Cµ(αSt(x)), +Date: December 2022. +1 + +2 +YULUN XU +for any section +St(x) = {y ∈ Rn : φ(y) < l(y) + t}, +where l is a supporting hyperplane of φ at x. +Note that if we have that λ < g < Λ for some positive constants λ and +Λ, then the doubling property holds. +Next we consider the following linearized Monge-Ampere equation: +Lφ = det(D2φ)φijuij = f. +When we take first derivatives of φ, we can see that φj = Djφ, j = 1, ..., n +satisfy the linearized Monge-Ampere equation: +Lφ(φj) = gj. +Since φ is convex, the linearized Monge-Ampere equation is elliptic. How- +ever, the linearized Monge-Ampere equation is not uniformly elliptic unless +we have the estimate for the second derivatives of φ. The standard H¨older +estimates for the solutions to linear second order elliptic equations usually +require the uniform ellipticity. However, Caffarelli prove the H¨older estimate +for the solutions to the linearized Monge-Ampere equations under a weak +condition on g which doesn’t imply the uniform ellipticity of the linearized +Monge-Ampere equation, see[4]: +Theorem 1.4. Assume that the Monge-Ampere measure µ satisfies the dou- +bling property. Let u be a nonnegative solution to the equation: +Lφu = 0 +in a section SR(x0). Then there exist constants C0 > 0 and α > 0 depending +on n and |u|∞ and the constants in the doubling property such that: +||u||Cα(S R +2 +(x0)) ≤ C0. +The boundary Harnack inequality for the linearized real Monge-Ampere +equation is derived in [11]. There are estimates for the high order derivatives +of the solutions to the linearized real Monge-Ampere equation. When g is +continuous, the C1,α estimate is derived in [8] and the W 2,p estimate is +derived in [7]. The boundary H¨older gradient estimates is derived in [12]. If +g is not continuous but belongs to some VMO-type space, the interior W 2,p +estimate is derived in [10] while the global W 2,p estimate is derived in [13]. +The C1,α estimate is derived in [14]. +There are some applications of the Real linearized Monge-Ampere equa- +tion to the complex geometry. It can be used to prove the interior regularity +of the Calabi flow on a toric surface, see [5]. It can also be applied to the +extremal metrics on toric surfaces, see [15]. However, as far as I am con- +cerned, the theory of the real linearized Monge-Ampere equation can only +be applied to the toric case where a complex Monge-Ampere equation can +be reduced to a real Monge-Ampere equation. 
Besides, the complex lin- +earized Monge-Ampere equation appears in the complex geometry such as + +INTERIOR H ¨OLDER ESTIMATE FOR THE LINEARIZED COMPLEX MONGE-AMPERE EQUATION3 +the study of the csck problem [2]. So a natural question is: how to adapt +the method for the real linearized Monge-Ampere equation to the complex +linerized Monge-Ampere equation directly? Thanks to [6], we can give a +partial answer to this question: +Theorem 1.5. Let Ω ⊂ Cn be a bounded domain with B1−γ0 ⊂ Ω ⊂ B1+γ0. +Let φ ∈ C2(Ω) ∩ PSH(Ω) ∩ C(¯Ω) be such that 1 − ε ≤ det φi¯j ≤ 1 + ε in Ω +and φ = 0 on ∂Ω. Suppose that γ0 and ǫ are small constants depending on +n. Let St(z0) be defined in [6]. Then there exist constants β > 1, µ0 and C0 +depending on n. Suppose that u ∈ C2(St(z0)) is a nonnegative solution to +Lφu = 0 on St(z0) with t ≤ µ5 +0 +C0 and z0 ∈ B 1 +2 (0). Then we have that: +sup +St(z0) +u ≤ β inf +St(z0) u. +Corollary 1.6. Let Ω ⊂ Cn be a bounded domain with B1−γ0 ⊂ Ω ⊂ B1+γ0. +Let φ ∈ C2(Ω) ∩ PSH(Ω) ∩ C(¯Ω) be such that 1 − ε ≤ det φi¯j ≤ 1 + ε in +Ω and φ = 0 on ∂Ω. Suppose that γ0 and ǫ are small constants depending +on n. Let St(z0) be defined in [6]. Suppose that u ∈ C2(Ω) is a solution to +Lφu = 0 on Ω. Then we have that: +||u||Cα(B 1 +2 ) ≤ C, +Here α > 0 is a constant depending on n and C is a constant depending on +n and |u|L∞(Ω). +More generally, we have that: +Theorem 1.7. Let w0 be a smooth function in the unit ball such that for +some C0 > 1: +1 +C0 +I ≤ (w0)zi¯zj ≤ C0I, |D3w0| ≤ C0 in B1. +Then there exists δ0 > 0 small enough, depending only on C0 and n, such +that for all φ ∈ C2(B1)∩PSH(B1)∩C(B1) with |φ−w0| ≤ δ0 on B1, solving +1 − ε ≤ MA(φ) ≤ 1 + ε, and for any solution u ∈ C2(B1) solving +Lφu = det(φk¯l)φi¯jui¯j = 0, +we have that: +||u||Cα(B 1 +2 ) ≤ C. +Here α > 0 is a constant depending on n. Here C is a constant depending +on C0, |u|L∞(B1) and n. ε is small enough depending only on n. +In the above, MA(φ) is the complex Monge-Ampere operator defined +for continuous plurisubharmonic functions, in the Bedford-Taylor sense (see +[1]), so that MA(φ) = det φi¯j when φ ∈ C2. From now on, we use Lφ for +the complex linearized Monge-Ampere equation. +For the manifold setting, we have the following Corollary: + +4 +YULUN XU +Corollary 1.8. Let (M, ω0) be a compact K¨ahler manifold. Let φ ∈ C2(M)∩ +PSH(M, ω0) be the solution to: +(ω0 + +√ +−1∂ ¯∂φ)n = fωn +0 , ω0 + +√ +−1∂ ¯∂φ > 0, +where |f − 1| < ε and +� +M fωn +0 = +� +M ωn +0 . Let u ∈ C2(M) be the solution to +the equation: +∆φu = gi¯j +φ ui¯j = n − trgφg +Suppose that ǫ is small enough depending on n, ω0. Then we have that +||u||Cα ≤ C, +Here α > 0 is a constant depending on n and C is a constant depending on +n, ω0 and ||u||L∞. +In the section 3, we reduce the Theorem 1.7 to the Corollary 1.6. Then we +prove the Corollary 1.6 starting by proving a version of Calderon-Zygmund +decomposition in the section 4(Theorem 4.1). Then we prove that the level +sets of solutions have uniform critical density in the section 5(Theorem 5.1). +Then we prove that solutions that are large on a section are uniformly large +on a bigger section in the section 6(Theorem 6.2). In the section 7, we first +prove the power decay of the distribution function of solutions and then +prove the Harnack inequality (Theorem 7.7) and the H¨older estimate of the +solutions (Corollary 1.6). In the section 8, we prove some corollaries of the +main theorem. +2. 
preliminary
We want to show that the equations in the main theorem are invariant under affine transformations. Let z be the original coordinate. For any affine transformation T and any positive constant λ, we can define a new coordinate w by z = √λTw. Let h be a degree two pluriharmonic polynomial. Then we can normalize φ and u by:
�φ(w) = φ(√λTw)/(λ|det(T)|^{2/n}) + h,
�u(w) = u(√λTw).
Then by calculation, we have that:
L�φ�u(w) = λ|det(T)|^{2/n} Lφu(z).
So if Lφu = 0, we can get that L�φ�u = 0. Recall that we denote the complex Monge-Ampere measure as µ = MA(φ). We denote the Lebesgue measure as m.
3. Reduction of Theorem 1.7 to Corollary 1.6
We first need the following lemmas from [6]:
Lemma 3.1. Let w0 be as stated in Theorem 1.7. Denote ax0,ij = (w0)i¯j(x0) and hx0 = Re(Σi 2(w0)izi) + Re(Σi,j (w0)ijzizj). Namely we assume that w0 ∈ C3(B1), and (1/C0)I ≤ (w0)i¯j ≤ C0I, |D3w0| ≤ C0 on B0.99. Let δ ≥ 0 and φ0 be a function on B1 with |φ0 − w0| ≤ δ on B0.95. Then there exist C1 > 0 large enough and µ0 > 0 small enough depending only on C0, such that for all µ with 4C1δ ≤ µ ≤ µ0, we have:
(1 − C1γ)Eµ(x0) ⊂ {z ∈ B_{1/(2C0^2)}(x0) : (φ0 − hx0)(z) ≤ φ0(x0) + µ} ⊂ (1 + C1γ)Eµ(x0).
Moreover, (φ0 − hx0)(z) = φ0(x0) + µ on ∂{z ∈ B_{1/(2C0^2)}(x0) : (φ0 − hx0)(z) ≤ φ0(x0) + µ}.
Here γ = δ/µ + µ^{1/2} and Eµ(x0) = {z ∈ Cn : Σ_{i,j=1}^n ax0,ij(z − x0)i (z − x0)¯j ≤ µ}.
Let φ and w0 be as stated in Theorem 1.7. Let µ > 0 and x0 ∈ B0.8. Let Tµ,x0 be a C-linear transformation such that Tµ,x0(0) = 0 and x0 + Tµ,x0(B√µ(0)) = Eµ(x0). Define
(3.2) φµ,x0(ζ) = (1/(µ|det Tµ,x0|^{2/n})) (φ − hx0 − µ)(x0 + Tµ,x0(√µζ)).
Since Eµ(x0) is defined in terms of ax0,ij, with (1/C0)I ≤ ax0,ij = (w0)i¯j(x0) ≤ C0I, it is easy to see that:
||Tµ,x0|| ≤ C2, ||T^{-1}µ,x0|| ≤ C2, 1/C2 ≤ |det Tµ,x0|^2 ≤ C2.
Here C2 is a large enough constant depending only on C0 and n. Define Ωµ = T^{-1}µ,x0({z ∈ B_{1/(2C0^2)} : (φ − hx0)(z) ≤ φ(x0) + µ} − x0). Then by straightforward calculation and Lemma 3.1, we can see the following:
Lemma 3.3. There is µ0 > 0 small enough depending only on C0 such that for all 4C1δ0 ≤ µ ≤ µ0 (with C1 > 0 being the constant given by Lemma 3.1), we have
(1) B1−C1γ ⊂ Ωµ ⊂ B1+C1γ, with γ = δ0/µ + µ^{1/2}.
(2) det (φµ,x0)ζi¯ζj = f(x0 + Tµ,x0(√µζ)) in Ωµ, and φµ,x0 = 0 on ∂Ωµ.
The renormalized function uµ,x0 fits in the assumptions for Corollary 1.6 after suitably choosing the parameters, and Theorem 1.7 follows as a direct consequence:
Corollary 3.4. Theorem 1.7 holds, if we assume Corollary 1.6.
Proof. We wish to apply Corollary 1.6 to each uµ,x0. In order to do so, we just need:
C1γ = C1(δ0/µ + µ^{1/2}) ≤ γ0(n), |f(x0 + Tµ,x0(√µζ)) − 1| ≤ ε(n).
Here γ0(n) and ε(n) are the constants given by Corollary 1.6. So we could just take µ so that 2C1µ^{1/2} ≤ (1/2)γ0(n) and also µ ≤ µ0 (given by Lemma 3.3). With this µ, we can take δ0 so that C1δ0/µ ≤ (1/2)γ0(n) and also that 4C1δ0 ≤ µ. We fix this choice from now on. Since we assumed that Corollary 1.6 holds, we conclude that:
||uµ,x0||Cα(B1/2) ≤ C,
where C is a constant depending only on n and |u|L∞(B1). Then using (3.2) we may go back to u and obtain that
||u||Cα(E_{(1/2)µ}(x0)) ≤ C′,
for any x0 ∈ B1/2(0), where C′ is a constant depending only on n, C0 (defined in the statement of Theorem 1.7) and |u|L∞(B1). Since there exists a constant C depending on C0 such that
B_{√µ/C}(x0) ⊂ E_{(1/2)µ}(x0) ⊂ B_{C√µ}(x0),
we can use an elementary covering argument to get that:
||u||Cα(B1/2(0)) ≤ C′′.
C′′ is a constant depending only on n, C0 and |u|L∞(B1). □
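For the reader's convenience, here is a minimal sketch of one standard way such a covering argument can be carried out; this is only an illustration, with the constants as fixed above. Take z, w ∈ B1/2(0). If |z − w| ≤ √µ/C, then w ∈ B_{√µ/C}(z) ⊂ E_{(1/2)µ}(z), so the local estimate gives
|u(z) − u(w)| ≤ C′|z − w|^α.
If |z − w| > √µ/C, we simply use the L∞ bound:
|u(z) − u(w)| ≤ 2|u|L∞(B1) ≤ 2|u|L∞(B1) (C|z − w|/√µ)^α.
Since µ has already been fixed in terms of n and C0, both cases give |u(z) − u(w)| ≤ C′′|z − w|^α with C′′ depending only on n, C0 and |u|L∞(B1).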
4. Calderon-Zygmund decomposition
From now on we focus on the Corollary 1.6. In this section we want to prove the following theorem, which is a version of the Calderon-Zygmund decomposition using the sections St(x) defined in [6].
Theorem 4.1. Let 0 < σ < 1 and 0 < δ < 1 be given. Let µ0 (this is the constant we use to define sections in the W^{2,p} paper) and ǫ (this is the same constant ǫ as in Corollary 1.6) be small depending on σ, δ and n. Let A be a bounded subset of B1/2(0). Suppose that for a.e. x ∈ A,
(1) limt→0 µ(St(x) ∩ A)/µ(St(x)) = 1.
(2) µ(St(x) ∩ A) ≤ δµ(St(x)) for any µ0^4 < t ≤ µ0^3.
Then for such x, we can define tx = sup{t ≤ µ0^4 : µ(St(x) ∩ A) ≥ δµ(St(x))}.
Then there exists a countable family of sections {Sk = Stk(xk)}, where xk ∈ A and tk ≤ µ0^3, with the following properties:
(a) (1 − Cσ^{1/2})δ ≤ µ(Sk ∩ A)/µ(Sk) ≤ δ.
(b) For a.e. x ∈ A, x ∈ ∪k Sk.
(c) µ(A) ≤ δ0 µ(∪_{k=1}^∞ Sk), where δ0 = δ0(δ) < 1.
Remark 4.2. As can be seen from the proof, tk may not be equal to txk.
First we need the following proposition from the [6] paper.
Proposition 4.3. Let Ω and u be as stated in Corollary 1.6, with γ0 small enough depending only on n. Let 0 < σ < 1 be given. Then there exists ε > 0 depending only on σ and n, such that if |f − 1| ≤ ε, the following hold:
(1) There exists µ0 > 0 small enough depending only on n and σ, such that for all x0 ∈ B0.8 and all µ ≤ µ0, there exists a degree 2 pluriharmonic polynomial hµ,x0(z) with hµ,x0(x0) = 0, such that
(1 − 0.1σ)Eµ(x0) ⊂ Sµ(x0) := {z ∈ Ω : (φ − hµ,x0)(z) ≤ φ(x0) + µ} ⊂ (1 + 0.1σ)Eµ(x0).
In the above, Eµ(x0) = {z ∈ Cn : Σ_{i,j=1}^n aµ,x0,ij(z − x0)i (z − x0)¯j ≤ µ}, with aµ,x0,ij being positive Hermitian and det aµ,x0,ij = 1.
(2) There is a function c(σ) : σ ∈ (0, 1) → R>0, such that for any x0 ∈ B0.8 and any 0 < µ1 ≤ µ2 ≤ µ0/(1 + c(σ)), one has Sµ1(x0) ⊂ S(1+c(σ))µ2(x0). Moreover, 0 < c(σ) ≤ C2,nσ^{1/2} for some dimensional constant C2,n.
(3) There is a dimensional constant C3,n > 0 such that for all 0 < µ ≤ µ0 and any x0 ∈ B0.8, there exists a C-linear transformation Tµ,x0, such that |det Tµ,x0| = 1, Tµ,x0(0) = 0, x0 + Tµ,x0(B√µ(0)) = Eµ(x0). Moreover, for any 0 < µ1 < µ2 ≤ µ0 and any x0 ∈ B0.8:
||Tµ1,x0 ◦ T^{-1}µ2,x0|| ≤ C3,n(µ2/µ1)^{C3,nσ^{1/2}/(−log(0.1σ))}, ||Tµ2,x0 ◦ T^{-1}µ1,x0|| ≤ C3,n(µ2/µ1)^{C3,nσ^{1/2}/(−log(0.1σ))}.
The following conclusion is proved in the "induction hypothesis" part of the [6] paper.
Lemma 4.4. For any µ0^{k+1} < t ≤ µ0^k, we can write
Tµ,x0 = Tx0,k = �Tx0,1 ◦ �Tx0,2 ◦ ... ◦ �Tx0,k,
where Tµ,x0 is used in the statement of the Proposition 4.3. We have that
|�Tx0,1| ≤ C, |�T^{-1}x0,1| ≤ C,
|�Tx0,k − I| ≤ Cσ^{1/2} for k ≥ 2.
We need the following engulfing property of the sections, which is proved in the [6] paper.
Proposition 4.5. Assume that x1, x2 ∈ B0.8, 0 < µ1, µ2 ≤ µ0 and µ1 ≤ 4µ2. Let σ > 0 be small enough (depending only on dimension). Assume also that Sµ1(x1) ∩ Sµ2(x2) ̸= ∅; then Sµ1(x1) ⊂ 10Sµ2(x2).
There is another version of the engulfing property:
Lemma 4.6. Let σ be small and ǫ be small. There exists a constant θ > 0 such that if St(z) is a section with y ∈ St(z), then St(z) ⊂ Sθt(y) for t ≤ µ0/θ.
Proof.
Using the Proposition 4.3 we can get that +(1 − σ) +√ +tE1 ⊂ St(y) ⊂ (1 + σ) +√ +tE1 +(1 − σ)√t2E2 ⊂ St2(y) ⊂ (1 + σ)√t2E2, +(4.7) +where E1 and E2 are ellipsoids centered at z. Let k1 and k2 be integers such +that: +µk1+1 +0 +< t ≤ µk1 +0 , µk2+1 +0 +< t2 ≤ µk2 +0 . +If t and t2 are in the same generation or adjacent generations, i.e. k1 − 1 ≤ +k2 ≤ k1 + 1. So by the Lemma 4.4, we have that +|Ty,k2 ◦ T −1 +y,k1 − I| ≤ Cσ +1 +2 , +|Ty,k1 ◦ T −1 +y,k2 − I| ≤ Cσ +1 +2 . +Then we can define T = Ty,k1 ◦T −1 +y,k2 such that |T −I| ≤ Cσ +1 +2 and TE2 = E1. +So we have that: +10St(y) ⊂ 10 +√ +t(1 + σ)E1 ⊂ 10 +√ +t(1 + σ)(1 + Cσ +1 +2)E2 += (1 − σ) +� +100(1 + σ)2(1 + Cσ +1 +2 )2 +1 +(1 − σ)2 tE2 ⊂ S100(1+σ)2(1+Cσ +1 +2 )2 +1 +(1−σ)2 t(y) +Then we use the Proposition 4.5 to get that: +St(z) ⊂ 10St(y). +So we have that: +St(z) ⊂ S100(1+σ)2(1+Cσ +1 +2 )2 +1 +(1−σ)2 t(y). +We can assume that σ ≤ 1 +2 and let µ0 be small such that t and 100(1 + +σ)2(1 + Cσ +1 +2)2 +1 +(1−σ)2 t are in the same generation or adjacent generations. +In conclusion we can take θ = 100(1 + σ)2(1 + Cσ +1 +2 )2 +1 +(1−σ)2 . Then we finish +the proof the lemma. +□ +We also need the following lemma from the [6] estimating the shape of +the sections in the original coordinate. In particular, we can get an estimate +of the diameter of the sections. +Lemma 4.8. Suppose that µ0 is small and ǫ is small. We have that: +B +1 +C µ +1 +2 +logµ0 (1−Cσ +1 +2 )(x0) ⊂ Sµ(x0) ⊂ B +Cµ +1 +2 +logµ0 (1+Cσ +1 +2 )(x0), +for any µ ≤ µ2 +0. +Proof. We can assume that µ ≤ µ2 +0. For any µ, there exists an integer k +such that +µk+1 +0 +< µ ≤ µk +0. +As in the proof of the Proposition 4.3 and the Lemma 4.4 we have that +x0 + (1 − σ) �Tx0,1 ◦ �Tx0,2 ◦ ... ◦ �Tx0,k(B√µ(0)) ⊂ Sµ(x0) +⊂ x0 + (1 + σ) �Tx0,1 ◦ �Tx0,2 ◦ ... ◦ �Tx0,k(B√µ(0)). +(4.9) + +INTERIOR H ¨OLDER ESTIMATE FOR THE LINEARIZED COMPLEX MONGE-AMPERE EQUATION9 +Using the Lemma 4.4, we have that +| �Tx0,1| ≤ C, | �T −1 +x0,1| ≤ C +| �Tx0,k − I| ≤ Cσ +1 +2 for k ≥ 2. +So the Formula 4.9 becomes: +B +1 +C µ +1 +2 +logµ0 (1−Cσ +1 +2 )(x0) ⊂ 1 +C (1 − σ)(1 − Cσ +1 +2)k−2B√µ(x0) ⊂ Sµ(x0) +⊂ C(1 + σ)(1 + Cσ +1 +2 )k−2B√µ(x0) ⊂ B +Cµ +1 +2 +logµ0 (1+Cσ +1 +2 )(x0). +This concludes the proof of the lemma. +□ +The following lemma is a characterization of sections. The lemma follows +directly from the construction of the sections. +Lemma 4.10. For any x0 ∈ B0.8 and µ ≤ µ0, there exists a degree 2 +pluriharmonic polynomial hx0,µ such that hx0,µ(x0) = 0 and +Sµ(x0) = {x : φ(x) ≤ hx0,µ(x) + φ(x0) + µ}. +The following is the key lemma in the [6]. It basically means that two sec- +tions of the same height of two functions which differ by a plurisubharmonic +function are close to each other. +Lemma 4.11. Let φ be a function defined on an open set U ⊂ Cn and let +h(z) be pluriharmonic function on U. Let 0 < µ < 1 and σ > 0 be such that: +(1 − γ) �E√σ ⊂ {φ ≤ σ} ⊂ (1 + γ) �E√σ ⊂ U, +(1 − γ)E√σ ⊂ {φ ≤ h + σ} ⊂ (1 + γ)E√σ ⊂ U. +In the above, �E√σ = {z ∈ Cn : �n +i,j=1 �ai¯jzi¯zj ≤ σ} and E√σ = {z ∈ Cn : +�n +i,j=1 ai¯jzi¯zj ≤ σ}. Then there exists c1(γ) which is universal (depending +only on dimension and you can explicitly calculate) and c1(γ) → 0 as γ → 0 +such that +(1 − c1(γ)) �E√σ ⊂ E√σ ⊂ (1 + c1(γ)) �E√σ +Now we want to prove the following lemma which implies that if two +sections have nonempty intersection, then they are comparable to each other: +Lemma 4.12. Let St0(x0) and St(x) be two sections such that t ≤ t0 ≤ µ0 +2 +and +St0(x0) ∩ St(x) ̸= ∅. 
+Let Tt0,x0 be the affine transformation defined in the W 2,p paper that nor- +malize St0(x0). Then we have that +1 +C B( t +t0 ) +1 +2 +ǫ1( 1 +√t0 +T −1 +t0,x0x) ⊂ +1 +√t0 +T −1 +t0,x0St(x) ⊂ CB( t +t0 ) +1 +2 −ǫ1( 1 +√t0 +T −1 +t0,x0x). +ǫ1 is a positive constant that can be made arbitrarily small if we let µ0 be +small enough. The constant C depends on n. + +10 +YULUN XU +Proof. We can use the part (3) of the Proposition 4.3 to get that +St0(x0) ⊂ S2t0(x0), St(x) ⊂ S2t0(x). +Since St0(x0) ∩ St(x) ̸= ∅, we have that S2t0(x0) ∩ S2t0(x) ̸= ∅. +By the +Proposition 4.5 we have that: +S2t0(x0) ⊂ 10S2t0(x), S2t0(x) ⊂ 10S2t0(x0). +So we have that: +(1 − σ)(x0 + T2t0,x0B√2t0(0)) ⊂ (1 − σ)E2t0(x0) ⊂ S2t0(x0) +⊂ 10S2t0(x) ⊂ 10(1 + σ)E2t0(x) = 10(2 + σ)(x + T2t0,xB√2t0(0)). +(1 − σ)(x + T2t0,xB√2t0(0)) ⊂ (1 − σ)E2t0(x) ⊂ S2t0(x) +⊂ 10S2t0(x0) ⊂ 10(1 + σ)E2t0(x0) = 10(2 + σ)(x0 + T2t0,x0B√2t0(0)). +So T2t0,x0 and T2t0,x are bounded from each other. I.e. +(4.13) +|T −1 +2t0,x0 ◦ T2t0,x| ≤ C, |T −1 +2t0,x ◦ T2t0,x0| ≤ C. +By the Lemma 4.4, T2t0,x and Tt0,x differ by a linear transformation which +is Cσ +1 +2 −close to Id. So we have that T2t0,x and Tt0,x are bounded from +each other. Similarly, T2t0,x0 and Tt0,x0 are bounded from each other. In +conclusion, Tt0,x and Tt0,x0 are bounded from each other. We consider the +following two cases: +(1) t ≤ 2t0µ02. In a coordinate w where S2t0(x) is close to a ball, we can +define �St(x) just like how we define St(x). In the same coordinate, using the +Lemma 4.8, we can get that: +B +1 +C ( +t +2t0 ) +1 +2 +logµ0 (1−Cσ +1 +2 )(0) ⊂ �St(x) ⊂ B +C( +t +2t0 ) +1 +2 +logµ0 (1+Cσ +1 +2 )(0). +Here we use +t +2t0 because in the coordinate w, S2t0(x) is close to the unit +ball. So the height of the section �St(x) is scaled to +t +2t0 accordingly. By the +Lemma 4.11, St(x) is comparable to �St(x), i.e. +1 +C St(x) ⊂ �St(x) ⊂ CSt(x). +So we have that +B +1 +C ( +t +2t0 ) +1 +2 +logµ0 (1−Cσ +1 +2 )(0) ⊂ St(x) ⊂ B +C( +t +2t0 ) +1 +2 +logµ0 (1+Cσ +1 +2 )(0). +If we go back to the original coordinate, we have that: +B +1 +C ( +t +2t0 ) +1 +2 +logµ0 (1−Cσ +1 +2 )(0) ⊂ +1 +√2t0 +T −1 +2t0,x(St(x)−x) ⊂ B +C( +t +2t0 ) +1 +2 +logµ0 (1+Cσ +1 +2 )(0). + +INTERIOR H ¨OLDER ESTIMATE FOR THE LINEARIZED COMPLEX MONGE-AMPERE EQUATION +11 +We already prove that T2t0,x and T2t0,x0 are bounded from each other. +B +1 +C ( +t +2t0 ) +1 +2 +logµ0 (1−Cσ +1 +2 )(0) ⊂ +1 +√2t0 +T −1 +2t0,x0(St(x) − x) +⊂ B +C( +t +2t0 ) +1 +2 +logµ0 (1+Cσ +1 +2 )(0). +So we can take ǫ1 = max {−logµ0(1 + Cσ +1 +2 ), logµ0(1 − Cσ +1 +2)} to get that: +B 1 +C ( +t +2t0 ) +1 +2 +ǫ1(0) ⊂ +1 +√2t0 +T −1 +2t0,x0(St(x) − x) ⊂ BC( +t +2t0 ) +1 +2 −ǫ1(0). +Recall that T2t0,x0 and Tt0,x0 are bounded from each other. So we have that +1 +C B( t +t0 ) +1 +2 +ǫ1( 1 +√t0 +T −1 +t0,x0x) ⊂ +1 +√t0 +T −1 +t0,x0St(x) ⊂ CB( t +t0 ) +1 +2 −ǫ1( 1 +√t0 +T −1 +t0,x0x). +(2) t0µ2 +0 < t ≤ t0. By the Lemma 4.3 we have that: +(1 − γ)B√ +t(0) ⊂ T −1 +t,x (St(x) − x) ⊂ (1 + γ)B√ +t(0). +By the Lemma 4.4, we have that: T2t0,x0 and Tt,x0 are bounded from each +other. T2t0,x and Tt,x are bounded from each other. Combining these facts +and the Inequalities 4.13 we have that Tt,x and Tt0,x0 are bounded from each +other. So we have that: +B 1 +C +� +t +t0 +( 1 +√t0 +T −1 +t0,x0x) ⊂ +1 +√t0 +T −1 +t0,x0St(x) ⊂ BC +� +t +t0 +( 1 +√t0 +T −1 +t0,x0x). +□ +Consider the following Dirichlet problem on a domain Ω. +det((v0)i¯j) = 1 in Ω +v0 = 0 on ∂Ω. +(4.14) +To start the process, we need that v0 is smooth in the interior. 
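As a simple sanity check for (4.14), note that when Ω is exactly the unit ball the solution is explicit: v0(z) = |z|^2 − 1, since
det((|z|^2 − 1)i¯j) = det(δij) = 1 in B1, and |z|^2 − 1 = 0 on ∂B1.
For a general Ω that is only close to B1 there is no such formula, and one still needs the interior smoothness of v0.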
+This is +guaranteed by the fact that Ω is close to B1. More precisely, we proved in +[6]: +Lemma 4.15. Let Ω ⊂ Cn be a bounded domain and B1−γ(0) ⊂ Ω ⊂ +B1+γ(0) for some 0 < γ < 1. Let v0 be the solution to the Dirichlet problem +in (4.14), then +|z|2 − 1 − 3γ ≤ v0 ≤ |z|2 − 1 + 3γ. +Moreover, there exists γn > 0 small enough, such that if γ ≤ γn, we have +v0 ∈ C4( ¯B0.9) with ||v0 − (|z|2 − 1)||C4,B0.9 ≤ C. Here C depends only on n. +The following Lemma is also proved in the W 2,p paper: +Lemma 4.16. Assume that det ui¯j = f in Ω and u|∂Ω = 0. Let v0 be the +solution to the Dirichlet problem (4.14). Assume that 1 − ε ≤ f ≤ 1 + ε. +Then we have (1 + ε) +1 +n v0 ≤ u ≤ (1 − ε) +1 +n v0. In particular +|v0 − u| ≤ 4ε in Ω. + +12 +YULUN XU +Then we can prove the following lemma: +Lemma 4.17. There exists a constant δ > 0. For any ¯ǫ ∈ (0, e−1), we can +choose µ0 and ǫ to be small depending on ¯ǫ and n such that given a section +St(x) with t ≤ µ2 +0 and y /∈ St(x), we have that +Bǫδ +2(T(y)) ∩ T(S(1−ǫ2)t(x)) = ∅. +for any ¯ǫ < ǫ2 < e−1. Here T = +1 +√ +tT −1 +t,x is an affine transformation such +that +B(1−σ)(0) ⊂ 1 +√ +tT −1 +t,x (St(x) − x) ⊂ B(1+σ)(0). +Proof. We prove this lemma in two cases. +(1) (1−ǫ2)t and t are in the same generation, i.e. there exists k such that +µk+1 +0 +< (1 − ǫ2)t ≤ t ≤ µk +0. Define a new coordinate w = +1 +√ +tT −1 +t,x (z − x). In +this coordinate, St(x) is σ− close to the unit ball as in the Lemma 4.3 and +there is a plurisubharmonic function h such that φ−h = 0 on ∂St(x) by the +Lemma 4.10. From the argument of the section 2, we can just assume that +φ = 0 on ∂St(x) and use the coordinate w in the rest of the proof because +the linearized Monge-Ampere equation is invariant under the normalization. +Using the Lemma 4.3, the Lemma 4.15 and the Lemma 4.16 we can get that: +|w|2 − 1 − 3ǫ − Cσ ≤ φ ≤ |w|2 − 1 + 3ǫ + Cσ. +After the normalization, φ(0) = −1. This is because the height of the section +is determined by the value of φ at 0 and the height is scaled to be 1 under the +normalization (The details can be found in the construction of the sections +in the W 2,p paper). So for any w ∈ ∂S(1−ǫ2)t(x), we have that φ = −ǫ2 and: +|w|2 − 1 − 3ǫ − Cσ ≤ −ǫ2 ≤ |w|2 − 1 + 3ǫ + Cσ +This implies that: +1 − 3ǫ − Cσ − ǫ2 ≤ |w|2 ≤ 1 + 3ǫ + Cσ − ǫ2. +Since y /∈ St(x) and (1 − σ)B1(0) ⊂ T(St(x) − x), we have that: +T(y − x) /∈ (1 − σ)B1(0). +This implies that +Bǫδ +2(T(y − x)) ⊂ Bc +1−σ−ǫδ +2(0), +Here Bc +1−σ−ǫδ +2(0) = Cn \ B1−σ−ǫδ +2(0) In order to make sure that +Bǫδ +2(T(y − x)) ∩ B√1+3ǫ+Cσ−ǫ2(0) = ∅, +it suffices to require that +� +1 + 3ǫ + Cσ − ǫ2 ≤ 1 − σ − ǫδ +2. +This is equivalent to +3ǫ + (C + 2)σ ≤ ǫ2 + ǫ2δ +2 − 2ǫδ +2 − 2σǫδ +2. + +INTERIOR H ¨OLDER ESTIMATE FOR THE LINEARIZED COMPLEX MONGE-AMPERE EQUATION +13 +If we let δ be big(this is independent of ¯ǫ), then the minimum of ǫ2 + ǫ2δ +2 − +2ǫδ +2 − 2σǫδ +2 with ¯ǫ ≤ ǫ2 ≤ 1 +2 is ¯ǫ + ¯ǫ2δ − 2¯ǫδ − 2σ¯ǫδ. So we just require that +3ǫ + (C + 2)σ ≤ ¯ǫ + ¯ǫ2δ − 2¯ǫδ − 2σ¯ǫδ. +Noting that ¯ǫ ≤ e−1. When δ is big we have that: +¯ǫ + ¯ǫ2δ − 2¯ǫδ − 2σ¯ǫδ ≥ 1 +2¯ǫ. +So we just require that: +3ǫ + (C + 2)σ ≤ 1 +2¯ǫ. +(2) (1 − ǫ2)t and t are not in the same level, i.e. there exists an integer k +such that: +µk+1 +0 +< (1 − ǫ2)t ≤ µk+1 +0 +< t ≤ µk +0. +Recall that the sections St(x) for µk+1 +0 +< t ≤ µk +0 can be written as +St(x) = {z : φ − h(z) ≤ φ(x) + t}, +according to the Lemma 4.10. We can define �St(x): +(4.18) +�St(x) = {z : φ − h(z) ≤ φ(x) + t}, +for t ≤ µk+1 +0 +. 
Note that when we define St(x) for t ≤ µk+1 +0 +, we subtract φ by +a pluriharmonic function that is different from h and then take sublevel sets. +So �St(x) is different from St(x) for t ≤ µk+1 +0 +. Similar to the Proposition 4.3, +we can show that +S(1−ǫ2)t(x) ⊂ �S(1+c(σ))(1−ǫ2)t(x). +When we let µ0 be small and ǫ be small, σ can be arbitrarily small. Then +c(σ) can be arbitrarily small. +As is defined in the case (1), we use the +coordinate w where St(x) is normalized to be close to the unit ball. +So +using the Equation 4.18, for any w ∈ ∂ �S(1+c(σ))(1−ǫ2)t(x), we have that +φ = (1 + c(σ))(1 − ǫ2) − 1 = c(σ) − ǫ2 − ǫ2c(σ). Since in this coordinate +|w|2 − 1 − 3ǫ − Cσ ≤ φ ≤ |w|2 − 1 + 3ǫ + Cσ, +we have that: +|w|2 − 1 − 3ǫ − Cσ ≤ c(σ) − ǫ2 − ǫ1c(σ) ≤ |w|2 − 1 + 3ǫ + Cσ. +This is equivalent to that: +1 − 3ǫ − Cσ + c(σ) − ǫ2 − ǫ2c(σ) ≤ |w|2 ≤ 1 + 3ǫ + Cσ + c(σ) − ǫ2 − ǫ2c(σ). +Since y /∈ St(x) and +(1 − σ)B1(0) ⊂ T(St(x) − x). +So we have that +T(y − x) /∈ (1 − σ)B1(0). +So we have that +Bǫδ +2(T(y − x)) ⊂ Bc +1−σ−ǫδ +2(0). + +14 +YULUN XU +So if we want +Bǫδ +2(T(y − x)) ∩ B√ +1+3ǫ+Cσ−ǫ2+c(σ)−ǫ2c(σ)(0) = ∅, +we just require that +Bc +1−σ−ǫδ +2(0) ∩ B√ +1+3ǫ+Cσ−ǫ2+c(σ)−ǫ2c(σ)(0) = ∅. +This is implied by +� +1 + 3ǫ + Cσ − ǫ2 + c(σ) − ǫ2c(σ) ≤ 1 − σ − ǫδ +2. +This is equivalent to +3ǫ + Cσ + c(σ) + 2σ − σ2 ≤ ǫ2 + ǫ2c(σ) + ǫ2δ +2 + 2ǫδ +2 + 2σǫδ +2. +Let c(σ) be small and let δ be the same as in the case (1). We can see that +f(t) = t + tc(σ) + t2δ + 2tδ + 2σtδ takes the minimum at t = ¯ǫ. So we just +require that: +3ǫ + Cσ + c(σ) + 2σ − σ2 ≤ ¯ǫ + ¯ǫc(σ) + ¯ǫ2δ + 2¯ǫδ + 2σ¯ǫδ. +We can assume that δ is big and ¯ǫ ∈ (0, 1 +2) such that ¯ǫ + ¯ǫc(σ) + ¯ǫ2δ + 2¯ǫδ + +2σ¯ǫδ ≥ 1 +2¯ǫ. So we jut require that +3ǫ + Cσ + c(σ) + 2σ − σ2 ≤ 1 +2¯ǫ. +This is true if we let µ0 be small and ǫ be small. +□ +Then we can prove the following lemma which is similar to the Lemma 1 +in [3] with some modifications. +Lemma 4.19. For any ¯ǫ < e−1, we can let µ0 be small and let ǫ be small. +For any A ⊂ B 1 +2 (0) which is a bounded set. Fix a positive function �t defined +on A satisfying 0 < ˜t ≤ +µ0 +2 . +Let us denote by F the family of all the +sections S�t(x)(x) with x ∈ A. Then there exists a countable subfamily of +F, {S�t(xk)(xk)}∞ +k=1 with the following properties ( For simplicity we denote +�t(xk) as tk from now on ): +(i) For a.e. x ∈ A, x ∈ ∪∞ +k=1Stk(xk). +(ii) xk /∈ ∪j 1 and 0 < λ < 1, depending +only on the doubling constant of µ and dimension, such that for any section +S = St(x) and any nonnegative solution u of Lϕu = 0 such that +inf +z∈S t +2 (x) u(z) ≤ 1 +we have that +µ({z ∈ S : u(z) > M1}) < λµ(S). +Proof. The proof of the theorem is similar to the Theorem 1 in [4]. We first +sketch the proof. First we normalize St(x) to be close to the unit ball and +normalize φ correspondingly which will be made clearer latter. Let α be big +enough such that v = u + αφ satisfies that +inf +z∈S t +2 (x) v(z) ≤ −1. +We will show that we can take α = 6 in our case. Then we can use the ABP +estimate for v−(x) = − min {v(z), 0} to get an lower bound estimate on the +measure of the points where v− and its convex envelope Γ(v−) touch. On +these touching points, u is uniformly bounded because at such point, v ≤ 0 +and φ is uniformly bounded by the ABP estimate and α is a uniform constant +which we will explain later. This concludes the proof of the theorem. +Next we point out the modifications we use to adapt the proof of the +Theorem 1 in [4] to our case. 
+First we normalize φ and change the coordinate (denote the new coor- +dinate as w). According to the section 2, the Proposition 4.3, the Lemma +4.10, we can assume that: +φ = 0 on ∂St(x) +and +(1 − 0.1σ)B1(0) ⊂ St(x) ⊂ (1 + 0.1σ)B1(0). +Claim: We can let σ and ǫ be small enough such that: +φ ≤ −1 +3 on S t +2(x). +We prove this in two cases. Before the proof, we note that by the Proposition +4.3, we have that there exists an ellipsoid E such that: +(1 − 0.1σ) +� +1 +2E ⊂ S t +2(x) ⊂ (1 + 0.1σ) +� +1 +2E. +Case (1) St(x) and S t +2(x) are in the same generation. So we have that +E = B1(0). So we have that: +B 1−0.1σ +√ +2 (0) ⊂ S t +2 (x) ⊂ B 1+0.1σ +√ +2 (0). + +20 +YULUN XU +Using the Lemma 4.3, the Lemma 4.15 and the Lemma 4.16 we can get that: +(5.2) +|w|2 − 1 − 3ǫ − Cσ ≤ φ ≤ |w|2 − 1 + 3ǫ + Cσ. +So we have that for any w ∈ S t +2 (x), +φ ≤ (1 + 0.1σ)2 +2 +− 1 + 3ǫ + Cσ. +Let σ and ǫ be small enough, we can get that +φ ≤ −1 +3 on S t +2(x). +Case (2) St(x) and S t +2(x) are not in the same generation. We can assume +that µ0 < 1 +2 so that there exists k such that: +µk+2 +0 +< t +2 ≤ µk+1 +0 +< t ≤ µk +0, +i.e. St(x) and S t +2 (x) are in adjacent generations. So we have that: +B(1−Cσ +1 +2 )(0) ⊂ E ⊂ B(1+Cσ +1 +2 )(0), +according to the Proposition 4.3 and the Lemma 4.4. So we have that: +B +(1−0.1σ)(1−Cσ +1 +2 ) +√ +2 +(0) ⊂ S t +2 (x) ⊂ B +(1+0.1σ)(1+Cσ +1 +2 ) +√ +2 +(0). +Using the Inequalities 6.4, we have that for any w ∈ S t +2 (x), +φ ≤ (1 + 0.1σ)2(1 + Cσ +1 +2 )2 +2 +− 1 + 3ǫ + Cσ. +Let σ and ǫ be small enough, we can get that +φ ≤ −1 +3 on S t +2(x). +The claim is proved. +The ABP estimate needs to use det(D2φ). In our case we only know the +estimate for det(φi¯j) instead of det(D2φ). However, we can use det(D2φ) +1 +2 ≤ +2ndet(φi¯j) at the points where φ is convex. Fortunately the integral part in +the ABP estimate is taken only on the points where φ is convex. +The constant α we use is uniform. This is because after normalization, +we have that φ = 0 on ∂St(x) and φ ≤ − 1 +3 on S t +2 (x). In the assumption we +have that infz∈S t +2 (x) u(z) ≤ 1. So we can just take α = 6 so that +inf +z∈S t +2 (x)(u(z) + αφ(z)) ≤ −1. +□ + +INTERIOR H ¨OLDER ESTIMATE FOR THE LINEARIZED COMPLEX MONGE-AMPERE EQUATION +21 +6. Infimum estimate on a larger section +We need the following Lemma: +Lemma 6.1. Let u be a positive supersolution of the equation Lφu = 0, i.e., +Lφu ≤ 0. +If x0 ∈ Rn and 0 < ǫ4 < 1 then +Lφ(−uǫ4)(x0) ≥ −ǫ4(ǫ4 − 1)u(x0)ǫ4−2|∇u(x0)|2 det(ui¯j(x0)) +trace(ui¯j) . +Proof. The lemma can be proved by following the proof of the Lemma 2.1 +in [4] word by word. +□ +Theorem 6.2. There is a constant L > 1 depending on n, such that for any +α > 0, if u is any nonnegative solution of Lφu = 0 on S4t(x0) such that: +u|St(x0) > α, +then +u|S2t(x0) > α +L +Proof. This theorem is similar to the Theorem 2 in the paper [4]. We sketch +the proof and point out the modifications we need to adapt the proof of the +Theorem 2 in the paper [4] to our case. +First we sketch the proof of the theorem as follows: Consider four sections: +S∗ +k = Stk(x0), k = 1, 2, 3, 4, where t1 = t < t2 = 2t < t3 < t4 = 4t. t3 is to +be determined (This is the definition in the paper [4]. We will change the +definition of S∗ +3 in our case.). We normalize φ and change the coordinate +(denote the new coordinate as w) such that: +φ|∂S4t(x0) = 0 +and +B1−0.1σ(0) ⊂ S4t(x0) ⊂ B1+0.1σ(0). +Then define an auxiliary function wǫ4 whose definition will be talked about +in detail latter. 
Then consider +h(x) = −φ(x) + wǫ4(x) +2βn +, +where βn is a constant depending on n. Consider the minimum of uǫ4 −h(x) +on S∗ +4 and prove that +u|S2t(x) ≥ α +L +in three cases: +Case 1. The minimum is attained on S∗ +1. +Case 2. The minimum is attained on S∗ +4 \ S∗ +3. +Case 3. The minimum is attained on S∗ +3 \ S∗ +1. +The case 1 and 2 don’t even need the auxiliary function wǫ4. The case 3 +is more intricate. One can estimate Lφ(uǫ4 − h(x)) at the minimum point + +22 +YULUN XU +using the Lemma 6.1. Since Lφ(uǫ4 − h(x)) ≥ 0 at the minimum point, we +can show that at this point g = ∆φ is bounded which can be used to get +further estimate. +Next we talk about the modifications that we need to adapt the proof of +the Theorem 2 in the paper [4]. +First, S∗ +3 in our case may not take the form of St(x). Instead, we define +S∗ +3 = {w ∈ S∗ +4 : φ(w) ≤ s3}, +where s3 ∈ [− 3 +8, − 1 +4] is to be determined. Using the Lemma 4.3, the Lemma +4.15 and the Lemma 4.16 we can get that: +(6.3) +|w|2 − 1 − 3ǫ − Cσ ≤ φ ≤ |w|2 − 1 + 3ǫ + Cσ. +So we have that for any φ ≤ s3: +|w|2 ≤ 1 + 3ǫ + Cσ + s3. +Then we have that +S∗ +3 ⊂ B√1+3ǫ+Cσ+s3(0) ⊂ B� +1+3ǫ+Cσ− 1 +4 +(0) ⊂ B 2 +√ +5(0). +Here we assume that σ and ǫ are small. +In [4] an estimate for |∇φ| in S∗ +3 is needed. +We want to derive such +estimate in our case. We have shown that +S∗ +3 ⊂ B 2.1 +√ +5 (0). +In our case we can use the result in the W 2,p estimate: ||φ||W 2,p(B 2 +√ +5 +(0)) ≤ C. +If p is big then we have that ||φ||C1,β0(B 2 +√ +5 +(0)) ≤ C for some β0 > 0. In +particular, we get an estimate for |∇φ| in S∗ +3. +In [4], an estimate of the measure of the set is needed: +Hǫ4 = {x ∈ S∗ +3 : g(x) ≥ γ0 +ǫ4 +}, +where γ0 is a positive small parameter. The estimate of the area of ∂S∗ +3 is +needed which is easy when S∗ +3 is convex as in [4]. In our case S∗ +3 is just a +pseudoconvex set. So we use the coarea formula: +� +{a≤φ≤b} +|∇φ|dx = +� b +a +dt +� +φ=t +dS, +for any a < b. By the sard’s theorem, we can find sab ∈ (a, b) such that +{φ = sab} is smooth and +� b +a +dt +� +φ=t +dS ≥ (b − a)|{φ = sab}|. +We need to estimate +m({a ≤ φ ≤ b}). + +INTERIOR H ¨OLDER ESTIMATE FOR THE LINEARIZED COMPLEX MONGE-AMPERE EQUATION +23 +Using the Lemma 4.3, the Lemma 4.15 and the Lemma 4.16 we can get +that: +(6.4) +|w|2 − 1 − 3ǫ − Cσ ≤ φ ≤ |w|2 − 1 + 3ǫ + Cσ. +So we have that for any a ≤ φ ≤ b: +1 − 3ǫ − Cσ + a ≤ |w|2 ≤ 1 + 3ǫ + Cσ + b. +So we have that: +{a ≤ φ ≤ b} ⊂ B√ +1+3ǫ+Cσ+b(0) \ B√1−3ǫ−Cσ+a(0). +We assume that b ≤ − 1 +4. We have proved that |∇φ|{φ≤− 1 +4 } ≤ C, we can +combining the above equations to get that +(b − a)|{φ = sab}| ≤ +� +{a≤φ≤b} +|∇φ|dx +≤ |∇φ|C1({a≤φ≤b})m(B√1+3ǫ+Cσ+b(0) \ B√1−3ǫ−Cσ+a(0)). +(6.5) +We use the Lemma 4.21 to get that: +m(B√1+3ǫ+Cσ+b(0) \ B√1−3ǫ−Cσ+a(0)) ≤ C(σ + ǫ + b − a). +We can use this inequality and the Inequality 6.5 to get that: +|{φ = sab}| ≤ C(1 + σ + ǫ +b − a), +for some sab ∈ [a, b]. So when we want to select t3, we can use the above +argument to find s3 ∈ [− 3 +8, − 1 +4]. Then we define From the calculation above, +we have that +|{φ = s3}| ≤ C(1 + σ + ǫ). +Using this we can estimate |Hǫ4| as in [4] to get: +|Hǫ4| ≤ An +ǫ4 +γ0 +(1 + σ + ǫ) ≤ C ǫ4 +γ0 +Another modification that we want to make is the following: In [4], an +auxiliary function is defined. We first state the way [4] define an auxiliary +function: Let +k(x) = detD2φ(x) +Then approximate the set Hǫ4 by an open set ˜Hǫ4 such that Hǫ4 ⊂ ˜Hǫ4 ⊂ S∗ +4 +such that | ˜Hǫ4 \ Hǫ4| is sufficiently small. 
Given δ > 0(small), they define +ϕ(x) a smooth function in S∗ +4 such that ϕ(x) = 1 in Hǫ4, ϕ(x) = δ in S∗ +4\ ˜Hǫ4, +and δ ≤ ϕ(x) ≤ 1. Then they define the auxiliary function: +detD2wǫ4(x) = k(x)ϕ(x) in S∗ +4 +wǫ4|∂S∗ +4 = 0. +Next we explain how to define an auxiliary function in our case. Since we +are dealing with complex Monge-Ampere equation and linearzed complex +Monge-Ampere equation, it is natural to define the auxiliary function using +complex Monge-Ampere equation. However, later on we need to derive an + +24 +YULUN XU +estimate for the derivative of the auxiliary function. Such estimate is missing +for the complex Monge-Ampere equation. So we still want to use the real +Monge-Ampere equation to define the auxiliary function. Another difficulty +is that in our case S∗ +4 may not be convex. So we can’t solve the dirichlet +problem for the real Monge-Ampere equation on S∗ +4. So instead, we solve +the dirichlet problem on an ellipsoid which is slightly larger than S∗ +4: +detD2wǫ4(x) = 4nk2(x)ϕ2(x) in (1 + σ)B1(0) +wǫ4|(1+σ)B1(0) = 0, +where k(x) = detφi¯j. Recall that from the Proposition 4.3, +(1 − σ)B1(0) ⊂ S∗ +4 ⊂ (1 + σ)B1(0). +If we let ǫ4 be small and let δ be small, we can get that |wǫ4| and |D(wǫ4)|S∗ +3 +are very small as in [4]. Then we consider the three cases as listed above. +(1) We can prove the case 1 word by word as in [4]. +(2)For the second case. Since S4(1+s)t(x0) and S4t(x0) are in the same or +the adjacent generations for − 1 +2 ≤ s ≤ 0, we have that in the coordinate w: +(6.6) +(1 − Cσ +1 +2)B√1+s(0) ⊂ S4(1+s)t(x0) ⊂ (1 + Cσ +1 +2)B√1+s(0). +Combining this with the Inequalities 6.4, we have that: +{φ ≤ −1 +2 − C(σ +1 +2 + ǫ)} ⊂ S∗ +2 ⊂ {φ ≤ −1 +2 + C(σ +1 +2 + ǫ)}. +By the definition of s3, +{φ ≤ −3 +8} ⊂ S∗ +3 ⊂ {φ ≤ −1 +4} +So for x ∈ S∗ +2 we have +h(x) − h(P) = −φ∗(x) +2βn ++ wǫ4(x) +2βn ++ φ∗(P) +2βn +− wǫ4(P) +2βn +≥ +1 +2 − C(σ +1 +2 + ǫ) +2βn ++ wǫ4(x) +2βn ++ − 3 +8 +2βn += 1 − C(σ +1 +2 + ǫ) +16βn ++ wǫ4(x) +2βn +≥ +1 +32βn +In the last line we let σ, ǫ and ǫ4 be small enough. +(3) For the case three. As in [4], we need to show: +|∇φ∗(P)| ≥ C > 0, +for P ∈ S∗ +3 − S∗ +1. Recall that +S∗ +3 ⊂ B 2 +√ +5 (0). + +INTERIOR H ¨OLDER ESTIMATE FOR THE LINEARIZED COMPLEX MONGE-AMPERE EQUATION +25 +We need to use a version of interpolation (see the Lemma 4.32 of [9]). We +can use that +|φ − (|z|2 − 1)|B√ +5 +6 +(0) ≤ C(ǫ + σ). +and +|φ|C1,α(B√ +5 +6 +(0)) ≤ C +by the W 2,p estimate. Then using interpolation and get that |∇(φ − (|z|2 − +1))|B 2 +√ +5 +(0) can be arbitrarily small if σ and ǫ are small enough. In particular, +we can assume that: +|∇(φ − (|z|2 − 1))|B 2 +√ +5 +(0) ≤ +1 +√ +3. +Using the Formulae 6.6, we have that +S∗ +2 ⊃ B 1 +√ +3 (0). +Note that: +|∇(|z|2 − 1)|Bc +1 +√ +3 +(0) ≥ +2 +√ +3. +So we have that: +|∇φ|S∗ +3\S∗ +2 ≥ +1 +√ +3 +. +This can imply +|∇φ∗(P)| ≥ +1 +√ +3 +. +We also need to use the inequality: +detD2wǫ4 ≤ 4ndet((wǫ4)i¯j)2. +to reduce det((wǫ4)i¯j) to detD2wǫ4. Those are all the modifications that we +need. +□ +7. Harnack inequality and H¨older continuity +First we can combine the Theorem 5.1 and the Theorem 6.2 to get the +following Lemma: +Lemma 7.1. Let u be a nonnegative solution in a section S4t(x0), and α is +any positive number such that: +µ{x ∈ St(x0) : u(x) > α} ≥ λµ(St(x0)), +then +u(x) ≥ α +M0 +, ∀x ∈ St(x0), +Here M0 = M1L2. λ and M1 are given in the Theorem 5.1 and L is given +in the Theorem 6.2. + +26 +YULUN XU +Proof. We can define v = M1u +α . v still satisfies the linearized comlex Monge- +Ampere equation. 
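This uses only the linearity of the operator; as a one-line check, added for the reader's convenience and not part of the original text, with the constant $c = \frac{M_1}{\alpha}$:
\[
L_\varphi(cu) = \det(\varphi_{k\bar l})\,\varphi^{i\bar j}\,(cu)_{i\bar j}
= c\,\det(\varphi_{k\bar l})\,\varphi^{i\bar j}\,u_{i\bar j}
= c\,L_\varphi u = 0.
\]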
Using the Theorem 5.1, we have that +inf +x∈S t +2 (x0) v(x) ≥ 1. +Since 22 ≥ 2(1 + c(σ)). Using the Proposition 4.3, we have that +S2t(x0) ⊃ St(x0). +Using the Therorem 6.2, we have that: +inf +S2t(x0) v ≥ 1 +L inf +St(x0) v ≥ 1 +L2 +inf +S 1 +2 t(x0) v ≥ 1 +L2 +Then we have that: +inf +St(x0) u ≥ +inf +S2t(x0) u = α +M1 +inf +S2t(x0) v ≥ +α +M1L2 +□ +The Lemma 3.1 of [4] also holds in our case with t ≤ µ3 +0: +Lemma 7.2. Let u be a nonnegative solution of Lφu = 0 in the section +St(z) such that: +inf +St(z) u ≤ 1 +with t ≤ µ3 +0. Let θ be the constant in the Lemma 4.6. Then if y ∈ St(z) and +Sh(y) is a section with h < θt, and +µ{x ∈ Sh(y) : u(x) > α} ≥ λµ(Sh(y)) +then +h ≤ θt(M0L +α +) +1 +δ . +Here, 0 < λ is given in the Theorem 5.1. M0 is given by the Lemma 7.1. θ +is the constant in the Lemma 4.6, and L > 1 is the constant in the Theorem +6.2. +Proof. We can prove this Lemma following the proof of the Lemma 3.1 of +[4] word by word. +□ +The following Lemma comes from [6]. +Lemma 7.3. Let f : B0.8 → R be an L1 function. Then for m-a.e. x ∈ B0.8, +we have: +lim +sup +x∈Sµα(xα), µα→0 +1 +m(Sµα(xα)) +� +Sµα(xα) +|f(y) − f(x)|dm(y) = 0. +In particular, for any A ⊂ B0.8, we can take f = χA in the above formula +and get that: For a.e. x ∈ A, +lim +sup +x∈Sµα(xα), µα→0 +| +1 +m(Sµα(xα))m(A ∩ Sµα(xα)) − 1| = 0. + +INTERIOR H ¨OLDER ESTIMATE FOR THE LINEARIZED COMPLEX MONGE-AMPERE EQUATION +27 +Next we want to prove a decay estimate for the suplevel sets of u and +then get Lp estimate for u for some p > 0. +Theorem 7.4. There exists p > 0 such that for any z0 ∈ B 1 +2 we have that: +|u|Lp(B µ2 +0 +C +(z0)) ≤ C +inf +Sµ3 +0(z0) u. +Where B µ2 +0 +C +(z0) ⊂ Sµ3 +0(z0). +Proof. We will do the calculation in a set Sµ3 +0(z0). By multiplying u by a +constant which doesn’t affect the conclusion, We may assume that +inf +Sµ3 +0(z0) u ≤ 1. +Let B µ2 +0 +C +(z0) be a subset of Sµ3 +0(z0). This is ensured by the Lemma 4.8. We +define +Ek = {x : u(x) ≥ KMk}, +where K and M are constants to be determined. Define +Sk = B µ2 +0 +C (1− 1 +4−...− +1 +2k+1 )(z0) +for k ≥ 1 and S0 = B µ2 +0 +C +(z0). We want to prove that: +µ(Ek+1 ∩ Sk) ≤ δ1µ(Ek ∩ Sk−1) +for some 0 < δ1 < 1. Before we get the Calderon-Zygmund decomposition of +S1∩E2, we need to verify that for a.e. x ∈ A, the assumptions (1) and (2) in +the Theorem 4.1 hold. The assumption (1) holds because of the Lemma 7.3. +Suppose that the assumption (2) is not true. Then there exists t0 ∈ (µ4 +0, µ3 +0] +such that: +µ(St0(x) ∩ E2 ∩ S1) > λµ(St0(x)). +Then we can use the Lemma 7.1 to get that: +inf +x∈St0(x) u(x) ≥ KM2 +M0 +. +Since +inf +Sµ3 +0(z0) u ≤ 1 +and t0 ≤ µ3 +0, we can use the Lemma 7.2 to get that +t0 ≤ θµ3 +0( M0L +KM2 ) +1 +δ . +Let M be big such that t0 ≤ µ4 +0. This is a contradiction because we already +assume that t0 > µ4 +0. So the assumptions (1) and (2) in the Theorem 4.1 +hold. + +28 +YULUN XU +Then we can use the Theorem 4.1 to get a Calderon-Zygmund decompo- +sition of S1 ∩ E2 at the level +λ +1−Cσ +1 +2 : {St1 +i (x(1) +i )} (We assume that σ and ǫ2 +are small such that +λ +1−Cσ +1 +2 < 1). Moreover, we have that: +µ(St1 +i (x(1) +i )) ∩ S1 ∩ E2 +µ(St1 +i (x(1) +i )) +≥ (1 − Cσ +1 +2 ) +λ +1 − Cσ +1 +2 += λ. +Then we can use the Theorem 7.1 to get that: +u ≥ KM2 +M0 +≥ KM on St(1) +i (x(1) +i ). +Here we assume that M ≥ M0. This implies that +St(1) +i (x(1) +i ) ⊂ E1. +Using the Lemma 7.2, we can get that: +t(1) +i +≤ θµ3 +0( M0L +KM2 ) +1 +δ . 
+This implies that +St(1) +i (x(1) +i ) ⊂ B +[θµ3 +0( M0L +KM2 ) +1 +δ ] +1 +2 +logµ0 (1+Cσ +1 +2 )(x(1) +i ), +by the Lemma 4.8. We can let M be big enough such that: +[θµ3 +0( M0L +KM2 ) +1 +δ ] +1 +2 +logµ0(1+Cσ +1 +2 ) ≤ 1 +22 +µ2 +0 +C . +This implies that: +St(1) +i (x(1) +i ) ⊂ S0. +So we have proved that +St(1) +i (x(1) +i ) ⊂ S0 ∩ E1. +Using the Theorem 4.1, we can get that +µ(S1 ∩ E2) ≤ δ0µ(∪St(1) +i (x(1) +i )) ≤ δ0µ(S0 ∩ E1). +Suppose that we have proved that: +µ(Sk−1 ∩ Ek) ≤ δ0µ(Sk−2 ∩ Ek−1). +Follow the proof above we can get a Calder´on-Zygmund decomposition for +Sk ∩ Ek+1 at the level +λ +1−Cσ +1 +2 from the Theorem 4.1: {St(k) +i (x(k) +i +)}. We have +that: +(7.5) +µ(Sk ∩ Ek+1) ≤ δ0µ(∪St(k) +i (x(k) +i +)) + +INTERIOR H ¨OLDER ESTIMATE FOR THE LINEARIZED COMPLEX MONGE-AMPERE EQUATION +29 +Moreover, the Theorem 4.1 implies that +µ(St(k) +i (x(k) +i +) ∩ Ek+1) +µ(St(k) +i (x(k) +i +)) +≥ (1 − Cσ +1 +2 ) +λ +1 − Cσ +1 +2 += λ. +Then we can use the Lemma 7.1 to get that: +u ≥ KMk+1 +M0 +≥ KMk on St(k) +i (x(k) +i +). +This implies that +St(k) +i (x(k) +i +) ⊂ Ek. +Using the Theorem 7.2, we can get that: +t(k) +i +≤ θµ3 +0( M0L +KMk+1 ) +1 +δ . +This implies that +St(k) +i (x(k) +i +) ⊂ B +[θµ3 +0( +M0L +KMk+1 ) +1 +δ ] +1 +2 +logµ0 (1+Cσ +1 +2 )(x(k) +i +), +by using the diameter estimate. +We can let M be big enough which is +independent of k such that: +[θµ3 +0( M0L +KMk+1 ) +1 +δ ] +1 +2 +logµ0(1+Cσ +1 +2 ) ≤ +1 +2k+1 +µ2 +0 +C . +This implies that: +St(k) +i (x(k) +i +) ⊂ Sk−1. +So we have proved that +St(k) +i (x(k) +i +) ⊂ Sk−1 ∩ Ek. +So the Inequality 7.5 implies that: +µ(Sk ∩ Ek+1) ≤ δ0µ(Sk−1 ∩ Ek). +So we have that: +µ(B µ2 +0 +2C +(z0) ∩ Ek+1) ≤ µ(Sk ∩ Ek+1) ≤ δk +0µ(S0 ∩ E1). +So we have that: +� +B µ2 +0 +2C +(z0) +up = p +� ∞ +0 +sp−1Area({x : u(x) ≥ s} ∩ B µ2 +0 +2C +(z0))ds += p +� KM +0 +sp−1ds + Σ∞ +i=1 +� KMi+1 +KMi +sp−1Area({x : u(x) ≥ s} ∩ B µ0 +2C (z0))ds +≤ C + Σ∞ +i=1p +� KMi+1 +KMi +sp−1Area({u ≥ KMi} ∩ B µ2 +0 +2C +(z0))ds +≤ C + CΣ∞ +i=1(KMi+1)pδi−1 +0 +. + +30 +YULUN XU +Then we can take p be small such that Mpδ0 < 1. So the last line in the +above formula is finite. +□ +Next we need a Lemma which is proved in [2]: +Lemma 7.6. Suppose that u ≥ 0 satisfies in B1 ⊂ Rd: +∂i(aij∂ju) ≥ 0. +Here +1 +λ(x) ≤ aij(x) ≤ λ(x), with λ(x) ∈ Lp(B1) for some p > 3d +2 , then for +any ǫ5 > 0, there exists a constant C, depending on p, ||λ||Lp(B1) and ǫ5 +such that: +sup +B 1 +2 +u ≤ C||u||Lǫ5(B1). +Combining the Theorem 7.4 and the Lemma 7.6, we can prove the fol- +lowing Harnack inequality: +Theorem 7.7. There exists a constant C such that for any z0 ∈ B 1 +2, we +have that: +sup +B µ2 +0 +C +(z0) +u ≤ C +inf +B µ2 +0 +C +(z0) u +Proof. Using the main theorem of the w2,p paper, we can derive the W 2,p +estimate for φ and p can be made arbitrarily big if we let ǫ be small enough. +This implies that maxl λl(φi¯j) ∈ Lp(B0.8). Since 1 − ǫ ≤ det(φi¯j) ≤ 1 + ǫ, +we can have that +max +k +1 +λk(φi¯j) ≤ 2 max +l +λl(φi¯j)n−1 +Thus we can also get that maxk +1 +λk(φi¯j) ∈ Lp(B0.8). So we can use the Lemma +7.6 to get that: +sup +B µ2 +0 +2C +(z0) +u ≤ C||u||Lǫ5(B µ2 +0 +C +(z0)). +Using the Lemma 7.4, we have that: +||u||Lp0(B µ2 +0 +C +(z0)) ≤ C +inf +Sµ3 +0(z0) u ≤ C +inf +B µ2 +0 +2C +(z0) u +for some p0 > 0. Then we can let ǫ5 be small enough such that ǫ5 ≤ p0. +Then we have that: +sup +B µ2 +0 +2C +(z0) +u ≤ C||u||Lǫ5(B µ2 +0 +C +(z0)) ≤ C||u||Lp0(B µ2 +0 +C +(z0)) ≤ C +inf +B µ2 +0 +2C +(z0) u. 
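The middle inequality in the last display is Hölder's inequality on a ball of fixed volume; a minimal sketch of this standard step, assuming only the choice $\epsilon_5 \le p_0$ made above:
\[
\|u\|_{L^{\epsilon_5}(B)} \;\le\; |B|^{\frac{1}{\epsilon_5}-\frac{1}{p_0}}\,\|u\|_{L^{p_0}(B)},
\qquad B = B_{\frac{\mu_0^2}{C}}(z_0).
\]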
+□ +We should be able to prove the Theorem 1.5: + +INTERIOR H ¨OLDER ESTIMATE FOR THE LINEARIZED COMPLEX MONGE-AMPERE EQUATION +31 +Proof. (of the Theorem 1.5.) +We first normalize St(z0) to be close to a +ball. Define a coordinate w by z − z0 = +√ +tTt,z0w. Let hz0 be a degree two +pluriharmonic polynomial such that when we define +�φ(w) = φ(z0 + +√ +tTt,z0w) +t|det(Tt,z0)| +2 +n ++ hz0, +�u(w) = u(z0 + +√ +tTt,z0w) +we have that �φ = 0 on ∂St(z0) in the coordinate w. Using the Theorem 1.3 +in the W 2,p paper, we can get that +||�φ||W 2,p(B0.9) ≤ C, +|| +� +i +1 +�φi¯i +||Lp(B0.9) ≤ C, +for any big p if we let µ0 be small and ǫ be small. Then we can apply the +Theorem 7.7 to get that: +sup +B µ2 +0 +C +�u ≤ C inf +B µ2 +0 +C +�u +Let �Sµ(w0) be the sections defined in the coordinate w in the same way that +we define Sµ. From the Lemma 4.3, we have that: +1 +1 − c1(γ) +�S µ4 +0t +C0 +(0) ⊂ B µ2 +0 +C +, +if we let C0 be big enough and assume that c1(γ) ≤ 1 +2. Using the above +formula and the relation between u and �u, we have that: +sup +1 +1−c1(γ) �S µ4 +0t +C0 +(0) +u ≤ C +inf +1 +1−c1(γ) �S µ4 +0t +C0 +(0) +u. +Then use the Lemma 4.11 to get that: +S µ4 +0t +C0 +(z0) ⊂ +1 +1 − c1(γ) +�S µ4 +0t +C0 +(0). +So we have proved that: +sup +S µ4 +0t +C0 +(z0) +u ≤ C +inf +S µ4 +0t +C0 +(z0) u. +□ +Proof. (of the Corollary 1.6) Define τ = µ4 +0 +C0 . Define ¯u = u − inf u so that +¯u is nonnegative and we can apply the Harnack inequality to ¯u. +Define +Mt(z) = supSt(z) ¯u and mt(z) = infSt(z) ¯u. We can use the Corollary 1.5 +with u replaced by Mt(z) − ¯u and ¯u − mt(z) for any t ≤ µ5 +0 +C0 to get that: +Mt(z) − mτt(z) ≤ β(Mt(z) − Mτt(z)) +Mτt(z) − mt(z) ≤ β(mτt(z) − mt(z)). + +32 +YULUN XU +Add these two inequalities together, we have that: +Mτt(z) − mτt(z) ≤ β − 1 +β + 1(Mt(z) − mt(z)). +This implies that +oscS τkµ5 +0 +C0 +(z)¯u ≤ (β − 1 +β + 1)koscS µ5 +0 +C0 +(z)¯u ≤ 2(β − 1 +β + 1)k|¯u|L∞. +Using the Lemma 4.8, we have that +B +1 +C ( +τkµ5 +0 +C0 ) +1 +2 +logµ0 (1−Cσ +1 +2 )(z) ⊂ S τkµ5 +0 +C0 +(z). +So we have that: +oscB +C( +τkµ5 +0 +C0 +)1+logµ0 (1−Cσ +1 +2 ) +(z)¯u ≤ 2(β − 1 +β + 1)k|¯u|L∞ ≤ 4(β − 1 +β + 1)k|u|L∞. +Since β−1 +β+1 < 1, this implies that: +oscBr(z)u = oscBr(z)¯u ≤ Crα, +which concludes the proof of the lemma. +□ +8. Some corollaries of the main theorem +Proof. (of the Corollary 1.8) +Now we take some point p0 ∈ M and take normal coordinates (z1, · · · , zn) +at p0 so that gi¯j(p0) = δij and ∇g(p0) = 0. We can choose local potential +ρ(z), such that ω0 = √−1∂ ¯∂ρ near p0, say on B1(p0) (under local coordi- +nates z). So that on this neighborhood, the equation can be written as: +(8.1) +det((ρ + φ)i¯j) = f det(gi¯j), +in B1. +In order to use Theorem 1.7, we need to zoom in (8.1) at p0 at a suitable +scale so that the right hand side is close to a contant. +Let 0 < r0 < 1, we perform a change of variable z = r0w. Next we define +˜ur0(w) = 1 +r2 +0 +u(r0w), ˜ρr0 = 1 +r2 +0 +ρ(r0w), ˜φr0(w) = 1 +r2 +0 +φ(r0w). +So we have that : +L˜ρr0+˜φr0(˜ur0 − ˜φr0) = 0. +From the proof of the Corollary 1.1 in the W 2,p paper, we have that if we +let r0 be small enough depending on ω0 and ǫ be small, The assumptions of +the Theorem 1.7 hold with φ replaced by ˜ρr0 + ˜φr0 and with u replaced by +˜ur0 − ˜φr0. Then we can use the Theorem 1.7 to get that: +||˜ur0 − ˜φr0||Cα(B 1 +2 (0)) ≤ C. +This implies that: +||˜ur0||Cα(B 1 +2 (0)) ≤ C. 
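To pass from $\tilde u_{r_0}$ back to $u$ one only has to undo the scaling $z = r_0 w$; a minimal sketch of this standard step (the constants here are illustrative, not the paper's): since $u(z) = r_0^2\,\tilde u_{r_0}(z/r_0)$, for $z_1, z_2$ in the coordinate ball of radius $\frac{r_0}{2}$ around $p_0$,
\[
|u(z_1)-u(z_2)| = r_0^2\,\big|\tilde u_{r_0}(z_1/r_0)-\tilde u_{r_0}(z_2/r_0)\big|
\le r_0^{2-\alpha}\,[\tilde u_{r_0}]_{C^{\alpha}(B_{1/2}(0))}\,|z_1-z_2|^{\alpha},
\]
so $\|u\|_{C^{\alpha}}$ on that ball is bounded by a constant depending only on $r_0$, and $r_0$ depends only on $\omega_0$.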
Using an elementary covering argument, we have that
\[
\|u\|_{C^{\alpha}(M)} \le C.
\]
□

9. Bibliography

References

[1] E. Bedford and B. A. Taylor. The Dirichlet problem for a complex Monge-Ampère equation. Invent. Math. 37 (1976), 1-44.
[2] X. Chen and J. Cheng. On the constant scalar curvature Kähler metrics (I): A priori estimates. Journal of the American Mathematical Society 34.4 (2021), 909-936.
[3] L. Caffarelli and C. Gutiérrez. Real analysis related to the Monge-Ampère equation. Transactions of the American Mathematical Society 348.3 (1996), 1075-1092.
[4] L. Caffarelli and C. Gutiérrez. Properties of the solutions of the linearized Monge-Ampère equation. American Journal of Mathematics 119.2 (1997), 423-465.
[5] X. Chen, H. Huang, and L. Sheng. The interior regularity of the Calabi flow on a toric surface. Calculus of Variations and Partial Differential Equations 55.4 (2016), 1-28.
[6] J. Cheng and Y. Xu. Interior W2,p estimate for small perturbations to the complex Monge-Ampère equation. arXiv preprint arXiv:2301.00940 (2022).
[7] C. Gutiérrez and T. Nguyen. Interior second derivative estimates for solutions to the linearized Monge-Ampère equation. Transactions of the American Mathematical Society 367.7 (2015), 4537-4568.
[8] C. Gutiérrez and T. Nguyen. Interior gradient estimates for solutions to the linearized Monge-Ampère equation. Advances in Mathematics 228.4 (2011), 2034-2070.
[9] D. Gilbarg and N. S. Trudinger. Elliptic Partial Differential Equations of Second Order. Vol. 224. Berlin: Springer, 1977.
[10] Q. Huang. Sharp regularity results on second derivatives of solutions to the Monge-Ampère equation with VMO type data. Communications on Pure and Applied Mathematics 62.5 (2009), 677-705.
[11] N. Le. Boundary Harnack inequality for the linearized Monge-Ampère equations and applications. Transactions of the American Mathematical Society 369.9 (2017), 6583-6611.
[12] N. Q. Le and O. Savin. Boundary regularity for solutions to the linearized Monge-Ampère equations. Archive for Rational Mechanics and Analysis 210.3 (2013), 813-836.
[13] L. Tang and Q. Zhang. Global W2,p regularity on the linearized Monge-Ampère equation with VMO type coefficients. Results in Mathematics 77.2 (2022).
[14] L. Tang and Q. Zhang. Interior C1,α regularity for the linearized Monge-Ampère equation with VMO type coefficients. Advances in Operator Theory 5.1 (2020), 204-218.
[15] B. Zhou. Variational solutions to extremal metrics on toric surfaces. Mathematische Zeitschrift 283.3 (2016), 1011-1031.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='CV] 4 Jan 2023 INTERIOR H¨OLDER ESTIMATE FOR THE LINEARIZED COMPLEX MONGE-AMPERE EQUATION YULUN XU Abstract.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let w0 be a bounded, C3, strictly plurisubharmonic func- tion defined on B1 ⊂ Cn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Then w0 has a neighborhood in L∞(B1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Suppose that we have a function φ in this neighborhood with 1 − ε ≤ MA(u) ≤ 1+ ε and there exists a function u solving the linearized com- plex Monge-Ampere equation: det(φk¯l)φi¯jui¯j = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Then one has an estimate on |u|Cα(B 1 2 ) for some α > 0 depending on n, as long as ǫ is small depending on n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' This partially generalizes Caffarelli’s estimate for linearized real Monge-Ampere equation to the complex version.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' introduction Monge-Ampere equations are second-order partial differential equations whose leading term is the determinant of the Hessian of a real unknown function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' The Hessian is required to be positive or at least nonnegative, so that the equations are elliptic or degenerate elliptic.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Monge-Ampere equations can be divided into real or complex, depending on whether one is considering real Hessian or complex Hessian.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' In the real case, the Hessian is φij, so that the positivity of the Hessian is a convexity condition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' In the complex case, the Hessian is φi¯j, and its positivity is a plurisubharmonicity condition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let φ be a convex solution to a real Monge-Ampere equation: (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='1) detD2φ = g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Definition 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let E ⊂ Cn be a set and x0 ∈ E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' We will sometimes denote E to be E(x0) to indicate it is a “pointed set”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let c > 0, we define: cE(x0) = {x0 + c(y − x0) : y ∈ E(x0)}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Namely cE(x0) is the image of the dilation map centered at x0 by a factor c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Definition 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let µ be the Monge-Ampere measure of φ, (In the case of φ ∈ C2, µ(A) = � A det(D2φ) for any set A) We say µ satisfies the doubling property if there exist constants C > 0 and 0 < α < 1 such that: µ(St(x)) ≤ Cµ(αSt(x)), Date: December 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' 1 2 YULUN XU for any section St(x) = {y ∈ Rn : φ(y) < l(y) + t}, where l is a supporting hyperplane of φ at x.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Note that if we have that λ < g < Λ for some positive constants λ and Λ, then the doubling property holds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Next we consider the following linearized Monge-Ampere equation: Lφ = det(D2φ)φijuij = f.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' When we take first derivatives of φ, we can see that φj = Djφ, j = 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=', n satisfy the linearized Monge-Ampere equation: Lφ(φj) = gj.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Since φ is convex, the linearized Monge-Ampere equation is elliptic.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' How- ever, the linearized Monge-Ampere equation is not uniformly elliptic unless we have the estimate for the second derivatives of φ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' The standard H¨older estimates for the solutions to linear second order elliptic equations usually require the uniform ellipticity.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' However, Caffarelli prove the H¨older estimate for the solutions to the linearized Monge-Ampere equations under a weak condition on g which doesn’t imply the uniform ellipticity of the linearized Monge-Ampere equation, see[4]: Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Assume that the Monge-Ampere measure µ satisfies the dou- bling property.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let u be a nonnegative solution to the equation: Lφu = 0 in a section SR(x0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Then there exist constants C0 > 0 and α > 0 depending on n and |u|∞ and the constants in the doubling property such that: ||u||Cα(S R 2 (x0)) ≤ C0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' The boundary Harnack inequality for the linearized real Monge-Ampere equation is derived in [11].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' There are estimates for the high order derivatives of the solutions to the linearized real Monge-Ampere equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' When g is continuous, the C1,α estimate is derived in [8] and the W 2,p estimate is derived in [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' The boundary H¨older gradient estimates is derived in [12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' If g is not continuous but belongs to some VMO-type space, the interior W 2,p estimate is derived in [10] while the global W 2,p estimate is derived in [13].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' The C1,α estimate is derived in [14].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' There are some applications of the Real linearized Monge-Ampere equa- tion to the complex geometry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' It can be used to prove the interior regularity of the Calabi flow on a toric surface, see [5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' It can also be applied to the extremal metrics on toric surfaces, see [15].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' However, as far as I am con- cerned, the theory of the real linearized Monge-Ampere equation can only be applied to the toric case where a complex Monge-Ampere equation can be reduced to a real Monge-Ampere equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Besides, the complex lin- earized Monge-Ampere equation appears in the complex geometry such as INTERIOR H ¨OLDER ESTIMATE FOR THE LINEARIZED COMPLEX MONGE-AMPERE EQUATION3 the study of the csck problem [2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' So a natural question is: how to adapt the method for the real linearized Monge-Ampere equation to the complex linerized Monge-Ampere equation directly?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Thanks to [6], we can give a partial answer to this question: Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let Ω ⊂ Cn be a bounded domain with B1−γ0 ⊂ Ω ⊂ B1+γ0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let φ ∈ C2(Ω) ∩ PSH(Ω) ∩ C(¯Ω) be such that 1 − ε ≤ det φi¯j ≤ 1 + ε in Ω and φ = 0 on ∂Ω.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Suppose that γ0 and ǫ are small constants depending on n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let St(z0) be defined in [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Then there exist constants β > 1, µ0 and C0 depending on n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Suppose that u ∈ C2(St(z0)) is a nonnegative solution to Lφu = 0 on St(z0) with t ≤ µ5 0 C0 and z0 ∈ B 1 2 (0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Then we have that: sup St(z0) u ≤ β inf St(z0) u.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Corollary 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let Ω ⊂ Cn be a bounded domain with B1−γ0 ⊂ Ω ⊂ B1+γ0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let φ ∈ C2(Ω) ∩ PSH(Ω) ∩ C(¯Ω) be such that 1 − ε ≤ det φi¯j ≤ 1 + ε in Ω and φ = 0 on ∂Ω.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Suppose that γ0 and ǫ are small constants depending on n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let St(z0) be defined in [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Suppose that u ∈ C2(Ω) is a solution to Lφu = 0 on Ω.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Then we have that: ||u||Cα(B 1 2 ) ≤ C, Here α > 0 is a constant depending on n and C is a constant depending on n and |u|L∞(Ω).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' More generally, we have that: Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let w0 be a smooth function in the unit ball such that for some C0 > 1: 1 C0 I ≤ (w0)zi¯zj ≤ C0I, |D3w0| ≤ C0 in B1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Then there exists δ0 > 0 small enough, depending only on C0 and n, such that for all φ ∈ C2(B1)∩PSH(B1)∩C(B1) with |φ−w0| ≤ δ0 on B1, solving 1 − ε ≤ MA(φ) ≤ 1 + ε, and for any solution u ∈ C2(B1) solving Lφu = det(φk¯l)φi¯jui¯j = 0, we have that: ||u||Cα(B 1 2 ) ≤ C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Here α > 0 is a constant depending on n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Here C is a constant depending on C0, |u|L∞(B1) and n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' ε is small enough depending only on n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' In the above, MA(φ) is the complex Monge-Ampere operator defined for continuous plurisubharmonic functions, in the Bedford-Taylor sense (see [1]), so that MA(φ) = det φi¯j when φ ∈ C2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' From now on, we use Lφ for the complex linearized Monge-Ampere equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' For the manifold setting, we have the following Corollary: 4 YULUN XU Corollary 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let (M, ω0) be a compact K¨ahler manifold.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let φ ∈ C2(M)∩ PSH(M, ω0) be the solution to: (ω0 + √ −1∂ ¯∂φ)n = fωn 0 , ω0 + √ −1∂ ¯∂φ > 0, where |f − 1| < ε and � M fωn 0 = � M ωn 0 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let u ∈ C2(M) be the solution to the equation: ∆φu = gi¯j φ ui¯j = n − trgφg Suppose that ǫ is small enough depending on n, ω0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Then we have that ||u||Cα ≤ C, Here α > 0 is a constant depending on n and C is a constant depending on n, ω0 and ||u||L∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' In the section 3, we reduce the Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='7 to the Corollary 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Then we prove the Corollary 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='6 starting by proving a version of Calderon-Zygmund decomposition in the section 4(Theorem 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Then we prove that the level sets of solutions have uniform critical density in the section 5(Theorem 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Then we prove that solutions that are large on a section are uniformly large on a bigger section in the section 6(Theorem 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' In the section 7, we first prove the power decay of the distribution function of solutions and then prove the Harnack inequality (Theorem 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='7) and the H¨older estimate of the solutions (Corollary 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='6).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' In the section 8, we prove some corollaries of the main theorem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' preliminary We want to show that the equations in the main theorem is invariant un- der affine transformations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let z be the original coordinate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' For any affine transformation T and any positive constant λ, we can define a new coordi- nate w by z = √ λTw.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let h be a degree two pluriharmonic polynomial.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Then we can normalize φ and u by: �φ(w) = φ( √ λTw) λ|det(T)| 2 n + h �u(w) = u( √ λTw).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Then by calculation, we have that: L�φ�u(w) = λ|det(T)| 2 n Lφu(z).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' So if Lφu = 0, we can get that L�φ�u = 0 Recall that We denote the complex Monge-Ampere measure as µ = MA(φ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' We denote the Lebesgue measure as m.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' INTERIOR H ¨OLDER ESTIMATE FOR THE LINEARIZED COMPLEX MONGE-AMPERE EQUATION5 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Reduction of Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='7 to Corollary 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='6 We first need the following lemmas from [6]: Lemma 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let w0 be as stated in Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Denote ax0,ij = (w0)i¯j(x0) and hx0 = Re(Σi2(w0)izi) + Re(Σi,j(w0)ijzizj).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Namely we assume that w0 ∈ C3(B1), and 1 C0 I ≤ (w0)i¯j ≤ C0I, |D3w0| ≤ C0 on B0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='99.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let δ ≥ 0 and φ0 be a function on B1 with |φ0 − w0| ≤ δ on B0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Then there exists C1 > 0 large enough and µ0 > 0 small enough depending only on C0, such that for all µ with 4C1δ ≤ µ ≤ µ0, we have: (1−C1γ)Eµ(x0) ⊂ {z ∈ B 1 2C2 0 (x0) : (φ0−hx0)(z) ≤ φ0(x0)+µ} ⊂ (1+C1γ)Eµ(x0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Moreover, (φ0 − hx0)(z) = φ0(x0) + µ on ∂{z ∈ B 1 2C2 0 (x0) : (φ0 − hx0)(z) ≤ φ0(x0) + µ}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Here γ = δ µ +µ 1 2 and Eµ(x0) = {z ∈ Cn : �n i,j=1 ax0,ij(z−x0)i(z − x0)j ≤ µ}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let φ and w0 be as stated in Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let µ > 0 and x0 ∈ B0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Let Tµ,x0 be a C-linear transformation such that Tµ,x00 = 0 and x0 + Tµ,x0(B√µ(0)) = Eµ(x0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Define (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='2) φµ,x0(ζ) = 1 µ| det Tµ,x0| 2 n (φ − hx0 − µ)(x0 + Tµ,x0(√µζ)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Since Eµ(x0) is defined in terms of ax0,ij, with 1 C0 ≤ ax0,ij = (w0)i¯j(x0) ≤ C0I, it is easy to see that: ||Tµ,x0|| ≤ C2, ||T −1 µ,x0|| ≤ C2, 1 C2 ≤ | det Tµ,x0|2 ≤ C2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Here C2 is a large enough constant depending only on C0 and n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Define Ωµ = T −1 µ,x0({z ∈ B 1 2C2 0 : (φ − hx0)(z) ≤ φ(x0) + µ} − x0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content=' Then by straightforward calculation and Lemma 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='1, we can see the following: Lemma 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdAzT4oBgHgl3EQf0_6m/content/2301.01793v1.pdf'} +page_content='3.' 
Lemma 3.3. There is µ0 > 0 small enough depending only on C0 such that for all 4C1δ0 ≤ µ ≤ µ0 (with C1 > 0 being the constant given by Lemma 3.1), we have:

(1) B_{1−C1γ} ⊂ Ω_µ ⊂ B_{1+C1γ}, with γ = δ0/µ + µ^{1/2}.
(2) det(φ_{µ,x0})_{ζ_i ζ̄_j} = f(x0 + T_{µ,x0}(√µ ζ)) in Ω_µ, and u_{µ,x0} = 0 on ∂Ω_µ.

The renormalized function u_{µ,x0} fits in the assumptions for Theorem 1.6 after suitably choosing the parameters, and Theorem 1.7 follows as a direct consequence:

Corollary 3.4. Theorem 1.7 holds, if we assume Corollary 1.6.

Proof. We wish to apply Corollary 1.6 to each u_{µ,x0}. In order to do so, we just need:

C1γ = C1(δ0/µ + µ^{1/2}) ≤ γ0(n),   |f(x0 + T_{µ,x0}(√µ ζ)) − 1| ≤ ε(n).

Here γ0(n) and ε(n) are the constants given by Theorem 1.6. So we could just take µ so that 2C1µ^{1/2} ≤ (1/2)γ0(n) and also µ ≤ µ0 (given by Lemma 3.3). With this µ, we can take δ0 so that C1 δ0/µ ≤ (1/2)γ0(n) and also that 4C1δ0 ≤ µ.
We fix this choice from now on. Since we assumed that Corollary 1.6 holds, we conclude that:

||u_{µ,x0}||_{C^α(B_{1/2})} ≤ C,

where C is a constant depending only on n and |u|_{L^∞(B_1)}. Then using (3.2) we may go back to u and obtain that

||u||_{C^α(E_{µ/2}(x0))} ≤ C′,

for any x0 ∈ B_{1/2}(0), where C′ is a constant depending only on n, C0 (defined in the statement of Theorem 1.7) and |u|_{L^∞(B_1)}. Since there exists a constant C1 depending on C0 such that B_{√µ/C} ⊂ E_{µ/2}(x0) ⊂ B_{C√µ}(x0), we can use an elementary covering argument to get that:

||u||_{C^α(B_{1/2}(0))} ≤ C′′.

C′′ is a constant depending only on n, C0 and |u|_{L^∞(B_1)}. □

4. Calderon-Zygmund decomposition

From now on we focus on Corollary 1.6. In this section we want to prove the following theorem, which is a version of the Calderon-Zygmund decomposition using the sections S_t(x) defined in [6].

Theorem 4.1. Let 0 < σ < 1 and 0 < δ < 1 be given. Let µ0 (this is the constant we use to define sections in the W^{2,p} paper) and ǫ (this is the same constant as in Theorem 1.6) be small depending on σ, δ and n.
Let A be a bounded subset of B_{1/2}(0). Suppose that for a.e. x ∈ A:

(1) lim_{t→0} µ(S_t(x) ∩ A) / µ(S_t(x)) = 1.
(2) µ(S_t(x) ∩ A) ≤ δ µ(S_t(x)) for any µ0^4 < t ≤ µ0^3.

Then for such x, we can define t_x = sup{t ≤ µ0^4 : µ(S_t(x) ∩ A) ≥ δ µ(S_t(x))}. Then there exists a countable family of sections {S_k = S_{t_k}(x_k)}, where x_k ∈ A and t_k ≤ µ0^3, with the following properties:

(a) (1 − Cσ^{1/2}) δ ≤ µ(S_k ∩ A)/µ(S_k) ≤ δ.
(b) For a.e. x ∈ A, x ∈ ∪_k S_k.
(c) µ(A) ≤ δ0 µ(∪_1^∞ S_k), where δ0 = δ0(δ) < 1.

Remark 4.2. As can be seen from the proof, t_k may not be equal to t_{x_k}.

First we need the following proposition from the [6] paper.

Proposition 4.3. Let Ω and u be as stated in Corollary 1.6, with γ0 small enough depending only on n. Let 0 < σ < 1 be given.
Then there exists ε > 0 depending only on σ and n, such that if |f − 1| ≤ ε, the following hold:

(1) There exists µ0 > 0 small enough depending only on n and σ, such that for all x0 ∈ B_{0.8} and all µ ≤ µ0, there exists a degree 2 pluriharmonic polynomial h_{µ,x0}(z) with h_{µ,x0}(x0) = 0, such that

(1 − 0.1σ) E_µ(x0) ⊂ S_µ(x0) := {z ∈ Ω : (u − h_{µ,x0})(z) ≤ u(x0) + µ} ⊂ (1 + 0.1σ) E_µ(x0).

In the above, E_µ(x0) = {z ∈ C^n : Σ_{i,j=1}^n a_{µ,x0,ij} (z − x0)_i (z − x0)_j ≤ µ}, with (a_{µ,x0,ij}) being positive Hermitian and det(a_{µ,x0,ij}) = 1.

(2) There is a function c(σ) : σ ∈ (0, 1) → R_{>0}, such that for any x0 ∈ B_{0.8} and any 0 < µ1 ≤ µ2 ≤ µ0/(1 + c(σ)), one has S_{µ1}(x0) ⊂ S_{(1+c(σ))µ2}(x0). Moreover, 0 < c(σ) ≤ C_{2,n} σ^{1/2} for some dimensional constant C_{2,n}.

(3) There is a dimensional constant C_{3,n} > 0 such that for all 0 < µ ≤ µ0 and any x0 ∈ B_{0.8}, there exists a C-linear transformation T_{µ,x0}, such that |det T_{µ,x0}| = 1, T_{µ,x0} 0 = 0, x0 + T_{µ,x0}(B_{√µ}(0)) = E_µ(x0). Moreover, for any 0 < µ1 < µ2 ≤ µ0 and any x0 ∈ B_{0.8}:

||T_{µ1,x0} ◦ T_{µ2,x0}^{-1}|| ≤ C_{3,n} (µ2/µ1)^{C_{3,n} σ^{1/2} / (−log(0.1σ))},   ||T_{µ2,x0} ◦ T_{µ1,x0}^{-1}|| ≤ C_{3,n} (µ2/µ1)^{C_{3,n} σ^{1/2} / (−log(0.1σ))}.

The following conclusion is proved in the "induction hypothesis" part of the [6] paper.
Lemma 4.4. For any µ0^{k+1} < t ≤ µ0^k, we can write T_{µ,x0} = T_{x0,k} = T̃_{x0,1} ◦ T̃_{x0,2} ◦ ... ◦ T̃_{x0,k}, where T_{µ,x0} is used in the statement of Proposition 4.3. We have that

|T̃_{x0,1}| ≤ C,   |T̃_{x0,1}^{-1}| ≤ C,   |T̃_{x0,k} − I| ≤ Cσ^{1/2} for k ≥ 2.

We need the following engulfing property of the sections, which is proved in the [6] paper.

Proposition 4.5. Assume that x1, x2 ∈ B_{0.8}, 0 < µ1, µ2 ≤ µ0 and µ1 ≤ 4µ2. Let σ > 0 be small enough (depending only on dimension). Assume also that S_{µ1}(x1) ∩ S_{µ2}(x2) ≠ ∅. Then S_{µ1}(x1) ⊂ 10 S_{µ2}(x2).

There is another version of the engulfing property:

Lemma 4.6. Let σ be small and ǫ be small. There exists a constant θ > 0 such that if S_t(z) is a section with y ∈ S_t(z), then S_t(z) ⊂ S_{θt}(y) for t ≤ µ0/θ.

Proof. Using Proposition 4.3 we can get that

(1 − σ)√t E1 ⊂ S_t(y) ⊂ (1 + σ)√t E1,
(1 − σ)√(t2) E2 ⊂ S_{t2}(y) ⊂ (1 + σ)√(t2) E2,   (4.7)
where E1 and E2 are ellipsoids centered at z. Let k1 and k2 be integers such that:

µ0^{k1+1} < t ≤ µ0^{k1},   µ0^{k2+1} < t2 ≤ µ0^{k2}.

If t and t2 are in the same generation or adjacent generations, i.e. k1 − 1 ≤ k2 ≤ k1 + 1, then by Lemma 4.4 we have that

|T_{y,k2} ◦ T_{y,k1}^{-1} − I| ≤ Cσ^{1/2},   |T_{y,k1} ◦ T_{y,k2}^{-1} − I| ≤ Cσ^{1/2}.

Then we can define T = T_{y,k1} ◦ T_{y,k2}^{-1} such that |T − I| ≤ Cσ^{1/2} and T E2 = E1. So we have that:

10 S_t(y) ⊂ 10√t (1 + σ) E1 ⊂ 10√t (1 + σ)(1 + Cσ^{1/2}) E2 = (1 − σ) √(100 (1 + σ)^2 (1 + Cσ^{1/2})^2 t / (1 − σ)^2) E2 ⊂ S_{100 (1+σ)^2 (1+Cσ^{1/2})^2 t / (1−σ)^2}(y).

Then we use Proposition 4.5 to get that S_t(z) ⊂ 10 S_t(y). So we have that:

S_t(z) ⊂ S_{100 (1+σ)^2 (1+Cσ^{1/2})^2 t / (1−σ)^2}(y).

We can assume that σ ≤ 1/2 and let µ0 be small such that t and 100 (1 + σ)^2 (1 + Cσ^{1/2})^2 t / (1 − σ)^2 are in the same generation or adjacent generations. In conclusion we can take θ = 100 (1 + σ)^2 (1 + Cσ^{1/2})^2 / (1 − σ)^2. Then we finish the proof of the lemma. □

We also need the following lemma from [6], estimating the shape of the sections in the original coordinate. In particular, we can get an estimate of the diameter of the sections.
Lemma 4.8. Suppose that µ0 is small and ǫ is small. We have that:

B_{(1/C) µ^{1/2 + log_{µ0}(1−Cσ^{1/2})}}(x0) ⊂ S_µ(x0) ⊂ B_{C µ^{1/2 + log_{µ0}(1+Cσ^{1/2})}}(x0),

for any µ ≤ µ0^2.

Proof. We can assume that µ ≤ µ0^2. For any µ, there exists an integer k such that µ0^{k+1} < µ ≤ µ0^k. As in the proof of Proposition 4.3 and Lemma 4.4 we have that

x0 + (1 − σ) T̃_{x0,1} ◦ T̃_{x0,2} ◦ ... ◦ T̃_{x0,k}(B_{√µ}(0)) ⊂ S_µ(x0) ⊂ x0 + (1 + σ) T̃_{x0,1} ◦ T̃_{x0,2} ◦ ... ◦ T̃_{x0,k}(B_{√µ}(0)).   (4.9)

Using Lemma 4.4, we have that |T̃_{x0,1}| ≤ C, |T̃_{x0,1}^{-1}| ≤ C, |T̃_{x0,k} − I| ≤ Cσ^{1/2} for k ≥ 2. So Formula (4.9) becomes:

B_{(1/C) µ^{1/2 + log_{µ0}(1−Cσ^{1/2})}}(x0) ⊂ (1/C)(1 − σ)(1 − Cσ^{1/2})^{k−2} B_{√µ}(x0) ⊂ S_µ(x0) ⊂ C (1 + σ)(1 + Cσ^{1/2})^{k−2} B_{√µ}(x0) ⊂ B_{C µ^{1/2 + log_{µ0}(1+Cσ^{1/2})}}(x0).

This concludes the proof of the lemma.
□

The following lemma is a characterization of sections. The lemma follows directly from the construction of the sections.

Lemma 4.10. For any x0 ∈ B_{0.8} and µ ≤ µ0, there exists a degree 2 pluriharmonic polynomial h_{x0,µ} such that h_{x0,µ}(x0) = 0 and S_µ(x0) = {x : φ(x) ≤ h_{x0,µ}(x) + φ(x0) + µ}.

The following is the key lemma in [6]. It basically means that two sections of the same height of two functions which differ by a plurisubharmonic function are close to each other.

Lemma 4.11. Let φ be a function defined on an open set U ⊂ C^n and let h(z) be a pluriharmonic function on U. Let 0 < µ < 1 and σ > 0 be such that:

(1 − γ) Ẽ_{√σ} ⊂ {φ ≤ σ} ⊂ (1 + γ) Ẽ_{√σ} ⊂ U,
(1 − γ) E_{√σ} ⊂ {φ ≤ h + σ} ⊂ (1 + γ) E_{√σ} ⊂ U.

In the above, Ẽ_{√σ} = {z ∈ C^n : Σ_{i,j=1}^n ã_{i j̄} z_i z̄_j ≤ σ} and E_{√σ} = {z ∈ C^n : Σ_{i,j=1}^n a_{i j̄} z_i z̄_j ≤ σ}. Then there exists c1(γ), which is universal (depending only on dimension, and which can be explicitly calculated) with c1(γ) → 0 as γ → 0, such that

(1 − c1(γ)) Ẽ_{√σ} ⊂ E_{√σ} ⊂ (1 + c1(γ)) Ẽ_{√σ}.

Now we want to prove the following lemma, which implies that if two sections have nonempty intersection, then they are comparable to each other:

Lemma 4.12. Let S_{t0}(x0) and S_t(x) be two sections such that t ≤ t0 ≤ µ0/2 and S_{t0}(x0) ∩ S_t(x) ≠ ∅.
Let T_{t0,x0} be the affine transformation defined in the W^{2,p} paper that normalizes S_{t0}(x0). Then we have that

(1/C) B_{(t/t0)^{1/2+ǫ1}}((1/√t0) T_{t0,x0}^{-1} x) ⊂ (1/√t0) T_{t0,x0}^{-1} S_t(x) ⊂ C B_{(t/t0)^{1/2−ǫ1}}((1/√t0) T_{t0,x0}^{-1} x).

Here ǫ1 is a positive constant that can be made arbitrarily small if we let µ0 be small enough. The constant C depends on n.

Proof. We can use part (3) of Proposition 4.3 to get that S_{t0}(x0) ⊂ S_{2t0}(x0) and S_t(x) ⊂ S_{2t0}(x). Since S_{t0}(x0) ∩ S_t(x) ≠ ∅, we have that S_{2t0}(x0) ∩ S_{2t0}(x) ≠ ∅. By Proposition 4.5 we have that:

S_{2t0}(x0) ⊂ 10 S_{2t0}(x),   S_{2t0}(x) ⊂ 10 S_{2t0}(x0).

So we have that:

(1 − σ)(x0 + T_{2t0,x0} B_{√(2t0)}(0)) ⊂ (1 − σ) E_{2t0}(x0) ⊂ S_{2t0}(x0) ⊂ 10 S_{2t0}(x) ⊂ 10(1 + σ) E_{2t0}(x) = 10(2 + σ)(x + T_{2t0,x} B_{√(2t0)}(0)),
(1 − σ)(x + T_{2t0,x} B_{√(2t0)}(0)) ⊂ (1 − σ) E_{2t0}(x) ⊂ S_{2t0}(x) ⊂ 10 S_{2t0}(x0) ⊂ 10(1 + σ) E_{2t0}(x0) = 10(2 + σ)(x0 + T_{2t0,x0} B_{√(2t0)}(0)).

So T_{2t0,x0} and T_{2t0,x} are bounded from each other, i.e.

(4.13)   |T_{2t0,x0}^{-1} ◦ T_{2t0,x}| ≤ C,   |T_{2t0,x}^{-1} ◦ T_{2t0,x0}| ≤ C.
By Lemma 4.4, T_{2t0,x} and T_{t0,x} differ by a linear transformation which is Cσ^{1/2}-close to Id. So we have that T_{2t0,x} and T_{t0,x} are bounded from each other. Similarly, T_{2t0,x0} and T_{t0,x0} are bounded from each other. In conclusion, T_{t0,x} and T_{t0,x0} are bounded from each other. We consider the following two cases:

(1) t ≤ 2t0 µ0^2. In a coordinate w where S_{2t0}(x) is close to a ball, we can define S̃_t(x) just like how we define S_t(x). In the same coordinate, using Lemma 4.8, we can get that:

B_{(1/C)(t/(2t0))^{1/2 + log_{µ0}(1−Cσ^{1/2})}}(0) ⊂ S̃_t(x) ⊂ B_{C(t/(2t0))^{1/2 + log_{µ0}(1+Cσ^{1/2})}}(0).

Here we use t/(2t0) because in the coordinate w, S_{2t0}(x) is close to the unit ball, so the height of the section S̃_t(x) is scaled to t/(2t0) accordingly. By Lemma 4.11, S_t(x) is comparable to S̃_t(x), i.e. (1/C) S_t(x) ⊂ S̃_t(x) ⊂ C S_t(x). So we have that

B_{(1/C)(t/(2t0))^{1/2 + log_{µ0}(1−Cσ^{1/2})}}(0) ⊂ S_t(x) ⊂ B_{C(t/(2t0))^{1/2 + log_{µ0}(1+Cσ^{1/2})}}(0).

If we go back to the original coordinate, we have that:

B_{(1/C)(t/(2t0))^{1/2 + log_{µ0}(1−Cσ^{1/2})}}(0) ⊂ (1/√(2t0)) T_{2t0,x}^{-1}(S_t(x) − x) ⊂ B_{C(t/(2t0))^{1/2 + log_{µ0}(1+Cσ^{1/2})}}(0).

We already proved that T_{2t0,x} and T_{2t0,x0} are bounded from each other.
B_{(1/C)(t/(2t0))^{1/2 + log_{µ0}(1−Cσ^{1/2})}}(0) ⊂ (1/√(2t0)) T_{2t0,x0}^{-1}(S_t(x) − x) ⊂ B_{C(t/(2t0))^{1/2 + log_{µ0}(1+Cσ^{1/2})}}(0).

So we can take ǫ1 = max{−log_{µ0}(1 + Cσ^{1/2}), log_{µ0}(1 − Cσ^{1/2})} to get that:

B_{(1/C)(t/(2t0))^{1/2+ǫ1}}(0) ⊂ (1/√(2t0)) T_{2t0,x0}^{-1}(S_t(x) − x) ⊂ B_{C(t/(2t0))^{1/2−ǫ1}}(0).

Recall that T_{2t0,x0} and T_{t0,x0} are bounded from each other. So we have that

(1/C) B_{(t/t0)^{1/2+ǫ1}}((1/√t0) T_{t0,x0}^{-1} x) ⊂ (1/√t0) T_{t0,x0}^{-1} S_t(x) ⊂ C B_{(t/t0)^{1/2−ǫ1}}((1/√t0) T_{t0,x0}^{-1} x).

(2) t0 µ0^2 < t ≤ t0. By Lemma 4.3 we have that:

(1 − γ) B_{√t}(0) ⊂ T_{t,x}^{-1}(S_t(x) − x) ⊂ (1 + γ) B_{√t}(0).

By Lemma 4.4, we have that T_{2t0,x0} and T_{t,x0} are bounded from each other, and T_{2t0,x} and T_{t,x} are bounded from each other. Combining these facts and the Inequalities (4.13), we have that T_{t,x} and T_{t0,x0} are bounded from each other. So we have that:

B_{(1/C)√(t/t0)}((1/√t0) T_{t0,x0}^{-1} x) ⊂ (1/√t0) T_{t0,x0}^{-1} S_t(x) ⊂ B_{C√(t/t0)}((1/√t0) T_{t0,x0}^{-1} x). □

Consider the following Dirichlet problem on a domain Ω:

det((v0)_{i j̄}) = 1 in Ω,   v0 = 0 on ∂Ω.   (4.14)

To start the process, we need that v0 is smooth in the interior. This is guaranteed by the fact that Ω is close to B1.
More precisely, we proved in [6]:

Lemma 4.15. Let Ω ⊂ C^n be a bounded domain and B_{1−γ}(0) ⊂ Ω ⊂ B_{1+γ}(0) for some 0 < γ < 1. Let v0 be the solution to the Dirichlet problem in (4.14). Then

|z|^2 − 1 − 3γ ≤ v0 ≤ |z|^2 − 1 + 3γ.

Moreover, there exists γ_n > 0 small enough, such that if γ ≤ γ_n, we have v0 ∈ C^4(B̄_{0.9}) with ||v0 − (|z|^2 − 1)||_{C^4, B_{0.9}} ≤ C. Here C depends only on n.

The following Lemma is also proved in the W^{2,p} paper:

Lemma 4.16. Assume that det u_{i j̄} = f in Ω and u|_{∂Ω} = 0. Let v0 be the solution to the Dirichlet problem (4.14). Assume that 1 − ε ≤ f ≤ 1 + ε. Then we have

(1 + ε)^{1/n} v0 ≤ u ≤ (1 − ε)^{1/n} v0.

In particular |v0 − u| ≤ 4ε in Ω.

Then we can prove the following lemma:

Lemma 4.17. There exists a constant δ > 0.
For any ǭ ∈ (0, e^{-1}), we can choose µ0 and ǫ to be small depending on ǭ and n such that, given a section S_t(x) with t ≤ µ0^2 and y ∉ S_t(x), we have that

B_{ǫ2^δ}(T(y)) ∩ T(S_{(1−ǫ2)t}(x)) = ∅

for any ǭ < ǫ2 < e^{-1}. Here T = (1/√t) T_{t,x}^{-1} is an affine transformation such that B_{1−σ}(0) ⊂ (1/√t) T_{t,x}^{-1}(S_t(x) − x) ⊂ B_{1+σ}(0).

Proof. We prove this lemma in two cases.

(1) (1 − ǫ2)t and t are in the same generation, i.e. there exists k such that µ0^{k+1} < (1 − ǫ2)t ≤ t ≤ µ0^k. Define a new coordinate w = (1/√t) T_{t,x}^{-1}(z − x). In this coordinate, S_t(x) is σ-close to the unit ball as in Lemma 4.3, and there is a plurisubharmonic function h such that φ − h = 0 on ∂S_t(x) by Lemma 4.10. From the argument of Section 2, we can just assume that φ = 0 on ∂S_t(x) and use the coordinate w in the rest of the proof, because the linearized Monge-Ampère equation is invariant under the normalization. Using Lemma 4.3, Lemma 4.15 and Lemma 4.16 we can get that:

|w|^2 − 1 − 3ǫ − Cσ ≤ φ ≤ |w|^2 − 1 + 3ǫ + Cσ.

After the normalization, φ(0) = −1.
This is because the height of the section is determined by the value of φ at 0, and the height is scaled to be 1 under the normalization (the details can be found in the construction of the sections in the W^{2,p} paper). So for any w ∈ ∂S_{(1−ǫ2)t}(x), we have that φ = −ǫ2 and:

|w|^2 − 1 − 3ǫ − Cσ ≤ −ǫ2 ≤ |w|^2 − 1 + 3ǫ + Cσ.

This implies that:

1 − 3ǫ − Cσ − ǫ2 ≤ |w|^2 ≤ 1 + 3ǫ + Cσ − ǫ2.

Since y ∉ S_t(x) and (1 − σ) B1(0) ⊂ T(S_t(x) − x), we have that T(y − x) ∉ (1 − σ) B1(0). This implies that B_{ǫ2^δ}(T(y − x)) ⊂ B^c_{1−σ−ǫ2^δ}(0), where B^c_{1−σ−ǫ2^δ}(0) = C^n \ B_{1−σ−ǫ2^δ}(0). In order to make sure that B_{ǫ2^δ}(T(y − x)) ∩ B_{√(1+3ǫ+Cσ−ǫ2)}(0) = ∅, it suffices to require that

√(1 + 3ǫ + Cσ − ǫ2) ≤ 1 − σ − ǫ2^δ.

This is equivalent to

3ǫ + (C + 2)σ ≤ ǫ2 + ǫ2^{2δ} − 2ǫ2^δ − 2σǫ2^δ.

If we let δ be big (this is independent of ǭ), then the minimum of ǫ2 + ǫ2^{2δ} − 2ǫ2^δ − 2σǫ2^δ with ǭ ≤ ǫ2 ≤ 1/2 is ǭ + ǭ^{2δ} − 2ǭ^δ − 2σǭ^δ. So we just require that

3ǫ + (C + 2)σ ≤ ǭ + ǭ^{2δ} − 2ǭ^δ − 2σǭ^δ.

Noting that ǭ ≤ e^{-1}, when δ is big we have that ǭ + ǭ^{2δ} − 2ǭ^δ − 2σǭ^δ ≥ (1/2)ǭ. So we just require that:

3ǫ + (C + 2)σ ≤ (1/2)ǭ.
(2) (1 − ǫ2)t and t are not in the same level, i.e. there exists an integer k such that µ0^{k+1} < (1 − ǫ2)t ≤ µ0^{k+1} < t ≤ µ0^k. Recall that the sections S_t(x) for µ0^{k+1} < t ≤ µ0^k can be written as S_t(x) = {z : φ − h(z) ≤ φ(x) + t}, according to Lemma 4.10. We can define S̃_t(x):

(4.18)   S̃_t(x) = {z : φ − h(z) ≤ φ(x) + t},   for t ≤ µ0^{k+1}.

Note that when we define S_t(x) for t ≤ µ0^{k+1}, we subtract from φ a pluriharmonic function that is different from h and then take sublevel sets. So S̃_t(x) is different from S_t(x) for t ≤ µ0^{k+1}. Similar to Proposition 4.3, we can show that S_{(1−ǫ2)t}(x) ⊂ S̃_{(1+c(σ))(1−ǫ2)t}(x). When we let µ0 be small and ǫ be small, σ can be arbitrarily small; then c(σ) can be arbitrarily small. As in case (1), we use the coordinate w where S_t(x) is normalized to be close to the unit ball. So, using Equation (4.18), for any w ∈ ∂S̃_{(1+c(σ))(1−ǫ2)t}(x) we have that

φ = (1 + c(σ))(1 − ǫ2) − 1 = c(σ) − ǫ2 − ǫ2 c(σ).

Since in this coordinate |w|^2 − 1 − 3ǫ − Cσ ≤ φ ≤ |w|^2 − 1 + 3ǫ + Cσ, we have that:

|w|^2 − 1 − 3ǫ − Cσ ≤ c(σ) − ǫ2 − ǫ2 c(σ) ≤ |w|^2 − 1 + 3ǫ + Cσ.

This is equivalent to:

1 − 3ǫ − Cσ + c(σ) − ǫ2 − ǫ2 c(σ) ≤ |w|^2 ≤ 1 + 3ǫ + Cσ + c(σ) − ǫ2 − ǫ2 c(σ).

Since y ∉ S_t(x) and (1 − σ) B1(0) ⊂ T(S_t(x) − x), we have that T(y − x) ∉ (1 − σ) B1(0). So we have that B_{ǫ2^δ}(T(y − x)) ⊂ B^c_{1−σ−ǫ2^δ}(0). So, if we want

B_{ǫ2^δ}(T(y − x)) ∩ B_{√(1+3ǫ+Cσ−ǫ2+c(σ)−ǫ2 c(σ))}(0) = ∅,

we just require that B^c_{1−σ−ǫ2^δ}(0) ∩ B_{√(1+3ǫ+Cσ−ǫ2+c(σ)−ǫ2 c(σ))}(0) = ∅.
This is implied by

√(1 + 3ǫ + Cσ − ǫ2 + c(σ) − ǫ2 c(σ)) ≤ 1 − σ − ǫ2^δ.

This is equivalent to

3ǫ + Cσ + c(σ) + 2σ − σ^2 ≤ ǫ2 + ǫ2 c(σ) + ǫ2^{2δ} + 2ǫ2^δ + 2σǫ2^δ.

Let c(σ) be small and let δ be the same as in case (1). We can see that f(t) = t + t c(σ) + t^{2δ} + 2t^δ + 2σt^δ takes its minimum at t = ǭ. So we just require that:

3ǫ + Cσ + c(σ) + 2σ − σ^2 ≤ ǭ + ǭ c(σ) + ǭ^{2δ} + 2ǭ^δ + 2σǭ^δ.

We can assume that δ is big and ǭ ∈ (0, 1/2) such that ǭ + ǭ c(σ) + ǭ^{2δ} + 2ǭ^δ + 2σǭ^δ ≥ (1/2)ǭ. So we just require that

3ǫ + Cσ + c(σ) + 2σ − σ^2 ≤ (1/2)ǭ.

This is true if we let µ0 be small and ǫ be small. □

Then we can prove the following lemma, which is similar to Lemma 1 in [3] with some modifications.

Lemma 4.19. For any ǭ < e^{-1}, we can let µ0 be small and let ǫ be small. Let A ⊂ B_{1/2}(0) be a bounded set. Fix a positive function t̃ defined on A satisfying 0 < t̃ ≤ µ0/2. Let us denote by F the family of all the sections S_{t̃(x)}(x) with x ∈ A. Then there exists a countable subfamily of F, {S_{t̃(x_k)}(x_k)}_{k=1}^∞, with the following properties (for simplicity we denote t̃(x_k) as t_k from now on):

(i) For a.e. x ∈ A, x ∈ ∪_{k=1}^∞ S_{t_k}(x_k).
(ii) x_k ∉ ∪_j

This dataset describes the features and the class of the iris dataset.
What is the name of the field 0? (Value example: 5.1)
>Sepal length in cm
What is the type of field Sepal length in cm? (1. Feature 2. Predicted value 3. Class 4. Class (to be converted ONE-HOT for neural network))
>1
What is the normalization applied to Sepal length in cm? (1. None 2. MinMax)
>1
(... And so on for each feature and class.)
Saving dataset configuration...
The configuration is saved to iris.json
Processing to the file conversion...
The configuration is saved to iris_preprocessed.csv

Figure 5: An example of the exchange between the chatbot and the user for the data preprocessing.

Table 3: Machine learning algorithms included in the AI2 framework.

No.  Module                  Algorithm
1.   Pre-processing          IQR
2.                           SMOTE
3.                           KNNImputer
4.                           xGEWFI metric
5.   Supervised learning     Neural network regressor
6.                           Neural network classifier
7.                           Random Forest
8.   Unsupervised learning   K-Means
9.                           CK-Means
10.                          Silhouette metric
11.                          PCA
12.                          DPDRC
13.                          DPDR
14.                          FRSD

The pre-processing methods (Module 1) are regrouped into one callable function. This function performs the whole process of finding the outliers, augmenting the data and imputing the missing data (a minimal sketch of such a pipeline is given at the end of this subsection). The recent explainable metric named xGEWFI [9] is used to evaluate the performance of the data generation (imputation and augmentation). It considers the importance of each feature and each feature's error to evaluate the global error of the data generation process. The Inter Quartile Range (IQR) algorithm is used to find the outliers. Data generation (augmentation and imputation of missing data) is done with a SMOTE algorithm [7] and a KNNImputer [36], respectively.

Some neural networks (multilayer perceptrons performing regressions and classifications) [31] are available as supervised learning functions (Module 2). A Random Forest (RF) algorithm [5] is used as a classifier and regressor. It is also used to evaluate the importance of the features.

Some unsupervised learning methods (Module 3) are also available. The K-means algorithm [4] can be executed for clustering problems. The CK-Means algorithm [11] can be called to extract data from the clusters' intersections. The metric used to evaluate the cluster consistency of those first two algorithms is the Silhouette Index (SI) [34]. Concerning dimensionality reduction, the Principal Component Analysis (PCA) algorithm [17] is included in the AI2 framework. Two new decision processes are also included to help with dimensionality reduction problems: 1. the Decision Process for Dimensionality Reduction before Clustering (DPDRC) [12] and 2. the Decision Process for Dimensionality Reduction (DPDR) [10]. Those two are used in unsupervised learning and supervised learning contexts, respectively. In an unsupervised learning context, the Feature Ranking Process Based on Silhouette Decomposition (FRSD) [40] helps evaluate the importance of the features.
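To make Module 1 concrete, the combined pre-processing function described above can be approximated with standard libraries. The sketch below is an illustration only, not the AI2 implementation: it uses scikit-learn's KNNImputer, imbalanced-learn's SMOTE and a plain IQR rule, and the function name and its parameters are assumptions.

import pandas as pd
from sklearn.impute import KNNImputer
from imblearn.over_sampling import SMOTE

def preprocess(X: pd.DataFrame, y: pd.Series, iqr_factor: float = 1.5):
    # 1. Outlier detection with the Inter Quartile Range rule, column by column.
    q1, q3 = X.quantile(0.25), X.quantile(0.75)
    iqr = q3 - q1
    inliers = ~((X < q1 - iqr_factor * iqr) | (X > q3 + iqr_factor * iqr)).any(axis=1)
    X, y = X[inliers], y[inliers]
    # 2. Imputation of the remaining missing values from the 5 nearest neighbours.
    X = pd.DataFrame(KNNImputer(n_neighbors=5).fit_transform(X), columns=X.columns)
    # 3. Augmentation of the minority classes with SMOTE to balance the dataset.
    X_aug, y_aug = SMOTE(random_state=1).fit_resample(X, y.reset_index(drop=True))
    return X_aug, y_aug

The xGEWFI evaluation of the generated data is not reproduced in this sketch.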
The CodeCarbon library is an important initiative available to data scientists, so they can be aware of their impact on GHG. The following quote can be found on the CodeCarbon website (at pypi.org/project/codecarbon/), based on [22]: "While computing currently represents roughly 0.5% of the world's energy consumption, that percentage is projected to grow beyond 2% in the coming years, which will entail a significant rise in global CO2 emissions if not done properly. Given this increase, it is important to quantify and track the extent and origin of this energy usage, and to minimize the emissions incurred as much as possible. For this purpose, we created CodeCarbon, a Python package for tracking the carbon emissions produced by various kinds of computer programs, from straightforward algorithms to deep neural networks. By taking into account your computing infrastructure, location, usage and running time, CodeCarbon can provide an estimate of how much CO2 you produced, and give you some comparisons with common modes of transportation to give you an order of magnitude." The contribution of this paper is to embed this library's features in a machine learning framework, add some machine learning-based functions to predict the amount of GHG of the subsequent request, and try to spare its execution by proposing some alternatives. Fig. 6 explains those embedded GHG functionalities.
Figure 6: GHG module architecture (flowchart: any ML command from the user → GHG statistics of all past requests → MLP training and prediction of processing time and GHG → clustering of similar previous requests → the user decides whether to launch the request, leading either to the ML request with its results and GHG data, or to an aborted request with a similar past request accepted and GHG saved).
First, every GHG statistic (request name, machine learning algorithm used, dataset, number of data, fields, elapsed time, GHG emissions) is stored in a file. When a user is about to launch a new request, the AI2 framework uses this stored history to predict the amount of GHG this subsequent request will generate. A multilayer perceptron (MLP) is used to evaluate this GHG amount. This MLP has 5 hidden layers of 25 neurons each. It uses a ReLU activation function and an Adam solver. Then, a k-means clustering algorithm is used to regroup every past request similar to the current request. The list is proposed to the user so he can spare his execution, with some similar results available from the history. Knowing how much GHG will be generated and knowing the similar results of the past, the user can finally decide whether he wants to execute his new request. Fig. 7 presents an example of the information from the chatbot concerning the GHG before launching a new request.
2.6 Explainability methods
The goal of this part is to get rid of the famous "black box" problem in machine learning. While most frameworks usually display only the results of every executed algorithm, AI2 systematically displays the ad hoc graphics, tables and texts that ensure better explainability for a particular algorithm. These can be learning curves, scalability curves, or confusion matrices. For instance, for a clustering process, some stacked radar graphics (one per cluster) are produced, plus a Silhouette index graphic that shows the clusters' consistency. A cluster table and a text (in LaTeX format) are also created to complete the explainability of the process. For each machine learning algorithm, the totality of the graphics, tables and texts is generated using the explain() method.
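The explain() call belongs to AI2 and its implementation is not shown in this paper. Purely as a hypothetical illustration of the kind of artifact it produces for a clustering request (a Silhouette graphic plus a short LaTeX caption), and assuming only scikit-learn and matplotlib:

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_samples, silhouette_score

X = load_iris().data
labels = KMeans(n_clusters=3, random_state=1).fit_predict(X)

# Per-sample and mean Silhouette values, used to judge cluster consistency.
sil_values = silhouette_samples(X, labels)
sil_mean = silhouette_score(X, labels)

# A simple graphic: Silhouette value of every sample, grouped by cluster.
plt.bar(range(len(sil_values)), sil_values[labels.argsort()])
plt.axhline(sil_mean, linestyle="--", label=f"mean = {sil_mean:.2f}")
plt.xlabel("samples (sorted by cluster)")
plt.ylabel("Silhouette value")
plt.legend()
plt.savefig("silhouette.png")

# A short explanation text in LaTeX format, ready to be pasted into a report.
print(f"\\caption{{Silhouette analysis of the clustering; mean index = {sil_mean:.2f}.}}")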
Predicted execution time (in sec): 4.498
Predicted generated GHG: 4.899e-05 kg CO2
Here are the most similar requests in case launching another request can be avoided.
Request _2022-11-21_21-23-43 using dataset make_blob
Request _2022-11-22_13-54-45 using dataset make_blob
Request _2022-11-22_14-29-32 using dataset make_blob
Launch the request (y/n)?
Figure 7: Information from the chatbot concerning the GHG before launching a new request.
3 Results
The following presents five functional use cases. They emphasise the singularity of the AI2 framework and show how a user can execute some requests with this framework and what type of results are presented as output. The output graphics, tables, and texts are not presented in this paper for two reasons: 1. It is not what this paper intends to demonstrate; for instance, there is no need to show the result of a simple K-Means clustering process. 2. There would have been too many graphics, tables and texts to present in this paper. Cases 1 to 5 present a clustering, a reduction of dimensionality, a classification, a prediction, and an evaluation of the features' importance.
3.1 Case 1: Clustering
The first case is about a clustering process. As mentioned earlier, the user must write his query in English in the chatbot. For this first case, the following command has been entered: I want to perform a clustering using iris dataset and having 3 clusters.
From the Parameters.csv file, where a sample is presented in Table 4, the following questions (Table 4) will be generated by the chatbot to fill in the required information about a clustering process:
Table 4: Required information and questions to access it.
Key | Type | Return value | Question
PROBLEM | Y/N | CLUSTERING | Is this clustering?
PROBLEM | Y/N | CLUSTERING | Is this a clustering problem?
PROBLEM | Y/N | CLUSTERING | Is this regrouping?
PROBLEM | Y/N | CLUSTERING | Is this a regrouping problem?
PROBLEM | Y/N | CLUSTERING | Do you want to regroup data?
PROBLEM | Y/N | CLUSTERING | Do you want to cluster data?
DATASET | Std. | - | What is the dataset?
DATASET | Std. | - | Which data are used?
NB_CLST | Std. | - | How many clusters?
NB_CLST | Std. | - | How many groups?
At this first step, AI2 transparently tries to find the answers in the command entered by the user. After this first step, if AI2 misses some information, the chatbot will ask for it until every critical piece of information is defined. From this example, the iris dataset is loaded and a k-means algorithm is launched with the parameter n_clusters = 3, using the default parameters random_state = 1 and init = "k-means++".
The primary results are displayed, presenting a data table along with the clusters, which is what most frameworks would do. Using AI2, each graphic, table and text can be called using the explain() method. In this first case, stacked radar graphics are generated for each cluster, allowing the user to visualize the profile of every cluster. It also generates a graphic of the Silhouette Index, showing and measuring the consistency of every cluster, and giving the mean for the whole clustering process. For each table and graphic, a short text describing it is generated in LaTeX format.
3.2 Case 2: Reduction of dimensionality
The second case is about the reduction of dimensionality. The entered command was: reduction of dimensionality with iris dataset and having 3 components.
The only required parameter is the +targeted number of components that should be used to downsize the dataset. If this parameter is +not specified in the command, the chatbot will directly ask to specify it. Since it is defined in this +case command, AI2 will extract three components of the dataset using the PCA algorithm. Always +from the Parameters.csv file, the questions shown in Table 5 will be generated by the chatbot to +fill the required information about a reduction of dimensionality process : +Table 5: Required information and questions to access it. +Key +Type +Return value +Questions +PROBLEM +Y/N +DIMENSIONALITY +Is this about dimensionality? +PROBLEM +Y/N +DIMENSIONALITY +Is this about dimensionality +reduction? +PROBLEM +Y/N +DIMENSIONALITY +Is this about reduction +of dimensionality? +PROBLEM +Y/N +DIMENSIONALITY +Is this a regrouping problem? +PROBLEM +Y/N +DIMENSIONALITY +Is this a dimensionality problem? +PROBLEM +Y/N +DIMENSIONALITY +Is this a dimensionality +reduction problem? +DATASET +Std. +What is the dataset? +DATASET +Std. +Which data are used? +NB_CMPS +Std. +How many components? +The result is a dataset having three principal components (reduced with the PCA algorithm). +The explain() method generated two graphics: 1. the covariance heatmap of the initial features. +2. a bar graph of the three extracted features’ importance (explained variance ratio). For both +graphics, a short LaTeX explaining it is generated. +3.3 +Case 3: Classification +The following case is about the typical problem of classification. For this case, a multiple sentences +English is given: Perform a classification of the iris dataset. I want this request to be reproducible. +14 + +Test [4.8,3.0,1.4,0.2] value. The first sentence of the command is straightforward. Those two sen- +tences are written in a single command. It calls a classification of the iris dataset. To do so, it will +call a multilayer perceptron (MLPClassifier from the Scikit-learn framework). The second sentence +mention that it requires reproducible results. This will set the seed of the random_state parameter +to the "1" integer value, assuring the request gives the same result every time. The opposite would +have been a "random request". The seed would have been set to None, allowing the request to give +slightly different results due to some random synaptic connection initialization. If it is not specified, +the request is reproducible. The final sentence commands to try some values. In other words, it +aims to classify the specified values [4.8,3.0,1.4,0.2]. The questions in Table 6 will be extracted from +the text command. +Table 6: Required information and questions to access it. +Key +Type +Return value +Questions +PROBLEM +Y/N +CLASSIFICATION +Is this about classification? +PROBLEM +Y/N +CLASSIFICATION +Is this a classification problem? +PROBLEM +Y/N +CLASSIFICATION +Do you want to classify data? +DATASET +Std. +What is the dataset? +DATASET +Std. +Which data are used? +RANDOM +Y/N +RANDOM +Is this a random request? +RANDOM +Y/N +REPRODUCTIBLE +Is this a reproductible request? +TEST +Std. +What are the test values? +TEST +Std. +What values do you want +to be tested? +The classification result will then be shown. The training is done with cross-validation having +the parameter k = 10. The whole dataset is split k times, and the subsets are used to validate to +process. The training and validation scores are returned for each step of the cross-validation. 
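As an illustrative sketch of how such training and validation scores can be obtained (the paper names the MLPClassifier from Scikit-learn and 10-fold cross-validation; AI2's own code is not shown, so the details below are assumptions), a learning curve could be computed as follows:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import learning_curve
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# random_state=1 makes the request reproducible, as described above.
clf = MLPClassifier(max_iter=2000, random_state=1)

# 10-fold cross-validation; scores are collected for several training sizes
# so a learning curve (training score vs. validation score) can be drawn.
train_sizes, train_scores, val_scores = learning_curve(
    clf, X, y, cv=10, train_sizes=np.linspace(0.2, 1.0, 5)
)
print("mean training score per size:  ", train_scores.mean(axis=1))
print("mean validation score per size:", val_scores.mean(axis=1))

The two resulting curves are what the following sentences interpret.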
While +both scores are increasing, the training may continue the learning process. When the training score +is still increasing while the validation score starts to decrease, it is precisely the right time to stop +the training process. Stopping before that moment creates under-fitted training, and stopping after +that point results in overfitted training. Calling the explain() method, a learning curve is generated +of both the training score and validation score based on the cross-validation. +A state-of-the-art method executes the neural network to classify the data. Earlier in the process, +the train and the test data were split, allowing the algorithm to train and evaluate the performances. +Performance graphics is also created, showing the performance of the training. Scalability graphics +show the ratio of the number of processed data/processing time. Like the other cases, LaTeX texts +are generated to explain every graphic. +3.4 +Case 4: Prediction +This case aims to demonstrate the prediction feature of the AI2 framework, using the MLPRegres- +sor from Scikit-learn. It also shows how to preprocess a dataset before calling an algorithm. This +preprocessing can be called in the chatbot. In this case, the following English command is given: +Do the preprocess of the iris2 dataset. Note that the iris2 dataset is identical to the iris dataset, +except that the class field is not included. Selecting the columns of a dataset is not included in +15 + +this first version of AI2, but it will be in a different version. The iris2 dataset remains with four +features: Sepal length, sepal width, petal length and petal width. The value of the petal width +must be predicted. When responding to the chatbot’s questions, the user must specify that the +first three fields are non-normalized features and the fourth is a regression value. After responding +to the questions in the chatbot, the iris2.json file is created, containing the information about the +configuration. The iris2_preprocessed.csv data file is also created containing the preprocessed data. +A second command can be sent to AI2 using the chatbot: I want to make a prediction using the +iris dataset. Test [4.5,3.1,1.2]. The questions in Table 7 will be extracted from the text command. +Table 7: Required information and questions to access it. +Key +Type +Return value +Questions +PROBLEM +Y/N +PREDICTION +Do you want to make +a prediction? +PROBLEM +Y/N +PREDICTION +Is this a prediction problem? +PROBLEM +Y/N +PREDICTION +Do you want to predict something? +DATASET +Std. +What is the dataset? +DATASET +Std. +Which data are used? +TEST +Std. +What are the test values? +TEST +Std. +What values do you want +to be tested? +Three graphics are generated to explain the results as in 3.3. A learning curve is displayed +to ensure no training underfitting or overfitting. A second graphic shows the performance of the +training process. Moreover, a third graphic shows the scalability of the training. As always, LaTeX +texts are created to explain the figures, ready to be cut and pasted in a LaTeX document. +3.5 +Case 5: Feature’s importance +This next case shows how to evaluate the feature importance in the AI2 framework. The following +command has been typed in AI2’s chatbot: Find the importance of the features with the iris dataset. +This command calls a Random Forest algorithm. More precisely, the RandomForestClassifier and +the RandomForestRegressor from the Scikit-learn framework. 
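A minimal sketch of the underlying computation for the classification case (assuming only the standard Scikit-learn API, not AI2's actual wrapper) could be:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()

# Fit the forest, then read the impurity-based importance of each feature.
forest = RandomForestClassifier(random_state=1).fit(iris.data, iris.target)
for name, importance in zip(iris.feature_names, forest.feature_importances_):
    print(f"{name}: {importance:.3f}")

For a regression dataset, the RandomForestRegressor exposes the same feature_importances_ attribute.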
According to the configuration file's content (iris.json in this case), it will detect whether the dataset is made for regression or classification. In this case, iris is a dataset made for classification, so the RandomForestClassifier algorithm will be used. From the Parameters.csv file, the questions shown in Table 8 are asked by the chatbot to fill in the information about the feature importance algorithm:
Table 8: Required information and questions to access it.
Key | Type | Return value | Question
PROBLEM | Y/N | FEAT_IMP | Is this about feature importance?
PROBLEM | Y/N | FEAT_IMP | Is this about the importance of the features?
PROBLEM | Y/N | FEAT_IMP | Is this a feature importance problem?
PROBLEM | Y/N | FEAT_IMP | Do you want to know the feature importance?
DATASET | Std. | - | What is the dataset?
DATASET | Std. | - | Which data are used?
The explain() method gives a graphic where the x-axis represents the index of the features, and the y-axis shows each feature's normalized level of importance. A LaTeX explanation text is generated as usual.
3.6 GHG algorithms validation
As stated in 2.5, the AI2 framework predicts the GHG for each algorithm to be executed. Execution time is also predicted before calling the machine learning algorithm. To validate those predictions, a clustering algorithm has been called within a 50-iteration loop. For each execution, a random-sized dataset of 10,000 to 50,000 rows and 5 to 20 features has been used. Those datasets were generated by the make_blobs() function of the scikit-learn framework. Fig. 8 shows the validation of the predicted and real values of the generated GHG. The x-axis displays the 50 iterations, and the y-axis shows the level of GHG (in kg CO2). The regression algorithm was trained on a dataset of 1382 rows containing the request history.
Figure 8: Validation of the predicted and real GHG (plot: generated vs. predicted GHG, in kg CO2, over the 50 simulations).
Fig. 9 displays the validation of the predicted and actual values of the execution time for every iteration of the loop. The x-axis shows the 50 iterations, and the y-axis shows the execution time (in sec.).
Figure 9: Validation of the predicted and real execution time
Here are the most similar requests in case launching another request can be avoided.
Request _2022-11-21_21-23-43 using dataset make_blob
Request _2022-11-22_13-54-45 using dataset make_blob
Request _2022-11-22_14-29-32 using dataset make_blob
Figure 10: An example of the AI2 propositions of similar requests.
Concerning the predicted and real GHG and execution time, it can be seen that the signal is reasonably reconstructed.
Finally, before launching each request, AI2 proposes similar requests from the request history after extracting this information using a clustering process. Fig. 10 presents an example of the AI2 propositions of the similar requests.
4 Discussions
The first contribution of this paper is to present an accessible framework. With its state-of-the-art NLP methods, this machine learning framework is a pioneer in communicating with a non-expert user in English. The new Transformer technology allows the AI2 framework to receive native language commands that are extracted, parsed and executed. When an essential parameter is missing, AI2 will use its chatbot to communicate with the user, asking him to enter the missing information.
With this NLP interface, a user can exploit the AI2 framework without knowing how to code with a programming language like Python or others.
The AI2 framework is GHG-aware, and this is the second contribution of this paper. The CodeCarbon library is encapsulated in each of its ML functions, allowing the calculation of the GHG for each algorithm executed. Those GHG records are kept in a register and used to predict, based on ML, the GHG that will be generated before the execution. AI2 also proposes some similar registered requests, also based on ML, to save this execution and save GHG.
Unlike most other frameworks, AI2 systematically encapsulates the most important forms of explanation about the data and the results. This aspect of the framework is crucial to solving the famous black-box problem. This is the third contribution of this paper. Most machine learning frameworks do not systematically offer some explainability with the results. AI2 does. It generates, for each request, some graphics, some tables, and some texts explaining the results and the data, thus making this framework more ethical than others.
The final contribution of this paper is data preprocessing. It usually takes time to code a suitable preprocessing of the data. The AI2 framework proposes a method based on communication with the chatbot to automate this process. Guided by the AI2 chatbot, the user may do some basic preprocessing of his datasets by establishing the datasets' structures. Having a structure stored in a JSON file, the preprocessing module can generate a new preprocessed dataset.
Comparing AI2 with other machine learning frameworks, what is the advantage of using it? For now, there are frameworks that are more complete and more sophisticated. The AI2 framework targets non-expert users who need a machine-learning algorithm to process their data. Typical AI2 users would be, for instance, researchers, engineers, teachers and students in natural science, and so on. A significant part of the scientific community cannot program complex algorithms using a programming language. An NLP interface is the best solution since it requires no programming skills.
Table 9 shows a comparison between AI2 and the other popular machine learning frameworks, according to 4 criteria: 1. NLP interface, 2. GHG awareness, 3. Explainability, and 4. NLP Preprocessing.
Table 9: Comparison of the popular machine learning frameworks, specialized frameworks, and AI2
Framework | NLP | GHG aware | Explain. | Prepro. | Code req. | Ref.
AIX360 | NO | NO | YES | NO | YES | [3]
ELI5 | NO | NO | YES | NO | YES | [3]
Gluon | NO | NO | NO | NO | YES | [27]
Keras | NO | NO | NO | NO | YES | [27]
LIME | NO | NO | YES | NO | YES | [25]
Matlab | NO | NO | NO | NO | YES | [27]
MXNet | NO | NO | NO | NO | YES | [41]
Orange | NO | NO | NO | NO | NO | [8]
PyTorch | NO | NO | NO | NO | YES | [27]
Scikit-learn | NO | NO | NO | NO | YES | [27]
SHAP | NO | NO | YES | NO | YES | [25]
Skater | NO | NO | YES | NO | YES | [3]
Tensorflow | NO | NO | NO | NO | YES | [41]
What-if Tool | NO | NO | YES | NO | YES | [25]
XAI | NO | NO | YES | NO | YES | [25]
CodeCarbon | NO | YES | NO | NO | NO | [22]
AI2 | YES | YES | YES | YES | NO | [13]
Note that some well-known frameworks may seem absent from the list: CNTK and Theano are no longer supported, and Caffe2 is merged with PyTorch. According to Table 9, we can regroup the frameworks into three categories: 1.
The general, multi-purpose frameworks (Gluons, Keras, +MXNet, Tensorflow, PyTorch, Matlab, Orange and Scikit-learn) 2. The Explainability frameworks +(AIX360, ELI5, LIME, SHAP, Skater and XAI), and 3. The GHG-aware framework (CodeCarbon). +This table shows AI2’s novelty. It is the only framework that combines all the studied criteria +(NLP interface, GES awareness, Explainability, Preprocessing, and Coding required). It is the first +framework to have an NLP interface to send the instructions to the framework. Several frameworks +integrate the explainability of the data and the models, but no general and multi-purpose framework +includes it. AI2: The next leap toward native language-based, GHG-aware and explainable ML +framework. +5 +Conclusion +This framework proposes a tool for the non-expert to use machine learning methods. +It offers +an NLP interface so the user can communicate with the framework using a chatbot. It encap- +sulates some very concrete functions to provide ecological awareness. It includes the principle of +explainability, proposing expanded results explications for different algorithms. It finally allows +preprocessing of data using an English chatbot. +This framework could be the first draft of a long series of improvements. +There are many +future works to do for each of its contributions. +Regarding its NLP interface, this framework +can be improved by training the pre-trained Transformer on a specific machine learning-oriented +text corpus. Likely, the NLP’s performance will significantly improve. The chatbot method can +also be optimized to minimize errors and recognize the user’s intentions. Questions used to extract +command information can be improved by increasing the quality and the number of questions. GHG +awareness can be improved. Better methods can be found to minimize wasted energy, maximize +the GHG estimation before calling an algorithm, and cluster similar requests. There is a lot to +do, but this framework has the merit of being aware of the climate change problem and proposing +a modest solution. Explanations available for each data and machine learning algorithm can also +be optimized in quantity and quality. Some essential explanations are included in this framework, +but those need to be systematically included. Regarding the preprocessing module, there are many +things to add. For instance, some normalization methods can be added. The rows and columns +selection can be added to this module, also. +Some graphics can be added to plot data at the +preprocessing stage. Finally, this framework contains a limited number of ML algorithms. Some +more ML algorithms can be easily added to the AI2 framework. +6 +Acknowledment +This work has been supported by the "Cellule d’expertise en robotique et intelligence artificielle" of +the Cégep de Trois-Rivières and the Natural Sciences and Engineering Research Council. +References +[1] Martín Abadi et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed +Systems. +20 + +[2] Acheampong Francisca Adoma, Nunoo-Mensah Henry, and Wenyu Chen. Comparative anal- +yses of bert, roberta, distilbert, and xlnet for text-based emotion recognition. In 2020 17th +International Computer Conference on Wavelet Active Media Technology and Information Pro- +cessing (ICCWAMTIP), pages 117–121, 2020. +[3] Namita Agarwal and Saikat Das. Interpretable machine learning tools: A survey. In 2020 +IEEE Symposium Series on Computational Intelligence (SSCI), pages 1528–1534. 
+[4] Mohiuddin Ahmed, Raihan Seraj, and Syed Mohammed Shamsul Islam. +The k-means al- +gorithm: A comprehensive survey and performance evaluation. Electronics, 9(8):1295, 2020. +Number: 8 Publisher: Multidisciplinary Digital Publishing Institute. +[5] Gérard Biau and Erwan Scornet. A random forest guided tour. TEST, 25(2):197–227, 2016. +[6] Jia-Wei Chang, Neil Yen, and Jason C. Hung. Design of a NLP-empowered finance fraud +awareness model: The anti-fraud chatbot for fraud detection and fraud classification as an +instance. 13(10):4663–4679. +[7] Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. SMOTE: +synthetic minority over-sampling technique. +Journal of Artificial Intelligence Research, +16(1):321–357, 2002. +[8] Janez Demšar et al. Orange: data mining toolbox in python. the Journal of machine Learning +research, 14(1):2349–2353, 2013. +[9] Jean-Sébastien Dessureault and Daniel Massicotte. +[2206.08980] explainable global error +weighted on feature importance: The xGEWFI metric to evaluate the error of data impu- +tation and data augmentation. +[10] Jean-Sébastien Dessureault and Daniel Massicotte. +[2206.08974] DPDR: A novel machine +learning method for the decision process for dimensionality reduction. 2022. +[11] Jean-Sébastien Dessureault and Daniel Massicotte. [2206.08982] ck-means, a novel unsuper- +vised learning method that combines fuzzy and crispy clustering methods to extract intersecting +data. 2022. +[12] Jean-Sébastien Dessureault and Daniel Massicotte. DPDRC, a novel machine learning method +about the decision process for dimensionality reduction before clustering. AI, 3(1):1–21, 2022. +Number: 1 Publisher: Multidisciplinary Digital Publishing Institute. +[13] Jean-Sebastien Dessureault and Daniel Massicotte. Ai2: a novel explainable machine learning +framework using an nlp interface. +[14] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of +deep bidirectional transformers for language understanding, 2019. +[15] Lisa R. Goldberg. The Book of Why: The New Science of Cause and Effect, volume 19. 2019. +Publisher: Routledge _eprint: https://doi.org/10.1080/14697688.2019.1655928. +[16] Raffaele Guarasci, Stefano Silvestri, Giuseppe De Pietro, Hamido Fujita, and Massimo Espos- +ito. Assessing BERT’s ability to learn Italian syntax: A study on null-subject and agreement +phenomena. +21 + +[17] Ian T. Jolliffe and Jorge Cadima. +Principal component analysis: a review and recent de- +velopments. Philosophical Transactions of the Royal Society A: Mathematical, Physical and +Engineering Sciences, 2016. Publisher: The Royal Society Publishing. +[18] M. I. Jordan. Serial order: A parallel distributed processing approach. Technical report, June +1985-March 1986. (AD-A-173989/5/XAB; ICS-8604). +[19] Shigeki Karita et al. A comparative study on transformer vs RNN in speech applications. 2019 +IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 449–456, +2019. +[20] The Institute for Ethical Ai \& Machine Learning. The institute for ethical AI & machine +learning. +[21] Yinhan Liu et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. +[22] Kadan Lottick, Silvia Susai, Sorelle A. Friedler, and Jonathan P. Wilson. +Energy Usage +Reports: Environmental awareness as part of algorithmic accountability. +[23] Shivani Malhotra, Vinay Kumar, and Alpana Agarwal. Bidirectional transfer learning model +for sentiment analysis of natural language. 12(11):10267–10287. 
+[24] Maria das Graças Bruno Marietto et al. Artificial Intelligence MArkup Language: A Brief +Tutorial. +[25] Sina Mohseni, Niloofar Zarei, and Eric D. Ragan. A Multidisciplinary Survey and Framework +for Design and Evaluation of Explainable AI Systems. +[26] Anand Motwani, Piyush Kumar Shukla, and Mahesh Pawar. Novel framework based on deep +learning and cloud analytics for smart patient monitoring and recommendation (SPMR). +[27] Giang Nguyen et al. Machine learning and deep learning frameworks and libraries for large- +scale data mining: a survey. Artificial Intelligence Review, 52(1):77–124, 2019. +[28] Long Ouyang et al. Training language models to follow instructions with human feedback. +[29] Sebastian Palacio, Adriano Lucieri, Mohsin Munir, Sheraz Ahmed, Jörn Hees, and Andreas +Dengel. XAI handbook: Towards a unified framework for explainable AI, 2021. +[30] The-Hanh Pham, Vinitha Sree, John Mapes, Sumeet Dua, Oh Shu Lih, Joel E. W. Koh, Ed- +ward J. Ciaccio, and U. Rajendra Acharya. A novel machine learning framework for automated +detection of arrhythmias in ECG segments. 12(11):10145–10162. +[31] Hassan Ramchoun, Youssef Ghanou, Mohamed Ettaouil, and Mohammed Amine Janati Idrissi. +Multilayer perceptron: Architecture optimization and training. +2016. +Accepted: 2021-07- +07T10:37:59Z Publisher: International Journal of Interactive Multimedia and Artificial Intel- +ligence (IJIMAI). +[32] Denis Rothman. Transformers for Natural Language Processing: Build Innovative Deep Neural +Network Architectures for NLP with Python, PyTorch, TensorFlow, BERT, RoBERTa, and +More. Packt Publishing Ltd. +22 + +[33] Denis Rothman. Transformers for Natural Language Processing: Build innovative deep neural +network architectures for NLP with Python, PyTorch, TensorFlow, BERT, RoBERTa, and +more. Packt Publishing Ltd, 2021. +[34] Peter J. Rousseeuw. Silhouettes: A graphical aid to the interpretation and validation of cluster +analysis. Journal of Computational and Applied Mathematics, 20:53–65, 1987. +[35] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning internal represen- +tations by error propagation. +[36] Olga G. Troyanskaya, David Botstein, and Russ B. Altman. Missing value estimation. In +Daniel P. Berrar, Werner Dubitzky, and Martin Granzow, editors, A Practical Approach to +Microarray Data Analysis, pages 65–75. Springer US, 2003. +[37] Ashish Vaswani et al. Attention is all you need. In Advances in Neural Information Processing +Systems, volume 30. Curran Associates, Inc., 2017. +[38] Joost Verbraeken et al. A survey on distributed machine learning. ACM Computing Surveys, +53(2):30:1–30:33, 2020. +[39] Zhaobin Wang, Ke Liu, Jian Li, Ying Zhu, and Yaonan Zhang. +Various frameworks and +libraries of machine learning and deep learning: A survey. Archives of Computational Methods +in Engineering, 2019. +[40] Jaehong Yu, Hua Zhong, and Seoung Bum Kim. An ensemble feature ranking algorithm for +clustering analysis. Journal of Classification, 37(2):462–489, 2020. +[41] Kuo Zhang, Salem Alqahtani, and Murat Demirbas. A comparison of distributed machine +learning platforms. In 2017 26th International Conference on Computer Communication and +Networks (ICCCN), pages 1–9, 2017. +[42] Xingzhou Zhang, Yifan Wang, and Weisong Shi. pcamp: Performance comparison of machine +learning packages on the edges. 2018. 
+23 + diff --git a/UdE1T4oBgHgl3EQfuwU5/content/tmp_files/load_file.txt b/UdE1T4oBgHgl3EQfuwU5/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..d9b668adb2b5462fce846c2a7937dabf8f96b402 --- /dev/null +++ b/UdE1T4oBgHgl3EQfuwU5/content/tmp_files/load_file.txt @@ -0,0 +1,951 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf,len=950 +page_content='AI2: The next leap toward native language based and explainable machine learning framework Jean-Sébastien Dessureault, Daniel Massicotte January 10, 2023 ABSTRACT The machine learning frameworks flourished in the last decades, allowing artificial intelligence to get out of academic circles to be applied to enterprise domains.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' This field has significantly advanced, but there is still some meaningful improvement to reach the subsequent expectations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The proposed framework, named AI2, uses a natural language interface that allows a non-specialist to benefit from machine learning algorithms without necessarily knowing how to program with a programming language.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The primary contribution of the AI2 framework allows a user to call the machine learning algorithms in English, making its interface usage easier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The second contribution is greenhouse gas (GHG) awareness.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' It has some strategies to evaluate the GHG generated by the algorithm to be called and to propose alternatives to find a solution without executing the energy-intensive algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Another contribution is a preprocessing module that helps to describe and to load data properly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Using an English text-based chatbot, this module guides the user to define every dataset so that it can be described, normalized, loaded and divided appropriately.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The last contribution of this paper is about explainability.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' For decades, the scientific community has known that machine learning algorithms imply the famous black-box problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Traditional machine learning methods convert an input into an output without being able to justify this result.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The proposed framework explains the algorithm’s process with the proper texts, graphics and tables.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The results, declined in five cases, present usage applications from the user’s English command to the explained output.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Ultimately, the AI2 framework represents the next leap toward native language-based, human-oriented concerns about machine learning framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' machine learning;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' framework;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' NLP;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' AI ethics;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' explainability 1 Introduction Two decades ago, some popular algorithms existed and were well documented in scientific literacy, but there was still no easy way to use them.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Scientists had to read the equations and the algorithm before implementing it in the desired programming language.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Every matrix had to be multiplied, and every derivative had to be computed by the scientist’s code.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' In the last two decades, machine learning has finally flourished.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' One of the most meaningful frameworks was certainly TensorFlow [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' This powerful tool helped the community accelerate development and democratize the machine learning field.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' It helped this field of knowledge reach a more comprehensive range of applicative projects instead of being restricted to academics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' 1 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='03391v1 [cs.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='LG] 9 Jan 2023 A few years after the first version of Tensorflow, many others came to the machine learning community.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Among the most popular: Scikit-Learn, CNTK, Torch, Matlab, and Keras [39].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' In the last few years, a user-friendly framework with a graphical interface named Orange [8] became available, aiming to be even more accessible for the community, especially for the non-expert.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' While consistently more accessible over time, requiring less mathematics and fewer programming skills, none of those frameworks has made the ultimate step: the ability to communicate in the native human language.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Some recent studies compare the most popular machine learning software framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' For in- stance, framework performances have been recently analysed in [39].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' For this same purpose of performance analysis, [38] divides frameworks into some topics (computational distribution, Tensor Processing Units and Field-Programmable Gate Array (FPGAs)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' [42] compares machine learning frameworks on different hardware platforms, such as Raspberry Pi 3 B+, NVIDIA Jetson, MacBook Pro, Huawei Nexus 6P and Intel FogNode.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Nguyen et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' in [27] have an essential paper regarding this current research.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Their work es- tablishes evaluation criteria for supervised, unsupervised, and reinforcement learning, which are the three prominent families of machine learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' [27] presents an overview of machine learning frameworks and gives the advantages and disadvantages of each.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Frameworks are applied in dif- ferent domains.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' For instance, [30] applies it to the Automated Detection of Arrhythmias in ECG Segments, while [26] is a framework application in the health domain for smart patient monitor- ing and recommendation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The work of [25][3] present and compares explainable and interpretable frameworks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' This framework, called AI2, proposes a natural language interface.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' To the authors’ best knowl- edge, there is no machine learning framework offering an Natural language Processing (NLP) in- terface using a chatbot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' This first AI2 version proposes an English chatbot, but some other native languages might be proposed later.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The NLP domain has flourished recently, especially when using the Transformers technology [33] [37].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' This recent NLP breakthrough created the opportunity to fill the last gap between humans and machine learning frameworks: the ability to communicate in the native human language.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' This last step has just been done with this proposed AI2 framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' A state-of-the-art, Transformer-based NLP agent can now correctly interpret users’ English requests.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Outperforming older methods like Recurrent Neural Networks (RNN) [18] [35] and Artifi- cial Intelligence Markup Language (AIML) [24], Transformer technology [19] delivers better results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Transformer-based applications exist in multiple domains.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' For instance, [23] uses it for sentiment analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' [16] evaluates a Transformer’s ability to learn Italian syntax.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Finally, [6] proposes a chatbot that helps detect and classify fraud in a finance context.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Bidirectional Encoder Representation from Transformers (BERT) [14][32] [2] is a widely used NLP model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' It performs exceptionally well when evaluating the context and understanding the intent behind the user’s query [28].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Using the BERT NLP model, two pre-trained datasets have been used to build the AI2 frame- work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The first one, BERT (BERT-large), is helpful to answer common questions like "Which dataset has been used?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='".' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The second one is RoBERTa (roberta-large) [21].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' It is only used to an- swer Yes/No questions like "Is it a clustering problem?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='".' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Besides launching the requests, a minor contribution of AI2 is its ability to preprocess the datasets using its NLP chatbot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Even if the NLP interface is the main contribution of this paper, other contributions are also proposed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' For instance, another contribution of the AI2 framework is the awareness of greenhouse gases (GHG).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' CodeCarbon [22] recently proposed a library of functions about GHG awareness and AI2 integrates some of those functions and enhances it with machine learning methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Based 2 on [29], explainability is an essential contribution of this proposed framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' It aims to include ethics principles from the Institute for Ethical AI & Machine Learning [20].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' This UK-based research centre develops frameworks that support the responsible development, deployment and operation of machine learning systems.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Explainability is a concept intending to eliminate the "black box" problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Yoshua Bengio has addressed it, and Judea Perl [15], two Turing awards winners.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Over the last decade, ML has reached a certain level of maturity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' One of the differences is our expectations of machine learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' There is a need to democratize the methods to non-expert users.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Until recently, the scientific community was concerned about lowering the error when using ML algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' They were concerned about the performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Now the expectation is higher.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The community still wants good results, but those results have to be found in an explainable, interpretable and ethical context.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Human well-being must be the main interest of the ML systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The results must be explainable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' For decades, the "black box" problem was neglected.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Now, there are some methods to explain the results and make them understandable to a human.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The expectations are also higher regarding the accessibility to the ML methods, GHG awareness and preprocessing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Now, the expectations are higher at different levels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Disposing of the previously presented technologies and based on [13], the contributions of this framework aim to reach expectations with the following targets: 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' democratizing ML frameworks using NLP methods, 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' being GHG aware with a built-in structure to monitor it, 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' being more ethics with a built-in structure systematically explain the results, and 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' having the preprocessing of the data more accessible with an automated NLP based chatbot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The following sections of this paper are organized with the following structure: Section 2 de- scribes the proposed methodology.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Section 3 presents the results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Section 4 discusses the results and their meaning, and Section 5 concludes this research.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' 2 Methodology of the AI2 framework 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='1 Architecture Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' 1 presents the architecture of the AI2 framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The NLP method, through a chatbot, allows communication with the framework methods and the data using the English language.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The kernel of the AI2 framework includes four types of methods: 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Preprocessing methods, 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Machine learning methods, 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' GHG methods, and 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Explainability methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The preprocessing interface method is done systematically once for each dataset when used for the first time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The chatbot guides this user throughout the process.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' It consists of a series of questions to the user about the dataset and each feature/class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The chatbot asks about the type of each field and its normalization method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The machine learning methods are the classic supervised and unsupervised learning methods: classifiers, regressors, clustering, dimensionality reduction and a method to evaluate the importance of the features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' There are also some new methods like Decision Process for Dimensionality Reduction (DPDR) [10], Decision Process for Dimensionality Reduction before Clustering DPDRC [12], and CK-Means [11].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' There are also some functions assuring the GHG awareness of the framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Based on the CodeCarbon library [22], those functions compute the generated GHG for each request.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Before launching a request, the GHG functions will predict the GHG generated for this request.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' They will try to find equivalent requests using clustering methods to save the execution of the subsequent request, thus, saving the generation of GHG.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The explainability methods offer a complement to the standard machine learning results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The user gets more than the expected results for his request.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' He gets a well-documented explanation for 3 Figure 1: Architecture of the AI2 framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' every result.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The form of the explanation varies according to the used algorithm and data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Some examples (like learning curves and the importance of features graphic) are described in the use cases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The different machine learning methods are divided into three modules.' 
Module 1 includes the preprocessing tools (encoding, normalization, data augmentation/imputation, graphics). Module 2 consists of the supervised learning tools (classifiers, regressors, and the computation of feature importance). Finally, Module 3 exploits the unsupervised learning tools (clustering and dimensionality reduction methods). All the results are given in two forms: the expected results and the explained results.
AI2's functions can also be called without using its NLP interface; calling the Python functions directly, without the English chatbot, is straightforward. The user is responsible for obtaining his own datasets; no sample dataset is included in this first version of AI2.
2.2 NLP methods (chatbot)
A distinguishing feature of this machine learning framework is its ability to communicate with a user through a chatbot based on NLP. The chatbot used by the AI2 user interface is built with the Transformer technology and is therefore based on a state-of-the-art NLP model. In the AI2 context, the Transformer technology is used with the BERT technology. Fig. 2 presents the NLP architecture of the AI2 framework. The chatbot uses two types of questions, requiring two different types of pre-trained NLP models.
It is essential to note the difference between the datasets that can be processed by AI2's methods and the pre-trained NLP models used by the chatbot.
Figure 2: Architecture of the NLP interface in the AI2 framework.
The standard NLP answering module, using Bert-large-uncased-whole-word-masking-finetuned-squad, responds to open questions such as: What is the dataset?. As displayed in Table 1, this question is associated with the DATASET key. The chatbot will try every question having this key to fill out the dataset information. A typical answer to this request can be iris, for the iris dataset. The pre-trained model Bert-large-uncased-whole-word-masking-finetuned-squad [14] is used to answer this type of question. It is a model pretrained on the English language using a Masked Language Modeling (MLM) objective.
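As an illustration only (not code from the AI2 authors), this open-question lookup can be reproduced with the Hugging Face transformers library; the model identifier is the one named in the text, and the example command is the paper's running example:

from transformers import pipeline

# Extractive question answering over the user's English command.
# The SQuAD-finetuned BERT model returns a text span together with a confidence score.
qa = pipeline("question-answering",
              model="bert-large-uncased-whole-word-masking-finetuned-squad")

command = "I want to perform a clustering using 3 clusters on the iris dataset."
result = qa(question="What is the dataset?", context=command)
print(result["answer"], result["score"])  # a span such as "the iris dataset" and its confidence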
The second type of question is the Yes/No question, handled using the Roberta-large [21] pre-trained model. A typical question would be Is this a clustering problem?. The two possible answers are Yes and No, both associated with a certain level of confidence. As presented in Table 1, this question is associated with the PROBLEM key and the CLUSTERING return value; if the answer to this question is Yes, CLUSTERING is returned as the value used to fill out the information. As mentioned earlier, every question related to a key is asked in one of these two forms. The NLP system returns an answer and a confidence level for each question, and the answer with the best level of confidence is kept. The methodology used to train both pre-trained models, including the confidence formulas, is documented in [14] and [21]. It consists of applying a softmax function on the logits values; the logits are the output of a BERT-based Transformer, a list of the most probable answers.
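A compact numerical sketch of this confidence computation, with arbitrary logit values, could look as follows:

import numpy as np

def softmax(logits):
    # Shift by the maximum for numerical stability before exponentiating.
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

# Arbitrary logits for two candidate answers (e.g. Yes / No).
confidence = softmax(np.array([3.1, 0.4]))
print(confidence)  # the answer with the highest value is the one retained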
The following describes how the chatbot works. The chatbot first asks "Please, enter your English command to the framework". The system explicitly asks for an English command to avoid confusion with the programming-language-based commands used in other frameworks; the expected command is an English instruction to the AI2 framework. A typical command could be "I want to perform a clustering using 3 clusters on the iris dataset.". From this first command, the chatbot reads a Parameters.csv file storing the structure of the required keys, the return values, and the questions used to access the information. There is no specific order for the keys in this file; the system requests the keys to get the related information. For now, there are 73 rows defined in this file. Those rows designate 19 keys and the questions used to access them, and several questions may retrieve each key. It is essential to understand that the framework uses those questions to extract pieces of information from the user command; the questions are entirely transparent to the user. This file will grow with the new releases of the AI2 framework. Table 1 presents a sample of this file. The Key field identifies the information to retrieve. For instance, if AI2 seeks the type of problem in the user's command, it will find all the PROBLEM rows. It will then interrogate the user's command with all the corresponding Questions fields and keep the answer having the highest level of confidence according to the Transformer.
The answer to the question will be returned, except if it is a Yes/No question; in this case, the Return value field will be used. For instance, if the AI2 system replies Yes to the question Is this a clustering problem?, then the returned value will be CLUSTERING. The Type field indicates Y/N for Yes/No questions and Std. for standard questions.

Table 1: Sample of the Parameters.csv file. Only the data used for this example is presented (14 rows out of a total of 73).

Key       Type  Return value     Questions to the command
PROBLEM   Y/N   DIMENSIONALITY   Is this about dimensionality?
PROBLEM   Y/N   DIMENSIONALITY   Is this about dimensionality reduction?
PROBLEM   Y/N   CLASSIFICATION   Is this about classification?
PROBLEM   Y/N   CLASSIFICATION   Is this a classification problem?
...       ...   ...              ...
PROBLEM   Y/N   CLUSTERING       Is this clustering?
PROBLEM   Y/N   CLUSTERING       Is this a clustering problem?
PROBLEM   Y/N   CLUSTERING       Is this regrouping?
PROBLEM   Y/N   CLUSTERING       Is this a regrouping problem?
PROBLEM   Y/N   CLUSTERING       Do you want to regroup data?
PROBLEM   Y/N   CLUSTERING       Do you want to cluster data?
DATASET   Std.                   What is the dataset?
DATASET   Std.                   Which data are used?
NB_CLST   Std.                   How many groups?
NB_CLST   Std.                   How many clusters?
...       ...   ...              ...
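The exact layout of Parameters.csv is not given beyond Table 1. Assuming a standard comma-separated file with the column names shown above, it could be grouped by key as in the following sketch (illustrative only, not the framework's actual code):

import csv
from collections import defaultdict

# Group the questions of Parameters.csv by key, keeping the Type and Return value
# so that a Yes answer to a Y/N question can later be mapped back to its return value.
questions_by_key = defaultdict(list)
with open("Parameters.csv", newline="") as f:
    for row in csv.DictReader(f):  # assumed columns: Key, Type, Return value, Questions
        questions_by_key[row["Key"]].append(
            (row["Type"], row["Return value"], row["Questions"]))

# questions_by_key["PROBLEM"] now holds every question that can identify the problem type.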
Systematically, the chatbot will first try to fill the PROBLEM key: it must know what kind of problem it is. To find this out, the list of questions corresponding to the PROBLEM key is processed by the chatbot. If the chatbot can return an answer, the problem information (corresponding to PROBLEM in the Key field of Table 1) will be filled. If the first question returns no answer, the other questions corresponding to the key will be tried to extract the information. If no answer can be found after all the questions have been tried, the chatbot will prompt the user directly: No problem to resolve has been found in your text. Please clearly identify the type of problem to solve. Then, the algorithm moves on to the second and third required keys: the DATASET key and the NB_CLST (number of clusters) key. The interface asks for every crucial piece of information; when a parameter is not mandatory, its default value is assumed. The same principle is repeated for every required parameter. An example of a complete sequence is illustrated in Table 2. Remember that the questions are not directly addressed to the user but to his command, aiming to extract the meaningful information needed to execute his request.

Table 2: Example of a typical command and the question sequence used to extract the information of the command: I want to perform a clustering using 3 clusters on the iris dataset.
Questions                                  Answer                 Ret. value
(To extract the type of the problem)
Is this about dimensionality?              No                     None
Is this about dimensionality reduction?    No                     None
Is this about classification?              No                     None
Is this a classification problem?          No                     None
Is this clustering?                        Yes                    CLUSTERING
(To extract the name of the dataset)
What is the dataset?                       Iris                   Iris
(To extract the number of clusters)
How many groups?                           (No suitable answer)   None
How many clusters?                         3                      3

In this example, the answer is No for the first four questions, since the command is neither about dimensionality reduction nor classification. Since the command is about a clustering problem, the answer is Yes to the question Is this clustering?. Since the value Yes alone would not mean anything, the corresponding return value from Table 1, CLUSTERING, is returned. After the problem type has been extracted, the dataset name is required; the question What is the dataset? extracts it. The answer is Iris, and the returned value is also Iris. The last required piece of information is the number of clusters needed for the clustering algorithm. There are at least two ways of asking this question, since groups and clusters are synonyms. The question How many groups? is tried first; since the command uses the term clusters, no suitable answer is found for it. The second question, How many clusters?, is then tried; the answer and the returned value are 3. From this point, AI2 has all the required information to launch a clustering algorithm using the Iris dataset and 3 clusters. Some more complete examples are shown in Section 3.
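The extraction sequence of Table 2 can be summarized by the following sketch, which reuses the qa pipeline and questions_by_key mapping from the earlier snippets; yes_no() stands in for the Roberta-based Yes/No module, and the confidence threshold is an assumption, not a documented value:

def fill_key(key, command, threshold=0.2):
    # Try every question attached to the key against the user's command and
    # keep the answer with the best level of confidence, as in Table 2.
    best_value, best_score = None, 0.0
    for qtype, return_value, question in questions_by_key[key]:
        if qtype == "Y/N":
            # Hypothetical helper: returns ("Yes" or "No", confidence).
            answer, score = yes_no(question, command)
            value = return_value if answer == "Yes" else None
        else:
            result = qa(question=question, context=command)
            value, score = result["answer"], result["score"]
        if value is not None and score > best_score:
            best_value, best_score = value, score
    # None means the chatbot must ask the user directly for this key.
    return best_value if best_score >= threshold else None

command = "I want to perform a clustering using 3 clusters on the iris dataset."
problem = fill_key("PROBLEM", command)     # expected: "CLUSTERING"
dataset = fill_key("DATASET", command)     # expected: a span naming the iris dataset
n_clusters = fill_key("NB_CLST", command)  # expected: "3"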
2.3 Preprocess module
The preprocessing method is run systematically once for each dataset when it is used for the first time. AI2 detects when no dataset configuration has been done and stored in a JSON file. The chatbot then asks for the correct configuration of every field, such as its name, its role in the dataset, and its normalization method. In the end, the dataset's configuration is stored in a JSON file, and the dataset is preprocessed and stored using the same file name with a _preprocessed suffix added. The chatbot finally asks the user whether he wants to perform an imputation of the missing data and a data augmentation. Fig. 3 presents the functionalities of the preprocess module. First, a dataset name is given to the module.
If a preprocessed version of the dataset already exists, the module will open it and divide it into train and test data. If the preprocessed file does not exist, the system will try to find the corresponding JSON file. If the JSON file exists, the system will use it to build the preprocessed file. If it does not exist, the AI2 chatbot will guide the user through some questions about the fields, create the final JSON file containing the structure of the dataset, and then create the preprocessed dataset from this JSON file. Ultimately, it will also split the data into train and test data.

Figure 3: Preprocessing architecture.

Fig. 4 shows an example of a structure configuration JSON file. The fields included in the JSON format are the following: dataset_name is the name of the dataset; dataset_description is a description of the dataset; feat_no is the number of the feature; feat_label is the label given to this feature. The type of the feature is given by feat_type; possible values are 1. Feature field, 2. Regression value field, 3. Class field, and 4. Class for neural network field (to be one-hot encoded).
The last field is feat_normalization; possible values are 1. No normalization, and 2. MinMax normalization.

iris.json
{
  "dataset_name": "iris",
  "dataset_description": "iris dataset",
  "feat_no": [ 0, ... ],
  "feat_label": [ "Petal length in cm", ... ],
  "feat_type": [ "1", ... ],
  "feat_normalization": [ "1", ... ]
}

Figure 4: iris.json structure file.

Fig. 5 shows an example of an exchange between the chatbot and the user, aiming to preprocess the data.
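Tying this flow together, the decisions of Fig. 3 and the naming conventions above (a <name>.json structure file, a <name>_preprocessed.csv output) could be rendered schematically as follows; the helper functions, the target column name, and the split ratio are assumptions made for illustration:

import os
import pandas as pd
from sklearn.model_selection import train_test_split

def load_dataset(name, test_size=0.2):
    preprocessed = f"{name}_preprocessed.csv"
    structure = f"{name}.json"
    if not os.path.exists(preprocessed):
        if not os.path.exists(structure):
            build_structure_with_chatbot(name)   # hypothetical: the chatbot builds <name>.json
        preprocess_from_json(name, structure)    # hypothetical: applies types and normalization
    data = pd.read_csv(preprocessed)
    X = data.drop(columns=["target"])            # assumed name of the class/target column
    y = data["target"]
    return train_test_split(X, y, test_size=test_size)

x_train, x_test, y_train, y_test = load_dataset("iris")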
2.4 Machine learning methods
Any framework requires a tremendous amount of development hours. This framework is still in development, yet it already has some contributions to bring to the scientific community. Some well-known algorithms are included, covering most machine learning problems (prediction, classification, and others). Table 3 shows the algorithms included in AI2.

Let us preprocess the iris dataset.
Please, answer the following questions:
What is the description of the iris dataset (ENTER to skip)?
>This dataset describes the features and the class of the iris dataset.
What is the name of the field 0? (Value example: 5.1)
>Sepal length in cm
What is the type of field Sepal length in cm? (1. Feature 2. Predicted value 3. Class 4. Class (to be converted ONE-HOT for neural network))
>1
What is the normalization applied to Sepal length in cm? (1. None 2. MinMax)
>1
(... and so on for each feature and class.)
Saving dataset configuration...
The configuration is saved to iris.json
Processing to the file conversion...
The configuration is saved to iris_preprocessed.csv

Figure 5: An example of the exchange between the chatbot and the user for the data preprocessing.

Table 3: Machine learning algorithms included in the AI2 framework.

No.  Module                  Algorithm
1.   Pre-processing          IQR
2.                           SMOTE
3.                           KNNImputer
4.                           xGEWFI metric
5.   Supervised learning     Neural network regressor
6.                           Neural network classifier
7.                           Random Forest
8.   Unsupervised learning   K-Means
9.                           CK-Means
10.                          Silhouette metric
11.                          PCA
12.                          DPDRC
13.                          DPDR
14.                          FRSD
The pre-processing methods (Module 1) are regrouped into one callable function. This function performs the whole process of finding the outliers, augmenting the data, and imputing the missing data. The recent explainable metric named xGEWFI [9] is used to evaluate the performance of the data generation (imputation and augmentation); it considers the importance of each feature and each feature's error to evaluate the global error of the data generation process. The Inter Quartile Range (IQR) algorithm is used to find the outliers. Data generation (augmentation and imputation of missing data) is done with a SMOTE algorithm [7] and a KNNImputer [36], respectively.
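A minimal sketch of how these standard building blocks fit together with scikit-learn and imbalanced-learn is given below; parameter values are illustrative, X and y are assumed to be NumPy arrays, and the xGEWFI evaluation step is omitted:

import numpy as np
from sklearn.impute import KNNImputer
from imblearn.over_sampling import SMOTE

def iqr_outlier_mask(X, k=1.5):
    # Flag rows falling outside [Q1 - k*IQR, Q3 + k*IQR] on any feature.
    q1 = np.nanpercentile(X, 25, axis=0)
    q3 = np.nanpercentile(X, 75, axis=0)
    iqr = q3 - q1
    return ((X < q1 - k * iqr) | (X > q3 + k * iqr)).any(axis=1)

def prepare(X, y):
    mask = iqr_outlier_mask(X)                        # 1. detect outliers with IQR
    X, y = X[~mask], y[~mask]
    X = KNNImputer(n_neighbors=5).fit_transform(X)    # 2. impute missing values
    X, y = SMOTE(random_state=0).fit_resample(X, y)   # 3. augment the minority classes
    return X, y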
Some neural networks (multilayer perceptrons performing regressions and classifications) [31] are available as supervised learning functions (Module 2). A Random Forest (RF) algorithm [5] is used as a classifier and regressor; it is also used to evaluate the importance of the features.
Some unsupervised learning methods (Module 3) are also available. The K-means algorithm [4] can be executed for clustering problems, and the CK-Means algorithm [11] can be called to extract data from the clusters' intersections. The metric used to evaluate the cluster consistency of those first two algorithms is the Silhouette Index (SI) [34]. Concerning dimensionality reduction, the Principal Component Analysis (PCA) algorithm [17] is included in the AI2 framework. Two new decision processes are also included to help with dimensionality reduction problems: 1. Decision Process for Dimensionality Reduction before Clustering (DPDRC) [12] and 2. Decision Process for Dimensionality Reduction (DPDR) [10]. These two are used in unsupervised and supervised learning contexts, respectively. In an unsupervised learning context, the Feature Ranking Process Based on Silhouette Decomposition (FRSD) [40] helps evaluate the importance of the features.
2.5 GHG Methods - CodeCarbon integration in AI2
Climate change is an essential issue for humanity. It is our responsibility to be aware of it and to do everything that can be done to contribute to lowering GHG. We know that computer science, and machine learning in particular, can generate significant GHG while executing on CPUs and GPUs. The CodeCarbon library is an important initiative available to data scientists so that they can be aware of their impact on GHG. The following quote can be found on the CodeCarbon website (at pypi.org/project/codecarbon/), based on [22]: "While computing currently represents roughly 0.5% of the world's energy consumption, that percentage is projected to grow beyond 2% in the coming years, which will entail a significant rise in global CO2 emissions if not done properly.
Given this increase, it is important to quantify and track the extent and origin of this energy usage, and to minimize the emissions incurred as much as possible. For this purpose, we created CodeCarbon, a Python package for tracking the carbon emissions produced by various kinds of computer programs, from straightforward algorithms to deep neural networks. By taking into account your computing infrastructure, location, usage and running time, CodeCarbon can provide an estimate of how much CO2 you produced, and give you some comparisons with common modes of transportation to give you an order of magnitude."
The contribution of this paper is to embed this library's features in a machine learning framework, to add some machine learning-based functions to predict the amount of GHG of the next request, and to try to spare its execution by proposing some alternatives. Fig. 6 explains those embedded GHG functionalities. First, every GHG statistic (request name, machine learning algorithm used, dataset, number of data, fields, elapsed time, GHG emissions) is stored in a file. When a user is about to launch a new request, the AI2 framework tries to predict, from this stored history, the amount of GHG that this new request will generate. A multilayer perceptron (MLP) is used to evaluate this GHG amount; this MLP has 5 hidden layers of 25 neurons and uses a relu activation function and an adam solver. Then, a k-means clustering algorithm is used to regroup every request similar to the current one. The list is proposed to the user so that he can spare his execution, with some similar results already available from the history. Knowing how much GHG will be generated and knowing the similar results from the past, the user finally decides whether or not to execute his new request.

Figure 6: GHG module architecture.
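The GHG workflow above could be sketched as follows; the request feature encoding and the synthetic history are placeholders, the MLP hyperparameters follow the text (5 hidden layers of 25 neurons, relu activation, adam solver), and the number of clusters used to regroup similar requests is an assumption:

import numpy as np
from codecarbon import EmissionsTracker
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the stored history: one numeric feature vector per past
# request (e.g. algorithm id, number of rows, number of fields) and its measured GHG.
rng = np.random.default_rng(0)
X_hist = rng.random((50, 3))
ghg_hist = rng.random(50) * 1e-4          # kg CO2 per past request

# Predict the GHG of the upcoming request with the MLP described in the text.
predictor = MLPRegressor(hidden_layer_sizes=(25,) * 5, activation="relu",
                         solver="adam", max_iter=2000)
predictor.fit(X_hist, ghg_hist)
new_request = rng.random((1, 3))          # placeholder encoding of the new request
predicted_ghg = predictor.predict(new_request)[0]

# Regroup the past requests and list those falling in the same cluster as the new one.
km = KMeans(n_clusters=5, random_state=1).fit(X_hist)
similar = np.where(km.labels_ == km.predict(new_request)[0])[0]

# If the user confirms, execute the request while measuring its real emissions.
tracker = EmissionsTracker()
tracker.start()
# ... run the machine learning request here ...
emissions_kg = tracker.stop()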
Fig. 7 presents an example of the information given by the chatbot concerning the GHG before launching a new request.

Predicted execution time (in sec): 4.498
Predicted generated GHG: 4.899e-05 kg CO2
Here are the most similar requests in case launching another request can be avoided.
Predicted execution time (in sec): 4.498
Predicted generated GHG: 4.899e-05 kg CO2
Here are the most similar requests in case launching another request can be avoided.
Request _2022-11-21_21-23-43 using dataset make_blob
Request _2022-11-22_13-54-45 using dataset make_blob
Request _2022-11-22_14-29-32 using dataset make_blob
Launch the request (y/n)?

Figure 7: Information from the chatbot concerning the GHG before launching a new request.

3 Results

The following presents five functional use cases. They emphasise the singularity of the AI2 framework. It shows how a user can execute some requests with this framework and what type of results is presented as output. The output graphics, tables, and texts are not presented in this paper for two reasons:
1. It is not what this paper intends to demonstrate. For instance, there is no need to show the result of a simple k-means clustering process.
2. Too many graphics, tables and texts would have been needed in this paper.
Case 1 to case 5 present a clustering, a reduction of dimensionality, a classification, a prediction, and an evaluation of the features' importance.

3.1 Case 1: Clustering

The first case is about a clustering process. As mentioned earlier, the user must write the query in English in the chatbot. For this first case, the following command has been entered: I want to perform a clustering using iris dataset and having 3 clusters.
From the Parameters.csv file, of which a sample is presented in Table 4, the following questions (Table 4) will be generated by the chatbot to fill in the required information about a clustering process:

Table 4: Required information and questions to access it.

Key      Type  Return value  Questions
PROBLEM  Y/N   CLUSTERING    Is this clustering?
PROBLEM  Y/N   CLUSTERING    Is this a clustering problem?
PROBLEM  Y/N   CLUSTERING    Is this regrouping?
PROBLEM  Y/N   CLUSTERING    Is this a regrouping problem?
PROBLEM  Y/N   CLUSTERING    Do you want to regroup data?
PROBLEM  Y/N   CLUSTERING    Do you want to cluster data?
DATASET  Std.                What is the dataset?
DATASET  Std.                Which data are used?
NB_CLST  Std.                How many clusters?
NB_CLST  Std.                How many groups?

At this first step, AI2 transparently tries to find the answers in the command entered by the user. After this first step, if AI2 misses some information, the chatbot will ask for it until every critical piece of information is defined. From this example, the iris dataset is loaded and a k-means algorithm is launched with the parameter n_clusters = 3, using the default parameters random_state = 1 and init = "k-means++".
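Under the hood, this corresponds to roughly the following scikit-learn call (a sketch built from the parameters quoted above, not AI2's internal code):

    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris

    X = load_iris().data

    # Parameters extracted from the command (n_clusters=3) plus the defaults
    # mentioned in the text (random_state=1, init="k-means++").
    kmeans = KMeans(n_clusters=3, init="k-means++", random_state=1, n_init=10)
    clusters = kmeans.fit_predict(X)

    print(clusters[:10])            # cluster assigned to the first ten samples
    print(kmeans.cluster_centers_)  # centroids behind the radar profiles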
The primary results are displayed, presenting a data table along with the assigned clusters, which is what most frameworks would do. Using AI2, each graphic, table and text can be called using the explain() method. In this first case, stacked radar graphics are generated for each cluster, allowing the profile of every cluster to be visualized. It also generates a graphic of the Silhouette index, showing and measuring the consistency of every cluster and giving the mean over the whole clustering process. For each table and graphic, a short text describing it is generated in LaTeX format.

3.2 Case 2: Reduction of dimensionality

The second case is about the reduction of dimensionality. The entered command was: reduction of dimensionality with iris dataset and having 3 components. The only required parameter is the targeted number of components that should be used to downsize the dataset. If this parameter is not specified in the command, the chatbot will directly ask to specify it. Since it is defined in this case's command, AI2 will extract three components of the dataset using the PCA algorithm.
Always from the Parameters.csv file, the questions shown in Table 5 will be generated by the chatbot to fill in the required information about a reduction of dimensionality process:

Table 5: Required information and questions to access it.

Key      Type  Return value    Questions
PROBLEM  Y/N   DIMENSIONALITY  Is this about dimensionality?
PROBLEM  Y/N   DIMENSIONALITY  Is this about dimensionality reduction?
PROBLEM  Y/N   DIMENSIONALITY  Is this about reduction of dimensionality?
PROBLEM  Y/N   DIMENSIONALITY  Is this a regrouping problem?
PROBLEM  Y/N   DIMENSIONALITY  Is this a dimensionality problem?
PROBLEM  Y/N   DIMENSIONALITY  Is this a dimensionality reduction problem?
DATASET  Std.                  What is the dataset?
DATASET  Std.                  Which data are used?
NB_CMPS  Std.                  How many components?

The result is a dataset having three principal components (reduced with the PCA algorithm). The explain() method generates two graphics: 1. the covariance heatmap of the initial features; 2. a bar graph of the three extracted features' importance (explained variance ratio). For both graphics, a short LaTeX text explaining it is generated.
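A rough scikit-learn equivalent of this case (again a sketch rather than AI2's code) reduces the iris data to three components and exposes the two quantities behind the graphics, the covariance of the initial features and the explained variance ratio:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA

    X = load_iris().data

    # Covariance of the initial features (the data behind the heatmap).
    print(np.cov(X, rowvar=False))

    # Reduction to the 3 components requested in the command.
    pca = PCA(n_components=3)
    X_reduced = pca.fit_transform(X)

    print(X_reduced.shape)                 # (150, 3)
    print(pca.explained_variance_ratio_)   # importance of each extracted component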
3.3 Case 3: Classification

The following case is about the typical problem of classification. For this case, a multiple-sentence English command is given: Perform a classification of the iris dataset. I want this request to be reproducible. Test [4.8,3.0,1.4,0.2] value.

These sentences are written in a single command. The first sentence of the command is straightforward: it calls a classification of the iris dataset. To do so, it will call a multilayer perceptron (MLPClassifier from the Scikit-learn framework). The second sentence mentions that it requires reproducible results. This will set the seed of the random_state parameter to the integer value "1", ensuring the request gives the same result every time. The opposite would have been a "random request": the seed would have been set to None, allowing the request to give slightly different results due to some random synaptic connection initialization. If it is not specified, the request is reproducible. The final sentence commands to try some values; in other words, it aims to classify the specified values [4.8,3.0,1.4,0.2].
The questions in Table 6 will be extracted from the text command.

Table 6: Required information and questions to access it.

Key      Type  Return value    Questions
PROBLEM  Y/N   CLASSIFICATION  Is this about classification?
PROBLEM  Y/N   CLASSIFICATION  Is this a classification problem?
PROBLEM  Y/N   CLASSIFICATION  Do you want to classify data?
DATASET  Std.                  What is the dataset?
DATASET  Std.                  Which data are used?
RANDOM   Y/N   RANDOM          Is this a random request?
RANDOM   Y/N   REPRODUCTIBLE   Is this a reproductible request?
TEST     Std.                  What are the test values?
TEST     Std.                  What values do you want to be tested?

The classification result will then be shown. The training is done with cross-validation with the parameter k = 10. The whole dataset is split k times, and the subsets are used to validate the process.
The training and validation scores are returned for each step of the cross-validation. While both scores are increasing, the training may continue the learning process. When the training score is still increasing while the validation score starts to decrease, it is precisely the right time to stop the training process: stopping before that moment yields under-fitted training, and stopping after that point results in over-fitted training. Calling the explain() method, a learning curve of both the training score and the validation score is generated based on the cross-validation. A state-of-the-art method executes the neural network to classify the data. Earlier in the process, the train and test data were split, allowing the algorithm to train and to evaluate the performances. Performance graphics are also created, showing the performance of the training. Scalability graphics show the ratio of the number of processed data to processing time. Like the other cases, LaTeX texts are generated to explain every graphic.
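This classification case can be sketched as follows (the exact network architecture is not specified in the text, so scikit-learn's default MLPClassifier is used; random_state=1 mirrors the reproducible request and cv=10 matches k = 10):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import learning_curve, train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=1)

    clf = MLPClassifier(max_iter=2000, random_state=1)  # fixed seed -> reproducible

    # Learning curve from 10-fold cross-validation: the training and validation
    # scores that explain() plots to detect under- or over-fitting.
    sizes, train_scores, val_scores = learning_curve(
        clf, X_train, y_train, cv=10, shuffle=True, random_state=1)
    print(train_scores.mean(axis=1))
    print(val_scores.mean(axis=1))

    clf.fit(X_train, y_train)
    print("Test accuracy:", clf.score(X_test, y_test))
    print("Class of [4.8, 3.0, 1.4, 0.2]:", clf.predict([[4.8, 3.0, 1.4, 0.2]]))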
3.4 Case 4: Prediction

This case aims to demonstrate the prediction feature of the AI2 framework, using the MLPRegressor from Scikit-learn. It also shows how to preprocess a dataset before calling an algorithm. This preprocessing can be called in the chatbot. In this case, the following English command is given: Do the preprocess of the iris2 dataset.

Note that the iris2 dataset is identical to the iris dataset, except that the class field is not included. Selecting the columns of a dataset is not included in this first version of AI2, but it will be in a later version. The iris2 dataset remains with four features: sepal length, sepal width, petal length and petal width. The value of the petal width must be predicted. When responding to the chatbot's questions, the user must specify that the first three fields are non-normalized features and the fourth is a regression value. After responding to the questions in the chatbot, the iris2.json file is created, containing the information about the configuration. The iris2_preprocessed.csv data file is also created, containing the preprocessed data. A second command can be sent to AI2 using the chatbot: I want to make a prediction using the iris dataset. Test [4.5,3.1,1.2].
The questions in Table 7 will be extracted from the text command.

Table 7: Required information and questions to access it.

Key      Type  Return value  Questions
PROBLEM  Y/N   PREDICTION    Do you want to make a prediction?
PROBLEM  Y/N   PREDICTION    Is this a prediction problem?
PROBLEM  Y/N   PREDICTION    Do you want to predict something?
DATASET  Std.                What is the dataset?
DATASET  Std.                Which data are used?
TEST     Std.                What are the test values?
TEST     Std.                What values do you want to be tested?

Three graphics are generated to explain the results, as in 3.3. A learning curve is displayed to ensure no training underfitting or overfitting. A second graphic shows the performance of the training process. Moreover, a third graphic shows the scalability of the training. As always, LaTeX texts are created to explain the figures, ready to be cut and pasted into a LaTeX document.
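A minimal sketch of this prediction case, under the assumption stated above that the first three iris features are used to predict the petal width (the regressor's architecture is scikit-learn's default, since the text does not specify it):

    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPRegressor

    X_full = load_iris().data
    X, y = X_full[:, :3], X_full[:, 3]   # three features -> petal width to predict

    reg = MLPRegressor(max_iter=5000, random_state=1).fit(X, y)

    # Test values taken from the command in the text.
    print("Predicted petal width for [4.5, 3.1, 1.2]:",
          reg.predict([[4.5, 3.1, 1.2]])[0])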
3.5 Case 5: Feature's importance

This next case shows how to evaluate the feature importance in the AI2 framework. The following command has been typed in AI2's chatbot: Find the importance of the features with the iris dataset.

This command calls a Random Forest algorithm; more precisely, the RandomForestClassifier and the RandomForestRegressor from the Scikit-learn framework. According to the configuration file's content (iris.json in this case), it will detect whether the dataset is made for regression or classification. In this case, iris is a dataset made for classification, so the RandomForestClassifier algorithm will be used. From the Parameters.csv file, the questions shown in Table 8 are asked by the chatbot to fill in the information about the feature importance algorithm:

Table 8: Required information and questions to access it.

Key      Type  Return value  Questions
PROBLEM  Y/N   FEAT_IMP      Is this about feature importance?
PROBLEM  Y/N   FEAT_IMP      Is this about the importance of the features?
PROBLEM  Y/N   FEAT_IMP      Is this a feature importance problem?
PROBLEM  Y/N   FEAT_IMP      Do you want to know the feature importance?
DATASET  Std.                What is the dataset?
DATASET  Std.                Which data are used?

The explain() method gives a graphic where the X axis represents the index of the features and the Y axis shows each feature's normalized level of importance. A LaTeX explanation text is generated as usual.
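Since iris is a classification dataset, the underlying call is essentially the following (a sketch, not AI2's internal code; the bar graph mirrors the importance plot produced by explain()):

    import matplotlib.pyplot as plt
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    data = load_iris()
    forest = RandomForestClassifier(random_state=1).fit(data.data, data.target)

    # Normalized importance of each feature, as in the explain() graphic:
    # x axis = feature index, y axis = normalized importance.
    importances = forest.feature_importances_
    plt.bar(range(len(importances)), importances)
    plt.xticks(range(len(importances)), data.feature_names, rotation=45, ha="right")
    plt.ylabel("Normalized importance")
    plt.tight_layout()
    plt.savefig("feature_importance.png")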
3.6 GHG algorithms validation

As stated in 2.5, the AI2 framework predicts the GHG for each algorithm to be executed. The execution time is also predicted before calling the machine learning algorithm. To validate those predictions, a clustering algorithm has been called within a 50-iteration loop. For each execution, a random-sized dataset of 10,000 to 50,000 rows and 5 to 20 features has been used. Those datasets were generated with the make_blobs() function of the scikit-learn framework. Fig. 8 shows the validation of the predicted and real values of the generated GHG. The X axis displays the 50 iterations, and the Y axis shows the level of GHG (in kg CO2). The regression algorithm was trained on a dataset of 1382 rows containing the request history.

Figure 8: Validation of the predicted and real GHG
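This validation loop can be reproduced along the following lines (a sketch: the emission measurement relies on CodeCarbon's EmissionsTracker, the clustering parameters are assumptions, and the comparison with the predicted values is omitted):

    import time
    import numpy as np
    from codecarbon import EmissionsTracker
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    rng = np.random.default_rng(1)
    measured = []

    for i in range(50):  # 50 iterations, as in the validation described above
        n_rows = int(rng.integers(10_000, 50_001))
        n_feat = int(rng.integers(5, 21))
        X, _ = make_blobs(n_samples=n_rows, n_features=n_feat, random_state=i)

        tracker = EmissionsTracker()
        tracker.start()
        start = time.time()
        KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)  # assumed clustering
        elapsed = time.time() - start
        emissions_kg = tracker.stop()   # kg CO2 reported by CodeCarbon

        measured.append((n_rows, n_feat, elapsed, emissions_kg))

Plotting the measured values against the framework's predictions for each iteration would yield curves of the kind shown in Fig. 8 and Fig. 9.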
Fig. 9 displays the validation of the predicted and actual values of the execution time for every iteration of the loop. The X axis shows the 50 iterations, and the Y axis shows the execution time (in seconds).

Figure 9: Validation of the predicted and real execution time

Here are the most similar requests in case launching another request can be avoided.
Request _2022-11-21_21-23-43 using dataset make_blob
Request _2022-11-22_13-54-45 using dataset make_blob
Request _2022-11-22_14-29-32 using dataset make_blob

Figure 10: iris.json structure file

Concerning the predicted and real GHG and execution time, it can be seen that the signal is reasonably reconstructed. Finally, before launching each request, AI2 proposes similar requests from the request history after extracting this information with a clustering process. Fig. 10 presents an example of the AI2 propositions of similar requests.

4 Discussions

The first contribution of this paper is to present an accessible framework. With its state-of-the-art NLP methods, this machine learning framework is a pioneer in communicating with a non-expert user in English. The new Transformers technology allows the AI2 framework to receive native language commands that are extracted, parsed and executed. When an essential parameter is missing, AI2 will use its chatbot to communicate with the user, asking for the missing information. With this NLP interface, a user can exploit the AI2 framework without knowing how to code with a programming language like Python or others.
The AI2 framework is GHG-aware, and this is the second contribution of this paper. The CodeCarbon library is encapsulated in each of its ML functions, allowing the calculation of the GHG for each algorithm executed. Those GHG records are kept in a register and used to predict, based on ML, the GHG that will be generated before the execution. AI2 also proposes some similar registered requests, also based on ML, to spare this execution and save GHG. Contrary to most other frameworks, AI2 systematically encapsulates the most important formats of explanations about the data and the results. This aspect of the framework is crucial to solving the famous black-box problem, and it is the third contribution of this paper. Most machine learning frameworks do not systematically offer explainability with the results; AI2 does. It generates, for each request, graphics, tables, and texts explaining the results and the data, thus making this framework more ethical than others.

The final contribution of this paper is data preprocessing. It usually takes time to code a suitable preprocessing of the data. The AI2 framework proposes a method based on communication with the chatbot to automate this process. Guided by the AI2 chatbot, the user may do some basic preprocessing of the datasets by establishing the dataset's structure. Having the structure stored in a JSON file, the preprocessing module can generate a new preprocessed dataset.
Comparing AI2 with other machine learning frameworks, what is the advantage of using it? For now, there are frameworks that are more complete and more sophisticated. The AI2 framework targets non-expert users who need a machine-learning algorithm to process their data. Typical AI2 users would be, for instance, researchers, engineers, teachers and students in natural science, and so on. A significant part of the scientific community cannot program complex algorithms using a programming language. An NLP interface is the best solution since it requires no programming skills. Table 9 shows a comparison between AI2 and the other popular machine learning frameworks, according to four criteria: 1. NLP interface, 2. GHG awareness, 3. Explainability, and 4. NLP Preprocessing.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='AIX360 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[3] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='ELI5 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[3] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='Gluon ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[27] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='Keras ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[27] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='LIME ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[25] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='Matlab ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[27] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='MXNet ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[41] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} 
+page_content='Orange ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[8] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='PyTorch ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[27] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='Scikit-learn ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[27] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='SHAP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[25] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='Skater ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[3] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='Tensorflow ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[41] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='What-if Tool ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[25] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='XAI ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[25] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='CodeCarbon ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[22] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='AI2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='YES ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='NO ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='[13] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='Note that some well-known frameworks may seem absent from the list: CNTK and Theano ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content='are no longer supported.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Caffe2 is merged with PyTorch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' According to Table 9, we can regroup 19 the frameworks into three categories: 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The general, multi-purpose frameworks (Gluons, Keras, MXNet, Tensorflow, PyTorch, Matlab, Orange and Scikit-learn) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The Explainability frameworks (AIX360, ELI5, LIME, SHAP, Skater and XAI), and 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The GHG-aware framework (CodeCarbon).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' This table shows AI2’s novelty.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' It is the only framework that combines all the studied criteria (NLP interface, GES awareness, Explainability, Preprocessing, and Coding required).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' It is the first framework to have an NLP interface to send the instructions to the framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Several frameworks integrate the explainability of the data and the models, but no general and multi-purpose framework includes it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' AI2: The next leap toward native language-based, GHG-aware and explainable ML framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' 5 Conclusion This framework proposes a tool for the non-expert to use machine learning methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' It offers an NLP interface so the user can communicate with the framework using a chatbot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' It encap- sulates some very concrete functions to provide ecological awareness.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' It includes the principle of explainability, proposing expanded results explications for different algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' It finally allows preprocessing of data using an English chatbot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' This framework could be the first draft of a long series of improvements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' There are many future works to do for each of its contributions.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Regarding its NLP interface, this framework can be improved by training the pre-trained Transformer on a specific machine learning-oriented text corpus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Likely, the NLP’s performance will significantly improve.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The chatbot method can also be optimized to minimize errors and recognize the user’s intentions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Questions used to extract command information can be improved by increasing the quality and the number of questions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' GHG awareness can be improved.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Better methods can be found to minimize wasted energy, maximize the GHG estimation before calling an algorithm, and cluster similar requests.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' There is a lot to do, but this framework has the merit of being aware of the climate change problem and proposing a modest solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Explanations available for each data and machine learning algorithm can also be optimized in quantity and quality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Some essential explanations are included in this framework, but those need to be systematically included.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Regarding the preprocessing module, there are many things to add.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' For instance, some normalization methods can be added.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' The rows and columns selection can be added to this module, also.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Some graphics can be added to plot data at the preprocessing stage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Finally, this framework contains a limited number of ML algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE1T4oBgHgl3EQfuwU5/content/2301.03391v1.pdf'} +page_content=' Some more ML algorithms can be easily added to the AI2 framework.' 
6 Acknowledgment
This work has been supported by the "Cellule d'expertise en robotique et intelligence artificielle" of the Cégep de Trois-Rivières and the Natural Sciences and Engineering Research Council.

References
[1] Martín Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems.
[2] Acheampong Francisca Adoma, Nunoo-Mensah Henry, and Wenyu Chen. Comparative analyses of BERT, RoBERTa, DistilBERT, and XLNet for text-based emotion recognition. In 2020 17th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), pages 117–121, 2020.
[3] Namita Agarwal and Saikat Das. Interpretable machine learning tools: A survey. In 2020 IEEE Symposium Series on Computational Intelligence (SSCI), pages 1528–1534.
[4] Mohiuddin Ahmed, Raihan Seraj, and Syed Mohammed Shamsul Islam. The k-means algorithm: A comprehensive survey and performance evaluation. Electronics, 9(8):1295, 2020.
[5] Gérard Biau and Erwan Scornet. A random forest guided tour. TEST, 25(2):197–227, 2016.
[6] Jia-Wei Chang, Neil Yen, and Jason C. Hung. Design of a NLP-empowered finance fraud awareness model: The anti-fraud chatbot for fraud detection and fraud classification as an instance. 13(10):4663–4679.
[7] Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16(1):321–357, 2002.
[8] Janez Demšar et al. Orange: Data mining toolbox in Python. The Journal of Machine Learning Research, 14(1):2349–2353, 2013.
[9] Jean-Sébastien Dessureault and Daniel Massicotte. Explainable global error weighted on feature importance: The xGEWFI metric to evaluate the error of data imputation and data augmentation. arXiv:2206.08980.
[10] Jean-Sébastien Dessureault and Daniel Massicotte. DPDR: A novel machine learning method for the decision process for dimensionality reduction. arXiv:2206.08974, 2022.
[11] Jean-Sébastien Dessureault and Daniel Massicotte. ck-means, a novel unsupervised learning method that combines fuzzy and crispy clustering methods to extract intersecting data. arXiv:2206.08982, 2022.
[12] Jean-Sébastien Dessureault and Daniel Massicotte. DPDRC, a novel machine learning method about the decision process for dimensionality reduction before clustering. AI, 3(1):1–21, 2022.
[13] Jean-Sébastien Dessureault and Daniel Massicotte. AI2: A novel explainable machine learning framework using an NLP interface.
[14] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding, 2019.
[15] Lisa R. Goldberg. The Book of Why: The New Science of Cause and Effect, volume 19. Routledge, 2019. https://doi.org/10.1080/14697688.2019.1655928.
[16] Raffaele Guarasci, Stefano Silvestri, Giuseppe De Pietro, Hamido Fujita, and Massimo Esposito. Assessing BERT's ability to learn Italian syntax: A study on null-subject and agreement phenomena.
[17] Ian T. Jolliffe and Jorge Cadima. Principal component analysis: A review and recent developments. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2016.
[18] M. I. Jordan. Serial order: A parallel distributed processing approach. Technical report, June 1985–March 1986. (AD-A-173989/5/XAB; ICS-8604).
[19] Shigeki Karita et al. A comparative study on Transformer vs RNN in speech applications. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 449–456, 2019.
[20] The Institute for Ethical AI & Machine Learning. The institute for ethical AI & machine learning.
[21] Yinhan Liu et al. RoBERTa: A robustly optimized BERT pretraining approach.
[22] Kadan Lottick, Silvia Susai, Sorelle A. Friedler, and Jonathan P. Wilson. Energy usage reports: Environmental awareness as part of algorithmic accountability.
[23] Shivani Malhotra, Vinay Kumar, and Alpana Agarwal. Bidirectional transfer learning model for sentiment analysis of natural language. 12(11):10267–10287.
[24] Maria das Graças Bruno Marietto et al. Artificial Intelligence Markup Language: A brief tutorial.
[25] Sina Mohseni, Niloofar Zarei, and Eric D. Ragan. A multidisciplinary survey and framework for design and evaluation of explainable AI systems.
[26] Anand Motwani, Piyush Kumar Shukla, and Mahesh Pawar. Novel framework based on deep learning and cloud analytics for smart patient monitoring and recommendation (SPMR).
[27] Giang Nguyen et al. Machine learning and deep learning frameworks and libraries for large-scale data mining: A survey. Artificial Intelligence Review, 52(1):77–124, 2019.
[28] Long Ouyang et al. Training language models to follow instructions with human feedback.
[29] Sebastian Palacio, Adriano Lucieri, Mohsin Munir, Sheraz Ahmed, Jörn Hees, and Andreas Dengel. XAI handbook: Towards a unified framework for explainable AI, 2021.
[30] The-Hanh Pham, Vinitha Sree, John Mapes, Sumeet Dua, Oh Shu Lih, Joel E. W. Koh, Edward J. Ciaccio, and U. Rajendra Acharya. A novel machine learning framework for automated detection of arrhythmias in ECG segments. 12(11):10145–10162.
[31] Hassan Ramchoun, Youssef Ghanou, Mohamed Ettaouil, and Mohammed Amine Janati Idrissi. Multilayer perceptron: Architecture optimization and training. International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI), 2016.
[32] Denis Rothman. Transformers for Natural Language Processing: Build Innovative Deep Neural Network Architectures for NLP with Python, PyTorch, TensorFlow, BERT, RoBERTa, and More. Packt Publishing Ltd.
[33] Denis Rothman. Transformers for Natural Language Processing: Build innovative deep neural network architectures for NLP with Python, PyTorch, TensorFlow, BERT, RoBERTa, and more. Packt Publishing Ltd, 2021.
[34] Peter J. Rousseeuw. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20:53–65, 1987.
[35] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning internal representations by error propagation.
[36] Olga G. Troyanskaya, David Botstein, and Russ B. Altman. Missing value estimation. In Daniel P. Berrar, Werner Dubitzky, and Martin Granzow, editors, A Practical Approach to Microarray Data Analysis, pages 65–75. Springer US, 2003.
[37] Ashish Vaswani et al. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
[38] Joost Verbraeken et al. A survey on distributed machine learning. ACM Computing Surveys, 53(2):30:1–30:33, 2020.
[39] Zhaobin Wang, Ke Liu, Jian Li, Ying Zhu, and Yaonan Zhang. Various frameworks and libraries of machine learning and deep learning: A survey. Archives of Computational Methods in Engineering, 2019.
[40] Jaehong Yu, Hua Zhong, and Seoung Bum Kim. An ensemble feature ranking algorithm for clustering analysis. Journal of Classification, 37(2):462–489, 2020.
[41] Kuo Zhang, Salem Alqahtani, and Murat Demirbas. A comparison of distributed machine learning platforms. In 2017 26th International Conference on Computer Communication and Networks (ICCCN), pages 1–9, 2017.
[42] Xingzhou Zhang, Yifan Wang, and Weisong Shi. pCAMP: Performance comparison of machine learning packages on the edges. 2018.
diff --git a/UtFJT4oBgHgl3EQfNixZ/content/2301.11478v1.pdf b/UtFJT4oBgHgl3EQfNixZ/content/2301.11478v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..821ba1e28f3863f6ca725bb116e4d63e5a0aee2b
--- /dev/null
+++ b/UtFJT4oBgHgl3EQfNixZ/content/2301.11478v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4445d045eb1110f3032be7e203203552229c3df21077bc13ba8a5f8216f0bd8c
+size 313118
diff --git a/UtFJT4oBgHgl3EQfNixZ/vector_store/index.faiss b/UtFJT4oBgHgl3EQfNixZ/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..060781d525e4ef93dab909021ec13f45151a0a3a
--- /dev/null
+++ b/UtFJT4oBgHgl3EQfNixZ/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:72b905d5d201318e6c4e1135ff4b8b31d8b088f630db9dd14c4af70cc765b668
+size 983085
diff --git a/UtFJT4oBgHgl3EQfNixZ/vector_store/index.pkl b/UtFJT4oBgHgl3EQfNixZ/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..49229e0fc2c1b9217c3b54145dc56c1955b23563
--- /dev/null
+++ b/UtFJT4oBgHgl3EQfNixZ/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6b0868e38fc7c6738c313d77904608dca01732874ec5fc0564e2f51fc7245da2
+size 39338
diff --git a/W9E2T4oBgHgl3EQfYQe0/content/2301.03853v1.pdf b/W9E2T4oBgHgl3EQfYQe0/content/2301.03853v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8eaa632222f5c31dc06eadcc266ba24f12adc73b
--- /dev/null
+++ b/W9E2T4oBgHgl3EQfYQe0/content/2301.03853v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e151e0f1c43d9a16a01803d7b2e0ca65aa2928e76883933cb42ed51ada6d5c46
+size 1888873
diff --git a/W9E2T4oBgHgl3EQfYQe0/vector_store/index.pkl b/W9E2T4oBgHgl3EQfYQe0/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..a79b56cc36353f991ee40cf1c835be0c4af64977
--- /dev/null
+++ b/W9E2T4oBgHgl3EQfYQe0/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0c1e1e44eee59f30c774cb49837d45f5e9900e85b8be33f78312dc845f4c7153
+size 187551
diff --git a/W9FLT4oBgHgl3EQfTS_4/content/tmp_files/2301.12045v1.pdf.txt b/W9FLT4oBgHgl3EQfTS_4/content/tmp_files/2301.12045v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..19213f48e14657ba29d4b228556826a5ca95de0b
--- /dev/null
+++ b/W9FLT4oBgHgl3EQfTS_4/content/tmp_files/2301.12045v1.pdf.txt
@@ -0,0 +1,4287 @@
Forward screening and post-screening inference in factorial designs
Lei Shi∗
Jingshen Wang†
Peng Ding‡

Abstract
Ever since the seminal work of R. A. Fisher and F. Yates, factorial designs have been an important experimental tool to simultaneously estimate the treatment effects of multiple factors. In factorial designs, the number of treatment levels may grow exponentially with the number of factors, which motivates the forward screening strategy based on the sparsity, hierarchy, and heredity principles for factorial effects.
Although this strategy is intuitive and has been widely used in practice, its rigorous statistical theory has not been formally established. To fill this gap, we establish design-based theory for forward factor screening in factorial designs based on the potential outcome framework. We not only prove its consistency property but also discuss statistical inference after factor screening. In particular, with perfect screening, we quantify the advantages of forward screening based on asymptotic efficiency gain in estimating factorial effects. With imperfect screening in higher-order interactions, we propose two novel strategies and investigate their impact on subsequent inference. Our formulation differs from the existing literature on variable selection and post-selection inference because our theory is based solely on the physical randomization of the factorial design and does not rely on a correctly-specified outcome model.

Keywords: Causal inference; Design-based inference; Forward selection; Post-selection inference.

∗Division of Biostatistics, University of California, Berkeley. leishi@berkeley.edu
†Division of Biostatistics, University of California, Berkeley. jingshenwang@berkeley.edu. Corresponding author.
‡Department of Statistics, University of California, Berkeley. pengdingpku@berkeley.edu

arXiv:2301.12045v1 [stat.ME] 28 Jan 2023

1 Introduction
1.1 Factorial experiments: opportunities and challenges
Ever since the seminal work of Fisher (1935) and Yates (1937), factorial designs have been widely used in many fields, including agricultural, industrial, and biomedical sciences (e.g., Box et al., 2005; Wu and Hamada, 2011; Gerber and Green, 2012). For example, in social science, a government-funded study by Zhang (2022) examined the social construction of hate crime in the U.S. using factorial experiments based on three factors: race, sexual orientation, and religious affiliation. As another example, in ecology, Rillig et al. (2019) studied multiple global change factors in driving soil functions and microbial biodiversity with factorial designs involving up to ten factors, including drought, temperature, and antibiotics. Factorial experiments are popular partially because they can simultaneously accommodate multiple factors and offer opportunities to estimate not only the main causal effects of factors but also their interactions.

We focus on the $2^K$ factorial design in which $K$ binary factors are randomly assigned to $N$ experimental units. With a small $K$, we can simultaneously estimate the $2^K - 1$ main effects and interactions. Nevertheless, when $K$ is large, the number of factorial effects grows exponentially with $K$. This motivates us to conduct factor screening based on sparsity, hierarchy, and heredity principles for factorial effects. More precisely, Wu and Hamada (2011) summarized these three principles as follows:
(a) (sparsity) The number of important factorial effects is small.
(b) (hierarchy) Lower-order effects are more important than higher-order effects, and effects of the same order are equally important.
(c) (heredity) Higher-order effects are significant only if their corresponding lower-order effects are significant.
The sparsity principle motivates conducting factor screening in factorial designs. The hierarchy principle motivates the forward screening strategy that starts from lower-order effects and then moves on to higher-order effects. The heredity principle motivates using structural restrictions on higher-order effects based on the selected lower-order effects. Owing to its simplicity and computational efficiency, the forward screening strategy has been widely used in data analysis (Wu and Hamada, 2011; Espinosa et al., 2016), yet its design-based theory under the potential outcome framework has not been formally established. Moreover, it is often challenging to understand the impact of factor screening on the subsequent statistical inference. The overarching goal of this manuscript is to fill these gaps.

1.2 Our contributions and literature review
We summarize our contribution from three perspectives:
First, our study adds to the growing literature of factorial designs with a growing number of factors under the potential outcome framework (Dasgupta et al., 2015; Branson et al., 2016; Lu, 2016b; Espinosa et al., 2016; Egami and Imai, 2019; Blackwell and Pashley, 2021; Zhao and Ding, 2021; Pashley and Bind, 2023; Wu et al., 2022). To deal with a large number of factors, Espinosa et al. (2016) and Egami and Imai (2019) informally used factor screening without studying its statistical properties, whereas Zhao and Ding (2021) discussed parsimonious model specifications that are chosen a priori and independent of data. The rigorous theory for factor screening is generally missing in this literature, let alone the theory for statistical inference after factor screening. At a high level, our contributions fill the gaps.
Second, we formalize forward factor screening and establish its consistency under the design-based framework with few outcome modeling assumptions; see Section 3. Factor screening in factorial design sounds like a familiar statistical task if we formulate it as a variable selection problem in a linear model. Thus, forward screening is reminiscent of the vast literature on forward selection. Wang (2009) and Wieczorek and Lei (2022) proved the consistency of forward selection for the main effects in a linear model, whereas Hao and Zhang (2014) and Hao et al. (2018) moved further to allow for second-order interactions. Other researchers proposed various penalized regressions to encode the sparsity, hierarchy, and heredity principles (e.g., Yuan et al., 2007; Zhao et al., 2009; Bickel et al., 2010; Bien et al., 2013; Lim and Hastie, 2015; Haris et al., 2016), without formally studying the statistical properties of the selected model. Our design-based framework departs from the literature without assuming a correctly-specified linear outcome model. This framework is classic in experimental design and causal inference with randomness coming solely from the design of experiments rather than the error terms in a linear model (Neyman, 1923/1990; Kempthorne, 1952; Freedman, 2008; Lin, 2013; Dasgupta et al., 2015). This framework invokes fewer outcome modeling assumptions but consequently imposes technical challenges for developing the theory. Bloniarz et al. (2016) discussed the design-based theory for covariate selection in treatment-control experiments, but the corresponding theory for factorial designs is largely unexplored.
Third, we discuss statistical inference after forward factor screening with (Sections 4 and 6) or without perfect screening (Section 5). On the one hand, we prove the screening consistency of the forward screening procedure, which ensures that the selected factorial effects are the true, non-zero ones.
With this perfect screening property, we can then proceed as if the selected working model is the true model. This allows us to ignore the impact of forward screening on the subsequent inference, which is similar to the proposal of Zhao et al. (2021) for statistical inference after Lasso (Tibshirani, 1996). In particular, we quantify the advantages of conducting forward screening based on the asymptotic efficiency gain for estimating factorial effects. As an application under perfect screening, we discuss statistical inference for the mean outcome under the best factorial combination (Andrews et al., 2019; Guo et al., 2021; Wei et al., 2022). On the other hand, we acknowledge that perfect screening can be too much to hope for in practice as it requires strong regularity conditions on factorial effects. As a remedy, we propose two strategies to deal with imperfect screening in higher-order interactions, and we study their impacts on post-screening inference. A key motivation for our strategies is to ensure that the parameters of interest after forward factorial screening are not data-dependent, avoiding philosophical debates in the current literature of post-selection inference (Fithian et al., 2014; Kuchibhotla et al., 2022).

1.3 Notation
We will use the following notation throughout. For asymptotic analyses, $a_N = O(b_N)$ denotes that there exists a positive constant $C > 0$ such that $a_N \leq C b_N$; $a_N = o(b_N)$ denotes that $a_N / b_N \to 0$ as $N$ goes to infinity; $a_N = \Theta(b_N)$ denotes that there exist positive constants $c$ and $C$ such that $c b_N \leq a_N \leq C b_N$.
For a matrix $V$, define $\varrho_{\max}(V)$ and $\varrho_{\min}(V)$ as the largest and smallest eigenvalues, respectively, and define $\kappa(V) = \varrho_{\max}(V) / \varrho_{\min}(V)$ as its condition number. For two positive semi-definite matrices $V_1$ and $V_2$, we write $V_1 \preceq V_2$ or $V_2 \succeq V_1$ if $V_2 - V_1$ is positive semi-definite.
We will use different levels of sets. For an integer $K$, let $[K] = \{1, \ldots, K\}$. We use $\mathcal{K}$ in calligraphic font to denote a subset of $[K]$. Let $\mathbb{K} = \{\mathcal{K} \mid \mathcal{K} \subset [K]\}$ denote the power set of $[K]$. We also use blackboard bold font to denote subsets of $\mathbb{K}$. For example, $\mathbb{M} \subset \mathbb{K}$ denotes that $\mathbb{M}$ is a subset of $\mathbb{K}$.
We will use $A_i \sim B_i$ to denote the least-squares fit of the $A_i$'s on the $B_i$'s, which is purely a numerical procedure without assuming a linear model. Let $\xrightarrow{P}$ denote convergence in probability, and $\rightsquigarrow$ denote convergence in distribution.

2 Setup of factorial designs
This section introduces the key mathematical components of factorial experiments. Section 2.1 introduces the notation of potential outcomes and the definitions of the factorial effects. Section 2.2 introduces the treatment assignment mechanism, the observed data, and the regression analysis of factorial experiment data. Section 2.3 uses a concrete example of a $2^3$ factorial experiment to illustrate the key concepts.

2.1 Potential outcomes and factorial effects
We first introduce the general framework of a $2^K$ factorial design, with $K \geq 2$ being an integer. This design has $K$ binary factors, and factor $k$ can take value $z_k \in \{0, 1\}$ for $k = 1, \ldots, K$. Let $z = (z_1, \ldots, z_K)$ denote the treatment combining all $K$ factors. The $K$ factors in total define $Q = 2^K$ treatment combinations, collected in the set below:
$\mathcal{T} = \{z = (z_1, \ldots, z_K) \mid z_k \in \{0, 1\} \text{ for } k = 1, \ldots, K\}$ with $|\mathcal{T}| = Q$.
We follow the potential outcome notation of Dasgupta et al. (2015) for $2^K$ factorial designs. Unit $i$ has potential outcome $Y_i(z)$ under each treatment level $z$.
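As a concrete illustration of this setup, the following short Python sketch (our own illustration; the manuscript itself contains no code, and all variable names are hypothetical) enumerates the treatment set $\mathcal{T}$ in lexicographic order for $K = 3$ and stores each unit's potential outcomes as an $N \times Q$ "science table" of simulated values.

import itertools
import numpy as np

K = 3                                                     # number of binary factors
treatments = list(itertools.product([0, 1], repeat=K))    # T in lexicographic order
Q = len(treatments)                                       # Q = 2**K = 8

rng = np.random.default_rng(0)
N = 40                                                    # number of experimental units
# Hypothetical science table: row i holds the Q potential outcomes Y_i(z), z in T.
potential_outcomes = rng.normal(size=(N, Q))

print(treatments[:3])            # [(0, 0, 0), (0, 0, 1), (0, 1, 0)]
print(potential_outcomes.shape)  # (40, 8)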
Corresponding to the $Q = 2^K$ treatment levels, each unit $i$ has $Q$ potential outcomes, vectorized as $Y_i = \{Y_i(z)\}_{z \in \mathcal{T}}$ using the lexicographic order. Over units $i = 1, \ldots, N$, the potential outcomes have finite-population mean vector $\bar{Y} = (\bar{Y}(z))_{z \in \mathcal{T}}$ and covariance matrix $S = (S(z, z'))_{z, z' \in \mathcal{T}}$, with elements defined as follows:
$\bar{Y}(z) = \frac{1}{N} \sum_{i=1}^{N} Y_i(z), \qquad S(z, z') = \frac{1}{N-1} \sum_{i=1}^{N} (Y_i(z) - \bar{Y}(z))(Y_i(z') - \bar{Y}(z')).$
We then use the potential outcomes to define factorial effects. For a subset $\mathcal{K} \subset [K]$ of the $K$ factors, we introduce the following "contrast vector" notation to facilitate the presentation. To start with, we define the main causal effect for factor $k$. For a treatment level $z = (z_1, \ldots, z_K) \in \mathcal{T}$, we use $g_{\{k\}}(z) = 2 z_k - 1$ to denote the "centered" treatment indicator $z_k$. We then define a $Q$-dimensional contrast vector $g_{\{k\}}$ by aggregating these centered treatment variables into a vector using the lexicographic order, that is,
$g_{\{k\}} = \{g_{\{k\}}(z)\}_{z \in \mathcal{T}}, \quad \text{where } g_{\{k\}}(z) = 2 z_k - 1.$   (2.1)
Next, for the interactions of multiple factors with $|\mathcal{K}| \geq 2$, we define the contrast vector $g_{\mathcal{K}} \in \mathbb{R}^Q$ as
$g_{\mathcal{K}} = \{g_{\mathcal{K}}(z)\}_{z \in \mathcal{T}}, \quad \text{where } g_{\mathcal{K}}(z) = \prod_{k \in \mathcal{K}} g_{\{k\}}(z).$   (2.2)
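A minimal sketch of (2.1)-(2.2), continuing the hypothetical Python illustration above (again ours, not the paper's), builds these contrast vectors directly from the centered indicators and checks that an interaction contrast is the entrywise product of its main-effect contrasts.

import itertools
import numpy as np

K = 3
treatments = list(itertools.product([0, 1], repeat=K))   # T in lexicographic order

def contrast_vector(subset, treatments):
    # g_K(z) = prod_{k in K} (2 * z_k - 1); an empty subset gives the all-ones vector.
    return np.array([np.prod([2 * z[k - 1] - 1 for k in subset]) for z in treatments])

g1 = contrast_vector({1}, treatments)      # main-effect contrast of factor 1
g2 = contrast_vector({2}, treatments)      # main-effect contrast of factor 2
g12 = contrast_vector({1, 2}, treatments)  # two-way interaction contrast
print(g1)                                  # [-1 -1 -1 -1  1  1  1  1]
print(np.array_equal(g12, g1 * g2))        # True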
+6 + +An unbiased estimator for D�Y is +�V�Y = Diag +� +N(z)−1 �S(z, z) +� +z∈T , +whereas S does not have an unbiased sample analogue because the potential outcomes across treat- +ment levels are never jointly observed for the same units. Therefore, �V�Y is a conservative estimator +of the covariance matrix in the sense that E{�V�Y } = D�Y ≽ V�Y . +A dominant approach to estimate factorial effects from factorial designs is through estimating +least-squares coefficients based on appropriate model specifications. Let gi denote the row vector in +the contrast matrix G corresponding to unit i’s treatment level Zi, that is, gi = {gK(Zi)}K⊂[K] ∈ RQ +with gK(z) defined in (2.2). For a set of target effects {τK}K∈M indexed by M, we can run weighted +least squares (WLS) to obtain unbiased estimates: +�τ = arg min +τ +N +� +i=1 +wi(Yi − g⊤ +i τ)2 with wi = 1/Ni. +(2.3) +With a small K, we can simply fit the saturated regression by regressing the observed outcome Yi +on the regressor gi. The saturated regression involves Q = 2K coefficients without any restrictions +on the targeted factorial effects. +In contrast, an unsaturated regression involves fewer coefficients by regressing the observed +outcome Yi on gi,M, a subvector of gi, where M ⊂ K is a subset of the power set of all factors. That +is, +�τ = arg min +τ +N +� +i=1 +wi(Yi − g⊤ +i,Mτ)2 with wi = 1/Ni. +(2.4) +For the convenience of description, we will call M a working model. We use a working model to +generate estimates based on least squares without assuming its correctness. When M = K, (2.4) +incorporates the saturated regression (2.3). +Based on the unsaturated regression with working +model M, let +�τ(M) = {�τK}K∈M +and +τ(M) = {τK}K∈M +denote the vectors of estimated and true coefficients, respectively. Zhao and Ding (2021) showed +that if we run unsaturated regressions with weights 1/Ni for unit i, then the obtained estimated +coefficients are unbiased for the true factorial effects within the working model M. More precisely, +�τ(M) = Q−1G(·, M)�Y , where G(·, M) to denote the columns in G indexed by M. Because �τ(M) is +a linear transformation of �Y , we can use the following estimator for its covariance matrix: +�Σ(M) = 1 +Q2 G(·, M)⊤ �V�Y G(·, M). +(2.5) +7 + +See Lemma S1 in Section A.1 of the supplementary material for more discussions on the above +algebraic results for unsaturated regressions. +2.3 +An illustrating example of a 23 factorial design +We realize that the above notation can be rather abstract. In what follows, we provide an illustrative +Example 1 below with K = 3 factors. +Example 1 (23 factorial design). Suppose we have three binary factors z1, z2, and z3. These three +factors generate 8 treatment combinations, indexed by a triplet (z1z2z3) with z1, z2, z3 ∈ {0, 1}, in +the set +T = {(000), (001), (010), (011), (100), (101), (110), (111)}. +Each unit i has a potential outcome vector Yi = {Yi(z1z2z3)}⊤ +z1,z2,z3=0,1. 
The vector of factorial effects in this experiment is

τ = 2^{−3} · G⊤Y ≜ (τ∅, τ{1}, τ{2}, τ{3}, τ{1,2}, τ{1,3}, τ{2,3}, τ{1,2,3})⊤,

where G is the contrast matrix below, with rows indexed by the treatment combinations and columns indexed by the effects they estimate:

              τ∅   τ{1}  τ{2}  τ{3}  τ{1,2}  τ{1,3}  τ{2,3}  τ{1,2,3}
      (000)    1    −1    −1    −1      1       1       1       −1
      (001)    1    −1    −1     1      1      −1      −1        1
      (010)    1    −1     1    −1     −1       1      −1        1
      (011)    1    −1     1     1     −1      −1       1       −1
      (100)    1     1    −1    −1     −1      −1       1        1
      (101)    1     1    −1     1     −1       1      −1       −1
      (110)    1     1     1    −1      1      −1      −1       −1
      (111)    1     1     1     1      1       1       1        1

We observe the pair (Yi, Zi) for unit i, where Zi = (zi,1, zi,2, zi,3) is the observed treatment combination. Let g{k}(Zi) = 2zi,k − 1 be the centered version of zi,k. For the factor-based regression, the regressor gi corresponding to the treatment level Zi equals

gi = (1, g{1}(Zi), g{2}(Zi), g{3}(Zi), g{1,2}(Zi), g{1,3}(Zi), g{2,3}(Zi), g{1,2,3}(Zi)).

For instance, when Zi = (101), the regressor gi corresponds to the row (101) of the contrast matrix G. Then, a saturated regression is to regress Yi on gi. For the unsaturated regression, if we only include the indices ∅ (the intercept), {1}, {1, 2}, {1, 3}, and {1, 2, 3} in our regression, we can form the working model M = {∅, {1}, {1, 2}, {1, 3}, {1, 2, 3}} and perform the weighted least squares Yi ∼ gi,M, where

gi,M = (1, g{1}(Zi), g{1,2}(Zi), g{1,3}(Zi), g{1,2,3}(Zi))

and the weight for unit i equals 1/Ni = 1/N(Zi).
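To make the construction concrete, the following minimal sketch (Python/NumPy; the helper contrast_matrix and the placeholder vector Ybar are illustrative assumptions, not notation from the paper) builds G for a general 2^K design in the ordering used above, extracts the regressor for an observed treatment level, and recovers the factorial effects τ = Q^{-1}G⊤Y from a vector of potential-outcome means.

import itertools
import numpy as np

def contrast_matrix(K):
    # Q x Q contrast matrix G of a 2^K design.
    # Rows: treatments z in {0,1}^K in lexicographic order.
    # Columns: subsets of [K] ordered by size and then lexicographically,
    # i.e., the empty set, {1}, ..., {K}, {1,2}, ..., [K].
    treatments = list(itertools.product([0, 1], repeat=K))
    subsets = [s for d in range(K + 1)
               for s in itertools.combinations(range(K), d)]
    G = np.array([[np.prod([2 * z[k] - 1 for k in s]) for s in subsets]
                  for z in treatments], dtype=float)     # g_K(z) = prod_{k in K} (2 z_k - 1)
    return G, treatments, subsets

G, treatments, subsets = contrast_matrix(3)              # Example 1: K = 3, Q = 8
Q = G.shape[0]
assert np.allclose(G.T @ G, Q * np.eye(Q))               # orthogonal columns: G'G = Q * I_Q

g_i = G[treatments.index((1, 0, 1))]                     # regressor of a unit observed at Z_i = (101)

Ybar = np.arange(Q, dtype=float)                         # placeholder means, purely for illustration
tau = G.T @ Ybar / Q                                     # tau = Q^{-1} G' Ybar; with sample means in place
                                                         # of Ybar this is the saturated-regression estimate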
3    Forward screening in factorial experiments

In factorial designs with small K, we can simply run the saturated regression to estimate all factorial effects simultaneously. However, when K is large, the saturated regression can be computationally unwieldy and scientifically unreasonable, as it delivers potentially noisy estimates of all higher-order interactions. As a remedy, forward screening is a popular strategy frequently adopted in practice to analyze data collected from factorial experiments, due to its clear benefits in screening out a large number of zero nuisance factorial effects. In this section, we formalize forward screening as a principled procedure to carefully decide an unsaturated working model M̂. We first present a formal version of forward screening and then demonstrate its consistency property.

3.1    A formal forward screening procedure

In this subsection, we introduce a principled forward screening procedure that not only fully respects the effect hierarchy, sparsity, and heredity principles but also results in an interpretable parsimonious model with statistical guarantees. More concretely, the algorithm starts by performing factor screening over the lower-order effects, and then moves forward to select the significant higher-order effects following the heredity principle. Algorithm 1 summarizes the forward screening procedure.

In what follows, we illustrate why the proposed procedure in Algorithm 1 respects the three fundamental principles in factorial experiments.

First, Algorithm 1 obeys the hierarchy principle as it performs factor screening in a forward style (coded in the global loop from d = 1 to d = D, Step 2 in particular). More concretely, we begin with a working model that contains only the intercept. We then select relevant main effects (Steps 4 and 8) and add them into the working model. Once the working model is updated, we continue to select relevant higher-order interaction effects in a forward fashion. Such a forward screening procedure is again motivated by the hierarchy principle that lower-order effects are more important than higher-order ones.

Algorithm 1: Forward factorial screening
Input: Factorial data {(Yi, Zi)}N i=1; predetermined integer D ≤ K; initial working model M̂ = {∅}; significance levels {αd}D d=1.
Output: Selected working model M̂.
1. Define an intermediate working model M̂′ = M̂ for convenience.
2. For d = 1, . . . , D do:
3.   Update the intermediate working model to include all the order-d (interaction) terms: M̂′ = M̂ ∪ {K | |K| = d} ≜ M̂ ∪ Kd.
4.   Screen out indices in M̂′ according to either the weak or the strong heredity principle, and renew the screened working model as M̂′.
5.   Run the unsaturated regression with the working model M̂′: Yi ∼ gi,M̂′, with weights wi = N/Ni.
6.   Obtain the coefficients τ̂(M̂′) and the robust covariance estimator Σ̂(M̂′) defined in (2.5).
7.   Extract τ̂K(M̂′) and σ̂K(M̂′) for all K ∈ M̂′ with |K| = d.
8.   Run marginal t-tests using the above τ̂K(M̂′) and σ̂K(M̂′) under the significance level min{αd/(|M̂′| − |M̂|), 1} and remove the non-significant terms from M̂′ \ M̂.
9.   Set M̂ = M̂′.
10. Return M̂.

Second, Algorithm 1 operates under the sparsity principle as it removes potentially unimportant effects using marginal t-tests with the Bonferroni correction (Step 8). This step induces a sparse working model and helps us identify the essential factorial effects. The sparsity-inducing step can incorporate many popular selection frameworks, such as marginal t-tests, the Lasso (Tibshirani, 1996), sure independence screening (Fan and Lv, 2008), etc. For simplicity, we present Algorithm 1 with marginal t-tests and relegate more general discussions to Section B of the supplementary material.

Third, Algorithm 1 incorporates the heredity principle as it screens out the interaction effects (Wu and Hamada, 2011; Hao and Zhang, 2014; Lim and Hastie, 2015) when either none of their parent effects is included (weak heredity) or some of their parent effects are excluded (strong heredity) in the previous working model (Step 4).

Lastly, we note that our forward screening procedure enhances the interpretability of the selected working model by iterating between the "Sparsity-screening" step (called the S-step in the rest of the manuscript), captured by a data-dependent operator Ŝ = Ŝ(·; {Yi, Zi}N i=1), and the "Heredity-screening" step (called the H-step in the rest of the manuscript), captured by a deterministic operator H = H(·). Because the working model is updated in an iterative fashion,

M̂1 −H→ M̂2,+ −Ŝ→ M̂2 −→ · · · −Ŝ→ M̂d−1 −H→ M̂d,+ −Ŝ→ M̂d −→ · · · −Ŝ→ M̂D,    (3.6)

the final working model includes a small number of statistically significant effects that fully respect the heredity principle.

3.2    Consistency of forward screening

We are now ready to analyze the screening consistency property of Algorithm 1. We shall show that the proposed algorithm selects the targeted working model up to level D with probability tending to one as the sample size goes to infinity. Here, the targeted working model at level k ∈ [K], denoted as M⋆k, is the collection of K's with |K| = k and τK ≠ 0. Define the full targeted working model up to level D as

M⋆1:D = M⋆1 ∪ · · · ∪ M⋆D.

In particular, when D = K, we omit the subscript and simply write M⋆ = M⋆1:K.
We start by introducing the following condition on nearly uniform designs:
Condition 1 (Nearly uniform design).
There exists a positive integer N0 and absolute constants +c ≤ c, such that +N(z) = c(z)N0 ≥ 2, where c ≤ c(z) ≤ c. +Condition 1 allows for diverging Q and bounded N(z)’s across all treatment levels (Shi and +Ding, 2022). It generalizes the classical assumption in the fixed Q regime where Q is fixed, and +each treatment arm contains a sufficiently large number of replications (Li and Ding, 2017). +Next, we quantify the order of the true factorial effect sizes τK’s and the tuning parameters +αd’s adopted in the Bonferroni correction. We allow these parameters to change with the sample +size N: +Condition 2 (Order of parameters). The true factorial effects τK’s and tuning parameters αd’s +have the following orders: +(i) True nonzero factorial effects: |τK| = Θ(Nδ) for some −1/2 < δ ≤ 0 and all K ∈ M⋆ +1:D. +11 + +(ii) Tuning parameters in Bonferroni correction: αd = Θ(N−δ′) for all d ∈ [D] with some δ′ > 0. +(iii) Size of the targeted working model: �D +d=1 |M⋆ +d| = Θ(Nδ′′) for some 0 ≤ δ′′ < 1/3. +Condition 2(i) specifies the allowable order of the true factorial effects. If this condition fails, +the effect size is of the same order as the statistical error and thus is too small to be detected by +marginal t-test. Similar conditions are also adopted in model selection literature, including Zhao +and Yu (2006) and Wieczorek and Lei (2022). Condition 2(ii) requires the tuning parameter αd +to converge to zero, which ensures that there is no Type I error in our procedure as N goes to +infinity, which is crucial for the selection consistency. Wasserman and Roeder (2009, Theorems 4.1 +and 4.2) assumed similar conditions in high-dimensional model selection settings for linear models. +Condition (iii) restricts the size of the targeted working model. The rate is due to our technical +analysis. Similar conditions also appeared in Zhao and Yu (2006), Wieczorek and Lei (2022) and +Wasserman and Roeder (2009). +The next condition specifies a set of regularity assumptions on the potential outcomes. +Condition 3 (Regularity conditions on the potential outcomes). The potential outcomes satisfy +the following conditions: +(i) Nondegenerate correlation matrix. Let V ⋆ be the correlation matrix of �Y . There exists σ > 0 +such that the condition number of V ⋆ is smaller than or equal to σ2. +(ii) Bounded fourth central moments. There exists a universal constant ∆ > 0 such that +max +z∈[Q] +1 +N +N +� +i=1 +{Yi(z) − Y (z)}4 ≤ ∆4. +(iii) Bounded standardization scales. There exists a constant M > 0 such that MN ≤ M where +MN = maxi∈[N],q∈[Q] |Yi(q) − Y (q)| +{minq∈[Q] S(q, q)}1/2 +. +Condition 3(i) requires the correlation matrix of �Y to be well-behaved. Condition 3(ii) controls +the moments of the potential outcomes. Condition 3(iii) imposes a universal bound on the standard- +ization of potential outcomes, which is required by Shi and Ding (2022) to prove the Berry–Esseen +bound based on Stein’s method. +Lastly, we impose the following structural conditions on the factorial effects: +Condition 4 (Hierarchical structure in factorial effects). The nonzero true factorial effects align +with the effect heredity principle: +12 + +• Weak heredity: τK ̸= 0 only if there exists K′ ⊂ K with |K′| = |K| − 1 such that τK′ ̸= 0. +• Strong heredity: τK ̸= 0 only if τK′ ̸= 0 for all K′ ⊂ K with |K′| = |K| − 1. +Finally, we present the screening consistency property of Algorithm 1: +Theorem 1 (Perfect screening property). 
Under Conditions 1-4, the working model selected by +Algorithm 1 converges to the targeted working model with probability one as the sample size goes to +infinity: +lim +N→∞ P +� +�M = M⋆ +1:D +� += 1. +4 +Inference under perfect screening +Statistical inference is relatively straightforward under the perfect screening of the factorial effects. If +forward screening correctly identifies the true, nonzero factorial effects with probability approaching +one, we can proceed as if the selected working model is predetermined. In Section 4.1, we present +the point estimators and confidence intervals for general causal parameters. +In Section 4.2, we +study the advantages of forward screening in terms of asymptotic efficiency in estimating general +causal parameters, compared with the corresponding estimators without forward screening. We +relegate the extensions to vector parameters to Section A.2 of the supplementary material since it +is conceptually straightforward. +4.1 +Post-screening inference for general causal parameters +Define a general causal parameter of interest as a weighted combination of average potential out- +comes: +γ = +� +z∈T +f(z)Y (z) ≜ f ⊤Y , +where f = {f(z)}z∈T is a pre-specified weighting vector. +For example, if one is interested in +estimating the main factorial effects, f can be taken as the contrast vectors g{k} given in (2.1). If +one wants to estimate interaction effects, then f can be constructed from (2.2). However, we allow +f to be different from the contrast vectors gK. For instance, if one wants to focus on the first two +arms in factorial experiments and estimate the average treatment effect, we shall choose +f = (1, −1, 0, . . . , 0)⊤. +13 + +In general, researchers may tailor the choice of f to the specific research questions of interest. +Without factor screening, a well-studied plug-in estimator of γ in the existing literature is to +replace Y with its sample analogue (Li and Ding, 2017; Zhao and Ding, 2021; Shi and Ding, 2022): +�γ = f ⊤ �Y = +� +z∈T +f(z)�Y (z). +(4.7) +Under regularity conditions in Shi and Ding (2022), the plug-in estimator �γ satisfies a central limit +theorem (�γ − γ)/v ⇝ N(0, 1) with the variance v2 = f ⊤V�Y f. When N(z) ≥ 2, its variance can be +estimated by: +�v2 = f ⊤ �V�Y f = +� +z∈T +f(z)2N(z)−1 �S(z, z). +With the help of factor screening, based on the selected working model �M, we consider a +potentially more efficient estimator of Y via the restricted least squares (RLS) +�Yr = arg min +µ∈RQ +� +∥�Y − µ∥2 +2 : G(·, �Mc)⊤µ = 0 +� +, +(4.8) +which leverages the information that the nuisance effects G(·, �Mc)⊤Y are all zero. The �Yr in (4.8) +has a closed form solution (see Lemma S6 in the supplementary material): +�Yr = Q−1G(·, �M)G(·, �M)⊤ �Y . +Under perfect screening, �Yr is also a consistent estimator for Y , so �γr = f ⊤ �Yr is also consistent for +γ. Introduce the following notation +f[M] = Q−1G(·, M)G(·, M)⊤f +(4.9) +to simplify �γr and its variance estimator as +�γr = f[ �M]⊤ �Y +and +�v2 +r = f[ �M]⊤ �V�Y f[ �M]. +Construct a Wald-type level-(1 − α) confidence interval for γ: +� +�γr ± z1−α/2 × �vr +� +, +(4.10) +where z1−α/2 is (1 − α/2)th quantile of a standard normal distribution. We can also obtain point +estimates and confidence intervals handily from WLS regression of Yi on gi, �M with weights 1/Ni. +See Section A.1 in the supplementary material for more details. 
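As a computational companion to (4.8)-(4.10), the following sketch (Python/NumPy with SciPy; the function rls_inference and the array names Yhat and Vhat are illustrative assumptions rather than the paper's notation) computes the RLS-based point estimate, its conservative standard error, and the Wald-type confidence interval from the sample means, their estimated covariance matrix, and the columns of G retained by forward screening.

import numpy as np
from scipy.stats import norm

def rls_inference(f, Yhat, Vhat, G, selected_cols, alpha=0.05):
    # Point estimate, standard error, and Wald CI for gamma = f' Ybar
    # based on the restricted least squares projection in (4.8)-(4.10).
    # G is the Q x Q contrast matrix; selected_cols indexes the columns of G
    # corresponding to the selected working model (including the intercept);
    # Yhat is the vector of sample means and Vhat its estimated covariance matrix.
    Q = G.shape[0]
    G_M = G[:, selected_cols]
    f_M = G_M @ (G_M.T @ f) / Q               # f[M] = Q^{-1} G(.,M) G(.,M)' f, see (4.9)
    gamma_r = f_M @ Yhat                      # RLS-based point estimate
    se_r = np.sqrt(f_M @ Vhat @ f_M)          # conservative standard error
    z = norm.ppf(1 - alpha / 2)
    return gamma_r, se_r, (gamma_r - z * se_r, gamma_r + z * se_r)

# Example of use: for the average potential outcome of the first treatment arm,
# take f to be the first canonical basis vector, f = np.eye(Q)[0].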
+In the following subsection, we provide the theoretical properties of �γr and �v2 +r, and compare +their asymptotic behaviors with the plug-in estimators �γ and �v2 in various settings. +14 + +4.2 +Theoretical properties under perfect screening +In this section, we first present the asymptotic normality result for �γr. To simplify discussion, we +denote f ⋆ = f[M⋆]. Given M⋆ is the true working model, we have (f ⋆)⊤Y = f ⊤Y , for all f ∈ RQ. +This identify holds for the true working model, not a general model, suggested by the following +algebraic facts: +f ⊤Y = f ⊤{Q−1G(·, M⋆)G(·, M⋆)⊤ + Q−1G(·, M⋆c)G(·, M⋆c)⊤}Y (orthogonality of G) += (f ⋆)⊤Y + G(·, M⋆c)τ(M⋆c) (definition of f ⋆ based on (4.9)) += (f ⋆)⊤Y . (using τ(M⋆c) = 0) +We are now ready to present the asymptotic properties of �γr and �v2 +r: +Theorem 2 (Statistical properties of �γr and �v2 +r). Let N → ∞. Assume Conditions 1-4. We have +�γr − γ +vr +⇝ N(0, 1) +where v2 +r = f ⋆⊤V�Y f ⋆. Further assume ∥f ⋆∥∞ = O(Q−1). The variance estimator �v2 +r is conservative +in the sense that: +N(�v2 +r − v2 +r,lim) P−→ 0, +v2 +r,lim ≥ v2 +r, +where v2 +r,lim = f ⋆⊤D�Y f ⋆ is the limiting value of �v2 +r. +Theorem 2 above guarantees that the proposed confidence interval in (4.10) for γ attains the +nominal coverage probability asymptotically. Furthermore, it allows us to compare the conditions +for reaching asymptotic normality of �γ, which we formalize in the following remark: +Remark 1 (Comparison of conditions for asymptotic normality). Without factor screening, the +simple plug-in estimator �γ in (4.7) satisfies a central limit theorem if +N−1/2 +0 +· ∥f∥∞ +∥f∥2 +→ 0 +(4.11) +recalling the definition of N0 in Condition 1 (Shi and Ding, 2022, Theorem 1). Condition (4.11) +fails when N0 is small and f is sparse. Besides, it does not incorporate the sparsity information +in the structure of factorial effects. With factor screening, however, we can borrow the benefit of a +sparse working model and overcome the above drawbacks. Therefore, factor screening broadens the +applicability of our proposed estimator �γr by weakening the assumptions for the Wald-type inference. +15 + +To elaborate the benefits of conducting forward factorial screening in terms of asymptotic +efficiency, we make a simple comparison of the asymptotic variances of �γ and �γr in Proposition +1 below. In the most general setup, there is no ordering relationship between v2 +r and v2. That +is, the RLS based estimator may have higher variance than the unrestricted OLS estimator. This +is a known fact due to heteroskedasticity and the use of sandwich variance estimators (Meng and +Xie, 2014; Zhao and Ding, 2021). Nevertheless, in many interesting scenarios, we can prove an +improvement of efficiency by factor screening. Two conditions are summarized in Proposition 1: +Proposition 1 (Asymptotic relative efficiency comparison between �γ and �γr). Assume that both �γ +and �γr converge to a normal distribution as N → ∞. +(i) If the eigenvectors of the covariance matrix V�Y are given by the contrast matrix G, then +v2 +r +v2 ≤ 1. +(ii) Let s⋆ denote the number of nonzero elements in f. Then the asymptotic relative efficiency +between �γ and �γr is upper bounded by +v2 +r +v2 ≤ κ(V�Y ) · s⋆|M⋆| +Q +. +Now we add some interpretation for Proposition 1. Part (i) gives a sufficient condition when +the eigen-space of V�Y has a close connection with G. More concretely, G can be regarded as an +orthogonal representation of the potential outcome matrix. 
One can verify that such a condition +implies that the variance of �Y (z) does not change with its treatment group membership z. One +concrete problem of interest where Part (i) can be applied is testing the sharp null hypothesis of +constant effects in uniform factorial designs (with N0 replications in each arm), i.e., +H0F : Yi(z) = Yi for all i ∈ [N] and z ∈ T . +Under H0F, we have +V�Y = N−1 +0 σ2 · IQ − N−1σ21Q1⊤ +Q = N−1 +0 σ2GDiag {0, 1, . . . , 1} G⊤, +where σ2 = +1 +N − 1 +N +� +i=1 +(Yi − Y )2 and Y = 1 +N +N +� +i=1 +Yi. +Thus, the proposed RLS-based estimator �γr is in general more efficient than the plug-in estimator �γ. +Part (ii) studies a general heteroskedastic setting with sparse weighting vector f and small working +model size |M⋆|. The condition number κ(V�Y ) captures the variability of the variances of �Y (z) +16 + +across multiple treatment combination groups in T . When the variability of such changes is limited +in the sense that κ(V�Y ) < Q/(s⋆|M⋆|), the RLS-based estimator is more efficient than �γ. Moreover, +the above result can be extended to compare the length of the confidence intervals as well. The +conclusion is similar. See Proposition S1 in the supplementary material for the details. +5 +Post-screening inference under imperfect screening +Similar to many other consistency results for variable selection, the perfect screening property can +be too much to hope for in practical data analysis in factorial designs. This is because the perfect +screening property of forward screening requires the non-zero effects to be well separated from zero. +Such a theoretical requirement can be rather stringent for higher-order factorial effects. In other +words, implied by the hierarchy principle, while main factorial effects and lower-order factorial +effects are more likely to have non-negligible effect sizes, higher-order factorial effects tend to have +comparably smaller effect sizes. Perfect screening property is less likely to hold when applied to +screen out those higher-order effects. More rigorously, when Condition 2(i) is violated, Algorithm +1 may no longer enjoy the perfect screening property. +Statistical inference without perfect screening is a non-trivial problem in factorial designs. If +we do not put any restrictions on the factorial selection procedure, the selected model can be +anything, even without a stable limit. Classical strategies for post-selection inference (Kuchibhotla +et al., 2022) will encounter various drawbacks in our current setup. For example, data splitting +(Wasserman and Roeder, 2009) is a widely used strategy to validate inference after variable selection +due to its simplicity. However, it highly relies on the independent sampling assumption, which is +violated under our setting. On the other hand, selective inference (Fithian et al., 2014) is another +widely studied strategy, which can deliver valid inference for data-dependent parameters. However, +it cannot be directly applied to analyze data collected in factorial designs. This is because the +selective inference strategy often tends to rely on specific selection methods and parametric modeling +assumptions on the outcome variables. +Rather than directly generalizing classical post-selection inference methods to factorial experi- +ments, in this section, we shall discuss two alternative strategies leveraging the special data struc- +tures in factorial experiments, along with with their statistical inference results (summarized in +Figure 1). 
5.1    Two alternative strategies for imperfect screening and statistical inference

The two proposed strategies are built on the belief that perfect screening is more plausible for selecting the main factorial effects and the lower-order factorial effects up to level d⋆ than for the higher-order effects. We will add more discussion on d⋆ after presenting these two strategies.

Figure 1: Two strategies for factorial screening: Strategy 1 under-selects whereas Strategy 2 over-selects. (Flowchart: after screening the first d⋆ levels, the higher-order effects are either excluded, yielding under-selection with the targeted working model M⋆1:d⋆ (Strategy 1), or selected by the heredity principle, yielding over-selection with the targeted working model M̄⋆ (Strategy 2).)

For Strategy 1, when the higher-order factorial effects are considered unnecessary, we may stop our forward screening procedure in Algorithm 1 at d = d⋆ (instead of d = D). Such a strategy focuses on recovering the targeted working model up to level d⋆, that is,

M⋆1:d⋆ = M⋆1 ∪ · · · ∪ M⋆d⋆ ⊆ M⋆,

which leads to an under-selected parsimonious working model. We summarize this strategy below.

Strategy 1 (Under-selection by excluding higher-order interactions). In Algorithm 1, we stop the screening procedure at d = d⋆. Equivalently, we set αd = 0 for d ≥ d⋆ + 1, so that no effects beyond level d⋆ will be selected and M̂ = M̂1 ∪ · · · ∪ M̂d⋆.

Given the selected working model M̂, we can again construct an estimator of γ = f⊤Y (defined in Section 4.1) based on RLS:

γ̂ru = f[M̂]⊤Ŷ,    and    v̂²ru = f[M̂]⊤V̂Ŷ f[M̂].    (5.12)

For Strategy 2, rather than excluding all higher-order interactions with negligible effects, we may further leverage the heredity principle and continue our screening procedure beyond level d⋆. This means that instead of selecting the higher-order interactions via marginal t-tests and the Bonferroni correction, we select a higher-order interaction term whenever either all of its parent effects are selected (strong heredity) or one of its parent effects is selected (weak heredity). While such a strategy takes higher-order factorial effects into account, it often targets a working model M̄⋆ that includes the true model M⋆, that is,

M⋆ ⊆ M̄⋆ = M̄⋆1 ∪ · · · ∪ M̄⋆D,    where M̄⋆d = M⋆d for d ≤ d⋆, and M̄⋆d = H(d−d⋆)(M⋆d⋆) for d⋆ + 1 ≤ d ≤ D.

The model selected by this strategy is expected to be an over-selected model that includes M⋆ as well. We summarize this strategy as follows:

Strategy 2 (Over-selection by including higher-order interactions through the heredity principle). In Algorithm 1, set αd = ∞ for d ≥ d⋆ + 1 and apply a heredity principle (either weak or strong, depending on one's knowledge of the structure of the effects). Then the higher-order effects beyond level d⋆ are selected merely by the heredity principle, and

M̂ = M̂1 ∪ · · · ∪ M̂D,    where M̂d equals the output of Algorithm 1 for d ≤ d⋆, and M̂d = H(d−d⋆)(M̂d⋆) for d⋆ + 1 ≤ d ≤ D.

Here H(d−d⋆) is the (d − d⋆)-fold composition of H, meaning that H is applied (d − d⋆) times.

Given the selected working model M̂, we can similarly construct an estimator of γ = f⊤Y based on RLS:

γ̂ro = f[M̂]⊤Ŷ,    and    v̂²ro = f[M̂]⊤V̂Ŷ f[M̂].    (5.13)

In terms of implementation, one can use WLS to conveniently obtain the point estimators in (5.12) and (5.13) and construct slightly more conservative variance estimators. Due to the orthogonality of the contrast matrix G, perfect screening is not required for computation.
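To make the heredity-screening operator H used by Strategy 2 concrete, the following sketch (Python; the function heredity_expand and its arguments are illustrative assumptions, not notation from the paper) lists the order-d interactions admitted by the weak or strong heredity principle given the effects kept at order d − 1. Applying it repeatedly corresponds to the composition H(d−d⋆) above.

import itertools

def heredity_expand(selected, d, K, strong=True):
    # One application of the H operator: the order-d interactions admitted by the
    # heredity principle, given the collection `selected` of effects kept at order d-1.
    # Under strong heredity a candidate is kept only if all of its (d-1)-subsets were
    # selected; under weak heredity a single selected parent suffices.
    selected = {frozenset(s) for s in selected}
    candidates = [frozenset(c) for c in itertools.combinations(range(1, K + 1), d)]
    admitted = []
    for cand in candidates:
        parents = [cand - {k} for k in cand]          # all (d-1)-subsets of the candidate
        ok = all(p in selected for p in parents) if strong else any(p in selected for p in parents)
        if ok:
            admitted.append(cand)
    return admitted

# Example with K = 4, where the main effects {1}, {2}, {3} survived the level-1 screening:
level1 = [{1}, {2}, {3}]
print(heredity_expand(level1, d=2, K=4, strong=True))   # {1,2}, {1,3}, {2,3}
print(heredity_expand(level1, d=2, K=4, strong=False))  # every pair containing 1, 2, or 3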
+See +Section A.1 in the supplementary material for more detailed discussions. +In real-world factorial experiments, how should practitioners decide which strategy to work +with? This relies on domain knowledge and the research question of interest. Strategy 1 is more +suitable when there are domain-specific messages indicating that higher-order interactions are neg- +ligible, or when the research question only involves lower-order factorial effects. Moreover, Strategy +1 is helpful when the number of active lower-order interaction is large and Strategy 2 cannot be +applied. Meanwhile, Strategy 2 works better when domain knowledge suggests non-negligible higher- +order interactions or the research question targets a more general parameter beyond factorial effects +themselves. It may also work well when the number of active lower-order interactions is small, and +we can include a small set of high-order terms according to the heredity principle. +19 + +In the following subsection, we study the statistical properties of �γro and �γru and demonstrate +the trade-offs between the two strategies for statistical inference from a theoretical perspective. +5.2 +Theoretical properties under imperfect screening +Throughout this subsection, we discuss the scenario where perfect screening is hard to achieve. We +work under a relaxed condition of Condition 2 defined as follows: +Condition 5 (Order of parameters up to level d⋆). Condition 2 holds with D = d⋆. +Condition 5 no longer imposes any restriction on the order of the parameters beyond level d⋆. +By Theorem 1, Condition 5 guarantees that Algorithm 1 perfectly screens the first d⋆ levels of +factorial effects in the sense that +P +� +�Md = M⋆ +d for d = 1, . . . , d⋆� +→ 1. +We start by analyzing the statistical property of �γru with �M obtained from the under-selection +Strategy 1. Because the selected working model might deviate from the truth beyond level d⋆, �γru +may not be a consistent estimator of γ. Therefore, we focus on weighting vectors f that satisfy +certain orthogonality conditions as introduced in Theorem 3 below: +Theorem 3 (Guarantee for Strategy 1). Recall the equation (4.9) and define f ⋆ = f[M⋆] = +Q−1G(·, M⋆)G(·, M⋆)⊤f. Assume Conditions 1, 3, 4, 5, and f satisfies the following orthogonality +condition: +G(·, M⋆ +d)⊤f = 0 for d⋆ + 1 ≤ d ≤ K. +(5.14) +Let N → ∞. We have +�γru − γ +vru +⇝ N(0, 1), +where v2 +ru = f ⋆⊤V�Y f ⋆. Further assume ∥f ⋆∥∞ = O(Q−1). The variance estimator �v2 +ru is conser- +vative in the sense that: +N(�v2 +ru − v2 +ru,lim) P−→ 0, +v2 +ru,lim ≥ v2 +ru, +where v2 +ru,lim = f ⋆⊤D�Y f ⋆ is the limiting value of �v2 +ru. +Now we add some discussion on Theorem 3. The orthogonality condition presented in (5.14) +restricts the weighting vector f to be orthogonal to the higher-order contrasts. Intuitively, because +20 + +the higher-order interactions are excluded from the model, making inference on a weighted combina- +tion of those excluded interactions is infeasible. One set of weighting vectors satisfying (5.14) is the +contrast vectors of nonzero canonical lower-order interactions, given by f = G(, ∪d⋆ +d=1M⋆ +d). In large +K settings, the lower-order interactions can also grow polynomially fast in K and add difficulty for +interpretation. As an example, when K = 10, for the first two levels of factorial effects without +screening, there are a total of more than 50 estimates. 
It can still greatly benefit the analysis and +interpretation to filter out the insignificant ones and obtain a parsimonious, structured working +model. +As for Strategy 2, similarly, we have the following results: +Theorem 4 (Guarantee for Strategy 2). Recall the equation (4.9) and define f +⋆ = f[M +⋆] = +Q−1G(·, M +⋆)G(·, M +⋆)⊤f. Assume Conditions 1, 3, 4 and 5. Let N → ∞. If |M +⋆|/N → 0, then +�γro − γ +vro +⇝ N(0, 1), +where v2 +ro = f +⋆⊤V�Y f +⋆. Further assume ∥f +⋆∥∞ = O(Q−1). The variance estimator �v2 +ro is conser- +vative in the sense that: +N(�v2 +ro − v2 +ro,lim) P−→ 0, +v2 +ro,lim ≥ v2 +ro, +where v2 +ro,lim = f +⋆⊤D�Y f +⋆ is the limiting value of �v2 +ro. +We comment that there is an additional technical requirement in Theorem 4 for over-selection: +we assume |M +⋆|/N → 0. This equation mainly serves as a sufficient condition for CLT. The reason +is that we need to control the size of the target model M +⋆ compared to the sample size N in order +to infer a general causal parameter. +When analyzing Strategies 1 and 2, Algorithm 1 recovers a targeted model with high probability. +Both strategies have advantages and disadvantages. Under-selection reflects a bias-variance trade- +off: it can induce more bias for certain weighting vectors, but the constructed estimator typically +enjoys smaller variance. Over-selection can reduce bias for estimation, but may not be feasible if +there are too many lower-order terms which can result in many redundant terms in the selected +model. In practice, if higher-order interactions are not crucial, Strategy 1 should be applied. If +high-order interactions are of interest and hard to select, one could pursue Strategy 2 as a practically +useful and interpretable solution. +Remark 2. Under the eigenvector condition that V�Y has eigenvector G, we can prove v2 +ru ≤ v2 +ro. +Therefore, in this case, by excluding higher-order terms and pursuing under-selection, we can ob- +tain an equal or smaller asymptotic variance compared with over-selection. +In general, due to +21 + +heteroskedasticity, the order of v2 +ru and v2 +ro depends on the choice of target weighing vector f. Here +we take a sparse f = e1 = (1, 0, . . . , 0)⊤ as an example. We can show that +v2 +ru +v2ro +≤ κ(V�Y ) · +��M⋆�� +��M +⋆��. +When the variability of V�Y between treatment arms is small in the sense that κ(V�Y ) < +��M +⋆��/ +��M⋆��, +under-selection leads to smaller asymptotic variance for inferring e⊤ +1 Y . +6 +Application to inference on the best arm in factorial experiments +In the previous sections, we consider the problem of making inference on a single factorial causal +effect γ = f ⊤Y . As an application of the proposed framework, we study the problem of inference +on “best” effect among many causal effects. Without loss of generality, we define the best effect as +the effect with the highest level. In what follows, Section 6.1 introduces our setup and an inferential +procedure. Section 6.2 presents theoretical guarantees. +6.1 +Best arms, tie set and statistical inference +Suppose we have a set of causal effects Γ defined by pre-specified weighting vectors f1, . . . , fL (L is +potentially large), that is +Γ = {γ1, . . . , γL}, +γl = f ⊤ +l Y . +We aim to perform statistical inference on their ordered values +γ(1) ≥ . . . ≥ γ(l0) +with l0 < L being a fixed positive integer. 
As a simple example, if we choose {fl}l∈[L] = {e(z)}z∈T +to be the set of the canonical bases {e(z)}z∈T , then our inferential targets include the maximal +potential outcome means: +Y (1) = max +z∈T Y (z). +(6.15) +A more practical consideration in factorial experiments is to incorporate structural constraints +into the choices of {fl}l∈[L], as it might be infeasible to consider all treatment levels T due to +budget or resource constraints especially when K is large. This means we might be only interested +22 + +in factor combinations z = (z1, . . . , zK) with at most K0(≤ K) 1’s; equivalently, we replace T with +the following T ′ in (6.15) and obtain: +T ′ = +� +z = (z1, . . . , zK) | +K +� +k=1 +zk ≤ K0 +� +, +Y (1) = max +z∈T ′ Y (z). +(6.16) +By focusing on {f}l∈[L] that are most relevant, the inferential target maxz∈T ′ Y (z) allows us to use +the available data to decide if the best causal parameter among those practically interesting ones +has a non-zero causal effect. +Two challenges exist in delivering valid statistical inference on γ(1), . . . , γ(l0) in factorial ex- +periments. On the one hand, sample analogs of the ordered parameters, (�γ(1), . . . , �γ(l0)), are often +biased estimates of (γ(1), . . . , γ(l0)) due to the well-known winner’s curse phenomenon (Andrews +et al., 2019; Guo et al., 2021; Wei et al., 2022). On the other hand, although one might argue +that existing approaches can be applied to remove the winner’s curse bias in �γ(l), these approaches +do not account for the special structural constraint in factorial experiments. Rigorous statistical +guarantees have been lacking in our context due to the unique presence of both large L and large +Q in factorial designs. +To simultaneously address the above challenges, we propose a procedure that tailors the tie-set +identification approach proposed in Claggett et al. (2014) and Wei et al. (2022) to our current +problem setup. We focus on making inference on the first ordered value γ(1) to simplify discussion, +and our approach extends naturally to other ordered values. The proposed procedure is provided +in Algorithm 2. +Algorithm 2 consists of three major components. First, we need to construct �γl = f ⊤ +l �YR with +feature screening (Step 1-2). These RLS-based estimators enjoy great benefits for large Q and small +N0 regimes based on our previous discussion. Second, we construct �L1 to include the estimates +that are close to �γ(1) (Step 3). Intuitively, these collected estimates are different due to random +error. We will show that with proper tuning, this procedure will include all the l for which γl are +statistically indistinguishable from γ(1) with high probability. Third, we construct estimators by +averaging over �L1 (Step 4). By averaging the estimates over the selected �L1 we alleviate the impact +of randomness and obtain accurate estimates for the maximal effect. +23 + +Algorithm 2: Inference on best causal effect(s) +Input: Factorial data (Yi, Zi); predetermined integer D; initial model for factorial effects +�M = {∅}; significance level {αd}D +d=1; set of weighting vectors {fl}l∈[L]; thresholds +ηN. +Output: Selected working model �M. +1 Perform forward effects screening with Algorithm 1 and obtain working model �M. +2 Obtain RLS-based estimates: use Equation (4.9) and definition of �Yr (4.8) to compute +fl[ �M] = Q−1G(·, �M)G(·, �M)⊤fl, +�γl = f ⊤ +l �Yr = fl[ �M]⊤ �Y , +l ∈ [L]. +3 Record the set of effects close to �γ(1): +�L1 = +� +l ∈ [L] | |�γl − �γ(1)| ≤ ηN +� +. 
+Here, ηN is a tuning parameter which can be selected using the algorithm provided in Wei +et al. (2022, Appendix C.1). +4 Define +f(1) = (Q| �L1|)−1 � +l∈ � +L1 +G(·, �M)G(·, �M)⊤fl. +Generate point estimates and variance estimator for γ(1): +�Y(1) = +1 +| �L1| +� +l∈ � +L1 +�γl = f ⊤ +(1) �Y , +�v2 +(1) = f ⊤ +(1) �VY f(1). +5 return �L1, �Y(1), �v2 +(1) +6.2 +Theoretical guarantees +In the following, we present theoretical guarantees for Algorithm 2. We introduce the following +notation L1 to include all effects that stay in a local neighborhood of γ(1): +L1 = +� +l ∈ [L] | |γl − γ(1)| = O(N−δ3) +� +, for some δ3 > 0. +A well-known fact is that the naive estimator maxz∈[Q] �Y (z) can be an overly optimistic estimate +for γ(1) when L1 contains more than one element (Andrews et al., 2019; Wei et al., 2022). Define +dh = max +z∈L1 |γl − γ(1)|, +d⋆ +h = min +z /∈L1 +|γl − γ(1)|. +as within- and between-group distances, respectively. We work under the following condition: +24 + +Condition 6 (Order of dh, d⋆ +h and ηN). Assume the within and between group distances satisfy: +d⋆ +h = Θ(Nδ1), +ηN = Θ(Nδ2), +dh = Θ(Nδ3). +with δ3 ≤ −1/2 < δ2 < δ1 ≤ 0. +Define the population counterpart of f(1) as +f ⋆ +(1) = (Q|L1|)−1 � +l∈L1 +G(·, M⋆)G(·, M⋆)⊤fl. +We establish the following result for the procedure provided in Algorithm 2. Recall δ2 from +Condition 6 and δ′′ from Condition 2(iii), which characterizes the magnitude of the within/between +group distances and the size of the true working model, respectively. +Theorem 5 (Asymptotic results on the estimated effects using Algorithm 2). Assume Condition +1–4 and 6. Let N → ∞. If +N−(1+2δ2−δ′′) → 0, +(6.17) +L · |L1| · N− 1−δ′′ +2 +→ 0, +(6.18) +then +�γ(1) − γ(1) +v(1) +⇝ N(0, 1), +where v2 +(1) = f ⋆⊤ +(1) VY f ⋆ +(1). Moreover, �v2 +(1) is conservative in the sense that +N(�v2 +(1) − v2 +(1),lim) P−→ 0, v2 +(1),lim ≥ v2 +(1), +where v2 +(1),lim = f ⋆⊤ +(1) D�Y f ⋆ +(1) is the limiting value of v2 +(1). +The conditions in Theorem 5 are mild and reveal a trade-off between some mathematical quan- +tities. For the first asymptotic condition in (6.17), when the size of the targeted working model +is small compared to N, say δ′′ = 0 (meaning |M⋆| does not grow with N), this condition always +holds. More generally, (6.17) is easier to satisfy with a larger between-group distance (larger δ2) +and smaller true working model size (smaller δ′′). The second condition (6.18) reflects the trade-off +among the total number of interested parameters (given by L, which is also |T ′|), the size of the +neighborhood of γ(1) (given by |L1|), and the size of the true working model (captured by δ′′). The +smaller these quantities are, the easier inference will be. Moreover, (6.18) is easily justifiable. Going +back to the previous example (6.16), (6.18) translates into +K0 +� +K=0 +�K +k +� +· |L1| · +�|M⋆| +N +�1/2 +→ 0. +(6.19) +25 + +One can check that (6.19) accommodates a variety of interesting regimes with different specifications +of K0, |L1| and |M⋆|. We omit the discussion here. +Theorem 5 also suggests the benefits of factor screening compared to procedures where no +screening is involved following similar reasoning provided in Remark 1. More precisely, without +screening, one requires Q to be small compared to N or {fl}l∈[L] are dense, which is violated in +large Q setups and many practical scenarios such as (6.15). +As a final comment, the result of our Theorem 5 relies on the perfect screening property (Theo- +rem 1), which are ensured by Conditions 1 - 4. 
Without perfect screening, there might be additional +sources of bias due to the uncertainty induced by the screening step and possible under-selection +results. Nevertheless, one can consider applying the over-selection strategy (Strategy 2 in Section +5.1) to facilitate inference on the best factorial effects. +7 +Simulation +In this section, we use simulation studies to demonstrate the finite-sample performance of the +proposed forward screening framework and the inferential properties of the RLS-based estimator. +More concretely, our simulation results verify the following properties of the proposed procedure +and estimators: +(G1) The RLS-based estimator �γr demonstrates efficiency gain (in terms of improved power and +shortened confidence interval) compared to the simple moment estimator �γ for general causal +parameters defined by sparse weighting vectors. +(G2) The factorial forward screening procedure provided in Algorithm 1 can improve the perfor- +mance of effect screening compared to naive procedure (i.e., screening without leveraging the +heredity principle). +(G1) echoes our discussion on the comparison of CLT conditions and asymptotic variance in Remark +1 and Proposition 1. (G2) verifies the results in Theorem 1 and 2 and checks the finite sample +behaviors of the proposed procedures. For both goals, we will vary the sample size and effect size +to provide a comprehensive understanding of their performance. +7.1 +Simulation setup +We set up a 28 factorial experiment (K = 8). There are N0 units in each treatment arm where +N0 is set to be a varying number. We generate independent potential outcomes from a shifted +26 + +exponential distribution: +Yi(z) ∼ EXP(1) − 1 + µ(z). +Here µ(z) are super population means of potential outcomes under treatment z. We choose µ(z) +such that the factorial effects satisfy the following structure: +• Main effects: the main effects corresponding to the first five factors, τ{1}, . . . , τ{5}, are nonzero; +the rest three main effects, τ{6}, . . . , τ{8}, are zero. +• Two-way interactions: +the two-way interactions associated with the first five factors are +nonzero, i.e., τ{kl} ̸= 0 for k ̸= l, k, l ∈ [5]. +All the rest of the two-way interactions are +zero. +• Higher-order interactions: all the higher-order interactions τK are zero if |K| ≥ 3. +The above setup of factorial effects guarantees that they are sparse and follow the strong heredity +principle. In the provided simulation results, we will vary the number of units in each treatment +arm and the size of the nonzero factorial effects. More details can be found in the R code attached +to the support materials. +7.2 +Simulation results supporting (G1) +In this subsection, we evaluate the performance of the RLS-based estimators (�γr, �vr) compared to +(�γ, �v) for testing a causal effect γtarget = f ⊤Y specified by a sparse vector: f = (0, . . . , 0, 1)⊤ ∈ RQ. +Intuitively, γtarget measures the average of potential outcomes in the last level. For each estimator, +we report: (i) power for testing H0 : γtarget = 0. (ii) coverage probability of the confidence intervals +for γtarget at level 0.95. Figure 2 summarizes the results. +Figure 2 demonstrates that the RLS-based estimator �γr has much higher power than the sim- +ple moment estimator �γ for inferring γtarget for all considered simulation settings. +This echoes +our conclusion in Proposition 1 that the RLS-based estimator has reduced variance than the sim- +ple moment estimator. 
Moreover, while the RLS-based estimator attains near-nominal coverage probability with reasonably large N0 and γtarget, the simple moment estimator tends to provide under-covered confidence intervals in all cases.

Figure 2: Simulation results on (G1). (i) Top left panel: power curve with varying N0; (ii) Top right panel: coverage probability with varying N0; (iii) Bottom left panel: power curve with varying effect size γtarget; (iv) Bottom right panel: coverage probability with varying effect size γtarget. (Methods compared: Forward Bonferroni versus No Selection.)

7.3    Simulation results for (G2)

In this subsection, we compare the performance of four candidate effect screening methods:
• Forward Bonferroni. Forward screening based on Bonferroni-corrected marginal t-tests;
• Forward Lasso. Forward screening based on Lasso;
• Naive Bonferroni. Screening with the full working model based on Bonferroni-corrected marginal t-tests;
• Naive Lasso. Screening with the full working model based on Lasso.
For each screening method, we evaluate its performance with three measures: (i) the perfect screening probability P{M̂ = M⋆}, (ii) the power of γ̂r for testing H0 : γtarget = 0 for the same γtarget defined in the previous section, and (iii) the coverage probability of the RLS-based confidence interval for γtarget with the nominal level at 0.95. The results are summarized in Figure 3.

Figure 3: Simulation results on (G2). (i) Top left panel: perfect screening probability with a small fixed effect size γtarget = 0.20 and varying N0; (ii) Top middle panel: power curve with a small fixed effect size γtarget = 0.20 and varying N0; (iii) Top right panel: coverage probability with a small fixed effect size γtarget = 0.20 and varying N0; (iv) Bottom left panel: perfect screening probability with a small fixed replication N0 = 2 and varying effect size γtarget; (v) Bottom middle panel: power curve with a small fixed replication N0 = 2 and varying effect size γtarget; (vi) Bottom right panel: coverage probability with a small fixed replication N0 = 2 and varying effect size γtarget. (Methods compared: Forward Bonferroni, Forward Lasso, Naive Bonferroni, Naive Lasso.)

From Figure 3, all four effect screening methods lead to perfect selection with high probability as N0 or γtarget increases. Nevertheless, with the forward screening procedure, the probability of perfect screening is higher than with the naive screening procedure. Besides, forward screening complies with the heredity structure and demonstrates higher interpretability than the naive screening methods. In terms of the power of γ̂r and v̂r for testing H0 : γtarget = 0, while all four methods have power approaching one as N0 and γtarget increase, the forward screening based procedures possess higher power with small N0 and γtarget.
Lastly, we can see an improvement in the coverage +29 + +probability of the RLS-based confidence intervals with the forward screening procedure. +8 +Discussion +In this manuscript, we have discussed the formal theory for forward screening and post-screening +inference in 2K factorial designs with large K. It is conceptually straightforward to extend the +theory to general factorial designs with multi-valued factors under more complicated notations, +and we thus omit the technical details to simplify the theoretical discussion. Another important +direction is covariate adjustment in factorial experiments. Lin (2013), Lu (2016a) and Liu et al. +(2022) demonstrated the efficiency gain of covariate adjustment with small K. +Zhao and Ding +(2023) discussed covariate adjustment in factorial experiments with factors and covariates selected +independent of data. We leave it to future research to establish the theory for factor screening and +covariate selection in factorial designs. +References +Andrews, I., Kitagawa, T., and McCloskey, A. (2019), “Inference on winners,” Tech. rep., National +Bureau of Economic Research. +Angrist, J. D. and Pischke, J.-S. (2009), Mostly Harmless Econometrics: An Empiricist’s Compan- +ion, Princeton: Princeton University Press. +Bai, Z., Choi, K. P., Fujikoshi, Y., and Hu, J. (2022), “Asymptotics of AIC, BIC and Cp model +selection rules in high-dimensional regression,” Bernoulli, 28, 2375–2403. +Bickel, P. J., Ritov, Y., Tsybakov, A. B., et al. (2010), “Hierarchical selection of variables in sparse +high-dimensional regression,” IMS Collections, 6, 28. +Bien, J., Taylor, J., and Tibshirani, R. (2013), “A lasso for hierarchical interactions,” Annals of +Statistics, 41, 1111. +Blackwell, M. and Pashley, N. E. (2021), “Noncompliance and instrumental variables for 2K factorial +experiments,” Journal of the American Statistical Association, in press. +Bloniarz, A., Liu, H., Zhang, C.-H., Sekhon, J. S., and Yu, B. (2016), “Lasso adjustments of +treatment effect estimates in randomized experiments,” Proceedings of the National Academy of +Sciences, 113, 7383–7390. +30 + +Box, G., Hunter, J., and Hunter, W. (2005), Statistics for Experimenters: Design, Innovation, and +Discovery, Hoboken, NJ: Wiley. +Branson, Z., Dasgupta, T., and Rubin, D. B. (2016), “Improving covariate balance in 2K factorial +designs via rerandomization with an application to a New York City Department of Education +High School Study,” Annals of Applied Statistics, 10, 1958–1976. +Claggett, B., Xie, M., and Tian, L. (2014), “Meta-analysis with fixed, unknown, study-specific +parameters,” Journal of the American Statistical Association, 109, 1660–1671. +Dasgupta, T., Pillai, N. S., and Rubin, D. B. (2015), “Causal inference from 2K factorial designs by +using potential outcomes,” Journal of the Royal Statistical Society: Series B (Statistical Method- +ology), 77, 727–753. +Egami, N. and Imai, K. (2019), “Causal interaction in factorial experiments: Application to conjoint +analysis,” Journal of the American Statistical Association, 114, 529–540. +Espinosa, V., Dasgupta, T., and Rubin, D. B. (2016), “A Bayesian perspective on the analysis of +unreplicated factorial experiments using potential outcomes,” Technometrics, 58, 62–73. +Fan, J. and Lv, J. (2008), “Sure independence screening for ultrahigh dimensional feature space,” +Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70, 849–911. +Fisher, R. A. 
(1935), The Design of Experiments, Edinburgh, London: Oliver and Boyd, 1st ed. +Fithian, W., Sun, D., and Taylor, J. (2014), “Optimal inference after model selection,” arXiv +preprint arXiv:1410.2597. +Freedman, D. A. (2008), “On regression adjustments to experimental data,” Advances in Applied +Mathematics, 40, 180–193. +Gerber, A. S. and Green, D. P. (2012), Field Experiments: Design, Analysis, and Interpretation, +New York, NY: Norton. +Guo, X., Wei, L., Wu, C., and Wang, J. (2021), “Sharp inference on selected subgroups in observa- +tional studies,” arXiv preprint arXiv:2102.11338. +Hao, N., Feng, Y., and Zhang, H. H. (2018), “Model selection for high-dimensional quadratic +regression via regularization,” Journal of the American Statistical Association, 113, 615–625. +31 + +Hao, N. and Zhang, H. H. (2014), “Interaction screening for ultrahigh-dimensional data,” Journal +of the American Statistical Association, 109, 1285–1301. +Haris, A., Witten, D., and Simon, N. (2016), “Convex modeling of interactions with strong heredity,” +Journal of Computational and Graphical Statistics, 25, 981–1004. +Hastie, T., Tibshirani, R., Friedman, J. H., and Friedman, J. H. (2009), The Elements of Statistical +Learning: Data Mining, Inference, and Prediction, vol. 2, New York: Springer. +Kempthorne, O. (1952), The Design and Analysis of Experiments, New York: Wiley. +Kuchibhotla, A. K., Kolassa, J. E., and Kuffner, T. A. (2022), “Post-selection inference,” Annual +Review of Statistics and Its Application, 9, 505–527. +Li, X. and Ding, P. (2017), “General forms of finite population central limit theorems with appli- +cations to causal inference,” Journal of the American Statistical Association, 112, 1759–1769. +Lim, M. and Hastie, T. (2015), “Learning interactions via hierarchical group-lasso regularization,” +Journal of Computational and Graphical Statistics, 24, 627–654. +Lin, W. (2013), “Agnostic notes on regression adjustments to experimental data: Reexamining +Freedman’s critique,” Annals of Applied Statistics, 7, 295–318. +Liu, H., Ren, J., and Yang, Y. (2022), “Randomization-based joint central limit theorem and efficient +covariate adjustment in randomized block 2K factorial experiments,” Journal of the American +Statistical Association, in press. +Lu, J. (2016a), “Covariate adjustment in randomization-based causal inference for 2K factorial +designs,” Statistics and Probability Letters, 119, 11–20. +— (2016b), “On randomization-based and regression-based inferences for 2K factorial designs,” +Statistics and Probability Letters, 112, 72–78. +Meng, X.-L. and Xie, X. (2014), “I got more data, my model is more refined, but my estimator is +getting worse! Am I just dumb?” Econometric Reviews, 33, 218–250. +Neyman, J. (1923/1990), “On the application of probability theory to agricultural experiments. +Essay on principles. Section 9.” Statistical Science, 465–472. +Pashley, N. E. and Bind, M.-A. C. (2023), “Causal inference for multiple non-randomized treatments +using fractional factorial designs,” Canadian Journal of Statistics, in press. +32 + +Rillig, M. C., Ryo, M., Lehmann, A., Aguilar-Trigueros, C. A., Buchert, S., Wulf, A., Iwasaki, A., +Roy, J., and Yang, G. (2019), “The role of multiple global change factors in driving soil functions +and microbial biodiversity,” Science, 366, 886–890. +Shi, L. and Ding, P. (2022), “Berry–Esseen bounds for design-based causal inference with possibly +diverging treatment levels and varying group sizes,” arXiv preprint arXiv:2209.12345. +Tibshirani, R. 
(1996), “Regression shrinkage and selection via the lasso,” Journal of the Royal +Statistical Society: Series B (Methodological), 58, 267–288. +Wainwright, M. J. (2019), High-dimensional Statistics: A Non-asymptotic Viewpoint, vol. 48, Cam- +bridge: Cambridge University Press. +Wang, H. (2009), “Forward regression for ultra-high dimensional variable screening,” Journal of the +American Statistical Association, 104, 1512–1524. +Wasserman, L. and Roeder, K. (2009), “High dimensional variable selection,” Annals of Statistics, +37, 2178. +Wei, W., Zhou, Y., Zheng, Z., and Wang, J. (2022), “Inference on the best policies with many +covariates,” arXiv preprint arXiv:2206.11868. +Wieczorek, J. and Lei, J. (2022), “Model selection properties of forward selection and sequential +cross-validation for high-dimensional regression,” Canadian Journal of Statistics, 50, 454–470. +Wu, C. J. and Hamada, M. S. (2011), Experiments: Planning, Analysis, and Optimization, vol. 552, +Hoboken, NJ: John Wiley & Sons. +Wu, Y., Zheng, Z., Zhang, G., Zhang, Z., and Wang, C. (2022), “Non-stationary a/b tests: Optimal +variance reduction, bias correction, and valid inference,” Bias Correction, and Valid Inference +(May 20, 2022). +Yates, F. (1937), “The design and analysis of factorial experiments,” Tech. Rep. Technical Commu- +nication 35, Imperial Bureau of Soil Science, London, U. K. +Yuan, M., Joseph, V. R., and Lin, Y. (2007), “An efficient variable selection approach for analyzing +designed experiments,” Technometrics, 49, 430–439. +Zhang, C. (2022), “Social construction of hate crimes in the U.S.: A factorial survey experiment,” +Theses and Dissertations–Sociology, 49. +33 + +Zhao, A. and Ding, P. (2021), “Regression-based causal inference with factorial experiments: esti- +mands, model specifications and design-based properties,” Biometrika, 109, 799–815. +— (2023), “Covariate adjustment in multi-armed, possibly factorial experiments,” Journal of the +Royal Statistical Society, Series B (Statistical Methodology), in press. +Zhao, P., Rocha, G., and Yu, B. (2009), “The composite absolute penalties family for grouped and +hierarchical variable selection,” Annals of Statistics, 37, 3468–3497. +Zhao, P. and Yu, B. (2006), “On model selection consistency of Lasso,” The Journal of Machine +Learning Research, 7, 2541–2563. +Zhao, S., Witten, D., and Shojaie, A. (2021), “In defense of the indefensible: A very naive approach +to high-dimensional inference,” Statistical Science, 36, 562–577. +34 + +Supplementary material +Section A provides more discussions/extensions to the results introduced in the main paper. +More concretely, Section A.1 presents detailed discussion of the use of weight least squares in +factorial experiments. Section A.2 extends the inference results in Section 4 to a vector of causal +effects. +Section B presents general results on consistency of forward factor screening. Theorem 1 is a +corollary of the results in Section B. +Section C gives the technical proofs of the results in the main paper and the Appendix. +A +Additional results +This section provides more extensions to the results in the main paper. Section A.1 discusses the +use of WLS in analyzing factorial experiments. Section A.2 extends the inference results under +perfect screening (Section 4) to a vector of causal effects. +A.1 +Weighted least squares for estimating factorial effects +In this subsection, we briefly state and prove some useful facts about weighted least squares in +estimating factorial effects. 
More discussions can be found in Zhao and Ding (2021). Denote the +design matrix as X = (g1,M, . . . , gN,M)⊤. Let W = Diag {wi}. The problem (2.4) has closed-form +solution: +�τ = (X⊤WX)−1(X⊤WY ) (closed form solution of WLS) += {G(·, M)⊤G(·, M)}−1{G(·, M)⊤ �Y } +(units under the same treatment arm share the same regressor) += Q−1G(·, M)⊤ �Y . +(S1) +The closed form (S1) motivates the variance estimation: +�V�τ = Q−2G(·, M)⊤ �V�Y G(·, M). +(S2) +Alternatively, one can use the Eicker–Huber–White (EHW) variance estimation with the HC2 cor- +rection (Angrist and Pischke, 2009): +�VEHW = (X⊤WX)−1X⊤WDiag +� +�ϵ2 +i +1 − N−1 +i +� +WX(X⊤WX)−1, +�ϵi = Yi − g⊤ +i,M�τ. +(S3) +S1 + +Again, because units under the same treatment arm share the same regressor, �VEHW simplifies to +�VEHW = Q−2G(·, M)⊤ �V ′ +�Y G(·, M), +(S4) +where +�V ′ +�Y = Diag +� +N(z)−1 �S′(z, z) +� +z∈T with �S′(z, z) = +1 +N(z) − 1 +� +Zi=z +(Yi − g⊤ +i,M�τ)2. +Following some algebra, we can show +�S′(z, z) = +1 +N(z) − 1 +� +Zi=z +(Yi − �Y (z))2 + +N(z) +N(z) − 1{�Y (z) − G(z, M)�τ}2 += �S(z, z) + +N(z) +N(z) − 1{�Y (z) − G(z, M)�τ}2. +Hence �S′(z, z) ≥ �S(z, z). In general �Y (z) ̸= G(z, M)�τ, so the difference is not negligible. The fol- +lowing Lemma S1 formally summarizes the statistical property of �τ and its two variance estimators, +�V�τ and �VEHW. The proof can be done by utilizing the moment facts from Section C.2 and C.3 of +Shi and Ding (2022), which we omit here. +Lemma S1. Assume Conditions 1 and 3. For the WLS in (2.4), we have +1. �τ = Q−1G(·, M)⊤ �Y is unbiased for the true factorial effects τ(M); i.e., E {�τ} = τ(M). +2. Both variance estimators are consistent and robust: N(�V�τ − V�τ,lim) = oP(1), N(�VEHW − +VEHW,lim) = oP(1), with V�τ,lim ≽ V�τ and VEHW ≽ V�τ, where +V�τ,lim = Q−2G(·, M)⊤D�Y G(·, M), +and +VEHW,lim = Q−2G(·, M)⊤Diag +� 1 − N−1 +N(z) − 1S(z, z) + +1 +N(z) − 1{Y (z) − G(z, M)τ(M)}2 +� +G(·, M). +3. EHW variance estimator is more conservative than the direct variance estimator: �VEHW ≽ �V�τ. +It is worthy of mentioning that in the fixed Q setting, if we assume that the factorial effects +that are not included in M are all zero, Lemma S1 implies EHW variance estimator (S3) or (S4) has +the same asymptotic statistical property as the direct variance estimator (S2), which agrees with +the conclusion of Zhao and Ding (2021). +S2 + +A.2 +Extension of post-screening inference to vector parameters +In this subsection we present an extension of Theorem 2 to a vector of causal parameters: +Γ = (γ1, . . . , γL)⊤, +where γl = f ⊤ +l Y . +For convenience we can stack f1, . . . , fL into a weighting matrix F = (f1, . . . , fL) and write +Γ = F ⊤Y . +We will focus on linear projections of Γ, defined as γb = b⊤Γ for a given b ∈ RL. Naturally, we can +apply forward screening and construct RLS-based estimators for Γ: +�Γr = (�γ1,r, . . . , �γL,r)⊤, +�V�Γ,r = F[ �M]⊤ �V�Y F[ �M], +(S5) +where +F[ �M] = Q−1G(·, �M)G(·, �M)⊤F. +For γb, an estimator based on (S5) is +�γb,r = b⊤�Γr, +�v2 +b,r = b⊤ �V�Γ,rb. +For standard factorial effects, we can use WLS to obtain the robust covariance matrix (Section A.1). +For one single b, we can actually apply Theorem 2 with +fb = Fb = +L +� +l=1 +blfl. +Define f ⋆ +b = F[M⋆]b. We then get the following theorem: +Theorem S1 (Statistical properties linear projections of Γ). Assume Conditions 1-4. Let N → ∞. +Then +�γb,r − γ +vb,r +⇝ N(0, 1) +where v2 +b,r = f ⋆⊤ +b +V�Y f ⋆ +b . Further assume ∥f ⋆ +b ∥∞ = O(Q−1). 
The variance estimator �v2 +b,r is conser- +vative in the sense +N(�v2 +b,r − v2 +b,r,lim) P−→ 0, +v2 +b,r,lim ≥ v2 +b,r, +where v2 +b,r,lim = f ⋆⊤ +b +D�Y f ⋆ +b is the limiting value of �v2 +b,r. +S3 + +The proof of Theorem S1 is similar to that of Theorem 2, which is mainly based on Lemma S5 +and thus omitted here. Moreover, for a fixed integer L, Theorem S1 implies joint normality of �Γr, +a result due to the Cram´er-Wold theorem. We summarize the result as the following corollary and +omit the proof: +Corollary S1. Assume a fixed L. Assume Conditions 1-4. We have +V −1/2 +�Γ,r +(�Γr − Γ) ⇝ N(0, IL), +where V�Γ,r = F[M⋆]⊤V�Y F[M⋆]. Further assume max∥b∥2=1 ∥f ⋆ +b ∥∞ = O(Q−1). The variance esti- +mator �v2 +b,r is conservative in the sense that +N(�V�Γ,r − V�Γ,r,lim) P−→ 0, +V�Γ,r,lim ≽ V�Γ,r, +where V�Γ,r,lim = F[M⋆]⊤D�Y F[M⋆]⊤ is the limiting value of �V�Γ,r. +B +General results on consistency of forward screening +In this section we provide some theoretical insights into the forward factor screening algorithm +(Algorithm 1). The discussion in this section starts from a more broad discussion where we allow the +S-step to be general procedures that satisfy certain conditions. We will show Bonferroni corrected +marginal t-test is a special case of these procedures. +We start with some regularization conditions to characterize a “good” layer-wise S-step, and +ensure the P-step is compatible with the structure of the true factorial effects. In light of this, we +use M⋆ +d,+ to denote the pruned set of effects on the d-th layer based on the true model M⋆ +d−1 on the +previous layer; that is, +M⋆ +d,+ = H(M⋆ +d−1). +These discussions motivate the following assumption on the layer-wise selection procedure �S(·): +Assumption 1 (Validity and consistency of the selection operator). We denote +�Md = �S(M⋆ +d,+; {Yi, Zi}N +i=1), +where M⋆ +d,+ = H(M⋆ +d−1) is defined as above. Let {αd}D +d=1 be a sequence of significance levels in (0, 1). +We assume that the following validity and consistency property hold for SN(·): +Validity: lim sup +N→∞ +P +� +�Md ∩ M⋆c +d ̸= ∅ +� +≤ αd, +Consistency: lim sup +N→∞ +D +D +� +d=1 +P +� +�Mc +d ∩ M⋆ +d ̸= ∅ +� += 0. +S4 + +This assumption can be verified for many screening procedures. In Theorem 1 we will show it +holds for the layer-wise Bonferroni corrected marginal testing procedure in Algorithm 1. Moreover, +in the high dimensional super population study, a combination of data splitting, adaptation of ℓ1 +regularization and marginal t tests can also fulfill such a requirement (Wasserman and Roeder, +2009). +Besides, we assume the H(·) operator respects the structure of the nonzero factorial effects: +Assumption 2 (H-heredity). For d = 1, · · · , D − 1, it holds +M⋆ +d+1 ⊂ P(M⋆ +d). +One special case of H(·) operator satisfying Assumption 2 is naively adding all the the higher- +order interactions regardless of the lower-order screening results. +Besides, if we have evidence +that the effects have particular hierarchical structure, applying the heredity principles can improve +screening accuracy as well as interpretability of the screening results. +Theorem S2 (Screening consistency). Assume Assumption 1 and 2. Then the forward screening +procedure (3.6) has the following properties: +(i) Type I error control. Forward screening controls the Type I error rate, in the sense that +lim sup +N→∞ +P +� +�Md ∩ M⋆ +d +c ̸= ∅ for some d ∈ [D] +� +≤ α = +D +� +d=1 +αd. +(ii) Screening consistency. Further assume α = αN → 0. 
The forward procedure consistently +selects all the nonzero effects up to D levels with probability tending to 1: +lim sup +N→∞ +P +� +�Md = M⋆ +d for all d ∈ [D] +� += 1. +Theorem S2 consists of two parts. First, one can control the type I error rate, which is defined +as the probability of over-selects at least one zero effect. The definition is introduced and elaborated +detailedly in Wasserman and Roeder (2009) for model selection. Second, if the tuning parameter +α = �D +d=1 αd vanish asymptotically, one can actually achieve perfect screening up to D levels of +effects. To apply Theorem S2 to specific procedures, the key step is to verify Assumption 1 and +justify Assumption 2, which we will do for Bonferroni corrected marginal t tests as an example in +the next section. +Moreover, the scaling of αN plays an important role in theoretical discussion. To achieve perfect +selection, we hope αN decays as fast as possible; ideally if αN equals zero then we do not commit +S5 + +any type I error (or equivalently, we will never select redundant effects). However, for many data- +dependent selection procedure α can only decay at certain rates, because a fast decaying α means +higher possibility of rejection, thus can lead to strict under-selection. +Therefore, in the tuning +process, αd should be scaled properly if one wants to pursue perfect selection. Nevertheless, even +if the tuning is hard and perfect model selection can not be achieved, we still have many strategies +to exploit the advantage of the forward screening procedure. We will have more discussions in later +sections. +Lastly, as we have commented earlier, in practice people have many alternative methods for +the S-step. They are attractive in factorial experiments because many lead to simple form solutions +due to the orthogonality of factorial designs. For example, Lasso is a commonly adopted strategy +for variable selection in linear models (Zhao and Yu, 2006). It solves the following penalized WLS +problem in factorial settings: +�Ml = {K : �τl,K ̸= 0}, +�τl,K = min +τ ′∈RH +1 +2 +� +z∈T +wi(Yi − g⊤ +i τ ′)2 + λl∥τ ′∥1. +Due to the orthogonality of G, the resulting �M has a closed-form solution (Hastie et al., 2009): +�Ml = {K : |�τK| ≥ λl}. +Other methods, such as AIC/BIC (Bai et al., 2022), sure independence screening (Fan and Lv, +2008), etc., are also applicable. With more delicate assumptions and tuning parameter choices, these +methods can also be justified theoretically for screening consistency and post-screening inference. +We omit the details. +C +Technical proofs +In this section we present the technical proofs for the results across the whole paper. Section C.1 +presents some preliminary probabilistic results that are useful in randomized experiments which are +mainly attributed to Shi and Ding (2022). The main proof starts from Section C.2. +S6 + +C.1 +Preliminaries: some important probabilistic results in randomized experi- +ments +In this subsection we present some preliminary probability results that are crucial for our theoretical +discussion. Consider an estimator of the form +�γ = Q−1 � +z∈T +w(z)�Y (z), +with variance estimator +�v2 = Q−2 � +z∈T +w(z)2 �S(z, z). +Li and Ding (2017) showed that +E{�Y } = Y , V�Y = Var +� +�Y +� += D�Y − N−1S. +(S6) +Then (S6) further leads to the following facts: +E{�γ} = +� +z∈T +f(z)Y (z) = γ, +(S7) +Var {�γ} = +� +z∈T +f(z)2N(z)−1S(z, z) − N−1f ⊤Sf, +E{�v2} = +� +z∈T +f(z)2N(z)−1S(z, z). 
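Before stating the formal variance-estimation and Berry–Esseen results, a small self-contained sketch may help fix ideas. It instantiates the generic estimator γ-hat and its variance estimator v-hat^2 with w = g_K inside the layer-wise, Bonferroni-corrected marginal t-test screening discussed in Section B. The sketch is ours and is not the paper's replication code: it assumes weak-heredity pruning (an order-d candidate enters only if all of its order-(d−1) subsets were selected), equal layer-wise levels α_d = α/D, screening up to D = 2 layers, per-arm variance estimates S-hat(z, z)/N(z), and a hypothetical 2^4 experiment; the function forward_screen and all other names are illustrative.

# Layer-wise forward factor screening with Bonferroni-corrected marginal t-tests (illustrative sketch).
import itertools
import numpy as np
from statistics import NormalDist

def contrast(arms, effect):
    # Contrast vector g_K over the Q treatment arms for the effect indexed by `effect`.
    return np.prod(arms[:, list(effect)], axis=1)

def forward_screen(Y, Z, arms, D=2, alpha=0.05):
    Q, K = arms.shape
    Y_bar = np.array([Y[Z == z].mean() for z in range(Q)])           # arm means
    N_z = np.array([(Z == z).sum() for z in range(Q)])               # arm sizes N(z)
    S_hat = np.array([Y[Z == z].var(ddof=1) for z in range(Q)])      # S_hat(z, z)
    selected = [set()]                                               # effects kept on each layer
    for d in range(1, D + 1):
        if d == 1:
            candidates = [frozenset([k]) for k in range(K)]
        else:
            prev = selected[d - 1]                                   # weak-heredity pruning step
            candidates = [frozenset(c) for c in itertools.combinations(range(K), d)
                          if all(frozenset(s) in prev
                                 for s in itertools.combinations(c, d - 1))]
        kept = set()
        if candidates:
            crit = NormalDist().inv_cdf(1 - (alpha / D) / (2 * len(candidates)))
            for effect in candidates:
                g = contrast(arms, effect)
                tau_hat = g @ Y_bar / Q                              # tau_hat_K = Q^{-1} g_K' Ybar
                v_hat2 = (g**2 * S_hat / N_z).sum() / Q**2           # conservative variance estimate
                if abs(tau_hat) / np.sqrt(v_hat2) >= crit:
                    kept.add(effect)
        selected.append(kept)
    return set().union(*selected[1:])

rng = np.random.default_rng(1)
K = 4
arms = np.array(list(itertools.product([-1, 1], repeat=K)))          # Q = 2^K arms, coded +/-1
Z = np.repeat(np.arange(arms.shape[0]), 20)                          # 20 units per arm
Y = (2.0 * arms[Z, 0] + 1.5 * arms[Z, 1] + 1.0 * arms[Z, 0] * arms[Z, 1]
     + rng.normal(size=Z.size))                                      # nonzero effects: {1}, {2}, {1,2}
print(sorted(tuple(k + 1 for k in sorted(s)) for s in forward_screen(Y, Z, arms)))
# expected output (with high probability): [(1,), (1, 2), (2,)]

With effect sizes of this magnitude and 20 units per arm, the printed set equals the three truly nonzero effects with high probability, in line with the perfect-screening regime of Theorem 1; shrinking the effects toward the noise level makes under-selection appear, which is the imperfect-screening regime addressed in Section 5.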
+We have the following variance estimation results and Berry–Esseen bounds: +Lemma S2 (Variance concentration and Berry–Esseen bounds in finite population). Define γ = +E{�γ}, v2 = Var(�γ) and v2 +lim = E{�v2}. Suppose the following conditions hold: +• Nondegenerate variance. There exists a σw > 0, such that +Q−2 +Q +� +z=1 +w(z)2N−1 +z S(z, z) ≤ σ2 +wv2. +(S8) +• Bounded fourth moments. There exists a δ > 0 such that +max +z∈[Q] +1 +N +N +� +i=1 +{Yi(z) − Y (z)}4 ≤ ∆4. +(S9) +Then we have the following conclusions: +S7 + +1. The variance estimator is conservative for the true variance: v2 +lim ≥ v2. Besides, the following +tail bound holds: +P +� +N|�v2 − v2 +lim| > t +� +≤ Cc3c−4∥w∥2 +∞∆4 +QN0 +· 1 +t2 . +2. We have a Berry–Esseen bound with the true variance: +sup +t∈R +����P +��γ − γ +v +≤ t +� +− Φ(t) +���� ≤ 2Cσw +c−1∥w∥∞ maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +∥w∥2 +� +c−1 minz∈[Q] S(z, z) · √N0 +. +3. We have a Berry–Esseen bound with the estimated variance: for any ϵN ∈ (0, 1/2], +sup +t∈R +����P +��γ − γ +�v +≤ t +� +− Φ +�vlim +v t +����� ≤ ϵN + Cc3c−4∥w∥2 +∞∆4 +QN0 +· +1 +(Nv2ϵN)2 ++ 2Cσw +c−1∥w∥∞ maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +∥w∥2 +� +c−1 minz∈[Q] S(z, z) · √N0 +. +Proof of Lemma S2. +1. See Lemma S13 of Shi and Ding (2022). +2. See Theorem 1 of Shi and Ding (2022). +3. First we show a useful result: for |a| ≤ 1/2 and any b ∈ R, +sup +t∈R +|Φ{(1 + a)t + b} − Φ{t}| ≤ |a| + |b|. +(S10) +(S10) is particularly useful for small choices of a and b. Intuitively, it evaluates the change of +Φ under a small affine perturbation of t. +The proof of (S10) is based on a simple step of the mean value theorem: for any t ∈ R, +|Φ{(1 + a)t + b} − Φ{t}| +=|φ(ξt,(1+a)t) · (at + b)| +=|φ(ξt,(1+a)t) · at| + |φ(ξt,(1+a)t) · b| +=|a| · |φ(ξt,(1+a)t) · t| · 1 {|t| ≤ 1} + |a| · |φ(ξt,(1+a)t) · t| · 1 {|t| > 1} + |φ(ξt,(1+a)t) · b| +≤ +1 +√ +2π|a| · 1 {|t| ≤ 1} + +1 +√ +2π|a||t| · exp(−t2/8) · 1 {|t| > 1} + +1 +√ +2π|b| +≤|a| + |b|. +We consider t ≥ 0 because t < 0 can be handled similarly. For any ϵN > 0, We have +P +��γ − γ +�v +≤ t +� += P +��γ − γ +v +≤ �v +vt +� += P +��γ − γ +v +≤ �v +vt, +���� +�v − vlim +v +���� ≤ ϵN +� ++ P +��γ − γ +v +≤ �v +vt, +���� +�v − vlim +v +���� > ϵN +� +. +S8 + +Then we can show that +P +��γ − γ +�v +≤ t +� +≤ P +��γ − γ +v +≤ �v +vt, +���� +�v − vlim +v +���� ≤ ϵN +� ++ P +����� +�v − vlim +v +���� > ϵN +� +≤ P +��γ − γ +v +≤ +�v +v + ϵN +� +t +� ++ P +����� +�v − vlim +v +���� > ϵN +� +. +For the first term, we have +sup +t≥0 +����P +��γ − γ +v +≤ +�vlim +v ++ ϵN +� +t +� +− Φ +��vlim +v ++ ϵN +� +t +����� +≤ 2Cσw +c−1∥w∥∞ maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +∥w∥2 +� +c−1 minz∈[Q] S(z, z) · √N0 +. +For the second term, using the variance estimation results in Part 1 we have +P +����� +�v − vlim +v +���� ≥ ϵN +� +≤ P +����� +�v − vlim +v +���� · +���� +�v + vlim +v +���� ≥ ϵN +� +(because vlim is conservative) += P +����� +N�v2 − Nv2 +lim +Nv2 +���� ≥ ϵN +� +≤ Cc3c−4∥w∥2 +∞∆4 +QN0 +· +1 +(Nv2ϵN)2 . +Besides, by (S10), when ϵN ≤ 1/2, we also have +sup +t∈R +���Φ +��vlim +v ++ ϵN +� +t +� +− Φ +�vlim +v t +���� ≤ vϵN +vlim +≤ ϵN. +Aggregating all the parts above, we can show that for any t ≥ 0, +P +��γ − γ +�v +≤ t +� +≤ Φ +�vlim +v t +� ++ ϵN + Cc3c−4∥w∥2 +∞∆4 +QN0 +· +1 +(Nv2ϵN)2 ++ 2Cσw +c−1∥w∥∞ maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +∥w∥2 +� +c−1 minz∈[Q] S(z, z) · √N0 +. +On the other hand, we can show that +P +��γ − γ +�v +≤ t +� +≥ P +��γ − γ +v +≤ �v +vt, +���� +�v − vlim +v +���� ≤ ϵN +� +≥ P +��γ − γ +v +≤ +�vlim +v +− ϵN +� +t +� +− P +����� +�v − vlim +v +���� ≥ ϵN +� +. 
+(S11) +By (S10), when ϵN ≤ 1/2, we also have +sup +t∈R +���Φ +��vlim +v +− ϵN +� +t +� +− Φ +�vlim +v t +���� ≤ ϵN. +S9 + +So we can derive a lower bound analogous to (S11). Note that the results can be analogously +generalized to t ≤ 0. Putting pieces together, we can show that for any t ≥ 0, +sup +t∈R +����P +��γ − γ +�v +≤ t +� +− Φ +�vlim +v t +����� ≤ ϵN + Cc3c−4∥w∥2 +∞∆4 +QN0 +· +1 +(Nv2ϵN)2 ++ 2Cσw +c−1∥w∥∞ maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +∥w∥2 +� +c−1 minz∈[Q] S(z, z) · √N0 +. +The following corollary shows a Berry–Esseen bound for the studentized statistic in the special +case where w = (w(z))z∈[Q] is a contrast vector for factorial effects. That is, w = gK for some +K ∈ K. +Corollary S2. Assume Condition (S8) and (S9) hold. Let w = gK for some K ∈ K. Then we have +a Berry–Esseen bound with the estimated variance: +sup +t∈R +����P +��τK − τK +�v +≤ t +� +− Φ +�vlim +v t +����� ≤ 2 +� +Cσ4 +wc5c−6∆4 +{minz∈T S(z, z)}2 +�1/3 +· +1 +(QN0)1/3 ++ 2Cσw +c−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +� +c−1 minz∈[Q] S(z, z) +· +1 +(QN0)1/2 . +Proof of Corollary S2. Lower bound for Nv2. +Note that ∥w∥2 +2 = Q and ∥w∥∞ = 1. +Using +Condition (S8), we have +Nv2 ≥ Nσ−2 +w Q−2 +Q +� +z=1 +w(z)2N−1 +z S(z, z) +≥ (cQN0) · σ−2 +w c−1Q−1N−1 +0 +min +z∈T S(z, z) · (Q−1∥w∥2 +2) += σ−2 +w cc−1 min +z∈T S(z, z). +Therefore, the Berry–Esseen bound becomes +sup +t∈R +����P +��τK − τK +�v +≤ t +� +− Φ +�vlim +v t +����� ≤ ϵN + +Cσ4 +wc5c−6∆4 +(QN0){minz∈T S(z, z)}2 · 1 +ϵ2 +N ++ 2Cσw +c−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +� +c−1 minz∈[Q] S(z, z) · √QN0 +. +Optimize the summation of the first and second term. By taking derivative over ϵN on +the upper bound and solving for the zero point, we know that when +ϵN = +� +2Cσ4 +wc5c−6∆4 +(QN0){minz∈T S(z, z)}2 +�1/3 +, +S10 + +the upper bound is minimized and +sup +t∈R +����P +��τK − τK +�v +≤ t +� +− Φ +�vlim +v t +����� ≤ 2 +� +Cσ4 +wc5c−6∆4 +{minz∈T S(z, z)}2 +�1/3 +· +1 +(QN0)1/3 ++ 2Cσw +c−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +� +c−1 minz∈[Q] S(z, z) +· +1 +(QN0)1/2 . +Additionally, we have a Berry–Esseen bounds after screening the effects: +Lemma S3 (Berry Esseen bound with screening). Assume there exists σw > 0 such that +Q +� +z=1 +f[M](z)2N−1 +z S(z, z) ≤ σ2 +wv2(M). +(S12) +Then +sup +t∈R +�����P +� +�γ[ �M] − γ[M] +v(M) +≤ t +� +− Φ(t) +����� +≤ 2P +� +�M ̸= M +� ++ 2Cσw +c−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +� +c−1 minz∈[Q] S(z, z) · √N0 +· ∥f[M]∥∞ +∥f[M]∥2 +. +Proof of Lemma S3. With the selected working model we have +sup +t∈R +�����P +� +�γ[ �M] − γ[M] +v(M) +≤ t +� +− Φ(t) +����� += sup +t∈R +�����P +� +�γ[ �M] − γ[M] +v(M) +≤ t, �M = M +� +− Φ(t) + P +� +�γ[ �M] − γ[M] +v(M) +≤ t, �M ̸= M +������ +≤ sup +t∈R +�����P +� +�γ[ �M] − γ[M] +v(M) +≤ t, �M = M +� +− Φ(t) +����� + P +� +�γ[ �M] − γ[M] +v(M) +≤ t, �M ̸= M +� += sup +t∈R +����P +��γ[M] − γ[M] +v(M) +≤ t, �M = M +� +− Φ(t) +���� + P +� +�γ[ �M] − γ[M] +v(M) +≤ t, �M ̸= M +� +≤ sup +t∈R +����P +��γ[M] − γ[M] +v(M) +≤ t +� +− Φ(t) +���� + 2P +� +�M ̸= M +� +. +Now we have +�γ(M) = f ⊤G(·, M)�τ(M) += f ⊤G(·, M)G(·, M)⊤ �Y += f[M]⊤ �Y . +S11 + +By Theorem 1 of Shi and Ding (2022), we have a Berry–Esseen bound with the true variance: +sup +t∈R +����P +��γ(M) − γ[M] +v +≤ t +� +− Φ(t) +���� ≤ 2Cσw +∥f[M]∥∞c−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +∥f[M]∥2 +� +c−1 minz∈[Q] S(z, z) · √N0 +. +A crucial quantity that appeared in Lemma S3 is the ratio of norms: +∥f[M]∥∞ +∥f[M]∥2 +. 
+(S13) +The following Lemma S4 provides an explicit bound on (S13) which reveals how the ratio is con- +trolled with respect to the size of the working model. +Lemma S4. For f[M] ̸= 0, we have +∥f[M]∥∞ +∥f[M]∥2 +≤ +�|M| +Q +�1/2 +. +(S14) +Proof of Lemma S4. Because the LHS of (S14) is a ratio, based on the definition of f ⋆ (4.9) we can +assume ∥f∥2 = 1 without loss of generality. Due to the orthogonality of G, we can use the columns +of G as bases and express f as +f = +1 +√QG(·, M)b1 + +1 +√QG(·, Mc)b2, +where b1 ∈ R|M| and b2 ∈ R|Mc| and ∥(b⊤ +1 , b⊤ +2 )⊤∥2 = 1. Then +f[M] = Q−1G(·, M)G(·, M)⊤f = +1 +√QG(·, M)b1. +Hence +∥f[M]∥∞ ≤ +1 +√Q∥b1∥1, +∥f[M]∥2 = ∥b1∥2, +∥f[M]∥∞ +∥f[M]∥2 +≤ +1 +√Q · ∥b1∥1 +∥b1∥2 +≤ +�|f[M]| +Q +�1/2 +. +C.2 +Proof of Theorem S2 +Proof of Theorem S2. According to the orthogonality of designs, the signs for all terms in the +studied unsaturated population regressions are consistent with those of saturated regressions, which +saves the effort of differentiating true models for partial and full regression. We introduce several +key events that will play a crucial role in the proof: for D0 ∈ [D], define +Under-selection: Eu(D0) = { �Md ⊂ M⋆ +d, d ∈ [D0]}, +Strict under-selection: Esu(D0) = { �Md ⊂ M⋆ +d, d ∈ [D0]; there exists d ∈ [D0], �Md ⊊ M⋆ +d}. +S12 + +High level idea of the proof. To prove screening consistency, we will prove two facts: +P {Eu(D) holds} → 1, +P {Esu(D) holds} → 0. +Combining these two results together, we can conclude asymptotic screening consistency. +We start from the strict under-selection probability. +Step 1: Prove that asymptotically, there is no strict under-selection. +By definition, +P {Esu(1) holds} = P +� +�M1 ⊊ M⋆ +1 +� +≤ P +� +�Mc +1 ∩ M⋆ +1 ̸= ∅ +� +. +We now derive a recursive bound for P {Esu(D0 + 1) holds} where 1 ≤ D0 ≤ D − 1. +We have +decomposition +Esu(D0 + 1) = +� +�Md ⊂ M⋆ +d, d ≤ D0 + 1 +� +− +� +�Md = M⋆ +d, d ≤ D0 + 1 +� += Esu,1(D0 + 1) ∪ Esu,2(D0 + 1), +where +Esu,1(D0 + 1) = +� +�Md ⊂ M⋆ +d, d ≤ D0 + 1 +� +− +� +�Md = M⋆ +d, d ≤ D0; �MD0+1 ⊂ M⋆ +D0+1 +� +, +Esu,2(D0 + 1) = +� +�Md = M⋆ +d, d ≤ D0; �MD0+1 ⊂ M⋆ +D0+1 +� +− +� +�Md = M⋆ +d, d ≤ D0 + 1 +� +. +For Esu,1(D0 + 1), we have +P {Esu,1(D0 + 1) holds} = P +�� +�Md ⊂ M⋆ +d, d ≤ D0 + 1 +� +− +� +�Md = M⋆ +d, d ≤ D0; �MD0+1 ⊂ M⋆ +D0+1 +�� +≤ P +� +∀d ∈ [D0 + 1], �Md ⊂ M⋆ +d; ∃d ∈ [D0], �Md ⊊ M⋆ +d +� +≤ P +� +∀d ∈ [D0], �Md ⊂ M⋆ +d; ∃d ∈ [D0], �Md ⊊ M⋆ +d +� += P {Esu(D0) holds}. +(S15) +For Esu,2(D0 + 1), we notice that �MD0+1 is generated based on �MD0 and the set of estimates +over the prescreened effect set �MD0+1,+. Under Assumption 2, on the event �Md = M⋆ +d we have +�Md+1 = �Md+1. +Hence we can compute +P {Esu,2(D0 + 1) holds} =P +� +�Md = M⋆ +d, d ≤ D0; �MD0+1 ⊊ M⋆ +D0+1 +� +=P +� +�Md = M⋆ +d, d ≤ D0; �MD0+1 ⊊ M⋆ +D0+1 +� +≤P +� +�Mc +D0+1 ∩ M⋆ +D0+1 ̸= ∅ +� +. +(S16) +S13 + +Now (S15) and (S16) together suggest that +P {Esu(D0 + 1) holds} +≤P {Esu(D0) holds} + P +� +�Mc +D0+1 ∩ M⋆ +D0+1 ̸= ∅ +� +≤ · · · ≤ +D0+1 +� +d=1 +P +� +�Mc +D0+1 ∩ M⋆ +D0+1 ̸= ∅ +� +. +(S17) +Taking D0 = D − 1 in (S17) and apply Assumption 1, we conclude +P {Esu(D) holds} → 0. +Step 2: Prove the first part of Theorem S2 and give a probability bound for under- +selection. We compute the probability for under-selection: +P {Eu(D) fails} +=P {Eu(1) fails} + +D +� +D0=2 +P {Eu(D0 − 1) holds; Eu(D0) fails} +=P {Eu(1) fails}(≜ ⃝⋆ 1) + +D +� +D0=2 +P {Ep(D0 − 1) holds; Eu(D0) fails}(≜ ⃝⋆ 2) ++ +D +� +D0=2 +P {Esu(D0 − 1) holds; Eu(D0) fails}(≜ ⃝⋆ 3). 
+For ⃝⋆ 1, by definition of Eu(1) we have +⃝⋆ 1 = P {Eu(1) fails} = P +� +�M1 ∩ M⋆ +1 +c ̸= ∅ +� += P +� +�M1 ∩ M⋆ +1 +c ̸= ∅ +� +. +(S18) +For ⃝⋆ 2, we have +⃝⋆ 2 ≤ +D +� +D0=2 +P +� +�Md = M⋆ +d, d ∈ [D0 − 1]; �MD0 ∩ M⋆c +D0 ̸= ∅ +� +≤ +D +� +D0=2 +P +� +�MD0 ∩ M⋆c +D0 ̸= ∅ +� +, (S19) +which is because on the given event, �MD0,+ = H( �MD0−1) = H(M⋆ +D0−1) = M⋆ +D0,+ and �MD0 = +�S( �MD0,+) = �MD0. +From (S18) and (S19), +lim sup +N→∞ +(⃝⋆ 1 + ⃝⋆ 2) = +D +� +D0=1 +P +� +�MD0 ∩ M⋆c +D0 ̸= ∅ +� +≤ +D +� +D0=1 +αD0 = α. (by Assumption 1) +(S20) +S14 + +For ⃝⋆ 3, we have +⃝⋆ 3 ≤ +D +� +D0=2 +P {Esu(D0 − 1) holds} +≤ +D +� +D0=2 +D0−1 +� +d=1 +P +� +�Mc +d ∩ M⋆ +d ̸= ∅ +� +(using (S17)) += +D−1 +� +d=1 +(D − d)P +� +�Mc +d ∩ M⋆ +d ̸= ∅ +� +→ 0. (using Assumption 1) +(S21) +Therefore, by (S20) and (S21), the probability of failure of under-selection gets controlled under +α asymptotically. +As a side product, we obtain finite sample bounds: +P {Eu(D) fails} ≤ +D +� +D0=1 +P +� +�MD0 ∩ M⋆c +D0 ̸= ∅ +� ++ +D−1 +� +d=1 +(D − d)P +� +�Mc +d ∩ M⋆ +d ̸= ∅ +� +. +Step 3. Prove of the second part of Theorem S2 and conclude screening consistency. +Under α = α(N) → 0, the first part of the result implies that with probability tending to one, we +have under-selection: +P {Eu(D) holds} → 1. +By (S17) and Assumption 1, strict under-selection will not happen with high probability: +P {Esu(D) holds} → 0. +Therefore, we conclude the consistency of the screening procedure. +C.3 +Proof of Theorem 1 +We state and prove a more general version of Theorem 1: +Theorem S3 (Bonferroni corrected marginal t test). Let �Md = �S(M⋆ +d,+) where M⋆ +d,+ = P(M⋆ +d−1). +Assume Conditions 1, 2, 3 and 4. Then we have the following results for the screening procedure +based on Bonferroni corrected marginal t-test: +(i) (Validity) lim supN→∞ +�D +d=1 P +� +�Md ∩ M⋆c +d ̸= ∅ +� +≤ �D +d=1 αd = α. +(ii) (Consistency) lim supN→∞ D �D +d=1 P +� +�Mc +d ∩ M⋆ +d ̸= ∅ +� += 0. +S15 + +(iii) (Type I error control) Overall the procedure achieves type I error rate control: +lim sup +N→∞ +P +� +�M ∩ (∪D +d=1M⋆ +d)c ̸= ∅ +� +≤ α. +(iv) (Perfect screening) When δ′ is strictly positive, we have maxd∈[D] αd → 0 and +lim +N→∞ P +� +�M = +D +� +d=1 +M⋆ +d +� += 1. +Part (i) and (ii) of Theorem 1 justified Assumption 1 and 2 respectively, which build up the +basis for applying Theorem S2. Part (iii) guarantees type I error control under the significance level +α. When we let α decay to zero, Part (iii) implies that we will not include redundant terms into +the selected working model. Part (iv) further states a stronger result with vanishing α - perfect +selection can be achieved asymptotically. +Proof of Theorem 1. +(i) First, we show validity: +P +� +�Md ∩ M⋆c +d ̸= ∅ +� += P +� +∃K ∈ M⋆ +d,+\M⋆ +d, +���� +�τK +�vK,r +���� ≥ Φ−1 +� +1 − +αd +2|M⋆ +d,+| +�� +≤ +� +K∈M⋆ +d,+\M⋆ +d +P +����� +�τK +�vK,r +���� ≥ Φ−1 +� +1 − +αd +2|M⋆ +d,+| +�� +≤ +� +K∈M⋆ +d,+\M⋆ +d +� +αd +|M⋆ +d,+| + +�C +(QN0)1/3 +� +(by Corollary S2) +≤ +� +αd + +�C|M⋆ +d,+| +N1/3 +� +. +Hence, +D +� +d=1 +P +� +�Md ∩ M⋆c +d ̸= ∅ +� +≤ +D +� +d=1 +� +αd + +�C|M⋆ +d,+| +N1/3 +� +. +Due to the effect heredity condition 4, we have +|M⋆ +1,+| = |M⋆ +1|, +|M⋆ +d,+| ≤ K|M⋆ +d−1|. +Hence +lim sup +N→∞ +D +� +d=1 +P +� +�Md ∩ M⋆c +d ̸= ∅ +� +≤ α + lim sup +N→∞ +K �C|M⋆| +N1/3 += α. (using Condition 2(iii)) +S16 + +(ii) Second, we show consistency. Assume the nonzero τK’s are positive. 
If some are negative one +can simply modify the direction of some of the inequalities and still validate the proof. +P +� +�Mc +d ∩ M⋆ +d ̸= ∅ +� += P +� +∃K ∈ M⋆ +d, +���� +�τK +�vK,r +���� ≤ Φ−1 +� +1 − +αd +2|M⋆ +d,+| +�� +≤ +� +K∈M⋆ +d +P +����� +�τK +�vK,r +���� ≤ Φ−1 +� +1 − +αd +2|M⋆ +d,+| +�� +≤ +� +K∈M⋆ +d +P +����� +�τK +vK.r +���� ≤ �vK,r +vK.r +Φ−1 +� +1 − +αd +2|M⋆ +d,+| +�� +≤ +� +K∈M⋆ +d +P +����� +�τK +vK.r +���� ≤ +� +1 + +�C +(QN0)1/3 +� +Φ−1 +� +1 − +αd +2|M⋆ +d,+| +�� ++ P +� +�vK,r +vK.r +> 1 + +�C +(QN0)1/3 +� +. +For simplicity, let +Z⋆ +d = Φ−1 +� +1 − +αd +2|M⋆ +d,+| +� +. +Then +P +� +�Mc +d ∩ M⋆ +d ̸= ∅ +� +≤ +� +K∈M⋆ +d +� +P +� +−Z⋆ +d − τK +vK.r +≤ �τK +vK.r +− τK +vK.r +≤ Z⋆ +d − τK +vK.r +� ++ +�C +(QN0)1/3 +� += +� +K∈M⋆ +d +Φ +� +r−1 +K +� +Z⋆ +d − τK +vK.r +�� +− Φ +� +r−1 +K +� +−Z⋆ +d − τK +vK.r +�� +(≜ ⃝⋆) + +�C|M⋆ +d| +(QN0)1/3 . +With Condition 2, we have +Z⋆ +d = Θ +� +� +� +2 ln +2|M⋆ +d,+| +αd +� +� = Θ( +� +(δ′ + δ′′/3) ln N), +���� +τK +vK.r +���� = Θ(N1/2+δ) = Θ(Nδ0) (by defining δ0 = 1/2 + δ > 0). +Because δ > −1/2 and δ′ ≥ 0, we have | τK +vK.r | → ∞ and Z⋆ +d/(| τK +vK.r |) → 0. Therefore, +Φ +� +r−1 +K +� +Z⋆ +d − τK +vK.r +�� +− Φ +� +r−1 +K +� +−Z⋆ +d − τK +vK.r +�� += Θ(N−δ0 exp{−N2δ0/2}). +Now applying Condition 2 again, we have +D +D +� +d=1 +P +� +�Mc +d ∩ M⋆ +d ̸= ∅ +� += Θ +� +D|M⋆|N−δ0 exp{−N2δ0/2} + D|M⋆|/N1/3� += o(1). +S17 + +(iii) The Type I error rate control comes from Theorem S2. +(iv) The perfect selection result follows from Theorem S2. +C.4 +Proof of Theorem 2 +Theorem 2 is a direct result of Theorem 1, Lemma S2 and the following Berry–Esseen bound: +Lemma S5 (Berry–Esseen bound under perfect screening). Assume (S12). Then +sup +t∈R +�����P +� +�γ( �M) − γ +v(M⋆) +≤ t +� +− Φ(t) +����� +≤ 2P +� +�M ̸= M⋆� ++ 2Cσw +c−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +� +c−1 minz∈[Q] S(z, z) · √N0 +· ∥f[M⋆]∥∞ +∥f[M⋆]∥2 +. +Proof of Lemma S5. This lemma is a direct application of Lemma S3. First we check that +γ(M⋆) = γ. +From the definition of γ (S7), we have +γ = f ⊤Y += f ⊤Gτ = f ⊤G(·, M⋆)τ(M⋆) += Q−1f ⊤G(·, M⋆)G(·, M⋆)⊤Y = γ(M⋆). +Now apply Lemma S3 with M = M⋆ to get the result of Theorem 2. +C.5 +Statement and proof of Lemma S6 +The following lemma gives the closed form solution of the RLS estimator (4.8). +Lemma S6. �Yr from (4.8) can be expressed as: +�Yr = Q−1G(·, �M)G(·, �M)⊤ �Y . +If �M = M⋆, E +� +�Yr +� += Y . +S18 + +Proof of Lemma S6. Due to the orthogonality of G, we have the following decomposition: +�Y = Q−1G(·, �M)G(·, �M)⊤ �Y + Q−1G(·, �Mc)G(·, �Mc)⊤ �Y . +By the constraint in (4.8), we have +∥�Y − µ∥2 = ∥Q−1G(·, �Mc)G(·, �Mc)⊤ �Y ∥2 + ∥Q−1G(·, �M)G(·, �M)⊤ �Y − µ∥2, +which is minimized at +�µ = �Yr = Q−1G(·, �M)G(·, �M)⊤ �Y . +Besides, �µ satisfies the constraint in (4.8). +Next we verify E +� +�Yr +� += Y if �M = M⋆. Utilizing the orthogonality of G again, we have +Y = Q−1G(·, M⋆)G(·, M⋆)⊤Y + Q−1G(·, M⋆c)G(·, M⋆c)⊤Y +C.6 +Proof of Proposition 1 +Proof of Proposition 1. (i) Based on the definition of v2 +r and v2, we have +v2 +r +v2 = f ⋆⊤V�Y f ⋆ +f ⊤V�Y f += ∥f ⋆∥2 +2 +∥f∥2 +2 +because κ(V�Y ) = 1. We further compute +∥f ⋆∥2 +2 +∥f∥2 +2 += f ⊤{Q−1G(·, M⋆)G(·, M⋆)⊤}f +f ⊤f +≤ 1 +where the inequality holds because of the following dominance relationship: +Q−1G(·, M⋆)G(·, M⋆)⊤ ≼ IQ. +(ii) Because the order of the nonzero elements in f is not crucial here, we assume the first s⋆ +coordinates of f are nonzero while the rest are zero without loss of generality. 
We can compute +v2 +r +v2 = f ⋆⊤V�Y f ⋆ +f ⊤V�Y f +≤ κ(V�Y ) · ∥f ⋆∥2 +2 +∥f∥2 +2 +. +(S22) +S19 + +For f ⋆ we have +∥f ⋆∥2 = ∥Q−1G(·, M⋆)G(·, M⋆)⊤f∥2 += +�����Q−1G(·, M⋆)G(·, M⋆)⊤ +� s⋆ +� +s=1 +f(s)es +������ +2 +≤ +s⋆ +� +s=1 +|f(s)|∥Q−1G(·, M⋆)G(·, M⋆)⊤es∥2 += +�|M⋆| +Q +�1/2 s⋆ +� +s=1 +|f(s)| = +�|M⋆| +Q +�1/2 +∥f∥1. +Then we have +∥f ⋆∥2 +2 +∥f∥2 +2 +≤ |M⋆| +Q +∥f∥2 +1 +∥f∥2 +2 +≤ s⋆|M⋆| +Q +. +(S23) +Combining (S22) and (S23), we conclude the result. +As an extension of Proposition 1, we compare the asymptotic length of confidence intervals in +the following Proposition S1. +Proposition S1 (Asymptotic length of confidence interval comparison). Assume that both �γ and +�γr converge to a normal distribution as the sample size tends to infinity. Assume the variance +estimators are consistent: N(�v2 − v2 +lim) = oP(1), N(�v2 +r − v2 +r,lim) = oP(1). +(i) If the condition number of D�Y satisfies κ(D�Y ) = 1, we have +v2 +r,lim +v2 +lim +≤ 1. +(ii) Let s⋆ denote the number of nonzero elements in f. , then we have +v2 +r,lim +v2 +lim +≤ κ(D�Y ) · s⋆|M⋆| +Q +. +C.7 +Proof of Theorem 3 +Proof of Theorem 3. According to Condition 5 and Theorem 1, with Strategy 1, +P +� +�M = ∪d⋆ +d=1M⋆ +d +� +→ 1. +We will apply Lemma S5 with +M = M⋆ = ∪d⋆ +d=1M⋆ +d. +S20 + +We only need to verify γ = γ[M] under the orthogonality condition (5.14). +γ = f ⊤Y += f ⊤Gτ = f ⊤G(·, M⋆)τ(M⋆) + f ⊤G(·, M⋆c)τ(M⋆c). +Now by (5.14), f ⊤G(·, Mc) = 0. Hence +γ = Q−1f ⊤G(·, ∪d⋆ +d=1M⋆ +d)G(·, ∪d⋆ +d=1M⋆ +d)⊤Y = γ. +C.8 +Proof of Theorem 4 +Proof of Theorem 4. This proof can be finished by applying Lemma S3 and S4 with M = M +⋆ and +checking γ[M +⋆] = γ, which is omitted here. +C.9 +Proof of Proposition 1 +Proof of Proposition 1. (i) Assume V�Y = Q−1GΛG⊤ where Λ is a diagonal matrix in RQ×Q. We +directly compute +v2 +r +v2 = f ⋆⊤V�Y f ⋆ +f ⊤V�Y f += f ⊤{Q−1G(·, M⋆)G(·, M⋆)⊤}{Q−1GΛG⊤}{Q−1G(·, M⋆)G(·, M⋆)⊤}f +f ⊤{Q−1GΛG⊤}f += f ⊤{Q−1G(·, M⋆)Λ(M⋆, M⋆)G(·, M⋆)⊤}f +f ⊤{Q−1GΛG⊤}f +≤ 1. +(ii) Because the order of the nonzero elements in f ⋆ is not crucial, we assume only the first s⋆ +elements of f are nonzero. That is, +f = f1e1 + · · · + fs⋆es⋆. +(S24) +We can verify that +∥Q−1G(·, M⋆)G(·, M⋆)⊤ek∥2 = |M⋆| +Q , +∀ k ∈ [Q]. +(S25) +Therefore, +v2 +r +v2 = f ⋆⊤V�Y f ⋆ +f ⊤V�Y f +≤ ϱmax(V�Y )∥f ⋆∥2 +2 +ϱmin(V�Y )∥f∥2 +2 += κ(V�Y ) · ∥f ⋆∥2 +2 +∥f∥2 +2 +. +On the one hand, using Q−1G(·, M⋆)G(·, M⋆)⊤ ≼ IQ, we have +∥f ⋆∥2 +2 +∥f∥2 +2 +≤ 1. +(S26) +S21 + +On the other hand, using (S24) and (S25), we have +∥f ⋆∥2 +2 +∥f∥2 +2 +≤ ∥f∥2 +1 +∥f∥2 +2 +· |M⋆| +Q +≤ s⋆|M⋆| +Q +. +(S27) +Combining (S26) and (S27) concludes the proof. +C.10 +Proof of Theorem 5 +For simplicity, we focus on the case given by (6.15). The general proof can be completed similarly. +We begin by the following lemma: +Lemma S7 (Consistency of the selected tie sets). Assume Conditions 1, 3 and 6. There exists +universal constants C, C′ > 0, such that when N > n(δ1, δ2, δ3), we have +P +� +�T1 = T1 +� +≥1 − P{ �M ̸= M⋆} +−C|T ′||T1| +�� +¯c∆|M⋆| +N1+2δ2 exp +� +−C′N1+2δ2 +¯c∆|M⋆| +� ++ σc−1/2 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +· +� +|M⋆| +N +� +. +Lemma S7 establishes a finite sample bound to quantify the performance of the tie set selection +step in Algorithm 2. The tail bound implies that the performance of tie selection depends on several +elements: +• Quality of effect screening. Ideally we hope perfect screening can be achieved. In other words, +the misspecification probability P{ �M ̸= M⋆} is small in an asymptotic sense. 
+• Size of the tie |T1| and the number of factor combinations considered |T ′|. These two quantities +play a natural role because one can expect the difficulty of selection will increase if there are +too many combinations present in the first tie or involved in comparison. +• Size of between-group distance d⋆ +h. If the gap between Y (1) and the remaining order values +are large, ηN = Θ(Nδ2) is allowed to take larger values and the term +� +¯c∆|M⋆| +N1+2δ2 exp +� +−C′N1+2δ2 +¯c∆|M⋆| +� +can become smaller in magnitude. +• Population level property of potential outcomes. The scale of the centered potential outcomes +|Yi(z) − Y (z)| should be controlled, and the population variance S(z, z) should be non- +degenerate. +S22 + +• The relative scale between number of nonzero effects |M⋆| and the total number of units N. +The larger N is compared to |M⋆|, the easier for us to draw valid asymptotic conclusions. +Proof of Lemma S7. The high level idea of the proof is: we first prove the non-asymptotic bounds +over the random event �M = M⋆, then make up for the cost of �M ̸= M⋆. Over �M = M⋆, we have +�Yr = �Y ⋆ +r = G(·, M⋆)�τ(M⋆) = Q−1G(·, M⋆)G(·, M⋆)⊤ �Y . +We apply Lemma S3 to establish a Berry–Esseen bound for each �Y ⋆ +r (z). Note that +�Y ⋆ +r (z) = f ⊤ +z �Y , f ⊤ +z = Q−1G(z, M⋆)G(·, M⋆)⊤. +By calculation we have +∥fz∥∞ = Q−1|M⋆|, ∥fz∥2 = +� +Q−1|M⋆|. +Also we can show that +Q +� +z′=1 +fz(z′)2N−1 +z′ S(z′, z′) ≤ σ2v2(M). +and obtain +sup +t∈R +�����P +� �Y ⋆ +r (z) − Y (z) +vN +≤ t +� +− Φ(t) +����� ≤ 2Cσc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +� +|M⋆| +QN0 +. +A probabilistic bound on the ordered statistics. We show a bound on +P +� +max +z∈T ′\T1 +�Y ⋆ +r (z) < min +z∈T1 +�Y ⋆ +r (z) ≤ max +z∈T1 +�Y ⋆ +r (z) +� +. +It is known that (Wainwright, 2019, Exercise 2.2) +1 − Φ(x) = +� ∞ +x +φ(t)dt ≤ 1 +x +� ∞ +x +tφ(t)dt ≤ +1 +√ +2πx +� +exp +� +−x2 +2 +�� +. +Hence +P +�√ +N +����Y ⋆ +r (z) − Y (z) +��� ≥ +√ +Nd⋆ +h +� +≤ +vN +√ +2πd⋆ +h +· exp +� +− d⋆2 +h +2v2 +N +� ++ 2Cσc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +· +� +|M⋆| +N0Q. +(S28) +S23 + +Therefore, for all z ∈ T ′\T and z′ ∈ T1, +P +� +�Y ⋆ +r (z′) − �Y ⋆ +r (z) < 0 +� += P +�√ +N(�Y ⋆ +r (z′) − Y (z′)) − +√ +N(�Y ⋆ +r (z) − Y (z)) < +√ +N(Y (z) − Y (z′)) +� +≤ P +�√ +N(�Y ⋆ +r (z′) − Y (z′)) − +√ +N(�Y ⋆ +r (z) − Y (z)) < −2 +√ +Nd⋆ +h +� += P +�√ +N(�Y ⋆ +r (z′) − Y (z′)) − +√ +N(�Y ⋆ +r (z) − Y (z)) < −2 +√ +Nd⋆ +h, +√ +N(�Y ⋆ +r (z) − Y (z)) < +√ +Nd⋆ +h +� ++ P +�√ +N(�Y ⋆ +r (z′) − Y (z′)) − +√ +N(�Y ⋆ +r (z) − Y (z)) < −2 +√ +Nd⋆ +h, +√ +N(�Y ⋆ +r (z) − Y (z)) < +√ +Nd⋆ +h +� +≤ P +�√ +N(�Y ⋆ +r (z′) − Y (z′)) < − +√ +Nd⋆ +h +� ++ P +�√ +N(�Y ⋆ +r (z) − Y (z)) ≥ +√ +Nd⋆ +h +� +. +Using (S28) we have +P +� +�Y ⋆ +r (z′) − �Y ⋆ +r (z) < 0 +� +≤ +� +¯c∆|M⋆| +√2πN0Qd⋆ +h +· exp +� +−N0Qd⋆2 +h +2¯c¯s|M⋆| +� ++ 2Cσc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +· +� +|M⋆| +N0Q. +Now a union bound gives +P +� +�Y ⋆ +r (z′) − �Y ⋆ +r (z) < 0 +� +≥ 1 − |T1||T ′| +� � +¯c¯s|M⋆| +√2πN0Qd⋆ +h +· exp +� +−N0Qd⋆2 +h +2¯c¯s|M⋆| +� ++ 2Cσc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +· +� +|M⋆| +N0Q +� +. +Now using that d⋆ +h = Θ(Nδ1), Nd⋆2 +h = Θ(N1+2δ1) with 1 + 2δ1 > 0. The first term in the bracket +has the following order +� +¯c¯s|M⋆| +√2πN0Qd⋆ +h +· exp +� +−N0Qd⋆2 +h +2¯c¯s|M⋆| +� += Θ +�� +¯c¯s|M⋆| +N1+2δ1 exp +� +−C′N1+2δ1 +¯c¯s|M⋆| +�� +where C′ > 0 is a universal constant due to Condition 2.Note that δ2 > δ1. 
Thus when N is large +enough, we have +P +� +�Y ⋆ +r (z′) − �Y ⋆ +r (z) < 0 +� +≥1 − C|T1||T ′| +�� +¯c¯s|M⋆| +N1+2δ1 exp +� +−C′N1+2δ1 +¯c¯s|M⋆| +� ++ σc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +· +� +|M⋆| +N0Q +� +. +(S29) +S24 + +Nice separation. Consider the following random index: +�z ∈ arg max +z∈T ′ +�Y ⋆ +r (z). +For any ¯ϵ > 0, +P +� +min +z /∈T1 +|�Y ⋆ +r (z) − �Y ⋆ +r (�z)|/ηN ≥ 2¯ϵ +� +≥ P +� +min +z /∈T1,z′∈T1 +|�Y ⋆ +r (z) − �Y ⋆ +r (z′)|/ηN ≥ 2¯ϵ, �z ∈ T1 +� +≥ P +� +min +z /∈T1,z′∈T1 +|�Y ⋆ +r (z) − �Y ⋆ +r (z′)|/ηN ≥ 2¯ϵ +� ++ P {�z ∈ T1} − 1 +≥ P {�z ∈ T1} − +� +z /∈T1,z′∈T1 +P +� +|�Y ⋆ +r (z) − �Y ⋆ +r (�z′)|/ηN ≤ 2¯ϵ +� +. +(S30) +To proceed we have the following tail bound: +P +� +|�Y ⋆ +r (z) − �Y ⋆ +r (z′)|/ηN ≤ 2¯ϵ +� +=P +� +|{�Y ⋆ +r (z) − Y (z)} − {�Y ⋆ +r (z′) − Y (z′)} − {Y (z) − Y (z′)}| ≤ 2¯ϵηN +� +≤P +� +|Y (z) − Y (z′)| − |�Y ⋆ +r (z) − Y (z)| − |�Y ⋆ +r (z′) − Y (z′)| ≤ 2¯ϵηN +� +≤P +� +|�Y ⋆ +r (z) − Y (z)| + |�Y ⋆ +r (z′) − Y (z′)| ≥ 2d⋆ +h − 2¯ϵηN +� +≤P +� +|�Y ⋆ +r (z) − Y (z)| ≥ d⋆ +h − ¯ϵηN +� ++ P +� +|�Y ⋆ +r (z′) − Y (z′)| ≥ d⋆ +h − ¯ϵηN +� +(because z /∈ T1 and z′ ∈ T1) +≤4 +� +� +¯c∆|M⋆| +√2πN0Q(d⋆ +h − ϵηN) · exp +� +−N0Q(d⋆ +h − ϵηN)2 +2¯c¯s|M⋆| +� ++2Cσc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +· +� +|M⋆| +N0Q +� +. +(This is deduced analogously to the proof in the previous part) +By the conditions we imposed in the theorem, we know that when N is large enough, +d⋆ +h − ¯ϵηN > d⋆ +h/2. +Hence, for N > N(δ1, δ2), we have +� +z /∈T1,z′∈T1 +P +� +|�Y ⋆ +r (z) − �Y ⋆ +r (z′)|/ηN ≤ 2¯ϵ +� +≤4|T1||T ′| +� � +2¯c¯s|M⋆| +√πN0Qd⋆ +h +· exp +� +−N0Qd⋆2 +h +8¯c¯s|M⋆| +� ++ 2Cσc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +· +� +|M⋆| +N0Q +� +. +S25 + +Combined with (S30), we have: +P +� +min +z /∈T1 +|�Y ⋆ +r (z) − �Y ⋆ +r (�z)|/ηN ≥ 2¯ϵ +� +≥P { �m ∈ T1} − 4|T1||T ′| +� +2¯c¯s|M⋆| +√πN0Qd⋆ +h +· exp +� +−N0Qd⋆2 +h +8¯c¯s|M⋆| +� +� +�� +� +Term I +− 4|T1||T ′|2Cσc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +· +� +|M⋆| +N0Q +� +�� +� +Term II +. +Analogous to the discussion in the previous part, when N is sufficiently large, we can show +P +� +min +z /∈T1 +|�Y ⋆ +r (z) − �Y ⋆ +r (�z)|/ηN ≥ 2¯ϵ +� +≥P { �m ∈ T1} − C|T1||T ′| +�� +¯c¯s|M⋆| +N1+2δ2 exp +� +−C′N1+2δ2 +¯c¯s|M⋆| +� ++ σc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +· +� +|M⋆| +N0Q +� +. +Similarly we can show for any z ∈ T1 and ϵ > 0, +P +� +max +z∈T1 |�Y ⋆ +r (z) − �Y ⋆ +r (�z)|/ηN ≤ 2ϵ +� +≥ P {�z ∈ T1} − +� +z̸=z′∈T1 +P +� +|�Y ⋆ +r (z) − �Y ⋆ +r (z′)|/ηN > 2ϵ +� +. +Then we have for z ̸= z′ ∈ T1, +P +� +|�Y ⋆ +r (z) − �Y ⋆ +r (z′)|/ηN > 2ϵ +� +≤ P +� +|�Y ⋆ +r (z) − Y (z)| ≥ ϵηN − dh +� ++ P +� +|�Y ⋆ +r (z′) − Y (z′)| ≥ ϵηN − dh +� +≤ 4 +� +� +¯c¯s|M⋆| +√2πN0Q(ϵηN − dh) · exp +� +−N0Q(ϵηN − dh)2 +2¯c¯s|M⋆| +� ++ 2Cσc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +· +� +|M⋆| +N0Q +� +. +By the scaling of the parameters, when N0 is large enough N > N(δ2, δ3), ϵηN − dh > ϵηN/2. That +being said, +P +� +|�Y ⋆ +r (z) − �Y ⋆ +r (z′)|/ηN > 2ϵ +� +≤4 +� +� +2¯c¯s|M⋆| +√πN0Q(ϵηN) · exp +� +−N0Q(ϵηN)2 +8¯c¯s|M⋆| +� ++ 2Cσc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +· +� +|M⋆| +N0Q +� +. 
+S26 + +Hence we have: +P +� +max +z∈T1 |�Y ⋆ +r (z) − �Y ⋆ +r (�z)|/ηN ≤ 2ϵ +� +≥P {�z ∈ T1} − 4|T1||T ′| +� +2¯c¯s|M⋆| +√πN0Q(ϵηN) · exp +� +−N0Q(ϵηN)⋆2 +8¯c¯s|M⋆| +� +� +�� +� +Term I +− 4|T1||T ′|2Cσc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +· +� +|M⋆| +N0Q +� +�� +� +Term II +. +Again, by the conditions, we can show +P +� +max +z∈T1 |�Y ⋆ +r (z) − �Y ⋆ +r (�z)|/ηN ≤ 2ϵ +� +≥P {�z ∈ T1} − C|T1||T ′| +�� +¯c¯s|M⋆| +N1+2δ2 exp +� +−C′N1+2δ2 +¯c¯s|M⋆| +� ++ σc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +· +� +|M⋆| +N0Q +� +. +Applying (S29) we know that +P{�zh ∈ T1} +≥1 − C|T ′||T1| +�� +¯c¯s|M⋆| +N1+2δ2 exp +� +−C′N1+2δ2 +¯c¯s|M⋆| +� ++ σc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +· +� +|M⋆| +N0Q +� +. +Aggregating parts. Aggregating all the results above, we can show that, when N is large +enough, i.e., N > n(δ1, δ2, δ3), +P +� +max +z∈T1 |�Y ⋆ +r (z) − �Y ⋆ +r (�z)| ≤ ϵηN, min +z /∈T1 +|�Y ⋆ +r (z) − �Y ⋆ +r (�z)| ≥ ¯ϵηN +� +≥1 − C|T ′||T1| +�� +¯c¯s|M⋆| +N1+2δ2 exp +� +−C′N1+2δ2 +¯c¯s|M⋆| +� ++ σc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +· +� +|M⋆| +N0Q +� +. +Bounding the factor level combination selection probability. +From the formulated +procedure, we have +P +� +�T1 = T1 +� +=P +� +|�Yr(z) − max +z∈T ′ �Yr(z)| ≤ ϵηN, for z ∈ T1; +|�Yr(z) − max +z∈T ′ �Yr(z)| > ϵηN, for z /∈ T1 +� +≥P +� +|�Y ⋆ +r (z) − max +z∈T ′ �Y ⋆ +r (z)| ≤ ϵηN, for z ∈ T1; +S27 + +|�Y ⋆ +r (z) − max +z∈T ′ �Y ⋆ +r (z)| > ϵηN, for z /∈ T1 +� +− P{ �M ̸= M⋆} +=P +� +|�Y ⋆ +r (z) − �Y ⋆ +r (�zh)| ≤ ϵηN, for z ∈ T1; +|�Y ⋆ +r (z) − �Y ⋆ +r (�zh)| > ϵηN, for z /∈ T1 +� +− P{ �M ̸= M⋆} +(where we introduce random index �zh to record the position that achieves maximum) +≥P +� +|�Y ⋆ +r (z) − �Y ⋆ +r (�zh)| ≤ ϵηN, for z ∈ T1; +|�Y ⋆ +r (z) − �Y ⋆ +r (�zh)| > ϵηN, for z /∈ T1 +� +− P{ �M ̸= M⋆} +(simply using the fact that ϵ > ϵ) +=P +� +max +z∈T1 |�Y ⋆ +r (z) − �Y ⋆ +r (�zh)| ≤ ϵηN; min +z /∈T1 +|�Y ⋆ +r (z) − �Y ⋆ +r (�zh)| > ϵηN +� +− P{ �M ̸= M⋆} +≥1 − +H0 +� +h=1 +� +1 − P +� +max +z∈T1 |�Y ⋆ +r (z) − �Y ⋆ +r (�zh)| ≤ ϵηN; min +z /∈T1 +|�Y ⋆ +r (z) − �Y ⋆ +r (�zh)| > ϵηN +�� +− P{ �M ̸= M⋆} +≥1 − P{ �M ̸= M⋆} +−C|T ′||T1| +�� +¯c¯s|M⋆| +N1+2δ2 exp +� +−C′N1+2δ2 +¯c¯s|M⋆| +� ++ σc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| +c−1/2{minz∈[Q] S(z, z)}1/2 +· +� +|M⋆| +N0Q +� +. +Lemma S7 suggests that, under the conditions assumed in Theorem 5, we select the first tie set +consistently as N → ∞. Now Theorem 5 is a direct result of Lemma S5 and Lemma S7. +S28 + diff --git a/W9FLT4oBgHgl3EQfTS_4/content/tmp_files/load_file.txt b/W9FLT4oBgHgl3EQfTS_4/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..cabc95d30285ea4c6b65089a3d3ab12a7dc33b28 --- /dev/null +++ b/W9FLT4oBgHgl3EQfTS_4/content/tmp_files/load_file.txt @@ -0,0 +1,1556 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf,len=1555 +page_content='Forward screening and post-screening inference in factorial designs Lei Shi∗ Jingshen Wang† Peng Ding ‡ Abstract Ever since the seminal work of R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Fisher and F.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Yates, factorial designs have been an important experimental tool to simultaneously estimate the treatment effects of multiple factors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' In factorial designs, the number of treatment levels may grow exponentially with the number of factors, which motivates the forward screening strategy based on the sparsity, hierarchy, and heredity principles for factorial effects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Although this strategy is intuitive and has been widely used in practice, its rigorous statistical theory has not been formally established.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' To fill this gap, we establish design-based theory for forward factor screening in factorial designs based on the potential outcome framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' We not only prove its consistency property but also discuss statistical inference after factor screening.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' In particular, with perfect screening, we quantify the advantages of forward screening based on asymptotic efficiency gain in estimating factorial effects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' With imperfect screening in higher-order interactions, we propose two novel strategies and investigate their impact on subsequent inference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Our formulation differs from the existing literature on variable selection and post-selection inference because our theory is based solely on the physical randomization of the factorial design and does not rely on a correctly-specified outcome model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Keywords: Causal inference;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Design-based inference;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Forward selection;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Post-selection infer- ence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' ∗Division of Biostatistics, University of California, Berkeley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' leishi@berkeley.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='edu †Division of Biostatistics, University of California Berkeley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' jingshenwang@berkeley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Corresponding author.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' ‡Peng Ding, Department of Statistics, University of California, Berkeley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' pengdingpku@berkeley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='edu 1 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='12045v1 [stat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='ME] 28 Jan 2023 1 Introduction 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='1 Factorial experiments: opportunities and challenges Ever since the seminal work of Fisher (1935) and Yates (1937), factorial designs have been widely used in many fields, including agricultural, industrial, and biomedical sciences (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Box et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', 2005;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Wu and Hamada, 2011;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Gerber and Green, 2012).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' For example, in social science, one government funded research by Zhang (2022) studied the social construction of hate crime in the U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' using factorial experiments based on three factors: race, sexual orientation, and religious affiliation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' As another example, in ecology, Rillig et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2019) studied multiple global change factors in driving soil functions and microbial biodiversity with factorial designs involving up to ten factors involving drought, temperature, antibiotics, etc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Factorial experiments are popular partially because they can simultaneously accommodate multiple factors and offer opportunities to estimate not only the main causal effects of factors but also their interactions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' We focus on the 2K factorial design in which K binary factors are randomly assigned to N experimental units.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' With a small K, we can simultaneously estimate the 2K − 1 main effects and interactions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Nevertheless, when K is large, the number of factorial effects grows exponentially with K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' This motivates us to conduct factor screening based on sparsity, hierarchy, and heredity principles for factorial effects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' More precisely, Wu and Hamada (2011) summarized these three principles as below: (a) (sparsity) The number of important factorial effects is small.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (b) (hierarchy) Lower-order effects are more important than higher-order effects, and effects of the same order are equally important.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (c) (heredity) Higher-order effects are significant only if their corresponding lower-order effects are significant.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' The sparsity principle motivates conducting factor screening in factorial designs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' The hierarchy principle motivates the forward screening strategy that starts from lower-order effects and then moves on to higher-order effects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' The heredity principle motivates using structural restrictions on higher-order effects based on the selected lower-order effects.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Due to its simplicity and computa- tional efficiency, while the forward screening strategy has been widely used in data analysis (Wu and Hamada, 2011;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Espinosa et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', 2016), its design-based theory under the potential outcome framework has not been formally established.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Moreover, it is often challenging to understand the 2 impact of factor screening on the subsequent statistical inference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' The overarching goal of this manuscript is to fill these gaps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='2 Our contributions and literature review We summarize our contribution from three perspectives: First, our study adds to the growing literature of factorial designs with a growing number of factors under the potential outcome framework (Dasgupta et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', 2015;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Branson et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', 2016;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Lu, 2016b;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Espinosa et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', 2016;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Egami and Imai, 2019;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Blackwell and Pashley, 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Zhao and Ding, 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Pashley and Bind, 2023;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Wu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', 2022).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' To deal with a large number of factors, Espinosa et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2016) and Egami and Imai (2019) informally used factor screening without studying its statistical properties, whereas Zhao and Ding (2021) discussed parsimonious model specifications that are chosen a priori and independent of data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' The rigorous theory for factor screening is generally missing in this literature, let alone the theory for statistical inference after factor screening.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' At a high level, our contributions fill the gaps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Second, we formalize forward factor screening and establish its consistency under the design- based framework under few outcome modeling assumptions;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' see Section 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Factor screening in factorial design sounds like a familiar statistical task if we formulate it as a variable selection problem in a linear model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Thus, forward screening is reminiscent of the vast literature on forward selection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Wang (2009) and Wieczorek and Lei (2022) proved the consistency of forward selection for the main effects in a linear model, whereas Hao and Zhang (2014) and Hao et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2018) moved further to allow for second-order interactions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Other researchers proposed various penalized regressions to encode the sparsity, hierarchy, and heredity principles (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Yuan et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', 2007;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Zhao et al.' 
Our design-based framework departs from this literature by not assuming a correctly specified linear outcome model. This framework is classic in experimental design and causal inference, with randomness coming solely from the design of the experiment rather than from the error terms in a linear model (Neyman, 1923/1990; Kempthorne, 1952; Freedman, 2008; Lin, 2013; Dasgupta et al., 2015). The framework invokes fewer outcome modeling assumptions but consequently imposes technical challenges for developing the theory. Bloniarz et al. (2016) discussed the design-based theory for covariate selection in treatment-control experiments, but the corresponding theory for factorial designs is largely unexplored.

Third, we discuss statistical inference after forward factor screening with (Sections 4 and 6) or without perfect screening (Section 5).
On the one hand, we prove the screening consistency of the forward screening procedure, which ensures that the selected factorial effects are the true, nonzero ones. With this perfect screening property, we can then proceed as if the selected working model were the true model. This allows us to ignore the impact of forward screening on the subsequent inference, similar to the proposal of Zhao et al. (2021) for statistical inference after the Lasso (Tibshirani, 1996). In particular, we quantify the advantages of conducting forward screening in terms of the asymptotic efficiency gain for estimating factorial effects. As an application under perfect screening, we discuss statistical inference for the mean outcome under the best factorial combination (Andrews et al., 2019; Guo et al., 2021; Wei et al., 2022). On the other hand, we acknowledge that perfect screening can be too much to hope for in practice, as it requires strong regularity conditions on the factorial effects. As a remedy, we propose two strategies to deal with imperfect screening of higher-order interactions, and we study their impacts on post-screening inference. A key motivation for our strategies is to ensure that the parameters of interest after forward factorial screening are not data-dependent, avoiding philosophical debates in the current literature on post-selection inference (Fithian et al., 2014; Kuchibhotla et al., 2022).
1.3 Notation

We will use the following notation throughout. For asymptotic analyses, a_N = O(b_N) denotes that there exists a positive constant C > 0 such that a_N ≤ C b_N; a_N = o(b_N) denotes that a_N/b_N → 0 as N goes to infinity; a_N = Θ(b_N) denotes that there exist positive constants c and C such that c b_N ≤ a_N ≤ C b_N. For a matrix V, define ϱ_max(V) and ϱ_min(V) as its largest and smallest eigenvalues, respectively, and define κ(V) = ϱ_max(V)/ϱ_min(V) as its condition number. For two positive semi-definite matrices V_1 and V_2, we write V_1 ≼ V_2 or V_2 ≽ V_1 if V_2 − V_1 is positive semi-definite.

We will use different levels of sets. For an integer K, let [K] = {1, . . . , K}. We use K in calligraphic font to denote a subset of [K]. Let K = {K | K ⊂ [K]} denote the power set of [K]. We also use blackboard bold font to denote subsets of the power set; for example, M ⊂ K denotes that M is a subset of K. We will use A_i ∼ B_i to denote the least-squares fit of the A_i's on the B_i's, which is purely a numerical procedure without assuming a linear model. Let →_P denote convergence in probability, and ⇝ denote convergence in distribution.
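As a small added illustration of the three levels of sets (ours, written out for K = 3, not part of the original text):

```latex
% Added illustration only: the three levels of set notation for K = 3 factors.
\[
  [K] = \{1,2,3\}, \qquad
  \mathcal{K} = \{1,3\} \subset [K], \qquad
  \mathbb{M} = \{\emptyset, \{1\}, \{1,3\}\} \subset \mathbb{K},
\]
where $\mathbb{K}$ denotes the power set of $[K]$, here with $2^{K} = 8$ elements.
```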
2 Setup of factorial designs

This section introduces the key mathematical components of factorial experiments. Section 2.1 introduces the notation of potential outcomes and the definitions of the factorial effects. Section 2.2 introduces the treatment assignment mechanism, the observed data, and the regression analysis of factorial experiment data. Section 2.3 uses a concrete example of a 2^3 factorial experiment to illustrate the key concepts.

2.1 Potential outcomes and factorial effects

We first introduce the general framework of a 2^K factorial design, with K ≥ 2 being an integer. This design has K binary factors, and factor k can take value z_k ∈ {0, 1} for k = 1, . . . , K. Let z = (z_1, . . . , z_K) denote the treatment combining all K factors. The K factors in total define Q = 2^K treatment combinations, collected in the set below:

T = {z = (z_1, . . . , z_K) | z_k ∈ {0, 1} for k = 1, . . . , K}, with |T| = Q.
We follow the potential outcome notation of Dasgupta et al. (2015) for 2^K factorial designs. Unit i has potential outcome Y_i(z) under each treatment level z. Corresponding to the Q = 2^K treatment levels, each unit i has Q potential outcomes, vectorized as Y_i = {Y_i(z)}_{z∈T} using the lexicographic order. Over units i = 1, . . . , N, the potential outcomes have finite-population mean vector Ȳ = (Ȳ(z))_{z∈T} and covariance matrix S = (S(z, z′))_{z,z′∈T}, with elements defined as follows:

Ȳ(z) = N^{-1} ∑_{i=1}^{N} Y_i(z),   S(z, z′) = (N − 1)^{-1} ∑_{i=1}^{N} (Y_i(z) − Ȳ(z))(Y_i(z′) − Ȳ(z′)).

We then use the potential outcomes to define factorial effects. For a subset K ⊂ [K] of the K factors, we introduce the following "contrast vector" notation to facilitate the presentation. To start with, we define the main causal effect for factor k. For a treatment level z = (z_1, . . . , z_K) ∈ T, we use g_{k}(z) = 2z_k − 1 to denote the "centered" treatment indicator z_k.
We then define a Q-dimensional contrast vector g_{k} by aggregating these centered treatment variables into a vector using the lexicographic order, that is,

g_{k} = {g_{k}(z)}_{z∈T}, where g_{k}(z) = 2z_k − 1.   (2.1)

Next, for the interactions of multiple factors with |K| ≥ 2, we define the contrast vector g_K ∈ R^Q as

g_K = {g_K(z)}_{z∈T}, where g_K(z) = ∏_{k∈K} g_{k}(z).   (2.2)

As a special case, when no factor is considered, we define g_∅ = 1_Q. Stack the contrast vectors into a Q × Q matrix G = (g_∅, g_{1}, . . . , g_{K}, g_{1,2}, . . . , g_{K−1,K}, . . . , g_{[K]}), which has orthogonal columns with G^⊤ G = Q · I_Q. We refer to G as the contrast matrix.

Equipped with the contrast vector notation, we are ready to introduce the main effects and interactions. More concretely, define the main causal effect of a single factor and the k-way interaction causal effect of multiple factors (k ≥ 2) as the inner product of the contrast vector g_K and the averaged potential outcome Ȳ, that is, τ_K = Q^{-1} · g_K^⊤ Ȳ for K ⊂ [K].
For convenience in description, we use τ_∅ = Q^{-1} g_∅^⊤ Ȳ to denote the average of the potential outcomes. We call the effect τ_K a parent of τ_K′ if K ⊂ K′ and |K| = |K′| − 1. More compactly, we summarize the entire collection of causal parameters in factorial experiments as τ = (τ_K)_{K⊂[K]} = Q^{-1} · G^⊤ Ȳ.

2.2 Treatment assignment, observed data, and regression analysis

Under the design-based framework, the treatment assignment mechanism characterizes the completely randomized factorial design. In other words, the experimenter randomly assigns N(z) units to treatment level z ∈ T, with ∑_{z∈T} N(z) = N. Assume N(z) ≥ 2 to allow for variance estimation within each treatment level. Let Z_i ∈ T denote the treatment level for unit i. The treatment vector (Z_1, . . . , Z_N) is a random permutation of a vector with the prespecified number N(z) of each treatment level z ∈ T. For each unit i, the treatment level Z_i reveals only one potential outcome. We use Y_i = Y_i(Z_i) = ∑_{z∈T} Y_i(z) 1{Z_i = z} to denote the observed outcome. We also use N_i = N(Z_i) to denote the number of units in the treatment group to which unit i is assigned. The central task of causal inference in factorial designs is to use the observed data (Z_i, Y_i)_{i=1}^{N} to estimate the factorial effects.
Define

Ŷ(z) = N(z)^{-1} ∑_{i=1}^{N} 1{Z_i = z} Y_i,   Ŝ(z, z) = {N(z) − 1}^{-1} ∑_{i=1}^{N} 1{Z_i = z} (Y_i − Ŷ(z))^2

as the sample mean and variance of the observed outcomes under treatment z. Vectorize the sample means as Ŷ = (Ŷ(z))_{z∈T}, which has mean Ȳ and covariance matrix V_Ŷ = D_Ŷ − N^{-1} S (Li and Ding, 2017), where D_Ŷ = Diag{N(z)^{-1} S(z, z)}_{z∈T}. An unbiased estimator for D_Ŷ is

V̂_Ŷ = Diag{N(z)^{-1} Ŝ(z, z)}_{z∈T},

whereas S does not have an unbiased sample analogue because the potential outcomes across treatment levels are never jointly observed for the same units. Therefore, V̂_Ŷ is a conservative estimator of the covariance matrix in the sense that E{V̂_Ŷ} = D_Ŷ ≽ V_Ŷ.

A dominant approach to estimating factorial effects from factorial designs is through estimating least-squares coefficients based on appropriate model specifications. Let g_i denote the row vector in the contrast matrix G corresponding to unit i's treatment level Z_i, that is, g_i = {g_K(Z_i)}_{K⊂[K]} ∈ R^Q with g_K(z) defined in (2.2). For a set of target effects {τ_K}_{K∈M} indexed by M, we can run weighted least squares (WLS) to obtain unbiased estimates:

τ̂ = arg min_τ ∑_{i=1}^{N} w_i (Y_i − g_i^⊤ τ)^2 with w_i = 1/N_i.   (2.3)

With a small K, we can simply fit the saturated regression by regressing the observed outcome Y_i on the regressor g_i. The saturated regression involves Q = 2^K coefficients without any restrictions on the targeted factorial effects. In contrast, an unsaturated regression involves fewer coefficients by regressing the observed outcome Y_i on g_{i,M}, a subvector of g_i, where M ⊂ K is a subset of the power set of all factors. That is,

τ̂ = arg min_τ ∑_{i=1}^{N} w_i (Y_i − g_{i,M}^⊤ τ)^2 with w_i = 1/N_i.   (2.4)
For convenience of description, we will call M a working model. We use a working model to generate estimates based on least squares without assuming its correctness. When M = K, (2.4) incorporates the saturated regression (2.3) as a special case. Based on the unsaturated regression with working model M, let τ̂(M) = {τ̂_K}_{K∈M} and τ(M) = {τ_K}_{K∈M} denote the vectors of estimated and true coefficients, respectively. Zhao and Ding (2021) showed that if we run unsaturated regressions with weights 1/N_i for unit i, then the estimated coefficients are unbiased for the true factorial effects within the working model M. More precisely, τ̂(M) = Q^{-1} G(·, M)^⊤ Ŷ, where G(·, M) denotes the columns of G indexed by M. Because τ̂(M) is a linear transformation of Ŷ, we can use the following estimator for its covariance matrix:

Σ̂(M) = Q^{-2} G(·, M)^⊤ V̂_Ŷ G(·, M).   (2.5)

See Lemma S1 in Section A.1 of the supplementary material for more discussions on the above algebraic results for unsaturated regressions.
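To illustrate how (2.3)-(2.5) could be computed in practice, the following minimal sketch is an added example with hypothetical inputs; the function name and array layout are our own choices, not the paper's code.

```python
import numpy as np

def wls_factorial(Y, Z_idx, G, working_cols):
    """Minimal sketch of the unsaturated WLS fit (2.4) and covariance estimator (2.5).

    Y            : (N,) observed outcomes
    Z_idx        : (N,) index of each unit's treatment level among the rows of G
    G            : (Q, Q) contrast matrix with orthogonal columns, G.T @ G = Q * I
    working_cols : column indices of G that form the working model M
    """
    Q = G.shape[0]
    Nz = np.bincount(Z_idx, minlength=Q)                       # N(z): units per treatment level
    # Sample mean and variance of the observed outcomes within each treatment level.
    Yhat = np.array([Y[Z_idx == z].mean() for z in range(Q)])
    Shat = np.array([Y[Z_idx == z].var(ddof=1) for z in range(Q)])
    GM = G[:, working_cols]
    # tau_hat(M) = Q^{-1} G(., M)^T Yhat, the closed form of the WLS fit with weights 1/N_i.
    tau_hat = GM.T @ Yhat / Q
    # Sigma_hat(M) = Q^{-2} G(., M)^T Vhat_Yhat G(., M), with Vhat_Yhat = Diag{Shat(z)/N(z)}.
    V_Yhat = np.diag(Shat / Nz)
    Sigma_hat = GM.T @ V_Yhat @ GM / Q**2
    return tau_hat, Sigma_hat
```

The sketch uses the algebraic identity quoted above, so the weighted regression itself never has to be formed explicitly; `working_cols` could, for instance, index only the intercept and main-effect columns.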
2.3 An illustrative example of a 2^3 factorial design

We realize that the above notation can be rather abstract. In what follows, we provide an illustrative Example 1 below with K = 3 factors.

Example 1 (2^3 factorial design). Suppose we have three binary factors z_1, z_2, and z_3. These three factors generate 8 treatment combinations, indexed by a triplet (z_1 z_2 z_3) with z_1, z_2, z_3 ∈ {0, 1}, in the set

T = {(000), (001), (010), (011), (100), (101), (110), (111)}.

Each unit i has a potential outcome vector Y_i = {Y_i(z_1 z_2 z_3)}^⊤_{z_1, z_2, z_3 = 0, 1}. The vector of factorial effects in this experiment is

τ = (1/2^3) G^⊤ Ȳ ≜ (τ_∅, τ_{1}, τ_{2}, τ_{3}, τ_{1,2}, τ_{1,3}, τ_{2,3}, τ_{1,2,3})^⊤,

where G is the contrast matrix, with rows indexed by the treatment combinations and columns by the effects:

          τ_∅  τ_{1}  τ_{2}  τ_{3}  τ_{1,2}  τ_{1,3}  τ_{2,3}  τ_{1,2,3}
  (000)    1    −1     −1     −1      1        1        1        −1
  (001)    1    −1     −1      1      1       −1       −1         1
  (010)    1    −1      1     −1     −1        1       −1         1
  (011)    1    −1      1      1     −1       −1        1        −1
  (100)    1     1     −1     −1     −1       −1        1         1
  (101)    1     1     −1      1     −1        1       −1        −1
  (110)    1     1      1     −1      1       −1       −1        −1
  (111)    1     1      1      1      1        1        1         1
We observe the pair (Y_i, Z_i) for unit i, where Z_i = (z_{i,1}, z_{i,2}, z_{i,3}) is the observed treatment combination. Let g_{k}(Z_i) = 2z_{i,k} − 1 be the centered version of z_{i,k}. For the factor-based regression, the regressor g_i corresponding to the treatment level Z_i equals

t_i = (1, g_{1}(Z_i), g_{2}(Z_i), g_{3}(Z_i), g_{2,3}(Z_i), g_{1,3}(Z_i), g_{1,2}(Z_i), g_{1,2,3}(Z_i)).

For instance, when Z_i = (101), the regressor g_i corresponds to the row (101) of the contrast matrix G. A saturated regression then regresses Y_i on g_i. For the unsaturated regression, if we only include the indices ∅ (the intercept), {1}, {1, 2}, {1, 3}, and {1, 2, 3}, we can form the working model M = {∅, {1}, {1, 2}, {1, 3}, {1, 2, 3}} and perform the weighted least squares Y_i ∼ t_{i,M}, where

t_{i,M} = (1, g_{1}(Z_i), g_{1,3}(Z_i), g_{1,2}(Z_i), g_{1,2,3}(Z_i))

and the weight for unit i equals 1/N_i = 1/N(Z_i).
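To connect Example 1 with the general definitions (2.1)-(2.2), the short sketch below is an added illustration (the helper name is our own) that builds the contrast matrix G for K = 3 and checks the orthogonality G^⊤ G = Q · I_Q.

```python
import itertools
import numpy as np

def contrast_matrix(K):
    """Build the Q x Q contrast matrix G of a 2^K design, columns ordered by effect order."""
    levels = list(itertools.product([0, 1], repeat=K))        # treatment set T, lexicographic order
    subsets = [()] + [s for d in range(1, K + 1)
                      for s in itertools.combinations(range(K), d)]
    G = np.empty((2 ** K, 2 ** K), dtype=int)
    for col, subset in enumerate(subsets):
        for row, z in enumerate(levels):
            # g_K(z) = prod_{k in K} (2 z_k - 1); the empty set gives the all-ones column g_emptyset.
            G[row, col] = np.prod([2 * z[k] - 1 for k in subset]) if subset else 1
    return np.array(levels), G

levels, G = contrast_matrix(3)
Q = G.shape[0]
assert np.array_equal(G.T @ G, Q * np.eye(Q, dtype=int))      # orthogonal columns
print(G[levels.tolist().index([1, 0, 1])])                    # regressor for treatment (101)
```

The printed row should coincide with the (101) row of the matrix displayed in Example 1.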
3 Forward screening in factorial experiments

In factorial designs with small K, we can simply run the saturated regression to estimate all factorial effects simultaneously. However, when K is large, the saturated regression can be computationally unwieldy and scientifically unreasonable, delivering potentially noisy estimates of all higher-order interactions. As a remedy, forward screening is a popular strategy frequently adopted in practice to analyze data collected from factorial experiments, due to its clear benefits in screening out a large number of zero nuisance factorial effects. In this section, we formalize forward screening as a principled procedure to carefully decide an unsaturated working model M̂. We first present a formal version of forward screening and then demonstrate its consistency property.

3.1 A formal forward screening procedure

In this subsection, we introduce a principled forward screening procedure that not only fully respects the effect hierarchy, sparsity, and heredity principles but also results in an interpretable parsimonious model with statistical guarantees. More concretely, the algorithm starts by performing factor screening over lower-order effects and then moves forward to select the significant higher-order effects following the heredity principle. Algorithm 1 summarizes the forward screening procedure.

In what follows, we illustrate why the proposed procedure in Algorithm 1 respects the three fundamental principles in factorial experiments.

First, Algorithm 1 obeys the hierarchy principle as it performs factor screening in a forward style (coded in the global loop from d = 1 to d = D, Step 2 in particular). More concretely, we begin with an empty working model. We then select relevant main effects (Steps 4 and 8) and add them into the working model. Once the working model is updated, we continue to select relevant higher-order interaction effects in a forward fashion. Such a forward screening procedure is again motivated by the hierarchy principle that lower-order effects are more important than higher-order ones.

Algorithm 1: Forward factorial screening
Input: Factorial data {(Y_i, Z_i)}_{i=1}^{N}; predetermined integer D ≤ K; initial working model M̂ = {∅}; significance levels {α_d}_{d=1}^{D}.
Output: Selected working model M̂.
1. Define an intermediate working model M̂′ = M̂ for convenience.
2. For d = 1, . . . , D:
3.   Update the intermediate working model to include all the d-th order (interaction) terms: M̂′ = M̂ ∪ {K | |K| = d} ≜ M̂ ∪ K_d.
4.   Screen out indices in M̂′ according to either the weak or the strong heredity principle, and renew the screened working model as M̂′.
5.   Run the unsaturated regression with the working model M̂′: Y_i ∼ g_{i,M̂′}, with weights w_i = N/N_i.
6.   Obtain the coefficients τ̂(M̂′) and the robust covariance estimate Σ̂(M̂′) defined in (2.5).
7.   Extract τ̂_K(M̂′) and σ̂_K(M̂′) for all K ∈ M̂′ with |K| = d.
8.   Run marginal t-tests using the above τ̂_K(M̂′) and σ̂_K(M̂′) at the significance level min{α_d/(|M̂′| − |M̂|), 1} and remove the non-significant terms from M̂′ \ M̂.
9.   Set M̂ = M̂′.
10. Return M̂.

Second, Algorithm 1 operates under the sparsity principle as it removes potentially unimportant effects using marginal t-tests with the Bonferroni correction (Step 8). This step induces a sparse working model and helps us to identify the essential factorial effects. The sparsity-inducing step can incorporate many popular selection frameworks, such as marginal t-tests, the Lasso (Tibshirani, 1996), sure independence screening (Fan and Lv, 2008), etc. For simplicity, we present Algorithm 1 with marginal t-tests and relegate more general discussions to Section B of the supplementary material.
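A minimal sketch of the loop in Algorithm 1 could look as follows. It is an added illustration, not the paper's implementation: it reuses the hypothetical wls_factorial helper sketched in Section 2.2 above and replaces the exact marginal t-test with a normal approximation.

```python
import itertools
import numpy as np
from scipy import stats

def heredity_ok(subset, selected, weak=True):
    """Keep a candidate effect only if its parents satisfy the chosen heredity principle."""
    if len(subset) <= 1:
        return True
    parents = [tuple(p) for p in itertools.combinations(subset, len(subset) - 1)]
    hits = [p in selected for p in parents]
    return any(hits) if weak else all(hits)

def forward_screening(Y, Z_idx, G, subsets, D, alphas, weak=True):
    """Sketch of Algorithm 1; subsets[j] is the effect (tuple of factor indices) of column j of G."""
    selected = [()]                                                        # start from the intercept
    for d in range(1, D + 1):
        candidates = [s for s in subsets
                      if len(s) == d and heredity_ok(s, selected, weak)]   # Steps 3-4
        trial = selected + candidates
        cols = [subsets.index(s) for s in trial]
        tau_hat, Sigma_hat = wls_factorial(Y, Z_idx, G, cols)              # Steps 5-6 (helper from Sec. 2.2)
        level = min(alphas[d - 1] / max(len(candidates), 1), 1.0)          # Bonferroni correction, Step 8
        keep = []
        for s in candidates:                                               # Steps 7-8: marginal tests
            j = trial.index(s)
            t = tau_hat[j] / np.sqrt(Sigma_hat[j, j])
            if 2 * stats.norm.sf(abs(t)) < level:
                keep.append(s)
        selected += keep                                                   # Step 9
    return selected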
Third, Algorithm 1 incorporates the heredity principle as it screens out the interaction effects (Wu and Hamada, 2011; Hao and Zhang, 2014; Lim and Hastie, 2015) when either none of their parent effects is included (weak heredity) or some of their parent effects are excluded (strong heredity) in the previous working model (Step 4).

Lastly, we note that our forward screening procedure enhances the interpretability of the selected working model by iterating between the "Sparsity-screening" step (called the S-step in the rest of the manuscript), captured by a data-dependent operator Ŝ = Ŝ(· ; {Y_i, Z_i}_{i=1}^{N}), and the "Heredity-screening" step (called the H-step in the rest of the manuscript), captured by a deterministic operator H = H(·). Because the working model is updated in an iterative fashion,

M̂_1 →_H M̂_{2,+} →_Ŝ M̂_2 → · · · →_Ŝ M̂_{d−1} →_H M̂_{d,+} →_Ŝ M̂_d → · · · →_Ŝ M̂_D,   (3.6)

the final working model includes a small number of statistically significant effects that fully respect the heredity principle.

3.2 Consistency of forward screening

We are now ready to analyze the screening consistency property of Algorithm 1. We shall show that the proposed algorithm selects the targeted working model up to level D with probability tending to one as the sample size goes to infinity. Here, the targeted working model at level k ∈ [K], denoted by M⋆_k, is the collection of the subsets K with |K| = k and τ_K ≠ 0. Define the full targeted working model up to level D as

M⋆_{1:D} = ∪_{d=1}^{D} M⋆_d.

In particular, when D = K, we omit the subscript and simply write M⋆ = M⋆_{1:K}.
We start by introducing the following condition on nearly uniform designs.

Condition 1 (Nearly uniform design). There exist a positive integer N_0 and absolute constants c ≤ c̄ such that N(z) = c(z) N_0 ≥ 2, where c ≤ c(z) ≤ c̄.

Condition 1 allows for a diverging Q and bounded N(z)'s across all treatment levels (Shi and Ding, 2022). It generalizes the classical assumption in the fixed-Q regime, where Q is fixed and each treatment arm contains a sufficiently large number of replications (Li and Ding, 2017).

Next, we quantify the order of the true factorial effect sizes τ_K and of the tuning parameters α_d adopted in the Bonferroni correction. We allow these parameters to change with the sample size N.

Condition 2 (Order of parameters). The true factorial effects τ_K and the tuning parameters α_d have the following orders:
(i) True nonzero factorial effects: |τ_K| = Θ(N^δ) for some −1/2 < δ ≤ 0 and all K ∈ M⋆_{1:D}.
(ii) Tuning parameters in the Bonferroni correction: α_d = Θ(N^{−δ′}) for all d ∈ [D] with some δ′ > 0.
(iii) Size of the targeted working model: ∑_{d=1}^{D} |M⋆_d| = Θ(N^{δ″}) for some 0 ≤ δ″ < 1/3.

Condition 2(i) specifies the allowable order of the true factorial effects. If this condition fails, the effect size is of the same order as the statistical error and thus is too small to be detected by a marginal t-test. Similar conditions are also adopted in the model selection literature, including Zhao and Yu (2006) and Wieczorek and Lei (2022). Condition 2(ii) requires the tuning parameter α_d to converge to zero, which ensures that there is no Type I error in our procedure as N goes to infinity; this is crucial for the selection consistency.
Wasserman and Roeder (2009, Theorems 4.1 and 4.2) assumed similar conditions in high-dimensional model selection settings for linear models. Condition 2(iii) restricts the size of the targeted working model; the rate is due to our technical analysis. Similar conditions also appeared in Zhao and Yu (2006), Wieczorek and Lei (2022), and Wasserman and Roeder (2009).

The next condition specifies a set of regularity assumptions on the potential outcomes.

Condition 3 (Regularity conditions on the potential outcomes). The potential outcomes satisfy the following conditions:
(i) Nondegenerate correlation matrix. Let V⋆ be the correlation matrix of Ŷ. There exists σ > 0 such that the condition number of V⋆ is smaller than or equal to σ^2.
(ii) Bounded fourth central moments. There exists a universal constant Δ > 0 such that

max_{z∈[Q]} N^{-1} ∑_{i=1}^{N} {Y_i(z) − Ȳ(z)}^4 ≤ Δ^4.

(iii) Bounded standardization scales. There exists a constant M > 0 such that M_N ≤ M, where

M_N = max_{i∈[N], q∈[Q]} |Y_i(q) − Ȳ(q)| / {min_{q∈[Q]} S(q, q)}^{1/2}.

Condition 3(i) requires the correlation matrix of Ŷ to be well-behaved. Condition 3(ii) controls the moments of the potential outcomes.
Condition 3(iii) imposes a universal bound on the standardization of the potential outcomes, which is required by Shi and Ding (2022) to prove a Berry-Esseen bound based on Stein's method.

Lastly, we impose the following structural conditions on the factorial effects.

Condition 4 (Hierarchical structure in factorial effects). The nonzero true factorial effects align with the effect heredity principle:
Weak heredity: τ_K ≠ 0 only if there exists K′ ⊂ K with |K′| = |K| − 1 such that τ_K′ ≠ 0.
Strong heredity: τ_K ≠ 0 only if τ_K′ ≠ 0 for all K′ ⊂ K with |K′| = |K| − 1.

Finally, we present the screening consistency property of Algorithm 1.

Theorem 1 (Perfect screening property). Under Conditions 1-4, the working model selected by Algorithm 1 converges to the targeted working model with probability tending to one as the sample size goes to infinity:

lim_{N→∞} P(M̂ = M⋆_{1:D}) = 1.

4 Inference under perfect screening

Statistical inference is relatively straightforward under perfect screening of the factorial effects. If forward screening correctly identifies the true, nonzero factorial effects with probability approaching one, we can proceed as if the selected working model were predetermined. In Section 4.1, we present the point estimators and confidence intervals for general causal parameters. In Section 4.2, we study the advantages of forward screening in terms of asymptotic efficiency in estimating general causal parameters, compared with the corresponding estimators without forward screening. We relegate the extension to vector parameters to Section A.2 of the supplementary material since it is conceptually straightforward.
Finally, we present the screening consistency property of Algorithm 1:

Theorem 1 (Perfect screening property). Under Conditions 1-4, the working model selected by Algorithm 1 converges to the targeted working model with probability one as the sample size goes to infinity:
$$\lim_{N \to \infty} P\left(\hat{\mathcal{M}} = \mathcal{M}^\star_{1:D}\right) = 1.$$

4 Inference under perfect screening

Statistical inference is relatively straightforward under the perfect screening of the factorial effects. If forward screening correctly identifies the true, nonzero factorial effects with probability approaching one, we can proceed as if the selected working model is predetermined. In Section 4.1, we present the point estimators and confidence intervals for general causal parameters. In Section 4.2, we study the advantages of forward screening in terms of asymptotic efficiency in estimating general causal parameters, compared with the corresponding estimators without forward screening. We relegate the extensions to vector parameters to Section A.2 of the supplementary material since it is conceptually straightforward.

4.1 Post-screening inference for general causal parameters

Define a general causal parameter of interest as a weighted combination of average potential outcomes:
$$\gamma = \sum_{z \in \mathcal{T}} f(z)\,\bar{Y}(z) \triangleq f^\top \bar{Y},$$
where $f = \{f(z)\}_{z \in \mathcal{T}}$ is a pre-specified weighting vector. For example, if one is interested in estimating the main factorial effects, $f$ can be taken as the contrast vectors $g_{\{k\}}$ given in (2.1). If one wants to estimate interaction effects, then $f$ can be constructed from (2.2). However, we allow $f$ to be different from the contrast vectors $g_{\mathcal{K}}$. For instance, if one wants to focus on the first two arms in factorial experiments and estimate the average treatment effect, we shall choose $f = (1, -1, 0, \ldots, 0)^\top$. In general, researchers may tailor the choice of $f$ to the specific research questions of interest.

Without factor screening, a well-studied plug-in estimator of $\gamma$ in the existing literature is to replace $\bar{Y}$ with its sample analogue (Li and Ding, 2017; Zhao and Ding, 2021; Shi and Ding, 2022):
$$\hat{\gamma} = f^\top \hat{Y} = \sum_{z \in \mathcal{T}} f(z)\,\hat{Y}(z). \qquad (4.7)$$
Under regularity conditions in Shi and Ding (2022), the plug-in estimator $\hat{\gamma}$ satisfies a central limit theorem $(\hat{\gamma} - \gamma)/v \rightsquigarrow N(0, 1)$ with the variance $v^2 = f^\top V_{\hat{Y}} f$.
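As a concrete illustration, the plug-in estimator in (4.7) amounts to a weighted sum of arm-wise sample means. The Python sketch below is our own minimal illustration; the data layout, variable names, and toy data-generating process are assumptions for the example, not taken from the paper (the variance estimator appears in the next paragraph).

```python
import numpy as np

# Sketch of the plug-in estimator in (4.7), under assumed variable names:
# Y_obs: observed outcomes (length n), Z_obs: treatment-arm index in {0,...,Q-1},
# f: weighting vector of length Q.
def plug_in_estimator(Y_obs, Z_obs, f):
    Q = len(f)
    Y_hat = np.array([Y_obs[Z_obs == z].mean() for z in range(Q)])  # arm-wise means
    return float(f @ Y_hat)

# Toy usage with Q = 4 arms and a contrast comparing the first two arms.
rng = np.random.default_rng(1)
Z_obs = np.repeat(np.arange(4), 25)
Y_obs = rng.normal(loc=Z_obs * 0.5, scale=1.0)
f = np.array([1.0, -1.0, 0.0, 0.0])
print(plug_in_estimator(Y_obs, Z_obs, f))
```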
When $N(z) \geq 2$, its variance can be estimated by
$$\hat{v}^2 = f^\top \hat{V}_{\hat{Y}} f = \sum_{z \in \mathcal{T}} f(z)^2 N(z)^{-1} \hat{S}(z, z).$$

With the help of factor screening, based on the selected working model $\hat{\mathcal{M}}$, we consider a potentially more efficient estimator of $\bar{Y}$ via restricted least squares (RLS):
$$\hat{Y}_r = \arg\min_{\mu \in \mathbb{R}^Q} \left\{ \|\hat{Y} - \mu\|_2^2 : G(\cdot, \hat{\mathcal{M}}^c)^\top \mu = 0 \right\}, \qquad (4.8)$$
which leverages the information that the nuisance effects $G(\cdot, \hat{\mathcal{M}}^c)^\top \bar{Y}$ are all zero. The $\hat{Y}_r$ in (4.8) has a closed-form solution (see Lemma S6 in the supplementary material):
$$\hat{Y}_r = Q^{-1} G(\cdot, \hat{\mathcal{M}}) G(\cdot, \hat{\mathcal{M}})^\top \hat{Y}.$$
Under perfect screening, $\hat{Y}_r$ is also a consistent estimator for $\bar{Y}$, so $\hat{\gamma}_r = f^\top \hat{Y}_r$ is also consistent for $\gamma$. Introduce the following notation
$$f[\mathcal{M}] = Q^{-1} G(\cdot, \mathcal{M}) G(\cdot, \mathcal{M})^\top f \qquad (4.9)$$
to simplify $\hat{\gamma}_r$ and its variance estimator as $\hat{\gamma}_r = f[\hat{\mathcal{M}}]^\top \hat{Y}$ and $\hat{v}_r^2 = f[\hat{\mathcal{M}}]^\top \hat{V}_{\hat{Y}} f[\hat{\mathcal{M}}]$. Construct a Wald-type level-$(1 - \alpha)$ confidence interval for $\gamma$:
$$\left[\hat{\gamma}_r \pm z_{1-\alpha/2} \times \hat{v}_r\right], \qquad (4.10)$$
where $z_{1-\alpha/2}$ is the $(1 - \alpha/2)$th quantile of a standard normal distribution. We can also obtain point estimates and confidence intervals handily from a WLS regression of $Y_i$ on $g_{i, \hat{\mathcal{M}}}$ with weights $1/N_i$. See Section A.1 in the supplementary material for more details.

In the following subsection, we provide the theoretical properties of $\hat{\gamma}_r$ and $\hat{v}_r^2$, and compare their asymptotic behaviors with the plug-in estimators $\hat{\gamma}$ and $\hat{v}^2$ in various settings.
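The closed-form RLS estimator and the Wald interval in (4.8)-(4.10) reduce to a few lines of linear algebra once the arm-wise summaries are available. The sketch below is illustrative only: it assumes G is a $Q \times Q$ contrast matrix with $GG^\top = Q I_Q$ (including an intercept column), that $\hat{V}_{\hat{Y}}$ is diagonal with entries $\hat{S}(z,z)/N(z)$ as above, and all function and variable names are ours.

```python
import numpy as np
from scipy.stats import norm

def rls_inference(Y_hat, S_hat_diag, N_arm, G, sel, f, alpha=0.05):
    """RLS-based point estimate and Wald CI, following (4.8)-(4.10).

    Y_hat: arm-wise sample means (length Q).
    S_hat_diag: arm-wise sample variances S_hat(z, z) (length Q).
    N_arm: arm sizes N(z) (length Q).
    G: Q x Q contrast matrix with G @ G.T = Q * I.
    sel: column indices of the selected working model (including the intercept).
    f: weighting vector (length Q).
    """
    Q = len(Y_hat)
    G_sel = G[:, sel]
    f_M = (G_sel @ (G_sel.T @ f)) / Q                 # f[M] as in (4.9)
    gamma_r = float(f_M @ Y_hat)                      # RLS point estimate
    V_hat_diag = S_hat_diag / N_arm                   # diagonal of the variance estimator
    v_r = float(np.sqrt(np.sum(f_M ** 2 * V_hat_diag)))
    z = norm.ppf(1 - alpha / 2)
    return gamma_r, (gamma_r - z * v_r, gamma_r + z * v_r)
```

Setting sel to all columns recovers the unrestricted plug-in estimator, since $Q^{-1} G G^\top f = f$.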
4.2 Theoretical properties under perfect screening

In this section, we first present the asymptotic normality result for $\hat{\gamma}_r$. To simplify the discussion, we denote $f^\star = f[\mathcal{M}^\star]$. Given that $\mathcal{M}^\star$ is the true working model, we have $(f^\star)^\top \bar{Y} = f^\top \bar{Y}$ for all $f \in \mathbb{R}^Q$. This identity holds for the true working model, not a general model, as suggested by the following algebraic facts:
$$f^\top \bar{Y} = f^\top \left\{ Q^{-1} G(\cdot, \mathcal{M}^\star) G(\cdot, \mathcal{M}^\star)^\top + Q^{-1} G(\cdot, \mathcal{M}^{\star c}) G(\cdot, \mathcal{M}^{\star c})^\top \right\} \bar{Y} \quad \text{(orthogonality of } G\text{)}$$
$$= (f^\star)^\top \bar{Y} + f^\top G(\cdot, \mathcal{M}^{\star c})\, \tau(\mathcal{M}^{\star c}) \quad \text{(definition of } f^\star \text{ based on (4.9))}$$
$$= (f^\star)^\top \bar{Y}. \quad \text{(using } \tau(\mathcal{M}^{\star c}) = 0\text{)}$$

We are now ready to present the asymptotic properties of $\hat{\gamma}_r$ and $\hat{v}_r^2$:

Theorem 2 (Statistical properties of $\hat{\gamma}_r$ and $\hat{v}_r^2$). Let $N \to \infty$. Assume Conditions 1-4. We have
$$\frac{\hat{\gamma}_r - \gamma}{v_r} \rightsquigarrow N(0, 1), \quad \text{where } v_r^2 = (f^\star)^\top V_{\hat{Y}} f^\star.$$
Further assume $\|f^\star\|_\infty = O(Q^{-1})$. The variance estimator $\hat{v}_r^2$ is conservative in the sense that
$$N(\hat{v}_r^2 - v_{r,\lim}^2) \xrightarrow{P} 0, \qquad v_{r,\lim}^2 \geq v_r^2,$$
where $v_{r,\lim}^2 = (f^\star)^\top D_{\hat{Y}} f^\star$ is the limiting value of $\hat{v}_r^2$.

Theorem 2 above guarantees that the proposed confidence interval in (4.10) for $\gamma$ attains the nominal coverage probability asymptotically. Furthermore, it allows us to compare the conditions for reaching asymptotic normality of $\hat{\gamma}$, which we formalize in the following remark:

Remark 1 (Comparison of conditions for asymptotic normality).
Without factor screening, the simple plug-in estimator $\hat{\gamma}$ in (4.7) satisfies a central limit theorem if
$$N_0^{-1/2}\, \frac{\|f\|_\infty}{\|f\|_2} \to 0, \qquad (4.11)$$
recalling the definition of $N_0$ in Condition 1 (Shi and Ding, 2022, Theorem 1). Condition (4.11) fails when $N_0$ is small and $f$ is sparse. Besides, it does not incorporate the sparsity information in the structure of the factorial effects. With factor screening, however, we can borrow the benefit of a sparse working model and overcome the above drawbacks. Therefore, factor screening broadens the applicability of our proposed estimator $\hat{\gamma}_r$ by weakening the assumptions required for Wald-type inference.
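To see how condition (4.11) behaves, the tiny numerical sketch below (with assumed values of $Q$ and $N_0$) contrasts a sparse weighting vector with a dense one; the specific vectors are our own toy choices.

```python
import numpy as np

# Toy check of condition (4.11): N_0^{-1/2} * ||f||_inf / ||f||_2 for a sparse
# contrast versus a dense one (Q arms, N_0 replications per arm; values assumed).
Q, N_0 = 1024, 2
f_sparse = np.zeros(Q); f_sparse[:2] = [1.0, -1.0]                   # compares two arms only
f_dense = np.concatenate([np.ones(Q // 2), -np.ones(Q // 2)]) / Q    # averages many arms

for name, f in [("sparse", f_sparse), ("dense", f_dense)]:
    ratio = N_0 ** -0.5 * np.abs(f).max() / np.linalg.norm(f)
    print(name, ratio)
# The sparse contrast keeps the ratio of order N_0^{-1/2}, so (4.11) fails unless
# N_0 grows; the dense contrast drives the ratio toward zero as Q grows.
```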
To elaborate the benefits of conducting forward factorial screening in terms of asymptotic efficiency, we make a simple comparison of the asymptotic variances of $\hat{\gamma}$ and $\hat{\gamma}_r$ in Proposition 1 below. In the most general setup, there is no ordering relationship between $v_r^2$ and $v^2$. That is, the RLS-based estimator may have higher variance than the unrestricted OLS estimator. This is a known fact due to heteroskedasticity and the use of sandwich variance estimators (Meng and Xie, 2014; Zhao and Ding, 2021). Nevertheless, in many interesting scenarios, we can prove an improvement of efficiency by factor screening. Two such conditions are summarized in Proposition 1:

Proposition 1 (Asymptotic relative efficiency comparison between $\hat{\gamma}$ and $\hat{\gamma}_r$). Assume that both $\hat{\gamma}$ and $\hat{\gamma}_r$ converge to a normal distribution as $N \to \infty$.

(i) If the eigenvectors of the covariance matrix $V_{\hat{Y}}$ are given by the contrast matrix $G$, then $v_r^2 / v^2 \leq 1$.

(ii) Let $s^\star$ denote the number of nonzero elements in $f$. Then the asymptotic relative efficiency between $\hat{\gamma}$ and $\hat{\gamma}_r$ is upper bounded by
$$\frac{v_r^2}{v^2} \leq \kappa(V_{\hat{Y}}) \cdot \frac{s^\star |\mathcal{M}^\star|}{Q}.$$

Now we add some interpretation for Proposition 1. Part (i) gives a sufficient condition under which the eigenspace of $V_{\hat{Y}}$ has a close connection with $G$. More concretely, $G$ can be regarded as an orthogonal representation of the potential outcome matrix. One can verify that such a condition implies that the variance of $\hat{Y}(z)$ does not change with its treatment group membership $z$. One concrete problem of interest where Part (i) can be applied is testing the sharp null hypothesis of constant effects in uniform factorial designs (with $N_0$ replications in each arm), i.e.,
$$H_{0F}: Y_i(z) = Y_i \quad \text{for all } i \in [N] \text{ and } z \in \mathcal{T}.$$
Under $H_{0F}$, we have
$$V_{\hat{Y}} = N_0^{-1}\sigma^2 I_Q - N^{-1}\sigma^2 1_Q 1_Q^\top = N_0^{-1}\sigma^2\, G\, \mathrm{Diag}\{0, 1, \ldots, 1\}\, G^\top,$$
where
$$\sigma^2 = \frac{1}{N-1}\sum_{i=1}^{N}(Y_i - \bar{Y})^2 \quad \text{and} \quad \bar{Y} = \frac{1}{N}\sum_{i=1}^{N} Y_i.$$
Thus, the proposed RLS-based estimator $\hat{\gamma}_r$ is in general more efficient than the plug-in estimator $\hat{\gamma}$. Part (ii) studies a general heteroskedastic setting with a sparse weighting vector $f$ and a small working model size $|\mathcal{M}^\star|$.
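As a toy numerical illustration of the arithmetic in the bound of part (ii), the sketch below evaluates both sides for a $2^3$ design with a diagonal covariance and a sparse $f$; the choice of selected columns, the covariance values, and the convention that the intercept does not count toward $|\mathcal{M}^\star|$ are our own assumptions, not the paper's.

```python
import numpy as np
from itertools import combinations

# Toy illustration of the Proposition 1(ii) bound
# v_r^2 / v^2 <= kappa(V) * s_star * |M_star| / Q, for a 2^3 design (Q = 8).
K, Q = 3, 8
levels = np.array([[1 if (z >> k) & 1 else -1 for k in range(K)] for z in range(Q)])
cols = [np.ones(Q)] + [np.prod(levels[:, list(c)], axis=1)
                       for r in range(1, K + 1) for c in combinations(range(K), r)]
G = np.column_stack(cols)                       # contrast matrix with G @ G.T = Q * I

V = np.diag(np.linspace(1.0, 3.0, Q))           # assumed heteroskedastic covariance of the arm means
f = np.zeros(Q); f[0], f[1] = 1.0, -1.0         # sparse weighting vector, s_star = 2
sel = [0, 1, 2, 3]                              # intercept plus the three main effects
f_star = G[:, sel] @ (G[:, sel].T @ f) / Q      # f[M] as in (4.9)

ratio = (f_star @ V @ f_star) / (f @ V @ f)
bound = np.linalg.cond(V) * np.count_nonzero(f) * (len(sel) - 1) / Q
print(ratio, bound)                             # the ratio stays below the bound
```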
The condition number $\kappa(V_{\hat{Y}})$ captures the variability of the variances of $\hat{Y}(z)$ across the multiple treatment combination groups in $\mathcal{T}$. When this variability is limited in the sense that $\kappa(V_{\hat{Y}}) < Q/(s^\star |\mathcal{M}^\star|)$, the RLS-based estimator is more efficient than $\hat{\gamma}$. Moreover, the above result can be extended to compare the lengths of the confidence intervals as well. The conclusion is similar. See Proposition S1 in the supplementary material for the details.

5 Post-screening inference under imperfect screening

Similar to many other consistency results for variable selection, the perfect screening property can be too much to hope for in practical data analysis in factorial designs. This is because the perfect screening property of forward screening requires the non-zero effects to be well separated from zero. Such a theoretical requirement can be rather stringent for higher-order factorial effects. In other words, as implied by the hierarchy principle, while main effects and lower-order factorial effects are more likely to have non-negligible effect sizes, higher-order factorial effects tend to have comparably smaller effect sizes. The perfect screening property is therefore less likely to hold when the algorithm is applied to screen those higher-order effects. More rigorously, when Condition 2(i) is violated, Algorithm 1 may no longer enjoy the perfect screening property. Statistical inference without perfect screening is a non-trivial problem in factorial designs. If we do not put any restrictions on the factorial selection procedure, the selected model can be anything, even without a stable limit.
Classical strategies for post-selection inference (Kuchibhotla et al., 2022) encounter various drawbacks in our current setup. For example, data splitting (Wasserman and Roeder, 2009) is a widely used strategy to validate inference after variable selection due to its simplicity. However, it relies heavily on the independent sampling assumption, which is violated in our setting. On the other hand, selective inference (Fithian et al., 2014) is another widely studied strategy, which can deliver valid inference for data-dependent parameters. However, it cannot be directly applied to analyze data collected in factorial designs. This is because the selective inference strategy often relies on specific selection methods and parametric modeling assumptions on the outcome variables. Rather than directly generalizing classical post-selection inference methods to factorial experiments, in this section we discuss two alternative strategies leveraging the special data structures in factorial experiments, along with their statistical inference results (summarized in Figure 1).

5.1 Two alternative strategies for imperfect screening and statistical inference

The two proposed strategies are built on the belief that perfect screening is more plausible for selecting the main factorial effects and lower-order factorial effects up to level $d^\star$ than the higher-order effects. We will add more discussion on $d^\star$ after presenting these two strategies.
Figure 1: Two strategies for factorial screening: Strategy 1 under-selects whereas Strategy 2 over-selects. (Flowchart: after selecting the first $d^\star$ levels, if higher-order effects are not necessary, exclude them, giving under-selection with targeted working model $\underline{\mathcal{M}}^\star$; otherwise, select higher-order effects by heredity, giving over-selection with targeted working model $\overline{\mathcal{M}}^\star$.)

For Strategy 1, when the higher-order factorial effects are considered unnecessary, we may stop our forward screening procedure in Algorithm 1 at $d = d^\star$ (instead of $d = D$). Such a strategy focuses on recovering a targeted working model $\underline{\mathcal{M}}^\star$ up to level $d^\star$, that is, $\underline{\mathcal{M}}^\star = \cup_{d=1}^{d^\star} \mathcal{M}^\star_d \subseteq \mathcal{M}^\star$, which leads to an under-selected, parsimonious working model. We summarize this strategy below.

Strategy 1 (Under-selection by excluding high-order interactions). In Algorithm 1, we stop the screening procedure at $d = d^\star$. Or equivalently, we set $\alpha_d = \infty$ for $d \geq d^\star + 1$ so that no effects beyond level $d^\star$ will be selected and $\hat{\mathcal{M}} = \cup_{d=1}^{d^\star} \hat{\mathcal{M}}_d$.

Given the selected working model $\hat{\mathcal{M}}$, we can again construct an estimator of $\gamma = f^\top \bar{Y}$ (defined in Section 4.1) based on RLS:
$$\hat{\gamma}_{ru} = f[\hat{\mathcal{M}}]^\top \hat{Y}, \quad \text{and} \quad \hat{v}_{ru}^2 = f[\hat{\mathcal{M}}]^\top \hat{V}_{\hat{Y}} f[\hat{\mathcal{M}}]. \qquad (5.12)$$

For Strategy 2, rather than excluding all higher-order interactions with negligible effects, we may further leverage the heredity principle and continue our screening procedure beyond level $d^\star$. This means that instead of selecting the higher-order interactions via marginal t-tests and Bonferroni correction, we select a higher-order interaction term whenever either all of its parent effects are selected (strong heredity) or one of its parent effects is selected (weak heredity).
While such a strategy takes higher-order factorial effects into account, it often targets a working model $\overline{\mathcal{M}}^\star$ that includes the true model $\mathcal{M}^\star$, that is,
$$\mathcal{M}^\star \subseteq \overline{\mathcal{M}}^\star = \bigcup_{d=1}^{D} \overline{\mathcal{M}}^\star_d, \quad \text{where } \overline{\mathcal{M}}^\star_d = \begin{cases} \mathcal{M}^\star_d, & d \leq d^\star; \\ \mathcal{H}^{(d-d^\star)}(\mathcal{M}^\star_{d^\star}), & d^\star + 1 \leq d \leq D. \end{cases}$$
The model selected by this strategy is expected to be an over-selected model that includes $\mathcal{M}^\star$ as well. We summarize this strategy as follows:

Strategy 2 (Over-selection by including higher-order interactions through the heredity principle). In Algorithm 1, set $\alpha_d = 0$ for $d \geq d^\star + 1$ and apply a heredity principle (either weak or strong, depending on one's knowledge of the structure of the effects). Then the higher-order effects beyond level $d^\star$ are selected merely by the heredity principle and $\hat{\mathcal{M}} = \cup_{d=1}^{D} \hat{\mathcal{M}}_d$, where
$$\hat{\mathcal{M}}_d = \begin{cases} \text{Algorithm 1 output}, & d \leq d^\star; \\ \mathcal{H}^{(d-d^\star)}(\hat{\mathcal{M}}_{d^\star}), & d^\star + 1 \leq d \leq D. \end{cases}$$
Here $\mathcal{H}^{(d-d^\star)}$ is the $(d - d^\star)$-order composition of $\mathcal{H}$, meaning applying $\mathcal{H}$ for $(d - d^\star)$ times.

Given the selected working model $\hat{\mathcal{M}}$, similarly, we can construct an estimator of $\gamma = f^\top \bar{Y}$ based on RLS:
$$\hat{\gamma}_{ro} = f[\hat{\mathcal{M}}]^\top \hat{Y}, \quad \text{and} \quad \hat{v}_{ro}^2 = f[\hat{\mathcal{M}}]^\top \hat{V}_{\hat{Y}} f[\hat{\mathcal{M}}]. \qquad (5.13)$$

In terms of implementation, one can use WLS to conveniently obtain the point estimators in (5.12) and (5.13) and construct slightly more conservative variance estimators. Due to the orthogonality of the contrast matrix $G$, perfect screening is not required for computation. See Section A.1 in the supplementary material for more detailed discussions.
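To illustrate the heredity-based expansion used in Strategy 2, the sketch below implements one possible version of the operator $\mathcal{H}$: given a set of level-$d$ effects, it proposes the level-$(d+1)$ interactions allowed by weak or strong heredity. The function names, set encoding, and the exact definition of the operator are our own illustration and may differ from the paper's $\mathcal{H}$.

```python
from itertools import combinations

def expand_by_heredity(selected_level_d, n_factors, mode="weak"):
    """One possible version of the operator H: propose level-(d+1) interactions
    whose size-d parents satisfy the chosen heredity principle.

    selected_level_d: set of frozensets, each of the same size d.
    n_factors: total number of factors K.
    mode: 'weak' (at least one parent selected) or 'strong' (all parents selected).
    """
    if not selected_level_d:
        return set()
    d = len(next(iter(selected_level_d)))
    candidates = {frozenset(c) for c in combinations(range(1, n_factors + 1), d + 1)}
    out = set()
    for cand in candidates:
        parents = [frozenset(p) for p in combinations(cand, d)]
        hits = [p in selected_level_d for p in parents]
        if (any(hits) if mode == "weak" else all(hits)):
            out.add(cand)
    return out

# Example: selected main effects {1}, {2}, {3} with K = 4 factors.
mains = {frozenset({1}), frozenset({2}), frozenset({3})}
print(sorted(map(sorted, expand_by_heredity(mains, 4, "strong"))))
# Strong heredity proposes {1,2}, {1,3}, {2,3}; weak heredity would also add
# {1,4}, {2,4}, {3,4}.
```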
In real-world factorial experiments, how should practitioners decide which strategy to work with? This relies on domain knowledge and the research question of interest. Strategy 1 is more suitable when there is domain-specific evidence indicating that higher-order interactions are negligible, or when the research question only involves lower-order factorial effects. Moreover, Strategy 1 is helpful when the number of active lower-order interactions is large and Strategy 2 cannot be applied. Meanwhile, Strategy 2 works better when domain knowledge suggests non-negligible higher-order interactions or when the research question targets a more general parameter beyond the factorial effects themselves. It may also work well when the number of active lower-order interactions is small, so that we can include a small set of higher-order terms according to the heredity principle.

In the following subsection, we study the statistical properties of $\hat{\gamma}_{ro}$ and $\hat{\gamma}_{ru}$ and demonstrate the trade-offs between the two strategies for statistical inference from a theoretical perspective.

5.2 Theoretical properties under imperfect screening

Throughout this subsection, we discuss the scenario where perfect screening is hard to achieve. We work under the following relaxed version of Condition 2:

Condition 5 (Order of parameters up to level $d^\star$). Condition 2 holds with $D = d^\star$.

Condition 5 no longer imposes any restriction on the order of the parameters beyond level $d^\star$.
By Theorem 1, Condition 5 guarantees that Algorithm 1 perfectly screens the first $d^\star$ levels of factorial effects in the sense that
$$P\left(\hat{\mathcal{M}}_d = \mathcal{M}^\star_d \text{ for } d = 1, \ldots, d^\star\right) \to 1.$$
We start by analyzing the statistical property of $\hat{\gamma}_{ru}$ with $\hat{\mathcal{M}}$ obtained from the under-selection Strategy 1. Because the selected working model might deviate from the truth beyond level $d^\star$, $\hat{\gamma}_{ru}$ may not be a consistent estimator of $\gamma$. Therefore, we focus on weighting vectors $f$ that satisfy certain orthogonality conditions, as introduced in Theorem 3 below:

Theorem 3 (Guarantee for Strategy 1). Recall equation (4.9) and define $f^\star = f[\mathcal{M}^\star] = Q^{-1} G(\cdot, \mathcal{M}^\star) G(\cdot, \mathcal{M}^\star)^\top f$. Assume Conditions 1, 3, 4, 5, and that $f$ satisfies the following orthogonality condition:
$$G(\cdot, \mathcal{M}^\star_d)^\top f = 0 \quad \text{for } d^\star + 1 \leq d \leq K. \qquad (5.14)$$
Let $N \to \infty$. We have
$$\frac{\hat{\gamma}_{ru} - \gamma}{v_{ru}} \rightsquigarrow N(0, 1), \quad \text{where } v_{ru}^2 = (f^\star)^\top V_{\hat{Y}} f^\star.$$
Further assume $\|f^\star\|_\infty = O(Q^{-1})$. The variance estimator $\hat{v}_{ru}^2$ is conservative in the sense that
$$N(\hat{v}_{ru}^2 - v_{ru,\lim}^2) \xrightarrow{P} 0, \qquad v_{ru,\lim}^2 \geq v_{ru}^2,$$
where $v_{ru,\lim}^2 = (f^\star)^\top D_{\hat{Y}} f^\star$ is the limiting value of $\hat{v}_{ru}^2$.

Now we add some discussion on Theorem 3.
The orthogonality condition presented in (5.14) restricts the weighting vector $f$ to be orthogonal to the higher-order contrasts. Intuitively, because the higher-order interactions are excluded from the model, making inference on a weighted combination of those excluded interactions is infeasible. One set of weighting vectors satisfying (5.14) is the contrast vectors of the nonzero canonical lower-order interactions, given by $f = G(\cdot, \cup_{d=1}^{d^\star} \mathcal{M}^\star_d)$. In large-$K$ settings, the number of lower-order interactions can also grow polynomially fast in $K$ and add difficulty for interpretation. As an example, when $K = 10$, the first two levels of factorial effects without screening already contain a total of more than 50 estimates. It can still greatly benefit the analysis and interpretation to filter out the insignificant ones and obtain a parsimonious, structured working model.

As for Strategy 2, similarly, we have the following results:

Theorem 4 (Guarantee for Strategy 2). Recall equation (4.9) and define $f^\star = f[\overline{\mathcal{M}}^\star] = Q^{-1} G(\cdot, \overline{\mathcal{M}}^\star) G(\cdot, \overline{\mathcal{M}}^\star)^\top f$. Assume Conditions 1, 3, 4 and 5. Let $N \to \infty$. If $|\overline{\mathcal{M}}^\star|/N \to 0$, then
$$\frac{\hat{\gamma}_{ro} - \gamma}{v_{ro}} \rightsquigarrow N(0, 1), \quad \text{where } v_{ro}^2 = (f^\star)^\top V_{\hat{Y}} f^\star.$$
Further assume $\|f^\star\|_\infty = O(Q^{-1})$. The variance estimator $\hat{v}_{ro}^2$ is conservative in the sense that
$$N(\hat{v}_{ro}^2 - v_{ro,\lim}^2) \xrightarrow{P} 0, \qquad v_{ro,\lim}^2 \geq v_{ro}^2,$$
where $v_{ro,\lim}^2 = (f^\star)^\top D_{\hat{Y}} f^\star$ is the limiting value of $\hat{v}_{ro}^2$.
We comment that there is an additional technical requirement in Theorem 4 for over-selection: we assume $|\overline{\mathcal{M}}^\star|/N \to 0$. This condition mainly serves as a sufficient condition for the CLT. The reason is that we need to control the size of the target model $\overline{\mathcal{M}}^\star$ relative to the sample size $N$ in order to infer a general causal parameter. When analyzing Strategies 1 and 2, Algorithm 1 recovers a targeted model with high probability. Both strategies have advantages and disadvantages. Under-selection reflects a bias-variance trade-off: it can induce more bias for certain weighting vectors, but the constructed estimator typically enjoys smaller variance. Over-selection can reduce bias for estimation, but may not be feasible if there are too many lower-order terms, which can result in many redundant terms in the selected model. In practice, if higher-order interactions are not crucial, Strategy 1 should be applied. If higher-order interactions are of interest and hard to select, one could pursue Strategy 2 as a practically useful and interpretable solution.

Remark 2. Under the eigenvector condition that $V_{\hat{Y}}$ has eigenvectors given by $G$, we can prove $v_{ru}^2 \leq v_{ro}^2$. Therefore, in this case, by excluding higher-order terms and pursuing under-selection, we can obtain an equal or smaller asymptotic variance compared with over-selection. In general, due to heteroskedasticity, the ordering of $v_{ru}^2$ and $v_{ro}^2$ depends on the choice of the target weighting vector $f$.
Here we take a sparse $f = e_1 = (1, 0, \ldots, 0)^\top$ as an example. We can show that
$$\frac{v_{ru}^2}{v_{ro}^2} \leq \kappa(V_{\hat{Y}}) \cdot \frac{|\underline{\mathcal{M}}^\star|}{|\overline{\mathcal{M}}^\star|}.$$
When the variability of $V_{\hat{Y}}$ between treatment arms is small in the sense that $\kappa(V_{\hat{Y}}) < |\overline{\mathcal{M}}^\star| / |\underline{\mathcal{M}}^\star|$, under-selection leads to a smaller asymptotic variance for inferring $e_1^\top \bar{Y}$.

6 Application to inference on the best arm in factorial experiments

In the previous sections, we consider the problem of making inference on a single factorial causal effect $\gamma = f^\top \bar{Y}$. As an application of the proposed framework, we study the problem of inference on the "best" effect among many causal effects. Without loss of generality, we define the best effect as the effect with the highest level. In what follows, Section 6.1 introduces our setup and an inferential procedure. Section 6.2 presents theoretical guarantees.

6.1 Best arms, tie set and statistical inference

Suppose we have a set of causal effects $\Gamma$ defined by pre-specified weighting vectors $f_1, \ldots, f_L$ ($L$ is potentially large), that is,
$$\Gamma = \{\gamma_1, \ldots, \gamma_L\}, \qquad \gamma_l = f_l^\top \bar{Y}.$$
We aim to perform statistical inference on their ordered values $\gamma_{(1)} \geq \ldots \geq \gamma_{(l_0)}$, with $l_0 < L$ being a fixed positive integer. As a simple example, if we choose $\{f_l\}_{l \in [L]} = \{e(z)\}_{z \in \mathcal{T}}$ to be the set of canonical bases, then our inferential targets include the maximal potential outcome mean:
$$\bar{Y}_{(1)} = \max_{z \in \mathcal{T}} \bar{Y}(z). \qquad (6.15)$$
A more practical consideration in factorial experiments is to incorporate structural constraints into the choices of $\{f_l\}_{l \in [L]}$, as it might be infeasible to consider all treatment levels in $\mathcal{T}$ due to budget or resource constraints, especially when $K$ is large. This means we might only be interested in factor combinations $z = (z_1, \ldots, z_K)$ with at most $K_0 (\leq K)$ 1's; equivalently, we replace $\mathcal{T}$ with the following $\mathcal{T}'$ in (6.15) and obtain:
$$\mathcal{T}' = \left\{ z = (z_1, \ldots, z_K) \,\Big|\, \sum_{k=1}^{K} z_k \leq K_0 \right\}, \qquad \bar{Y}_{(1)} = \max_{z \in \mathcal{T}'} \bar{Y}(z). \qquad (6.16)$$
By focusing on the $\{f_l\}_{l \in [L]}$ that are most relevant, the inferential target $\max_{z \in \mathcal{T}'} \bar{Y}(z)$ allows us to use the available data to decide if the best causal parameter among those practically interesting ones has a non-zero causal effect.
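For concreteness, the following sketch shows one way the constrained treatment set $\mathcal{T}'$ in (6.16) and the corresponding canonical weighting vectors could be enumerated; the binary encoding of arms and all names here are our own illustration.

```python
import numpy as np
from itertools import product

def constrained_arms(K, K0):
    """All treatment combinations z in {0,1}^K with at most K0 active factors."""
    return [z for z in product((0, 1), repeat=K) if sum(z) <= K0]

def canonical_weights(K, K0):
    """Canonical-basis weighting vectors e(z) (length Q = 2^K) for each z in T'."""
    Q = 2 ** K
    arms = constrained_arms(K, K0)
    weights = []
    for z in arms:
        idx = int("".join(map(str, z)), 2)   # arm index under a fixed binary ordering
        e = np.zeros(Q)
        e[idx] = 1.0
        weights.append(e)
    return arms, weights

arms, weights = canonical_weights(K=4, K0=2)
print(len(arms))   # 1 + 4 + 6 = 11 arms with at most two active factors
```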
Two challenges exist in delivering valid statistical inference on $\gamma_{(1)}, \ldots, \gamma_{(l_0)}$ in factorial experiments. On the one hand, sample analogs of the ordered parameters, $(\hat{\gamma}_{(1)}, \ldots, \hat{\gamma}_{(l_0)})$, are often biased estimates of $(\gamma_{(1)}, \ldots, \gamma_{(l_0)})$ due to the well-known winner's curse phenomenon (Andrews et al., 2019; Guo et al., 2021; Wei et al., 2022). On the other hand, although one might argue that existing approaches can be applied to remove the winner's curse bias in $\hat{\gamma}_{(l)}$, these approaches do not account for the special structural constraints in factorial experiments. Rigorous statistical guarantees have been lacking in our context due to the unique presence of both large $L$ and large $Q$ in factorial designs.
To simultaneously address the above challenges, we propose a procedure that tailors the tie-set identification approach proposed in Claggett et al. (2014) and Wei et al. (2022) to our current problem setup. We focus on making inference on the first ordered value $\gamma_{(1)}$ to simplify the discussion, and our approach extends naturally to other ordered values. The proposed procedure is provided in Algorithm 2.

Algorithm 2 consists of three major components. First, we construct $\hat{\gamma}_l = f_l^\top \hat{Y}_r$ with feature screening (Steps 1-2). These RLS-based estimators enjoy great benefits in large-$Q$, small-$N_0$ regimes, based on our previous discussion. Second, we construct $\hat{L}_1$ to include the estimates that are close to $\hat{\gamma}_{(1)}$ (Step 3). Intuitively, these collected estimates differ from one another only due to random error. We will show that, with proper tuning, this procedure includes, with high probability, all the $l$ for which $\gamma_l$ is statistically indistinguishable from $\gamma_{(1)}$. Third, we construct estimators by averaging over $\hat{L}_1$ (Step 4). By averaging the estimates over the selected $\hat{L}_1$, we alleviate the impact of randomness and obtain accurate estimates of the maximal effect.
Algorithm 2: Inference on best causal effect(s)
Input: Factorial data $(Y_i, Z_i)$; predetermined integer $D$; initial model for factorial effects $\widehat{M} = \{\emptyset\}$; significance levels $\{\alpha_d\}_{d=1}^{D}$; set of weighting vectors $\{f_l\}_{l \in [L]}$; threshold $\eta_N$.
Output: Selected working model $\widehat{M}$.
1. Perform forward effects screening with Algorithm 1 and obtain the working model $\widehat{M}$.
2. Obtain RLS-based estimates: use Equation (4.9) and the definition of $\widehat{Y}_{\mathrm{r}}$ in (4.8) to compute $f_l[\widehat{M}] = Q^{-1} G(\cdot, \widehat{M}) G(\cdot, \widehat{M})^\top f_l$ and $\hat\gamma_l = f_l^\top \widehat{Y}_{\mathrm{r}} = f_l[\widehat{M}]^\top \widehat{Y}$, for $l \in [L]$.
3. Record the set of effects close to $\hat\gamma_{(1)}$: $\widehat{L}_1 = \{ l \in [L] : |\hat\gamma_l - \hat\gamma_{(1)}| \le \eta_N \}$. Here, $\eta_N$ is a tuning parameter which can be selected using the algorithm provided in Wei et al. (2022, Appendix C.1).
4. Define $f_{(1)} = (Q|\widehat{L}_1|)^{-1} \sum_{l \in \widehat{L}_1} G(\cdot, \widehat{M}) G(\cdot, \widehat{M})^\top f_l$. Generate the point estimate and variance estimator for $\gamma_{(1)}$: $\widehat{Y}_{(1)} = |\widehat{L}_1|^{-1} \sum_{l \in \widehat{L}_1} \hat\gamma_l = f_{(1)}^\top \widehat{Y}$ and $\widehat{v}^2_{(1)} = f_{(1)}^\top \widehat{V}_Y f_{(1)}$.
5. Return $\widehat{L}_1$, $\widehat{Y}_{(1)}$, $\widehat{v}^2_{(1)}$.
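To make Steps 3-4 concrete, the following R sketch computes the tie set and the averaged estimate given the projected weighting vectors $f_l[\widehat{M}]$ (collected here as the columns of `fl_mat`), the vector of estimated arm means `Y_hat`, its estimated covariance `V_hat`, and the threshold `eta_N`. All object names are illustrative rather than taken from the paper's supplementary code, and the screening step (Algorithm 1) is assumed to have been run already.

```r
# Minimal sketch of Steps 3-4 of Algorithm 2 (object names are ours).
# fl_mat: Q x L matrix whose l-th column is f_l[M_hat] = Q^{-1} G G' f_l
# Y_hat : length-Q vector of estimated arm means
# V_hat : Q x Q estimated covariance matrix of Y_hat
# eta_N : tuning threshold for the tie set
best_effect_inference <- function(fl_mat, Y_hat, V_hat, eta_N) {
  gamma_hat <- as.vector(t(fl_mat) %*% Y_hat)              # gamma_hat_l = f_l[M_hat]' Y_hat
  gamma_max <- max(gamma_hat)                               # gamma_hat_(1)
  L1_hat    <- which(abs(gamma_hat - gamma_max) <= eta_N)   # Step 3: tie set
  f1        <- rowMeans(fl_mat[, L1_hat, drop = FALSE])     # Step 4: averaged weights f_(1)
  est       <- sum(f1 * Y_hat)                               # point estimate of gamma_(1)
  v2        <- as.numeric(t(f1) %*% V_hat %*% f1)            # variance estimate
  list(L1_hat = L1_hat, estimate = est, variance = v2,
       ci95 = est + c(-1, 1) * qnorm(0.975) * sqrt(v2))      # normal-based interval
}
```

The normal-based 95% interval at the end is included for convenience; it corresponds to the limiting distribution stated in Theorem 5 below.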
6.2 Theoretical guarantees

In the following, we present theoretical guarantees for Algorithm 2. We introduce the notation $L_1$ for the set of all effects that stay in a local neighborhood of $\gamma_{(1)}$:
$$L_1 = \{ l \in [L] : |\gamma_l - \gamma_{(1)}| = O(N^{-\delta_3}) \}, \quad \text{for some } \delta_3 > 0.$$
A well-known fact is that the naive estimator $\max_{z \in [Q]} \widehat{Y}(z)$ can be an overly optimistic estimate of $\gamma_{(1)}$ when $L_1$ contains more than one element (Andrews et al., 2019; Wei et al., 2022). Define
$$d_h = \max_{l \in L_1} |\gamma_l - \gamma_{(1)}|, \qquad d_h^\star = \min_{l \notin L_1} |\gamma_l - \gamma_{(1)}|$$
as the within- and between-group distances, respectively. We work under the following condition:
Condition 6 (Order of $d_h$, $d_h^\star$ and $\eta_N$). Assume the within- and between-group distances satisfy $d_h^\star = \Theta(N^{\delta_1})$, $\eta_N = \Theta(N^{\delta_2})$, and $d_h = \Theta(N^{\delta_3})$, with $\delta_3 \le -1/2 < \delta_2 < \delta_1 \le 0$.
Define the population counterpart of $f_{(1)}$ as $f^\star_{(1)} = (Q|L_1|)^{-1} \sum_{l \in L_1} G(\cdot, M^\star) G(\cdot, M^\star)^\top f_l$. We establish the following result for the procedure provided in Algorithm 2. Recall $\delta_2$ from Condition 6 and $\delta''$ from Condition 2(iii), which characterize the magnitude of the within/between-group distances and the size of the true working model, respectively.
Theorem 5 (Asymptotic results on the estimated effects using Algorithm 2). Assume Conditions 1-4 and 6. Let $N \to \infty$.
If
$$N^{-(1+2\delta_2-\delta'')} \to 0, \quad (6.17)$$
$$L \cdot |L_1| \cdot N^{-\frac{1-\delta''}{2}} \to 0, \quad (6.18)$$
then
$$\frac{\hat\gamma_{(1)} - \gamma_{(1)}}{v_{(1)}} \rightsquigarrow N(0, 1), \quad \text{where } v^2_{(1)} = f^{\star\top}_{(1)} V_Y f^\star_{(1)}.$$
Moreover, $\widehat{v}^2_{(1)}$ is conservative in the sense that $N(\widehat{v}^2_{(1)} - v^2_{(1),\lim}) \stackrel{P}{\to} 0$ with $v^2_{(1),\lim} \ge v^2_{(1)}$, where $v^2_{(1),\lim} = f^{\star\top}_{(1)} D_{\widehat{Y}} f^\star_{(1)}$ is the limiting value of $\widehat{v}^2_{(1)}$.
The conditions in Theorem 5 are mild and reveal a trade-off between some mathematical quantities. For the first asymptotic condition in (6.17), when the size of the targeted working model is small compared to $N$, say $\delta'' = 0$ (meaning $|M^\star|$ does not grow with $N$), this condition always holds. More generally, (6.17) is easier to satisfy with a larger between-group distance (larger $\delta_2$) and a smaller true working model size (smaller $\delta''$). The second condition (6.18) reflects the trade-off among the total number of parameters of interest (given by $L$, which is also $|T'|$), the size of the neighborhood of $\gamma_{(1)}$ (given by $|L_1|$), and the size of the true working model (captured by $\delta''$). The smaller these quantities are, the easier inference will be. Moreover, (6.18) is easily justifiable. Going back to the previous example (6.16), (6.18) translates into
$$\sum_{k=0}^{K_0} \binom{K}{k} \cdot |L_1| \cdot \left( \frac{|M^\star|}{N} \right)^{1/2} \to 0. \quad (6.19)$$
One can check that (6.19) accommodates a variety of interesting regimes with different specifications of $K_0$, $|L_1|$ and $|M^\star|$. We omit the discussion here. Theorem 5 also suggests the benefits of factor screening compared to procedures where no screening is involved, following reasoning similar to that provided in Remark 1. More precisely, without screening, one requires $Q$ to be small compared to $N$ or $\{f_l\}_{l \in [L]}$ to be dense, which is violated in large-$Q$ setups and in many practical scenarios such as (6.15). As a final comment, the result of Theorem 5 relies on the perfect screening property (Theorem 1), which is ensured by Conditions 1-4. Without perfect screening, there might be additional sources of bias due to the uncertainty induced by the screening step and possible under-selection. Nevertheless, one can consider applying the over-selection strategy (Strategy 2 in Section 5.1) to facilitate inference on the best factorial effects.

7 Simulation

In this section, we use simulation studies to demonstrate the finite-sample performance of the proposed forward screening framework and the inferential properties of the RLS-based estimator. More concretely, our simulation results verify the following properties of the proposed procedure and estimators:
(G1) The RLS-based estimator $\hat\gamma_{\mathrm{r}}$ demonstrates an efficiency gain (in terms of improved power and shortened confidence intervals) compared to the simple moment estimator $\hat\gamma$ for general causal parameters defined by sparse weighting vectors.
(G2) The factorial forward screening procedure provided in Algorithm 1 can improve the performance of effect screening compared to the naive procedure (i.e., screening without leveraging the heredity principle).
(G1) echoes our discussion on the comparison of CLT conditions and asymptotic variances in Remark 1 and Proposition 1. (G2) verifies the results in Theorems 1 and 2 and checks the finite-sample behavior of the proposed procedures. For both goals, we will vary the sample size and the effect size to provide a comprehensive understanding of their performance.

7.1 Simulation setup

We set up a $2^8$ factorial experiment ($K = 8$). There are $N_0$ units in each treatment arm, where $N_0$ is set to be a varying number. We generate independent potential outcomes from a shifted exponential distribution: $Y_i(z) \sim \mathrm{EXP}(1) - 1 + \mu(z)$. Here $\mu(z)$ are the super-population means of the potential outcomes under treatment $z$. We choose $\mu(z)$ such that the factorial effects satisfy the following structure:
Main effects: the main effects corresponding to the first five factors, $\tau_{\{1\}}, \ldots, \tau_{\{5\}}$, are nonzero; the remaining three main effects, $\tau_{\{6\}}, \ldots, \tau_{\{8\}}$, are zero.
Two-way interactions: the two-way interactions associated with the first five factors are nonzero, i.e., $\tau_{\{kl\}} \ne 0$ for $k \ne l$, $k, l \in [5]$. All the remaining two-way interactions are zero.
Higher-order interactions: all higher-order interactions $\tau_{\mathcal{K}}$ are zero if $|\mathcal{K}| \ge 3$.
The above setup of the factorial effects guarantees that they are sparse and follow the strong heredity principle. In the provided simulation results, we vary the number of units in each treatment arm and the size of the nonzero factorial effects. More details can be found in the R code attached to the support materials.

7.2 Simulation results supporting (G1)

In this subsection, we evaluate the performance of the RLS-based estimators $(\hat\gamma_{\mathrm{r}}, \hat{v}_{\mathrm{r}})$ compared to $(\hat\gamma, \hat{v})$ for testing a causal effect $\gamma_{\mathrm{target}} = f^\top Y$ specified by a sparse vector $f = (0, \ldots, 0, 1)^\top \in \mathbb{R}^Q$. Intuitively, $\gamma_{\mathrm{target}}$ measures the average of the potential outcomes in the last treatment level. For each estimator, we report: (i) the power for testing $H_0: \gamma_{\mathrm{target}} = 0$, and (ii) the coverage probability of the confidence intervals for $\gamma_{\mathrm{target}}$ at level 0.95. Figure 2 summarizes the results.
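For readers who want to reproduce a scaled-down version of this setup before turning to the results, the following R sketch draws shifted-exponential potential outcomes for a $2^8$ design and computes the simple moment estimator of $\gamma_{\mathrm{target}}$. The particular effect sizes assigned to `tau` and `mu` are placeholders of our own and are not the values used in the paper.

```r
# Sketch of the Section 7.1 data-generating process (effect sizes are placeholders).
set.seed(1)
K  <- 8; Q <- 2^K; N0 <- 5
design <- as.matrix(expand.grid(rep(list(c(-1, 1)), K)))   # Q x K matrix of +/-1 levels

# Arm means mu(z): nonzero main effects for factors 1-5 and their two-way
# interactions (strong heredity); all other effects are zero.
tau <- 0.5                                    # hypothetical common effect size
mu  <- design[, 1:5] %*% rep(tau, 5)
for (k in 1:4) for (l in (k + 1):5) mu <- mu + tau * design[, k] * design[, l]
mu <- as.vector(mu)

# N0 replicates per arm: Y_i(z) = Exp(1) - 1 + mu(z), so E{Y_i(z)} = mu(z).
Y <- matrix(rexp(Q * N0) - 1 + rep(mu, each = N0), nrow = N0)   # N0 x Q

# Simple moment estimator of the arm means and of gamma_target = f' Y with
# f = (0, ..., 0, 1), i.e., the mean outcome in the last treatment arm.
Y_bar <- colMeans(Y)
gamma_target_hat <- Y_bar[Q]
```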
Figure 2 demonstrates that the RLS-based estimator $\hat\gamma_{\mathrm{r}}$ has much higher power than the simple moment estimator $\hat\gamma$ for inferring $\gamma_{\mathrm{target}}$ across all considered simulation settings. This echoes our conclusion in Proposition 1 that the RLS-based estimator has smaller variance than the simple moment estimator. Moreover, while the RLS-based estimator attains near-nominal coverage probability with reasonably large $N_0$ and $\gamma_{\mathrm{target}}$, the simple moment estimator tends to provide under-covered confidence intervals in all cases.

Figure 2: Simulation results on (G1). (i) Top left panel: power curve with varying $N_0$; (ii) Top right panel: coverage probability with varying $N_0$; (iii) Bottom left panel: power curve with varying effect size $\gamma_{\mathrm{target}}$; (iv) Bottom right panel: coverage probability with varying effect size $\gamma_{\mathrm{target}}$.

7.3 Simulation results for (G2)

In this subsection, we compare the performance of four candidate effect screening methods:
Forward Bonferroni. Forward screening based on Bonferroni-corrected marginal t-tests;
Forward Lasso. Forward screening based on Lasso;
Naive Bonferroni. Screening with the full working model based on Bonferroni-corrected marginal t-tests;
Naive Lasso. Screening with the full working model based on Lasso.
For each screening method, we evaluate performance with three measures: (i) the perfect screening probability $P\{\widehat{M} = M^\star\}$; (ii) the power of $\hat\gamma_{\mathrm{r}}$ for testing $H_0: \gamma_{\mathrm{target}} = 0$, for the same $\gamma_{\mathrm{target}}$ defined in the previous section; and (iii) the coverage probability of the RLS-based confidence interval for $\gamma_{\mathrm{target}}$ at the nominal level 0.95. The results are summarized in Figure 3.
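To fix ideas, the R sketch below implements the "Naive Bonferroni" baseline listed above: Bonferroni-corrected marginal t-tests over a supplied set of contrast columns. The forward variants (Algorithm 1) would additionally visit effects order by order and retain an interaction only if its parent effects survive, which we do not reproduce here. The function name, arguments, and degrees-of-freedom choice are our own simplifications, not the paper's exact implementation.

```r
# Sketch of the "Naive Bonferroni" screening baseline (our own simplification).
# Y        : N0 x Q matrix of outcomes, one column per treatment arm (requires N0 >= 2)
# contrasts: Q x p matrix whose columns are the +/-1 contrast vectors of the
#            candidate factorial effects (main effects, interactions, ...)
# alpha    : overall significance level
naive_bonferroni_screen <- function(Y, contrasts, alpha = 0.05) {
  N0 <- nrow(Y); Q <- ncol(Y); p <- ncol(contrasts)
  Y_bar <- colMeans(Y)
  s2    <- apply(Y, 2, var)                  # within-arm sample variances
  keep  <- logical(p)
  for (j in seq_len(p)) {
    g    <- contrasts[, j] / Q               # effect weights (t-statistic is scale-invariant)
    est  <- sum(g * Y_bar)
    se   <- sqrt(sum(g^2 * s2 / N0))
    pval <- 2 * pt(-abs(est / se), df = Q * (N0 - 1))   # crude df choice for the sketch
    keep[j] <- (pval <= alpha / p)           # Bonferroni correction
  }
  which(keep)                                # indices of retained effects
}
```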
Figure 3: Simulation results on (G2). (i) Top left panel: perfect screening probability with a small fixed effect size $\gamma_{\mathrm{target}} = 0.20$ and varying $N_0$; (ii) Top middle panel: power curve with a small fixed effect size $\gamma_{\mathrm{target}} = 0.20$ and varying $N_0$; (iii) Top right panel: coverage probability with a small fixed effect size $\gamma_{\mathrm{target}} = 0.20$ and varying $N_0$; (iv) Bottom left panel: perfect screening probability with a small fixed replication $N_0 = 2$ and varying effect size $\gamma_{\mathrm{target}}$; (v) Bottom middle panel: power curve with a small fixed replication $N_0 = 2$ and varying effect size $\gamma_{\mathrm{target}}$; (vi) Bottom right panel: coverage probability with a small fixed replication $N_0 = 2$ and varying effect size $\gamma_{\mathrm{target}}$.

From Figure 3, all four effect screening methods lead to perfect selection with high probability as $N_0$ or $\gamma_{\mathrm{target}}$ increases.
Nevertheless, with the forward screening procedure, the probability of perfect screening is higher than with the naive screening procedure. Besides, forward screening complies with the heredity structure and demonstrates higher interpretability than the naive screening methods. In terms of the power of $\hat\gamma_{\mathrm{r}}$ and $\hat{v}_{\mathrm{r}}$ for testing $H_0: \gamma_{\mathrm{target}} = 0$, while all four methods have power approaching one as $N_0$ and $\gamma_{\mathrm{target}}$ increase, the forward screening based procedures possess higher power with small $N_0$ and $\gamma_{\mathrm{target}}$. Lastly, we can see an improvement in the coverage probability of the RLS-based confidence intervals with the forward screening procedure.

8 Discussion

In this manuscript, we have discussed the formal theory for forward screening and post-screening inference in $2^K$ factorial designs with large $K$. It is conceptually straightforward to extend the theory to general factorial designs with multi-valued factors under more complicated notation, and we thus omit the technical details to simplify the theoretical discussion. Another important direction is covariate adjustment in factorial experiments. Lin (2013), Lu (2016a) and Liu et al. (2022) demonstrated the efficiency gain of covariate adjustment with small $K$. Zhao and Ding (2023) discussed covariate adjustment in factorial experiments with factors and covariates selected independently of the data. We leave it to future research to establish the theory for factor screening and covariate selection in factorial designs.

References

Andrews, I., Kitagawa, T., and McCloskey, A. (2019), "Inference on winners," Technical report, National Bureau of Economic Research.
Angrist, J. D. and Pischke, J.-S. (2009), Mostly Harmless Econometrics: An Empiricist's Companion, Princeton: Princeton University Press.
Bai, Z., Choi, K. P., Fujikoshi, Y., and Hu, J. (2022), "Asymptotics of AIC, BIC and Cp model selection rules in high-dimensional regression," Bernoulli, 28, 2375-2403.
Bickel, P. J., Ritov, Y., Tsybakov, A. B., et al. (2010), "Hierarchical selection of variables in sparse high-dimensional regression," IMS Collections, 6, 28.
Bien, J., Taylor, J., and Tibshirani, R. (2013), "A lasso for hierarchical interactions," Annals of Statistics, 41, 1111.
Blackwell, M. and Pashley, N. E. (2021), "Noncompliance and instrumental variables for $2^K$ factorial experiments," Journal of the American Statistical Association, in press.
Bloniarz, A., Liu, H., Zhang, C.-H., Sekhon, J. S., and Yu, B. (2016), "Lasso adjustments of treatment effect estimates in randomized experiments," Proceedings of the National Academy of Sciences, 113, 7383-7390.
Box, G., Hunter, J., and Hunter, W. (2005), Statistics for Experimenters: Design, Innovation, and Discovery, Hoboken, NJ: Wiley.
Branson, Z., Dasgupta, T., and Rubin, D. B. (2016), "Improving covariate balance in $2^K$ factorial designs via rerandomization with an application to a New York City Department of Education High School Study," Annals of Applied Statistics, 10, 1958-1976.
Claggett, B., Xie, M., and Tian, L. (2014), "Meta-analysis with fixed, unknown, study-specific parameters," Journal of the American Statistical Association, 109, 1660-1671.
Dasgupta, T., Pillai, N. S., and Rubin, D. B. (2015), "Causal inference from $2^K$ factorial designs by using potential outcomes," Journal of the Royal Statistical Society: Series B (Statistical Methodology), 77, 727-753.
Egami, N. and Imai, K. (2019), "Causal interaction in factorial experiments: Application to conjoint analysis," Journal of the American Statistical Association, 114, 529-540.
Espinosa, V., Dasgupta, T., and Rubin, D. B. (2016), "A Bayesian perspective on the analysis of unreplicated factorial experiments using potential outcomes," Technometrics, 58, 62-73.
Fan, J. and Lv, J. (2008), "Sure independence screening for ultrahigh dimensional feature space," Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70, 849-911.
Fisher, R. A. (1935), The Design of Experiments, Edinburgh, London: Oliver and Boyd, 1st ed.
Fithian, W., Sun, D., and Taylor, J. (2014), "Optimal inference after model selection," arXiv preprint arXiv:1410.2597.
Freedman, D. A. (2008), "On regression adjustments to experimental data," Advances in Applied Mathematics, 40, 180-193.
Gerber, A. S. and Green, D. P. (2012), Field Experiments: Design, Analysis, and Interpretation, New York, NY: Norton.
Guo, X., Wei, L., Wu, C., and Wang, J. (2021), "Sharp inference on selected subgroups in observational studies," arXiv preprint arXiv:2102.11338.
Hao, N., Feng, Y., and Zhang, H. H. (2018), "Model selection for high-dimensional quadratic regression via regularization," Journal of the American Statistical Association, 113, 615-625.
Hao, N. and Zhang, H. H. (2014), "Interaction screening for ultrahigh-dimensional data," Journal of the American Statistical Association, 109, 1285-1301.
Haris, A., Witten, D., and Simon, N. (2016), "Convex modeling of interactions with strong heredity," Journal of Computational and Graphical Statistics, 25, 981-1004.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Friedman, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', and Friedman, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2009), The Elements of Statistical Learning: Data Mining, Inference, and Prediction, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' 2, New York: Springer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Kempthorne, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (1952), The Design and Analysis of Experiments, New York: Wiley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Kuchibhotla, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Kolassa, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', and Kuffner, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2022), “Post-selection inference,” Annual Review of Statistics and Its Application, 9, 505–527.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Li, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' and Ding, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2017), “General forms of finite population central limit theorems with appli- cations to causal inference,” Journal of the American Statistical Association, 112, 1759–1769.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Lim, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' and Hastie, T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2015), “Learning interactions via hierarchical group-lasso regularization,” Journal of Computational and Graphical Statistics, 24, 627–654.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Lin, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2013), “Agnostic notes on regression adjustments to experimental data: Reexamining Freedman’s critique,” Annals of Applied Statistics, 7, 295–318.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Liu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Ren, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', and Yang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2022), “Randomization-based joint central limit theorem and efficient covariate adjustment in randomized block 2K factorial experiments,” Journal of the American Statistical Association, in press.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Lu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2016a), “Covariate adjustment in randomization-based causal inference for 2K factorial designs,” Statistics and Probability Letters, 119, 11–20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' — (2016b), “On randomization-based and regression-based inferences for 2K factorial designs,” Statistics and Probability Letters, 112, 72–78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Meng, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='-L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' and Xie, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2014), “I got more data, my model is more refined, but my estimator is getting worse!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Am I just dumb?”' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Econometric Reviews, 33, 218–250.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Neyman, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (1923/1990), “On the application of probability theory to agricultural experiments.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Essay on principles.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Section 9.” Statistical Science, 465–472.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Pashley, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' and Bind, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='-A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2023), “Causal inference for multiple non-randomized treatments using fractional factorial designs,” Canadian Journal of Statistics, in press.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' 32 Rillig, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Ryo, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Lehmann, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Aguilar-Trigueros, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Buchert, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Wulf, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Iwasaki, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Roy, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', and Yang, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2019), “The role of multiple global change factors in driving soil functions and microbial biodiversity,” Science, 366, 886–890.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Shi, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' and Ding, P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2022), “Berry–Esseen bounds for design-based causal inference with possibly diverging treatment levels and varying group sizes,” arXiv preprint arXiv:2209.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='12345.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Tibshirani, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (1996), “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society: Series B (Methodological), 58, 267–288.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Wainwright, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2019), High-dimensional Statistics: A Non-asymptotic Viewpoint, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' 48, Cam- bridge: Cambridge University Press.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Wang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2009), “Forward regression for ultra-high dimensional variable screening,” Journal of the American Statistical Association, 104, 1512–1524.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Wasserman, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' and Roeder, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2009), “High dimensional variable selection,” Annals of Statistics, 37, 2178.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Wei, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Zhou, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Zheng, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', and Wang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2022), “Inference on the best policies with many covariates,” arXiv preprint arXiv:2206.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='11868.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Wieczorek, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' and Lei, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2022), “Model selection properties of forward selection and sequential cross-validation for high-dimensional regression,” Canadian Journal of Statistics, 50, 454–470.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Wu, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' and Hamada, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2011), Experiments: Planning, Analysis, and Optimization, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' 552, Hoboken, NJ: John Wiley & Sons.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Wu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Zheng, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Zhang, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Zhang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', and Wang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2022), “Non-stationary a/b tests: Optimal variance reduction, bias correction, and valid inference,” Bias Correction, and Valid Inference (May 20, 2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Yates, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (1937), “The design and analysis of factorial experiments,” Tech.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Rep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Technical Commu- nication 35, Imperial Bureau of Soil Science, London, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Yuan, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Joseph, V.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', and Lin, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2007), “An efficient variable selection approach for analyzing designed experiments,” Technometrics, 49, 430–439.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Zhang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2022), “Social construction of hate crimes in the U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=': A factorial survey experiment,” Theses and Dissertations–Sociology, 49.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' 33 Zhao, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' and Ding, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2021), “Regression-based causal inference with factorial experiments: esti- mands, model specifications and design-based properties,” Biometrika, 109, 799–815.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' — (2023), “Covariate adjustment in multi-armed, possibly factorial experiments,” Journal of the Royal Statistical Society, Series B (Statistical Methodology), in press.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Zhao, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Rocha, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', and Yu, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2009), “The composite absolute penalties family for grouped and hierarchical variable selection,” Annals of Statistics, 37, 3468–3497.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Zhao, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' and Yu, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2006), “On model selection consistency of Lasso,” The Journal of Machine Learning Research, 7, 2541–2563.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Zhao, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', Witten, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', and Shojaie, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (2021), “In defense of the indefensible: A very naive approach to high-dimensional inference,” Statistical Science, 36, 562–577.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' 34 Supplementary material Section A provides more discussions/extensions to the results introduced in the main paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' More concretely, Section A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='1 presents detailed discussion of the use of weight least squares in factorial experiments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Section A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='2 extends the inference results in Section 4 to a vector of causal effects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Section B presents general results on consistency of forward factor screening.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Theorem 1 is a corollary of the results in Section B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Section C gives the technical proofs of the results in the main paper and the Appendix.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' A Additional results This section provides more extensions to the results in the main paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Section A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='1 discusses the use of WLS in analyzing factorial experiments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Section A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='2 extends the inference results under perfect screening (Section 4) to a vector of causal effects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' A.' 
A.1 Weighted least squares for estimating factorial effects

In this subsection, we briefly state and prove some useful facts about weighted least squares in estimating factorial effects. More discussion can be found in Zhao and Ding (2021). Denote the design matrix as $X = (g_{1,\mathcal{M}}, \ldots, g_{N,\mathcal{M}})^\top$ and let $W = \mathrm{Diag}\{w_i\}$. The problem (2.4) has the closed-form solution
$$
\begin{aligned}
\widehat{\tau} &= (X^\top W X)^{-1}(X^\top W Y) && \text{(closed form of WLS)} \\
&= \{G(\cdot,\mathcal{M})^\top G(\cdot,\mathcal{M})\}^{-1}\{G(\cdot,\mathcal{M})^\top \widehat{Y}\} && \text{(units under the same treatment arm share the same regressor)} \\
&= Q^{-1} G(\cdot,\mathcal{M})^\top \widehat{Y}. && \text{(S1)}
\end{aligned}
$$
The closed form (S1) motivates the variance estimator
$$
\widehat{V}_{\widehat{\tau}} = Q^{-2} G(\cdot,\mathcal{M})^\top \widehat{V}_{\widehat{Y}}\, G(\cdot,\mathcal{M}). \tag{S2}
$$
Alternatively, one can use the Eicker–Huber–White (EHW) variance estimator with the HC2 correction (Angrist and Pischke, 2009):
$$
\widehat{V}_{\mathrm{EHW}} = (X^\top W X)^{-1} X^\top W\, \mathrm{Diag}\Big\{\frac{\widehat{\epsilon}_i^2}{1 - N_i^{-1}}\Big\}\, W X (X^\top W X)^{-1},
\qquad \widehat{\epsilon}_i = Y_i - g_{i,\mathcal{M}}^\top \widehat{\tau}. \tag{S3}
$$
Again, because units under the same treatment arm share the same regressor, $\widehat{V}_{\mathrm{EHW}}$ simplifies to
$$
\widehat{V}_{\mathrm{EHW}} = Q^{-2} G(\cdot,\mathcal{M})^\top \widehat{V}'_{\widehat{Y}}\, G(\cdot,\mathcal{M}), \tag{S4}
$$
where $\widehat{V}'_{\widehat{Y}} = \mathrm{Diag}\big\{N(z)^{-1}\widehat{S}'(z,z)\big\}_{z\in\mathcal{T}}$ with
$$
\widehat{S}'(z,z) = \frac{1}{N(z)-1}\sum_{Z_i = z}(Y_i - g_{i,\mathcal{M}}^\top \widehat{\tau})^2.
$$
Following some algebra, we can show
$$
\widehat{S}'(z,z) = \frac{1}{N(z)-1}\sum_{Z_i = z}\{Y_i - \widehat{Y}(z)\}^2 + \frac{N(z)}{N(z)-1}\{\widehat{Y}(z) - G(z,\mathcal{M})\widehat{\tau}\}^2
= \widehat{S}(z,z) + \frac{N(z)}{N(z)-1}\{\widehat{Y}(z) - G(z,\mathcal{M})\widehat{\tau}\}^2.
$$
Hence $\widehat{S}'(z,z) \ge \widehat{S}(z,z)$. In general $\widehat{Y}(z) \ne G(z,\mathcal{M})\widehat{\tau}$, so the difference is not negligible.
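For readers who want a computational anchor, the following is a minimal sketch, not taken from the paper, of how $\widehat{\tau}$ in (S1), the direct variance estimator (S2), and the HC2-style EHW estimator (S4) could be computed from arm-level summaries. It assumes $\widehat{V}_{\widehat{Y}} = \mathrm{Diag}\{N(z)^{-1}\widehat{S}(z,z)\}$, and the names (`G`, `arm_means`, and so on) are illustrative, not fixed by the text.

```python
import numpy as np

def wls_factorial_effects(Y, Z, G):
    """Sketch of the WLS factorial-effect estimator and its two variance estimators.

    Y : (N,) outcomes; Z : (N,) treatment-arm labels taking values 0..Q-1;
    G : (Q, H) contrast matrix whose row z is G(z, M) for the working model M.
    Returns tau_hat (S1), V_direct (S2), and V_ehw (S4).
    """
    Q, H = G.shape
    arm_sizes = np.array([np.sum(Z == z) for z in range(Q)])        # N(z)
    arm_means = np.array([Y[Z == z].mean() for z in range(Q)])      # Yhat(z)
    arm_vars = np.array([Y[Z == z].var(ddof=1) for z in range(Q)])  # Shat(z,z)

    # (S1): tau_hat = Q^{-1} G(.,M)^T Yhat, using the orthogonality of the design
    tau_hat = G.T @ arm_means / Q

    # (S2): direct plug-in variance with Vhat_Yhat = Diag{ Shat(z,z) / N(z) }
    V_direct = G.T @ np.diag(arm_vars / arm_sizes) @ G / Q**2

    # (S4): EHW/HC2 variance, replacing Shat(z,z) by Shat'(z,z) computed around the fit
    fitted = G @ tau_hat                                            # G(z,M) tau_hat
    arm_vars_prime = arm_vars + arm_sizes / (arm_sizes - 1) * (arm_means - fitted) ** 2
    V_ehw = G.T @ np.diag(arm_vars_prime / arm_sizes) @ G / Q**2

    return tau_hat, V_direct, V_ehw
```

The only difference between the two variance estimators in this sketch is whether the arm-level dispersion is taken around the arm mean or around the fitted value $G(z,\mathcal{M})\widehat{\tau}$, mirroring the decomposition of $\widehat{S}'(z,z)$ above.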
The following Lemma S1 formally summarizes the statistical properties of $\widehat{\tau}$ and its two variance estimators, $\widehat{V}_{\widehat{\tau}}$ and $\widehat{V}_{\mathrm{EHW}}$. The proof can be done by utilizing the moment facts from Sections C.2 and C.3 of Shi and Ding (2022), which we omit here.

Lemma S1. Assume Conditions 1 and 3. For the WLS in (2.4), we have:

1. $\widehat{\tau} = Q^{-1}G(\cdot,\mathcal{M})^\top \widehat{Y}$ is unbiased for the true factorial effects $\tau(\mathcal{M})$; i.e., $E\{\widehat{\tau}\} = \tau(\mathcal{M})$.

2. Both variance estimators are consistent and robust:
$$
N(\widehat{V}_{\widehat{\tau}} - V_{\widehat{\tau},\lim}) = o_P(1), \qquad N(\widehat{V}_{\mathrm{EHW}} - V_{\mathrm{EHW},\lim}) = o_P(1),
$$
with $V_{\widehat{\tau},\lim} \succeq V_{\widehat{\tau}}$ and $V_{\mathrm{EHW},\lim} \succeq V_{\widehat{\tau}}$, where
$$
V_{\widehat{\tau},\lim} = Q^{-2} G(\cdot,\mathcal{M})^\top D_{\widehat{Y}}\, G(\cdot,\mathcal{M}),
$$
and
$$
V_{\mathrm{EHW},\lim} = Q^{-2} G(\cdot,\mathcal{M})^\top \mathrm{Diag}\Big\{\frac{1 - N^{-1}}{N(z)-1}S(z,z) + \frac{1}{N(z)-1}\{\bar{Y}(z) - G(z,\mathcal{M})\tau(\mathcal{M})\}^2\Big\} G(\cdot,\mathcal{M}).
$$

3. The EHW variance estimator is more conservative than the direct variance estimator: $\widehat{V}_{\mathrm{EHW}} \succeq \widehat{V}_{\widehat{\tau}}$.

It is worth mentioning that, in the fixed-$Q$ setting, if we assume the factorial effects not included in $\mathcal{M}$ are all zero, Lemma S1 implies that the EHW variance estimator (S3) or (S4) has the same asymptotic statistical properties as the direct variance estimator (S2), which agrees with the conclusion of Zhao and Ding (2021).
A.2 Extension of post-screening inference to vector parameters

In this subsection we present an extension of Theorem 2 to a vector of causal parameters $\Gamma = (\gamma_1, \ldots, \gamma_L)^\top$, where $\gamma_l = f_l^\top \bar{Y}$. For convenience we can stack $f_1, \ldots, f_L$ into a weighting matrix $F = (f_1, \ldots, f_L)$ and write $\Gamma = F^\top \bar{Y}$. We will focus on linear projections of $\Gamma$, defined as $\gamma_b = b^\top \Gamma$ for a given $b \in \mathbb{R}^L$. Naturally, we can apply forward screening and construct RLS-based estimators for $\Gamma$:
$$
\widehat{\Gamma}_r = (\widehat{\gamma}_{1,r}, \ldots, \widehat{\gamma}_{L,r})^\top, \qquad
\widehat{V}_{\widehat{\Gamma},r} = F[\widehat{\mathcal{M}}]^\top \widehat{V}_{\widehat{Y}}\, F[\widehat{\mathcal{M}}], \tag{S5}
$$
where $F[\widehat{\mathcal{M}}] = Q^{-1} G(\cdot,\widehat{\mathcal{M}})\, G(\cdot,\widehat{\mathcal{M}})^\top F$. For $\gamma_b$, an estimator based on (S5) is
$$
\widehat{\gamma}_{b,r} = b^\top \widehat{\Gamma}_r, \qquad \widehat{v}^2_{b,r} = b^\top \widehat{V}_{\widehat{\Gamma},r}\, b.
$$
For standard factorial effects, we can use WLS to obtain the robust covariance matrix (Section A.1). For a single $b$, we can directly apply Theorem 2 with
$$
f_b = F b = \sum_{l=1}^L b_l f_l.
$$
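As a rough computational illustration of (S5), and only under the assumption that $\widehat{\Gamma}_r$ is formed as $F[\widehat{\mathcal{M}}]^\top\widehat{Y}$ (a reading of the notation, not a statement from the text), one could form the projection estimator and its plug-in variance as follows; all names here are hypothetical.

```python
import numpy as np

def projection_estimate(b, F, G_sel, arm_means, arm_vars, arm_sizes):
    """Sketch of (S5): a projection gamma_b and its plug-in variance after screening.

    F : (Q, L) weighting matrix with columns f_1, ..., f_L;
    G_sel : (Q, |Mhat|) contrast columns of the selected working model Mhat;
    arm_means, arm_vars, arm_sizes : arm-level summaries Yhat(z), Shat(z,z), N(z).
    """
    Q = F.shape[0]
    F_sel = G_sel @ G_sel.T @ F / Q          # F[Mhat] = Q^{-1} G(.,Mhat) G(.,Mhat)^T F
    Gamma_hat = F_sel.T @ arm_means          # assumed form of the RLS-based Gamma_hat_r
    V_yhat = np.diag(arm_vars / arm_sizes)   # plug-in Vhat_Yhat
    V_Gamma = F_sel.T @ V_yhat @ F_sel       # Vhat_{Gamma,r} in (S5)
    gamma_b = b @ Gamma_hat                  # b^T Gamma_hat_r
    v2_b = b @ V_Gamma @ b                   # b^T Vhat_{Gamma,r} b
    return gamma_b, v2_b
```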
Define $f^\star_b = F[\mathcal{M}^\star]\, b$. We then obtain the following theorem.

Theorem S1 (Statistical properties of linear projections of $\Gamma$). Assume Conditions 1–4 and let $N \to \infty$. Then
$$
\frac{\widehat{\gamma}_{b,r} - \gamma_b}{v_{b,r}} \rightsquigarrow \mathcal{N}(0,1),
\qquad \text{where } v^2_{b,r} = f^{\star\top}_b V_{\widehat{Y}}\, f^\star_b.
$$
Further assume $\|f^\star_b\|_\infty = O(Q^{-1})$. The variance estimator $\widehat{v}^2_{b,r}$ is conservative in the sense that
$$
N(\widehat{v}^2_{b,r} - v^2_{b,r,\lim}) \xrightarrow{P} 0, \qquad v^2_{b,r,\lim} \ge v^2_{b,r},
$$
where $v^2_{b,r,\lim} = f^{\star\top}_b D_{\widehat{Y}}\, f^\star_b$ is the limiting value of $\widehat{v}^2_{b,r}$.

The proof of Theorem S1 is similar to that of Theorem 2; it is mainly based on Lemma S5 and is thus omitted here. Moreover, for a fixed integer $L$, Theorem S1 implies joint normality of $\widehat{\Gamma}_r$, a result due to the Cramér–Wold theorem. We summarize the result as the following corollary and omit the proof.

Corollary S1. Assume a fixed $L$ and Conditions 1–4. We have
$$
V_{\widehat{\Gamma},r}^{-1/2}(\widehat{\Gamma}_r - \Gamma) \rightsquigarrow \mathcal{N}(0, I_L),
\qquad \text{where } V_{\widehat{\Gamma},r} = F[\mathcal{M}^\star]^\top V_{\widehat{Y}}\, F[\mathcal{M}^\star].
$$
Further assume $\max_{\|b\|_2 = 1}\|f^\star_b\|_\infty = O(Q^{-1})$. The variance estimator $\widehat{V}_{\widehat{\Gamma},r}$ is conservative in the sense that
$$
N(\widehat{V}_{\widehat{\Gamma},r} - V_{\widehat{\Gamma},r,\lim}) \xrightarrow{P} 0, \qquad V_{\widehat{\Gamma},r,\lim} \succeq V_{\widehat{\Gamma},r},
$$
where $V_{\widehat{\Gamma},r,\lim} = F[\mathcal{M}^\star]^\top D_{\widehat{Y}}\, F[\mathcal{M}^\star]$ is the limiting value of $\widehat{V}_{\widehat{\Gamma},r}$.

B General results on consistency of forward screening

In this section we provide some theoretical insights into the forward factor screening algorithm (Algorithm 1).
The discussion in this section starts from a broader setting in which the S-step is allowed to be any procedure satisfying certain conditions; we will then show that the Bonferroni-corrected marginal t-test is a special case of these procedures. We start with some regularity conditions that characterize a "good" layer-wise S-step and ensure the P-step is compatible with the structure of the true factorial effects. In light of this, we use $\mathcal{M}^\star_{d,+}$ to denote the pruned set of effects on the $d$-th layer based on the true model $\mathcal{M}^\star_{d-1}$ on the previous layer; that is, $\mathcal{M}^\star_{d,+} = H(\mathcal{M}^\star_{d-1})$. These considerations motivate the following assumption on the layer-wise selection procedure $\widehat{S}(\cdot)$.

Assumption 1 (Validity and consistency of the selection operator). Denote $\widehat{\mathcal{M}}_d = \widehat{S}\big(\mathcal{M}^\star_{d,+};\ \{Y_i, Z_i\}_{i=1}^N\big)$, where $\mathcal{M}^\star_{d,+} = H(\mathcal{M}^\star_{d-1})$ is defined as above. Let $\{\alpha_d\}_{d=1}^D$ be a sequence of significance levels in $(0,1)$. We assume that the following validity and consistency properties hold for $\widehat{S}(\cdot)$:
$$
\text{Validity:}\quad \limsup_{N\to\infty} P\big\{\widehat{\mathcal{M}}_d \cap \mathcal{M}^{\star c}_d \ne \emptyset\big\} \le \alpha_d,
$$
$$
\text{Consistency:}\quad \limsup_{N\to\infty} \sum_{d=1}^{D} P\big\{\widehat{\mathcal{M}}^c_d \cap \mathcal{M}^\star_d \ne \emptyset\big\} = 0.
$$

This assumption can be verified for many screening procedures. In Theorem 1 we show that it holds for the layer-wise Bonferroni-corrected marginal testing procedure in Algorithm 1. Moreover, in high-dimensional super-population studies, a combination of data splitting, an adaptation of $\ell_1$ regularization, and marginal t-tests can also fulfill such a requirement (Wasserman and Roeder, 2009). Besides, we assume the $H(\cdot)$ operator respects the structure of the nonzero factorial effects.

Assumption 2 (H-heredity). For $d = 1, \cdots, D-1$, it holds that $\mathcal{M}^\star_{d+1} \subset P(\mathcal{M}^\star_d)$.

One special case of the $H(\cdot)$ operator satisfying Assumption 2 is naively adding all higher-order interactions regardless of the lower-order screening results. Besides, if we have evidence that the effects have a particular hierarchical structure, applying heredity principles can improve screening accuracy as well as the interpretability of the screening results.

Theorem S2 (Screening consistency). Assume Assumptions 1 and 2. Then the forward screening procedure (3.6) has the following properties:

(i) Type I error control. Forward screening controls the type I error rate, in the sense that
$$
\limsup_{N\to\infty} P\big\{\widehat{\mathcal{M}}_d \cap \mathcal{M}^{\star c}_d \ne \emptyset \text{ for some } d \in [D]\big\} \le \alpha = \sum_{d=1}^D \alpha_d.
$$

(ii) Screening consistency. Further assume $\alpha = \alpha_N \to 0$. The forward procedure consistently selects all the nonzero effects up to $D$ levels with probability tending to one:
$$
\limsup_{N\to\infty} P\big\{\widehat{\mathcal{M}}_d = \mathcal{M}^\star_d \text{ for all } d \in [D]\big\} = 1.
$$

Theorem S2 consists of two parts. First, one can control the type I error rate, defined as the probability of over-selecting at least one zero effect; this definition is introduced and elaborated in detail in Wasserman and Roeder (2009) for model selection. Second, if the tuning parameter $\alpha = \sum_{d=1}^D \alpha_d$ vanishes asymptotically, one can achieve perfect screening up to $D$ levels of effects.
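To make the layer-wise structure concrete, here is a small sketch, not the paper's Algorithm 1, of a forward screening loop whose S-step uses Bonferroni-corrected marginal t-tests and whose candidate-generation step keeps an interaction only when all of its lower-order sub-effects were kept (one possible heredity rule). The helper `marginal_t_pvalue` and the data layout are assumptions made purely for illustration.

```python
import itertools
import numpy as np
from scipy import stats

def marginal_t_pvalue(tau_hat_K, se_K, df):
    """Two-sided p-value for H0: tau_K = 0 from a marginal t-statistic."""
    t = tau_hat_K / se_K
    return 2 * stats.t.sf(abs(t), df)

def forward_screen(effects_by_layer, estimates, ses, df, alphas):
    """Layer-wise forward screening sketch.

    effects_by_layer[d] : candidate effects of order d+1, each a tuple of factor indices;
    estimates, ses : dicts mapping an effect to its estimate and standard error;
    alphas[d] : significance level spent on layer d.
    """
    selected, prev_kept = [], None
    for d, candidates in enumerate(effects_by_layer):
        if prev_kept is not None:
            # Heredity-style step: keep a candidate only if all its order-d sub-effects were kept
            candidates = [K for K in candidates
                          if all(sub in prev_kept
                                 for sub in itertools.combinations(K, len(K) - 1))]
        # S-step: Bonferroni-corrected marginal t-tests within the layer
        m = max(len(candidates), 1)
        kept = [K for K in candidates
                if marginal_t_pvalue(estimates[K], ses[K], df) <= alphas[d] / m]
        selected.extend(kept)
        prev_kept = set(kept)
    return selected
```

Spending $\alpha_d$ within each layer in this way is what produces the union bound $\alpha = \sum_{d=1}^{D}\alpha_d$ appearing in Theorem S2(i).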
To apply Theorem S2 to specific procedures, the key step is to verify Assumption 1 and justify Assumption 2, which we do for Bonferroni-corrected marginal t-tests as an example in the next section. Moreover, the scaling of $\alpha_N$ plays an important role in the theoretical discussion. To achieve perfect selection, we want $\alpha_N$ to decay as fast as possible; ideally, if $\alpha_N$ equals zero, we never commit any type I error (or, equivalently, we never select redundant effects). However, for many data-dependent selection procedures $\alpha$ can only decay at certain rates, because a fast-decaying $\alpha$ makes rejection harder and can therefore lead to severe under-selection. Therefore, in the tuning process, $\alpha_d$ should be scaled properly if one wants to pursue perfect selection. Nevertheless, even if the tuning is hard and perfect model selection cannot be achieved, we still have many strategies to exploit the advantages of the forward screening procedure; we provide more discussion in later sections.

Lastly, as we have commented earlier, in practice there are many alternative methods for the S-step. They are attractive in factorial experiments because many lead to simple closed-form solutions due to the orthogonality of factorial designs. For example, the Lasso is a commonly adopted strategy for variable selection in linear models (Zhao and Yu, 2006). It solves the following penalized WLS problem in factorial settings:
$$
\widehat{\mathcal{M}}_l = \{K : \widehat{\tau}_{l,K} \ne 0\}, \qquad
\widehat{\tau}_l = \arg\min_{\tau' \in \mathbb{R}^H}\ \frac{1}{2}\sum_{i=1}^{N} w_i (Y_i - g_i^\top \tau')^2 + \lambda_l \|\tau'\|_1.
$$
Due to the orthogonality of $G$, the resulting $\widehat{\mathcal{M}}_l$ has a closed-form solution (Hastie et al., 2009):
$$
\widehat{\mathcal{M}}_l = \{K : |\widehat{\tau}_K| \ge \lambda_l\}.
$$
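Under the orthogonality used above, the Lasso S-step reduces to thresholding the unpenalized estimates. The sketch below illustrates this reduction, with the caveat that the exact scaling of the threshold depends on how the design is normalized; the names and the soft-thresholding form are standard facts about the Lasso under orthogonal designs, not claims taken from the text.

```python
import numpy as np

def lasso_select(tau_hat, lam):
    """S-step via the Lasso under an orthogonal factorial design.

    tau_hat : (H,) unpenalized WLS estimates, indexed by effect K = 0..H-1;
    lam     : penalty level lambda_l for the current layer.
    With an orthogonal design, the penalized solution is a soft-thresholded version
    of tau_hat, so the selected set is simply the effects whose unpenalized
    estimates exceed the threshold in absolute value.
    """
    soft = np.sign(tau_hat) * np.maximum(np.abs(tau_hat) - lam, 0.0)  # soft-thresholding
    selected = {K for K, t in enumerate(tau_hat) if abs(t) >= lam}    # closed-form Mhat_l
    return selected, soft
```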
Other methods, such as AIC/BIC (Bai et al., 2022) and sure independence screening (Fan and Lv, 2008), are also applicable. With more delicate assumptions and tuning-parameter choices, these methods can also be justified theoretically for screening consistency and post-screening inference. We omit the details.

C Technical proofs

In this section we present the technical proofs for the results across the whole paper. Section C.1 presents some preliminary probabilistic results that are useful in randomized experiments and are mainly attributed to Shi and Ding (2022). The main proofs start from Section C.2.

C.1 Preliminaries: some important probabilistic results in randomized experiments

In this subsection we present some preliminary probability results that are crucial for our theoretical discussion. Consider an estimator of the form
$$
\widehat{\gamma} = Q^{-1}\sum_{z\in\mathcal{T}} w(z)\,\widehat{Y}(z),
\qquad \text{with variance estimator} \qquad
\widehat{v}^2 = Q^{-2}\sum_{z\in\mathcal{T}} w(z)^2\, N(z)^{-1}\,\widehat{S}(z,z).
$$
Li and Ding (2017) showed that
$$
E\{\widehat{Y}\} = \bar{Y}, \qquad V_{\widehat{Y}} = \mathrm{Var}\big(\widehat{Y}\big) = D_{\widehat{Y}} - N^{-1}S. \tag{S6}
$$
Then (S6) further leads to the following facts:
$$
E\{\widehat{\gamma}\} = \sum_{z\in\mathcal{T}} f(z)\,\bar{Y}(z) = \gamma, \tag{S7}
$$
$$
\mathrm{Var}\{\widehat{\gamma}\} = \sum_{z\in\mathcal{T}} f(z)^2 N(z)^{-1} S(z,z) - N^{-1} f^\top S f,
\qquad
E\{\widehat{v}^2\} = \sum_{z\in\mathcal{T}} f(z)^2 N(z)^{-1} S(z,z).
$$
We have the following variance estimation results and Berry–Esseen bounds.

Lemma S2 (Variance concentration and Berry–Esseen bounds in finite populations).
Define $\gamma = E\{\widehat{\gamma}\}$, $v^2 = \mathrm{Var}(\widehat{\gamma})$, and $v^2_{\lim} = E\{\widehat{v}^2\}$. Suppose the following conditions hold:

Nondegenerate variance. There exists a $\sigma_w > 0$ such that
$$
Q^{-2}\sum_{z=1}^{Q} w(z)^2 N_z^{-1} S(z,z) \le \sigma_w^2 v^2. \tag{S8}
$$

Bounded fourth moments. There exists a $\Delta > 0$ such that
$$
\max_{z\in[Q]} \frac{1}{N}\sum_{i=1}^N \{Y_i(z) - \bar{Y}(z)\}^4 \le \Delta^4. \tag{S9}
$$

Then we have the following conclusions:

1. The variance estimator is conservative for the true variance: $v^2_{\lim} \ge v^2$. Besides, the following tail bound holds:
$$
P\big\{N|\widehat{v}^2 - v^2_{\lim}| > t\big\} \le \frac{C c^{3} c^{-4}\|w\|_\infty^2 \Delta^4}{Q N_0}\,\frac{1}{t^2}.
$$

2. We have a Berry–Esseen bound with the true variance:
$$
\sup_{t\in\mathbb{R}}\Big|P\Big\{\frac{\widehat{\gamma}-\gamma}{v}\le t\Big\} - \Phi(t)\Big|
\le 2C\sigma_w\,\frac{c^{-1}\|w\|_\infty \max_{i\in[N],z\in[Q]}|Y_i(z)-\bar{Y}(z)|}{\|w\|_2\sqrt{c^{-1}\min_{z\in[Q]}S(z,z)}\cdot\sqrt{N_0}}.
$$

3. We have a Berry–Esseen bound with the estimated variance: for any $\epsilon_N \in (0, 1/2]$,
$$
\sup_{t\in\mathbb{R}}\Big|P\Big\{\frac{\widehat{\gamma}-\gamma}{\widehat{v}}\le t\Big\} - \Phi\Big(\frac{v_{\lim}}{v}t\Big)\Big|
\le \epsilon_N + \frac{C c^{3} c^{-4}\|w\|_\infty^2\Delta^4}{QN_0}\,\frac{1}{(Nv^2\epsilon_N)^2}
+ 2C\sigma_w\,\frac{c^{-1}\|w\|_\infty \max_{i\in[N],z\in[Q]}|Y_i(z)-\bar{Y}(z)|}{\|w\|_2\sqrt{c^{-1}\min_{z\in[Q]}S(z,z)}\cdot\sqrt{N_0}}.
$$

Proof of Lemma S2.
1. See Lemma S13 of Shi and Ding (2022).
2. See Theorem 1 of Shi and Ding (2022).
3. First we show a useful result: for $|a| \le 1/2$ and any $b \in \mathbb{R}$,
$$
\sup_{t\in\mathbb{R}}\big|\Phi\{(1+a)t+b\} - \Phi\{t\}\big| \le |a| + |b|. \tag{S10}
$$
(S10) is particularly useful for small choices of $a$ and $b$. Intuitively, it evaluates the change of $\Phi$ under a small affine perturbation of $t$. The proof of (S10) is based on a simple application of the mean value theorem: for any $t\in\mathbb{R}$,
$$
\begin{aligned}
\big|\Phi\{(1+a)t+b\} - \Phi\{t\}\big|
&= \big|\varphi(\xi_{t,(1+a)t})\,(at+b)\big| \\
&\le \big|\varphi(\xi_{t,(1+a)t})\, at\big| + \big|\varphi(\xi_{t,(1+a)t})\, b\big| \\
&= |a|\,\big|\varphi(\xi_{t,(1+a)t})\, t\big|\,\mathbb{1}\{|t|\le 1\}
 + |a|\,\big|\varphi(\xi_{t,(1+a)t})\, t\big|\,\mathbb{1}\{|t|> 1\}
 + \big|\varphi(\xi_{t,(1+a)t})\, b\big| \\
&\le \frac{1}{\sqrt{2\pi}}|a|\,\mathbb{1}\{|t|\le 1\}
 + \frac{1}{\sqrt{2\pi}}|a||t|\exp(-t^2/8)\,\mathbb{1}\{|t|> 1\}
 + \frac{1}{\sqrt{2\pi}}|b| \\
&\le |a| + |b|.
\end{aligned}
$$
We consider $t \ge 0$ because $t < 0$ can be handled similarly. For any $\epsilon_N > 0$, we have
$$
P\Big\{\frac{\widehat{\gamma}-\gamma}{\widehat{v}}\le t\Big\}
= P\Big\{\frac{\widehat{\gamma}-\gamma}{v}\le \frac{\widehat{v}}{v}t\Big\}
= P\Big\{\frac{\widehat{\gamma}-\gamma}{v}\le \frac{\widehat{v}}{v}t,\ \Big|\frac{\widehat{v}-v_{\lim}}{v}\Big|\le \epsilon_N\Big\}
+ P\Big\{\frac{\widehat{\gamma}-\gamma}{v}\le \frac{\widehat{v}}{v}t,\ \Big|\frac{\widehat{v}-v_{\lim}}{v}\Big|> \epsilon_N\Big\}.
$$
Then we can show that
$$
\begin{aligned}
P\Big\{\frac{\widehat{\gamma}-\gamma}{\widehat{v}}\le t\Big\}
&\le P\Big\{\frac{\widehat{\gamma}-\gamma}{v}\le \frac{\widehat{v}}{v}t,\ \Big|\frac{\widehat{v}-v_{\lim}}{v}\Big|\le \epsilon_N\Big\}
+ P\Big\{\Big|\frac{\widehat{v}-v_{\lim}}{v}\Big|> \epsilon_N\Big\} \\
&\le P\Big\{\frac{\widehat{\gamma}-\gamma}{v}\le \Big(\frac{v_{\lim}}{v}+\epsilon_N\Big)t\Big\}
+ P\Big\{\Big|\frac{\widehat{v}-v_{\lim}}{v}\Big|> \epsilon_N\Big\}.
\end{aligned}
$$
For the first term, we have
$$
\sup_{t\ge 0}\Big|P\Big\{\frac{\widehat{\gamma}-\gamma}{v}\le \Big(\frac{v_{\lim}}{v}+\epsilon_N\Big)t\Big\}
- \Phi\Big(\Big(\frac{v_{\lim}}{v}+\epsilon_N\Big)t\Big)\Big|
\le 2C\sigma_w\,\frac{c^{-1}\|w\|_\infty\max_{i\in[N],z\in[Q]}|Y_i(z)-\bar{Y}(z)|}{\|w\|_2\sqrt{c^{-1}\min_{z\in[Q]}S(z,z)}\cdot\sqrt{N_0}}.
$$
For the second term, using the variance estimation results in Part 1 we have
$$
\begin{aligned}
P\Big\{\Big|\frac{\widehat{v}-v_{\lim}}{v}\Big|\ge \epsilon_N\Big\}
&\le P\Big\{\Big|\frac{\widehat{v}-v_{\lim}}{v}\Big|\cdot\Big|\frac{\widehat{v}+v_{\lim}}{v}\Big|\ge \epsilon_N\Big\}
&& \text{(because $v_{\lim}$ is conservative)} \\
&= P\Big\{\Big|\frac{N\widehat{v}^2 - Nv^2_{\lim}}{Nv^2}\Big|\ge \epsilon_N\Big\}
\le \frac{C c^{3} c^{-4}\|w\|_\infty^2\Delta^4}{QN_0}\,\frac{1}{(Nv^2\epsilon_N)^2}.
\end{aligned}
$$
Besides, by (S10), when $\epsilon_N \le 1/2$, we also have
$$
\sup_{t\in\mathbb{R}}\Big|\Phi\Big(\Big(\frac{v_{\lim}}{v}+\epsilon_N\Big)t\Big) - \Phi\Big(\frac{v_{\lim}}{v}t\Big)\Big|
\le \frac{v\epsilon_N}{v_{\lim}} \le \epsilon_N.
$$
Aggregating all the parts above, we can show that for any $t \ge 0$,
$$
P\Big\{\frac{\widehat{\gamma}-\gamma}{\widehat{v}}\le t\Big\}
\le \Phi\Big(\frac{v_{\lim}}{v}t\Big) + \epsilon_N
+ \frac{C c^{3} c^{-4}\|w\|_\infty^2\Delta^4}{QN_0}\,\frac{1}{(Nv^2\epsilon_N)^2}
+ 2C\sigma_w\,\frac{c^{-1}\|w\|_\infty\max_{i\in[N],z\in[Q]}|Y_i(z)-\bar{Y}(z)|}{\|w\|_2\sqrt{c^{-1}\min_{z\in[Q]}S(z,z)}\cdot\sqrt{N_0}}.
$$
On the other hand, we can show that
$$P\left(\frac{\hat\gamma-\gamma}{\hat v}\le t\right) \ge P\left(\frac{\hat\gamma-\gamma}{v}\le \frac{\hat v}{v}t,\ \left|\frac{\hat v - v_{\lim}}{v}\right|\le\epsilon_N\right) \ge P\left(\frac{\hat\gamma-\gamma}{v}\le \left(\frac{v_{\lim}}{v}-\epsilon_N\right)t\right) - P\left(\left|\frac{\hat v - v_{\lim}}{v}\right|\ge\epsilon_N\right). \quad (S11)$$
By (S10), when $\epsilon_N\le 1/2$, we also have
$$\sup_{t\in\mathbb{R}}\left|\Phi\left(\left(\frac{v_{\lim}}{v}-\epsilon_N\right)t\right)-\Phi\left(\frac{v_{\lim}}{v}t\right)\right| \le \epsilon_N.$$
So we can derive a lower bound analogous to (S11). Note that the results can be analogously generalized to $t\le 0$. Putting the pieces together, we can show that
$$\sup_{t\in\mathbb{R}}\left|P\left(\frac{\hat\gamma-\gamma}{\hat v}\le t\right)-\Phi\left(\frac{v_{\lim}}{v}t\right)\right| \le \epsilon_N + \frac{C\bar c^{3}c^{-4}\|w\|_\infty^2\Delta^4}{QN_0}\cdot\frac{1}{(Nv^2\epsilon_N)^2} + \frac{2C\sigma_w\, c^{-1}\|w\|_\infty \max_{i\in[N],z\in[Q]}|Y_i(z)-\bar Y(z)|}{\|w\|_2\sqrt{c^{-1}\min_{z\in[Q]}S(z,z)}\cdot\sqrt{N_0}}.$$

The following corollary shows a Berry–Esseen bound for the studentized statistic in the special case where $w = (w(z))_{z\in[Q]}$ is a contrast vector for factorial effects, that is, $w = g_K$ for some $K\in\mathcal{K}$.

Corollary S2. Assume Conditions (S8) and (S9) hold. Let $w = g_K$ for some $K\in\mathcal{K}$. Then we have a Berry–Esseen bound with the estimated variance:
$$\sup_{t\in\mathbb{R}}\left|P\left(\frac{\hat\tau_K-\tau_K}{\hat v}\le t\right)-\Phi\left(\frac{v_{\lim}}{v}t\right)\right| \le 2\left(\frac{C\sigma_w^4\bar c^{5}c^{-6}\Delta^4}{\{\min_{z\in[Q]}S(z,z)\}^2}\right)^{1/3}\frac{1}{(QN_0)^{1/3}} + \frac{2C\sigma_w\, c^{-1}\max_{i\in[N],z\in[Q]}|Y_i(z)-\bar Y(z)|}{\sqrt{c^{-1}\min_{z\in[Q]}S(z,z)}}\cdot\frac{1}{(QN_0)^{1/2}}.$$

Proof of Corollary S2. Lower bound for $Nv^2$. Note that $\|w\|_2^2 = Q$ and $\|w\|_\infty = 1$. Using Condition (S8), we have
$$Nv^2 \ge N\sigma_w^{-2}Q^{-2}\sum_{z=1}^{Q} w(z)^2 N_z^{-1} S(z,z) \ge (cQN_0)\cdot\sigma_w^{-2}\bar c^{-1}Q^{-1}N_0^{-1}\min_{z\in[Q]}S(z,z)\cdot(Q^{-1}\|w\|_2^2) = \sigma_w^{-2}c\bar c^{-1}\min_{z\in[Q]}S(z,z).$$
Therefore, the Berry–Esseen bound becomes
$$\sup_{t\in\mathbb{R}}\left|P\left(\frac{\hat\tau_K-\tau_K}{\hat v}\le t\right)-\Phi\left(\frac{v_{\lim}}{v}t\right)\right| \le \epsilon_N + \frac{C\sigma_w^4\bar c^{5}c^{-6}\Delta^4}{(QN_0)\{\min_{z\in[Q]}S(z,z)\}^2}\cdot\frac{1}{\epsilon_N^2} + \frac{2C\sigma_w\, c^{-1}\max_{i\in[N],z\in[Q]}|Y_i(z)-\bar Y(z)|}{\sqrt{c^{-1}\min_{z\in[Q]}S(z,z)}\cdot\sqrt{QN_0}}.$$
Optimize the sum of the first and second terms. By taking the derivative of the upper bound with respect to $\epsilon_N$ and solving for the zero point, we know that when
$$\epsilon_N = \left(\frac{2C\sigma_w^4\bar c^{5}c^{-6}\Delta^4}{(QN_0)\{\min_{z\in[Q]}S(z,z)\}^2}\right)^{1/3},$$
the upper bound is minimized and
$$\sup_{t\in\mathbb{R}}\left|P\left(\frac{\hat\tau_K-\tau_K}{\hat v}\le t\right)-\Phi\left(\frac{v_{\lim}}{v}t\right)\right| \le 2\left(\frac{C\sigma_w^4\bar c^{5}c^{-6}\Delta^4}{\{\min_{z\in[Q]}S(z,z)\}^2}\right)^{1/3}\frac{1}{(QN_0)^{1/3}} + \frac{2C\sigma_w\, c^{-1}\max_{i\in[N],z\in[Q]}|Y_i(z)-\bar Y(z)|}{\sqrt{c^{-1}\min_{z\in[Q]}S(z,z)}}\cdot\frac{1}{(QN_0)^{1/2}}.$$
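The optimization over $\epsilon_N$ used in the last step is elementary calculus; writing the $\epsilon_N$-dependent part of the bound as $\epsilon + A\epsilon^{-2}$, where $A$ abbreviates the constant multiplying $\epsilon_N^{-2}$ in the display above (an abbreviation introduced here only for this sketch), the step is
$$\frac{d}{d\epsilon}\left(\epsilon + A\epsilon^{-2}\right) = 1 - 2A\epsilon^{-3} = 0 \;\Longrightarrow\; \epsilon = (2A)^{1/3},$$
and at this choice
$$\epsilon + A\epsilon^{-2} = (2A)^{1/3} + \frac{A}{(2A)^{2/3}} = \frac{3}{2^{2/3}}\,A^{1/3} \le 2A^{1/3},$$
which matches the first term of the stated bound.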
Additionally, we have a Berry–Esseen bound after screening the effects.

Lemma S3 (Berry–Esseen bound with screening). Assume there exists $\sigma_w > 0$ such that
$$\sum_{z=1}^{Q} f[M](z)^2 N_z^{-1} S(z,z) \le \sigma_w^2 v^2(M). \quad (S12)$$
Then
$$\sup_{t\in\mathbb{R}}\left|P\left(\frac{\hat\gamma[\hat M]-\gamma[M]}{v(M)}\le t\right)-\Phi(t)\right| \le 2P(\hat M\ne M) + \frac{2C\sigma_w\, c^{-1}\max_{i\in[N],z\in[Q]}|Y_i(z)-\bar Y(z)|}{\sqrt{c^{-1}\min_{z\in[Q]}S(z,z)}\cdot\sqrt{N_0}}\cdot\frac{\|f[M]\|_\infty}{\|f[M]\|_2}.$$

Proof of Lemma S3. With the selected working model we have
$$\begin{aligned}
\sup_{t\in\mathbb{R}}\left|P\left(\frac{\hat\gamma[\hat M]-\gamma[M]}{v(M)}\le t\right)-\Phi(t)\right|
&\le \sup_{t\in\mathbb{R}}\left|P\left(\frac{\hat\gamma[\hat M]-\gamma[M]}{v(M)}\le t,\ \hat M = M\right)-\Phi(t)\right| + \sup_{t\in\mathbb{R}} P\left(\frac{\hat\gamma[\hat M]-\gamma[M]}{v(M)}\le t,\ \hat M \ne M\right)\\
&= \sup_{t\in\mathbb{R}}\left|P\left(\frac{\hat\gamma[M]-\gamma[M]}{v(M)}\le t,\ \hat M = M\right)-\Phi(t)\right| + \sup_{t\in\mathbb{R}} P\left(\frac{\hat\gamma[\hat M]-\gamma[M]}{v(M)}\le t,\ \hat M \ne M\right)\\
&\le \sup_{t\in\mathbb{R}}\left|P\left(\frac{\hat\gamma[M]-\gamma[M]}{v(M)}\le t\right)-\Phi(t)\right| + 2P(\hat M \ne M).
\end{aligned}$$
Now we have
$$\hat\gamma(M) = f^\top G(\cdot,M)\hat\tau(M) = Q^{-1}f^\top G(\cdot,M)G(\cdot,M)^\top\hat Y = f[M]^\top\hat Y.$$
By Theorem 1 of Shi and Ding (2022), we have a Berry–Esseen bound with the true variance:
$$\sup_{t\in\mathbb{R}}\left|P\left(\frac{\hat\gamma(M)-\gamma[M]}{v(M)}\le t\right)-\Phi(t)\right| \le \frac{2C\sigma_w\,\|f[M]\|_\infty c^{-1}\max_{i\in[N],z\in[Q]}|Y_i(z)-\bar Y(z)|}{\|f[M]\|_2\sqrt{c^{-1}\min_{z\in[Q]}S(z,z)}\cdot\sqrt{N_0}}.$$

A crucial quantity that appears in Lemma S3 is the ratio of norms
$$\frac{\|f[M]\|_\infty}{\|f[M]\|_2}. \quad (S13)$$
The following Lemma S4 provides an explicit bound on (S13), which reveals how the ratio is controlled by the size of the working model.

Lemma S4. For $f[M]\ne 0$, we have
$$\frac{\|f[M]\|_\infty}{\|f[M]\|_2} \le \left(\frac{|M|}{Q}\right)^{1/2}. \quad (S14)$$

Proof of Lemma S4. Because the LHS of (S14) is a ratio, based on the definition of $f^\star$ (4.9) we can assume $\|f\|_2 = 1$ without loss of generality. Due to the orthogonality of $G$, we can use the columns of $G$ as bases and express $f$ as
$$f = \frac{1}{\sqrt{Q}}G(\cdot,M)b_1 + \frac{1}{\sqrt{Q}}G(\cdot,M^c)b_2,$$
where $b_1\in\mathbb{R}^{|M|}$, $b_2\in\mathbb{R}^{|M^c|}$ and $\|(b_1^\top, b_2^\top)^\top\|_2 = 1$. Then
$$f[M] = Q^{-1}G(\cdot,M)G(\cdot,M)^\top f = \frac{1}{\sqrt{Q}}G(\cdot,M)b_1.$$
Hence
$$\|f[M]\|_\infty \le \frac{1}{\sqrt{Q}}\|b_1\|_1, \qquad \|f[M]\|_2 = \|b_1\|_2, \qquad \frac{\|f[M]\|_\infty}{\|f[M]\|_2} \le \frac{1}{\sqrt{Q}}\cdot\frac{\|b_1\|_1}{\|b_1\|_2} \le \left(\frac{|M|}{Q}\right)^{1/2}.$$
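As a quick sanity check of (S14), the following sketch builds a $\pm 1$ contrast matrix $G$ with $G^\top G = QI_Q$ (the matrix of a $2^4$ factorial, used here purely as an illustrative assumption), projects a random unit-norm $f$ onto a few working models, and compares the norm ratio with $\sqrt{|M|/Q}$.

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(1)

# +-1 contrast matrix of a 2^K full factorial: G^T G = Q * I_Q with Q = 2^K
K = 4
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
G = reduce(np.kron, [H2] * K)          # Q x Q matrix with entries +-1
Q = G.shape[0]

f = rng.normal(size=Q)
f /= np.linalg.norm(f)                 # WLOG ||f||_2 = 1, as in the proof

for m in (1, 3, 6, 10):
    M = rng.choice(Q, size=m, replace=False)        # a working model with |M| = m columns
    fM = G[:, M] @ (G[:, M].T @ f) / Q              # f[M] = Q^{-1} G(.,M) G(.,M)^T f
    ratio = np.abs(fM).max() / np.linalg.norm(fM)
    print(m, round(ratio, 4), "<=", round(np.sqrt(m / Q), 4))
```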
C.2 Proof of Theorem S2

Proof of Theorem S2. According to the orthogonality of the designs, the signs of all terms in the studied unsaturated population regressions are consistent with those of the saturated regressions, which saves the effort of differentiating true models for partial and full regression.

We introduce several key events that will play a crucial role in the proof: for $D_0\in[D]$, define

Under-selection: $E_u(D_0) = \{\hat M_d \subset M^\star_d,\ d\in[D_0]\}$;

Strict under-selection: $E_{su}(D_0) = \{\hat M_d \subset M^\star_d,\ d\in[D_0];\ \text{there exists } d\in[D_0] \text{ with } \hat M_d \subsetneq M^\star_d\}$.

High-level idea of the proof. To prove screening consistency, we will prove two facts:
$$P\{E_u(D)\ \text{holds}\} \to 1, \qquad P\{E_{su}(D)\ \text{holds}\} \to 0.$$
Combining these two results, we can conclude asymptotic screening consistency. We start from the strict under-selection probability.

Step 1: Prove that asymptotically, there is no strict under-selection. By definition,
$$P\{E_{su}(1)\ \text{holds}\} = P\left(\hat M_1 \subsetneq M^\star_1\right) \le P\left(\hat M_1^c \cap M^\star_1 \ne \emptyset\right).$$
We now derive a recursive bound for $P\{E_{su}(D_0+1)\ \text{holds}\}$, where $1\le D_0\le D-1$. We have the decomposition
$$E_{su}(D_0+1) = \{\hat M_d \subset M^\star_d,\ d\le D_0+1\} - \{\hat M_d = M^\star_d,\ d\le D_0+1\} = E_{su,1}(D_0+1)\cup E_{su,2}(D_0+1),$$
where
$$E_{su,1}(D_0+1) = \{\hat M_d \subset M^\star_d,\ d\le D_0+1\} - \{\hat M_d = M^\star_d,\ d\le D_0;\ \hat M_{D_0+1}\subset M^\star_{D_0+1}\},$$
$$E_{su,2}(D_0+1) = \{\hat M_d = M^\star_d,\ d\le D_0;\ \hat M_{D_0+1}\subset M^\star_{D_0+1}\} - \{\hat M_d = M^\star_d,\ d\le D_0+1\}.$$
For $E_{su,1}(D_0+1)$, we have
$$P\{E_{su,1}(D_0+1)\ \text{holds}\} \le P\left(\forall d\in[D_0+1],\ \hat M_d\subset M^\star_d;\ \exists d\in[D_0],\ \hat M_d\subsetneq M^\star_d\right) \le P\left(\forall d\in[D_0],\ \hat M_d\subset M^\star_d;\ \exists d\in[D_0],\ \hat M_d\subsetneq M^\star_d\right) = P\{E_{su}(D_0)\ \text{holds}\}. \quad (S15)$$
For $E_{su,2}(D_0+1)$, we notice that $\hat M_{D_0+1}$ is generated based on $\hat M_{D_0}$ and the set of estimates over the prescreened effect set $\hat M_{D_0+1,+}$. Under Assumption 2, on the event $\{\hat M_d = M^\star_d,\ d\le D_0\}$ we have $\hat M_{D_0+1,+} = M^\star_{D_0+1,+}$. Hence we can compute
$$P\{E_{su,2}(D_0+1)\ \text{holds}\} = P\left(\hat M_d = M^\star_d,\ d\le D_0;\ \hat M_{D_0+1}\subsetneq M^\star_{D_0+1}\right) \le P\left(\hat M^c_{D_0+1}\cap M^\star_{D_0+1}\ne\emptyset\right). \quad (S16)$$
Now (S15) and (S16) together suggest that
$$P\{E_{su}(D_0+1)\ \text{holds}\} \le P\{E_{su}(D_0)\ \text{holds}\} + P\left(\hat M^c_{D_0+1}\cap M^\star_{D_0+1}\ne\emptyset\right) \le \cdots \le \sum_{d=1}^{D_0+1} P\left(\hat M^c_d\cap M^\star_d\ne\emptyset\right). \quad (S17)$$
Taking $D_0 = D-1$ in (S17) and applying Assumption 1, we conclude $P\{E_{su}(D)\ \text{holds}\}\to 0$.

Step 2: Prove the first part of Theorem S2 and give a probability bound for under-selection. We compute the probability of under-selection:
$$P\{E_u(D)\ \text{fails}\} = P\{E_u(1)\ \text{fails}\} + \sum_{D_0=2}^{D} P\{E_u(D_0-1)\ \text{holds};\ E_u(D_0)\ \text{fails}\} = P\{E_u(1)\ \text{fails}\}\ (\triangleq \circledast_1) + \sum_{D_0=2}^{D} P\{E_p(D_0-1)\ \text{holds};\ E_u(D_0)\ \text{fails}\}\ (\triangleq \circledast_2) + \sum_{D_0=2}^{D} P\{E_{su}(D_0-1)\ \text{holds};\ E_u(D_0)\ \text{fails}\}\ (\triangleq \circledast_3),$$
where $E_p(D_0-1)$ denotes the perfect-selection event $\{\hat M_d = M^\star_d,\ d\in[D_0-1]\}$.

For $\circledast_1$, by the definition of $E_u(1)$ we have
$$\circledast_1 = P\{E_u(1)\ \text{fails}\} = P\left(\hat M_1\cap M^{\star c}_1\ne\emptyset\right). \quad (S18)$$
For $\circledast_2$, we have
$$\circledast_2 \le \sum_{D_0=2}^{D} P\left(\hat M_d = M^\star_d,\ d\in[D_0-1];\ \hat M_{D_0}\cap M^{\star c}_{D_0}\ne\emptyset\right) \le \sum_{D_0=2}^{D} P\left(\hat M_{D_0}\cap M^{\star c}_{D_0}\ne\emptyset\right), \quad (S19)$$
which is because on the given event, $\hat M_{D_0,+} = H(\hat M_{D_0-1}) = H(M^\star_{D_0-1}) = M^\star_{D_0,+}$, so that $\hat M_{D_0} = \hat S(\hat M_{D_0,+}) = \hat S(M^\star_{D_0,+})$.
From (S18) and (S19),
$$\limsup_{N\to\infty}(\circledast_1+\circledast_2) \le \limsup_{N\to\infty}\sum_{D_0=1}^{D} P\left(\hat M_{D_0}\cap M^{\star c}_{D_0}\ne\emptyset\right) \le \sum_{D_0=1}^{D}\alpha_{D_0} = \alpha \quad \text{(by Assumption 1).} \quad (S20)$$
For $\circledast_3$, we have
$$\circledast_3 \le \sum_{D_0=2}^{D} P\{E_{su}(D_0-1)\ \text{holds}\} \le \sum_{D_0=2}^{D}\sum_{d=1}^{D_0-1} P\left(\hat M^c_d\cap M^\star_d\ne\emptyset\right) = \sum_{d=1}^{D-1}(D-d)P\left(\hat M^c_d\cap M^\star_d\ne\emptyset\right) \to 0, \quad (S21)$$
where the second inequality uses (S17) and the convergence uses Assumption 1. Therefore, by (S20) and (S21), the probability of failure of under-selection is controlled under $\alpha$ asymptotically. As a side product, we obtain the finite-sample bound
$$P\{E_u(D)\ \text{fails}\} \le \sum_{D_0=1}^{D} P\left(\hat M_{D_0}\cap M^{\star c}_{D_0}\ne\emptyset\right) + \sum_{d=1}^{D-1}(D-d)P\left(\hat M^c_d\cap M^\star_d\ne\emptyset\right).$$

Step 3: Prove the second part of Theorem S2 and conclude screening consistency. Under $\alpha = \alpha(N)\to 0$, the first part of the result implies that with probability tending to one we have under-selection: $P\{E_u(D)\ \text{holds}\}\to 1$. By (S17) and Assumption 1, strict under-selection will not happen with high probability: $P\{E_{su}(D)\ \text{holds}\}\to 0$. Therefore, we conclude the consistency of the screening procedure.

C.3 Proof of Theorem 1

We state and prove a more general version of Theorem 1:

Theorem S3 (Bonferroni-corrected marginal t-test). Let $\hat M_d = \hat S(M^\star_{d,+})$ where $M^\star_{d,+} = P(M^\star_{d-1})$. Assume Conditions 1, 2, 3 and 4. Then we have the following results for the screening procedure based on the Bonferroni-corrected marginal t-test:

(i) (Validity) $\limsup_{N\to\infty}\sum_{d=1}^{D} P(\hat M_d\cap M^{\star c}_d\ne\emptyset) \le \sum_{d=1}^{D}\alpha_d = \alpha$.

(ii) (Consistency) $\limsup_{N\to\infty} D\sum_{d=1}^{D} P(\hat M^c_d\cap M^\star_d\ne\emptyset) = 0$.
(iii) (Type I error control) Overall, the procedure achieves type I error rate control:
$$\limsup_{N\to\infty} P\left(\hat M\cap\left(\cup_{d=1}^{D}M^\star_d\right)^c\ne\emptyset\right) \le \alpha.$$

(iv) (Perfect screening) When $\delta'$ is strictly positive, we have $\max_{d\in[D]}\alpha_d\to 0$ and
$$\lim_{N\to\infty} P\left(\hat M = \cup_{d=1}^{D}M^\star_d\right) = 1.$$

Parts (i) and (ii) of Theorem 1 justify Assumptions 1 and 2, respectively, which build the basis for applying Theorem S2. Part (iii) guarantees type I error control under the significance level $\alpha$. When we let $\alpha$ decay to zero, Part (iii) implies that we will not include redundant terms in the selected working model. Part (iv) further states a stronger result with vanishing $\alpha$: perfect selection can be achieved asymptotically.
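To make the selection rule analyzed here concrete, the following is a minimal sketch of one level of Bonferroni-corrected marginal t-test screening. The inputs `tau_hat` and `se` (estimates and standard errors for the $m$ candidate effects in the prescreened set) and the example values are hypothetical; the sketch only illustrates the thresholding at $\Phi^{-1}(1-\alpha_d/(2m))$.

```python
from statistics import NormalDist
from typing import Dict, Set

def screen_step(tau_hat: Dict[str, float], se: Dict[str, float], alpha_d: float) -> Set[str]:
    """One level of Bonferroni-corrected marginal t-test screening.

    Keeps effect K if |tau_hat[K] / se[K]| >= Phi^{-1}(1 - alpha_d / (2 m)),
    where m is the number of candidate (prescreened) effects at this level.
    """
    m = len(tau_hat)
    threshold = NormalDist().inv_cdf(1.0 - alpha_d / (2.0 * m))
    return {K for K in tau_hat if abs(tau_hat[K] / se[K]) >= threshold}

# Hypothetical candidate effects at one level of the hierarchy
tau_hat = {"A": 1.9, "B": 0.2, "AB": -1.4}
se = {"A": 0.3, "B": 0.3, "AB": 0.3}
print(screen_step(tau_hat, se, alpha_d=0.05))   # keeps the effects with large |t|-statistics
```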
Proof of Theorem 1. (i) First, we show validity:
$$\begin{aligned}
P\left(\hat M_d\cap M^{\star c}_d\ne\emptyset\right) &= P\left(\exists K\in M^\star_{d,+}\setminus M^\star_d:\ \left|\frac{\hat\tau_K}{\hat v_{K,r}}\right| \ge \Phi^{-1}\left(1-\frac{\alpha_d}{2|M^\star_{d,+}|}\right)\right)\\
&\le \sum_{K\in M^\star_{d,+}\setminus M^\star_d} P\left(\left|\frac{\hat\tau_K}{\hat v_{K,r}}\right| \ge \Phi^{-1}\left(1-\frac{\alpha_d}{2|M^\star_{d,+}|}\right)\right)\\
&\le \sum_{K\in M^\star_{d,+}\setminus M^\star_d}\left(\frac{\alpha_d}{|M^\star_{d,+}|} + \frac{\tilde C}{(QN_0)^{1/3}}\right) \quad \text{(by Corollary S2)}\\
&\le \alpha_d + \frac{\tilde C|M^\star_{d,+}|}{N^{1/3}}.
\end{aligned}$$
Hence,
$$\sum_{d=1}^{D} P\left(\hat M_d\cap M^{\star c}_d\ne\emptyset\right) \le \sum_{d=1}^{D}\left(\alpha_d + \frac{\tilde C|M^\star_{d,+}|}{N^{1/3}}\right).$$
Due to the effect heredity Condition 4, we have $|M^\star_{1,+}| = |M^\star_1|$ and $|M^\star_{d,+}| \le K|M^\star_{d-1}|$. Hence
$$\limsup_{N\to\infty}\sum_{d=1}^{D} P\left(\hat M_d\cap M^{\star c}_d\ne\emptyset\right) \le \alpha + \limsup_{N\to\infty}\frac{K\tilde C|M^\star|}{N^{1/3}} = \alpha \quad \text{(using Condition 2(iii)).}$$

(ii) Second, we show consistency. Assume the nonzero $\tau_K$'s are positive; if some are negative, one can simply modify the direction of some of the inequalities and the proof remains valid.
$$\begin{aligned}
P\left(\hat M^c_d\cap M^\star_d\ne\emptyset\right) &= P\left(\exists K\in M^\star_d:\ \left|\frac{\hat\tau_K}{\hat v_{K,r}}\right| \le \Phi^{-1}\left(1-\frac{\alpha_d}{2|M^\star_{d,+}|}\right)\right)\\
&\le \sum_{K\in M^\star_d} P\left(\left|\frac{\hat\tau_K}{\hat v_{K,r}}\right| \le \Phi^{-1}\left(1-\frac{\alpha_d}{2|M^\star_{d,+}|}\right)\right)\\
&\le \sum_{K\in M^\star_d} P\left(\left|\frac{\hat\tau_K}{v_{K,r}}\right| \le \frac{\hat v_{K,r}}{v_{K,r}}\Phi^{-1}\left(1-\frac{\alpha_d}{2|M^\star_{d,+}|}\right)\right)\\
&\le \sum_{K\in M^\star_d}\left[ P\left(\left|\frac{\hat\tau_K}{v_{K,r}}\right| \le \left(1+\frac{\tilde C}{(QN_0)^{1/3}}\right)\Phi^{-1}\left(1-\frac{\alpha_d}{2|M^\star_{d,+}|}\right)\right) + P\left(\frac{\hat v_{K,r}}{v_{K,r}} > 1+\frac{\tilde C}{(QN_0)^{1/3}}\right)\right].
\end{aligned}$$
For simplicity, let $Z^\star_d = \Phi^{-1}\left(1-\frac{\alpha_d}{2|M^\star_{d,+}|}\right)$. Then
$$P\left(\hat M^c_d\cap M^\star_d\ne\emptyset\right) \le \sum_{K\in M^\star_d}\left[P\left(-Z^\star_d - \frac{\tau_K}{v_{K,r}} \le \frac{\hat\tau_K-\tau_K}{v_{K,r}} \le Z^\star_d - \frac{\tau_K}{v_{K,r}}\right) + \frac{\tilde C}{(QN_0)^{1/3}}\right] \le \sum_{K\in M^\star_d}\left\{\Phi\left(r_K^{-1}\left(Z^\star_d - \frac{\tau_K}{v_{K,r}}\right)\right) - \Phi\left(r_K^{-1}\left(-Z^\star_d - \frac{\tau_K}{v_{K,r}}\right)\right)\right\}\ (\triangleq \circledast) + \frac{\tilde C|M^\star_d|}{(QN_0)^{1/3}}.$$
With Condition 2, we have
$$Z^\star_d = \Theta\left(\sqrt{2\ln\frac{2|M^\star_{d,+}|}{\alpha_d}}\right) = \Theta\left(\sqrt{(\delta'+\delta''/3)\ln N}\right), \qquad \left|\frac{\tau_K}{v_{K,r}}\right| = \Theta(N^{1/2+\delta}) = \Theta(N^{\delta_0}) \quad \text{(by defining } \delta_0 = 1/2+\delta > 0\text{).}$$
Because $\delta > -1/2$ and $\delta' \ge 0$, we have $|\tau_K/v_{K,r}|\to\infty$ and $Z^\star_d/|\tau_K/v_{K,r}|\to 0$. Therefore,
$$\Phi\left(r_K^{-1}\left(Z^\star_d - \frac{\tau_K}{v_{K,r}}\right)\right) - \Phi\left(r_K^{-1}\left(-Z^\star_d - \frac{\tau_K}{v_{K,r}}\right)\right) = \Theta\left(N^{-\delta_0}\exp\{-N^{2\delta_0}/2\}\right).$$
Now applying Condition 2 again, we have
$$D\sum_{d=1}^{D} P\left(\hat M^c_d\cap M^\star_d\ne\emptyset\right) = \Theta\left(D|M^\star|N^{-\delta_0}\exp\{-N^{2\delta_0}/2\} + D|M^\star|/N^{1/3}\right) = o(1).$$

(iii) The type I error rate control comes from Theorem S2.

(iv) The perfect selection result follows from Theorem S2.

C.4 Proof of Theorem 2

Theorem 2 is a direct result of Theorem 1, Lemma S2 and the following Berry–Esseen bound.

Lemma S5 (Berry–Esseen bound under perfect screening). Assume (S12). Then
$$\sup_{t\in\mathbb{R}}\left|P\left(\frac{\hat\gamma(\hat M)-\gamma}{v(M^\star)}\le t\right)-\Phi(t)\right| \le 2P(\hat M\ne M^\star) + \frac{2C\sigma_w\, c^{-1}\max_{i\in[N],z\in[Q]}|Y_i(z)-\bar Y(z)|}{\sqrt{c^{-1}\min_{z\in[Q]}S(z,z)}\cdot\sqrt{N_0}}\cdot\frac{\|f[M^\star]\|_\infty}{\|f[M^\star]\|_2}.$$

Proof of Lemma S5. This lemma is a direct application of Lemma S3. First we check that $\gamma(M^\star) = \gamma$. From the definition of $\gamma$ (S7), we have
$$\gamma = f^\top\bar Y = f^\top G\tau = f^\top G(\cdot,M^\star)\tau(M^\star) = Q^{-1}f^\top G(\cdot,M^\star)G(\cdot,M^\star)^\top\bar Y = \gamma(M^\star).$$
Now apply Lemma S3 with $M = M^\star$ to get the result of Theorem 2.

C.5 Statement and proof of Lemma S6

The following lemma gives the closed-form solution of the RLS estimator (4.8).

Lemma S6.
$\hat Y_r$ from (4.8) can be expressed as
$$\hat Y_r = Q^{-1}G(\cdot,\hat M)G(\cdot,\hat M)^\top\hat Y.$$
If $\hat M = M^\star$, then $E\{\hat Y_r\} = \bar Y$.

Proof of Lemma S6. Due to the orthogonality of $G$, we have the following decomposition:
$$\hat Y = Q^{-1}G(\cdot,\hat M)G(\cdot,\hat M)^\top\hat Y + Q^{-1}G(\cdot,\hat M^c)G(\cdot,\hat M^c)^\top\hat Y.$$
By the constraint in (4.8), we have
$$\|\hat Y - \mu\|^2 = \|Q^{-1}G(\cdot,\hat M^c)G(\cdot,\hat M^c)^\top\hat Y\|^2 + \|Q^{-1}G(\cdot,\hat M)G(\cdot,\hat M)^\top\hat Y - \mu\|^2,$$
which is minimized at $\hat\mu = \hat Y_r = Q^{-1}G(\cdot,\hat M)G(\cdot,\hat M)^\top\hat Y$. Besides, $\hat\mu$ satisfies the constraint in (4.8).

Next we verify that $E\{\hat Y_r\} = \bar Y$ if $\hat M = M^\star$. Utilizing the orthogonality of $G$ again, we have
$$\bar Y = Q^{-1}G(\cdot,M^\star)G(\cdot,M^\star)^\top\bar Y + Q^{-1}G(\cdot,M^{\star c})G(\cdot,M^{\star c})^\top\bar Y,$$
where the second term equals $G(\cdot,M^{\star c})\tau(M^{\star c}) = 0$ because the effects outside $M^\star$ are zero. Hence
$$E\{\hat Y_r\} = Q^{-1}G(\cdot,M^\star)G(\cdot,M^\star)^\top E\{\hat Y\} = Q^{-1}G(\cdot,M^\star)G(\cdot,M^\star)^\top\bar Y = \bar Y.$$
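The closed form in Lemma S6 is simply a projection of the estimated arm means onto the selected contrast columns, which the following sketch illustrates numerically. The $2^3$ contrast matrix, the vector `Y_hat`, and the selected index set `M_hat` are illustrative assumptions, not quantities taken from the text.

```python
import numpy as np
from functools import reduce

# +-1 contrast matrix with G^T G = Q * I_Q (a 2^3 factorial, for illustration)
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
G = reduce(np.kron, [H2] * 3)
Q = G.shape[0]

rng = np.random.default_rng(2)
Y_hat = rng.normal(size=Q)            # estimated arm means (illustrative values)
M_hat = [0, 1, 2, 4]                  # indices of the selected (screened) effects

# Lemma S6: restricted fit Y_r = Q^{-1} G(.,M) G(.,M)^T Y_hat
GM = G[:, M_hat]
Y_r = GM @ (GM.T @ Y_hat) / Q

# Equivalently, refit by least squares on the selected columns only
coef, *_ = np.linalg.lstsq(GM, Y_hat, rcond=None)
print(np.allclose(Y_r, GM @ coef))    # True: the projection and the restricted fit agree
```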
C.6 Proof of Proposition 1

Proof of Proposition 1. (i) Based on the definitions of $v_r^2$ and $v^2$, we have
$$\frac{v_r^2}{v^2} = \frac{f^{\star\top}V_{\hat Y}f^\star}{f^\top V_{\hat Y}f} = \frac{\|f^\star\|_2^2}{\|f\|_2^2} \quad \text{because } \kappa(V_{\hat Y}) = 1.$$
We further compute
$$\frac{\|f^\star\|_2^2}{\|f\|_2^2} = \frac{f^\top\{Q^{-1}G(\cdot,M^\star)G(\cdot,M^\star)^\top\}f}{f^\top f} \le 1,$$
where the inequality holds because of the dominance relationship $Q^{-1}G(\cdot,M^\star)G(\cdot,M^\star)^\top \preceq I_Q$.

(ii) Because the order of the nonzero elements in $f$ is not crucial here, we assume the first $s^\star$ coordinates of $f$ are nonzero while the rest are zero, without loss of generality. We can compute
$$\frac{v_r^2}{v^2} = \frac{f^{\star\top}V_{\hat Y}f^\star}{f^\top V_{\hat Y}f} \le \kappa(V_{\hat Y})\cdot\frac{\|f^\star\|_2^2}{\|f\|_2^2}. \quad (S22)$$
For $f^\star$ we have
$$\|f^\star\|_2 = \left\|Q^{-1}G(\cdot,M^\star)G(\cdot,M^\star)^\top f\right\|_2 = \left\|Q^{-1}G(\cdot,M^\star)G(\cdot,M^\star)^\top\sum_{s=1}^{s^\star}f(s)e_s\right\|_2 \le \sum_{s=1}^{s^\star}|f(s)|\left\|Q^{-1}G(\cdot,M^\star)G(\cdot,M^\star)^\top e_s\right\|_2 = \left(\frac{|M^\star|}{Q}\right)^{1/2}\sum_{s=1}^{s^\star}|f(s)| = \left(\frac{|M^\star|}{Q}\right)^{1/2}\|f\|_1.$$
Then we have
$$\frac{\|f^\star\|_2^2}{\|f\|_2^2} \le \frac{|M^\star|}{Q}\cdot\frac{\|f\|_1^2}{\|f\|_2^2} \le \frac{s^\star|M^\star|}{Q}. \quad (S23)$$
Combining (S22) and (S23), we conclude the result.

As an extension of Proposition 1, we compare the asymptotic lengths of confidence intervals in the following Proposition S1.

Proposition S1 (Asymptotic length of confidence interval comparison). Assume that both $\hat\gamma$ and $\hat\gamma_r$ converge to a normal distribution as the sample size tends to infinity. Assume the variance estimators are consistent:
$$N(\hat v^2 - v^2_{\lim}) = o_P(1), \qquad N(\hat v_r^2 - v^2_{r,\lim}) = o_P(1).$$
(i) If the condition number of $D_{\hat Y}$ satisfies $\kappa(D_{\hat Y}) = 1$, we have $v^2_{r,\lim}/v^2_{\lim} \le 1$.
(ii) Let $s^\star$ denote the number of nonzero elements in $f$. Then we have
$$\frac{v^2_{r,\lim}}{v^2_{\lim}} \le \kappa(D_{\hat Y})\cdot\frac{s^\star|M^\star|}{Q}.$$

C.7 Proof of Theorem 3

Proof of Theorem 3. According to Condition 5 and Theorem 1, with Strategy 1,
$$P\left(\hat M = \cup_{d=1}^{d^\star}M^\star_d\right) \to 1.$$
We will apply Lemma S5 with $M = M^\star = \cup_{d=1}^{d^\star}M^\star_d$. We only need to verify $\gamma = \gamma[M]$ under the orthogonality condition (5.14). We have
$$\gamma = f^\top\bar Y = f^\top G\tau = f^\top G(\cdot,M^\star)\tau(M^\star) + f^\top G(\cdot,M^{\star c})\tau(M^{\star c}).$$
Now by (5.14), $f^\top G(\cdot,M^{\star c}) = 0$. Hence
$$\gamma = Q^{-1}f^\top G\left(\cdot,\cup_{d=1}^{d^\star}M^\star_d\right)G\left(\cdot,\cup_{d=1}^{d^\star}M^\star_d\right)^\top\bar Y = \gamma[M].$$

C.8 Proof of Theorem 4

Proof of Theorem 4. This proof can be finished by applying Lemmas S3 and S4 with $M = M^\star$ and checking $\gamma[M^\star] = \gamma$, which is omitted here.

C.9 Proof of Proposition 1

Proof of Proposition 1. (i) Assume $V_{\hat Y} = Q^{-1}G\Lambda G^\top$ where $\Lambda$ is a diagonal matrix in $\mathbb{R}^{Q\times Q}$. We directly compute
$$\frac{v_r^2}{v^2} = \frac{f^{\star\top}V_{\hat Y}f^\star}{f^\top V_{\hat Y}f} = \frac{f^\top\{Q^{-1}G(\cdot,M^\star)G(\cdot,M^\star)^\top\}\{Q^{-1}G\Lambda G^\top\}\{Q^{-1}G(\cdot,M^\star)G(\cdot,M^\star)^\top\}f}{f^\top\{Q^{-1}G\Lambda G^\top\}f} = \frac{f^\top\{Q^{-1}G(\cdot,M^\star)\Lambda(M^\star,M^\star)G(\cdot,M^\star)^\top\}f}{f^\top\{Q^{-1}G\Lambda G^\top\}f} \le 1.$$

(ii) Because the order of the nonzero elements in $f^\star$ is not crucial, we assume only the first $s^\star$ elements of $f$ are nonzero. That is,
$$f = f_1e_1 + \cdots + f_{s^\star}e_{s^\star}. \quad (S24)$$
We can verify that
$$\left\|Q^{-1}G(\cdot,M^\star)G(\cdot,M^\star)^\top e_k\right\|_2^2 = \frac{|M^\star|}{Q}, \qquad \forall\, k\in[Q]. \quad (S25)$$
Therefore,
$$\frac{v_r^2}{v^2} = \frac{f^{\star\top}V_{\hat Y}f^\star}{f^\top V_{\hat Y}f} \le \frac{\varrho_{\max}(V_{\hat Y})\|f^\star\|_2^2}{\varrho_{\min}(V_{\hat Y})\|f\|_2^2} = \kappa(V_{\hat Y})\cdot\frac{\|f^\star\|_2^2}{\|f\|_2^2}.$$
On the one hand, using $Q^{-1}G(\cdot,M^\star)G(\cdot,M^\star)^\top \preceq I_Q$, we have
$$\frac{\|f^\star\|_2^2}{\|f\|_2^2} \le 1. \quad (S26)$$
On the other hand, using (S24) and (S25), we have
$$\frac{\|f^\star\|_2^2}{\|f\|_2^2} \le \frac{\|f\|_1^2}{\|f\|_2^2}\cdot\frac{|M^\star|}{Q} \le \frac{s^\star|M^\star|}{Q}. \quad (S27)$$
Combining (S26) and (S27) concludes the proof.
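For completeness, here is a short verification of (S25) under the orthogonality $G^\top G = QI_Q$ and the fact that every entry of $G$ is $\pm 1$ (the same structure used in the proofs above):
$$\left\|Q^{-1}G(\cdot,M^\star)G(\cdot,M^\star)^\top e_k\right\|_2^2 = Q^{-2}\,e_k^\top G(\cdot,M^\star)\,\{G(\cdot,M^\star)^\top G(\cdot,M^\star)\}\,G(\cdot,M^\star)^\top e_k = Q^{-1}\sum_{j\in M^\star}G(k,j)^2 = \frac{|M^\star|}{Q},$$
since $G(\cdot,M^\star)^\top G(\cdot,M^\star) = QI_{|M^\star|}$ and $G(k,j)^2 = 1$ for every entry.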
C.10 Proof of Theorem 5

For simplicity, we focus on the case given by (6.15). The general proof can be completed similarly. We begin with the following lemma.

Lemma S7 (Consistency of the selected tie sets). Assume Conditions 1, 3 and 6. There exist universal constants $C, C' > 0$ such that when $N > n(\delta_1,\delta_2,\delta_3)$, we have
$$P\left(\hat T_1 = T_1\right) \ge 1 - P\{\hat M\ne M^\star\} - C|T'||T_1|\left[\sqrt{\frac{\bar c\Delta|M^\star|}{N^{1+2\delta_2}}}\exp\left(-\frac{C'N^{1+2\delta_2}}{\bar c\Delta|M^\star|}\right) + \frac{\sigma c^{-1/2}\max_{i\in[N],z\in[Q]}|Y_i(z)-\bar Y(z)|}{c^{-1/2}\{\min_{z\in[Q]}S(z,z)\}^{1/2}}\sqrt{\frac{|M^\star|}{N}}\right].$$

Lemma S7 establishes a finite-sample bound that quantifies the performance of the tie-set selection step in Algorithm 2. The tail bound implies that the performance of tie selection depends on several elements:

Quality of effect screening. Ideally we hope perfect screening can be achieved; in other words, the misspecification probability $P\{\hat M\ne M^\star\}$ is small in an asymptotic sense.

Size of the tie $|T_1|$ and the number of factor combinations considered $|T'|$. These two quantities play a natural role, because one can expect the difficulty of selection to increase if too many combinations are present in the first tie or involved in the comparison.

Size of the between-group distance $d^\star_h$. If the gap between $\bar Y_{(1)}$ and the remaining ordered values is large, $\eta_N = \Theta(N^{\delta_2})$ is allowed to take larger values, and the term $\sqrt{\bar c\Delta|M^\star|/N^{1+2\delta_2}}\exp\{-C'N^{1+2\delta_2}/(\bar c\Delta|M^\star|)\}$ can become smaller in magnitude.

Population-level properties of the potential outcomes.
The scale of the centered potential outcomes $|Y_i(z)-\bar Y(z)|$ should be controlled, and the population variance $S(z,z)$ should be non-degenerate.

The relative scale between the number of nonzero effects $|M^\star|$ and the total number of units $N$. The larger $N$ is compared to $|M^\star|$, the easier it is to draw valid asymptotic conclusions.

Proof of Lemma S7. The high-level idea of the proof is as follows: we first prove the non-asymptotic bounds over the random event $\hat M = M^\star$, then make up for the cost of $\hat M\ne M^\star$. Over $\hat M = M^\star$, we have
$$\hat Y_r = \hat Y^\star_r = G(\cdot,M^\star)\hat\tau(M^\star) = Q^{-1}G(\cdot,M^\star)G(\cdot,M^\star)^\top\hat Y.$$
We apply Lemma S3 to establish a Berry–Esseen bound for each $\hat Y^\star_r(z)$. Note that
$$\hat Y^\star_r(z) = f_z^\top\hat Y, \qquad f_z^\top = Q^{-1}G(z,M^\star)G(\cdot,M^\star)^\top.$$
By calculation we have
$$\|f_z\|_\infty = Q^{-1}|M^\star|, \qquad \|f_z\|_2 = \sqrt{Q^{-1}|M^\star|}.$$
Also we can show that
$$\sum_{z'=1}^{Q} f_z(z')^2 N_{z'}^{-1}S(z',z') \le \sigma^2 v^2(M),$$
and obtain
$$\sup_{t\in\mathbb{R}}\left|P\left(\frac{\hat Y^\star_r(z)-\bar Y(z)}{v_N}\le t\right)-\Phi(t)\right| \le \frac{2C\sigma c^{-1}\max_{i\in[N],z\in[Q]}|Y_i(z)-\bar Y(z)|}{c^{-1/2}\{\min_{z\in[Q]}S(z,z)\}^{1/2}}\sqrt{\frac{|M^\star|}{QN_0}}.$$

A probabilistic bound on the order statistics. We show a bound on
$$P\left(\max_{z\in T'\setminus T_1}\hat Y^\star_r(z) < \min_{z\in T_1}\hat Y^\star_r(z) \le \max_{z\in T_1}\hat Y^\star_r(z)\right).$$
It is known that (Wainwright, 2019, Exercise 2.2)
$$1-\Phi(x) = \int_x^\infty \varphi(t)\,dt \le \frac{1}{x}\int_x^\infty t\varphi(t)\,dt \le \frac{1}{\sqrt{2\pi}x}\exp\left(-\frac{x^2}{2}\right).$$
Hence
$$P\left(\sqrt{N}\left|\hat Y^\star_r(z)-\bar Y(z)\right| \ge \sqrt{N}d^\star_h\right) \le \frac{v_N}{\sqrt{2\pi}d^\star_h}\exp\left(-\frac{d^{\star 2}_h}{2v_N^2}\right) + \frac{2C\sigma c^{-1}\max_{i\in[N],z\in[Q]}|Y_i(z)-\bar Y(z)|}{c^{-1/2}\{\min_{z\in[Q]}S(z,z)\}^{1/2}}\sqrt{\frac{|M^\star|}{N_0Q}}. \quad (S28)$$
Therefore, for all $z\in T'\setminus T_1$ and $z'\in T_1$,
$$\begin{aligned}
P\left(\hat Y^\star_r(z')-\hat Y^\star_r(z) < 0\right) &= P\left(\sqrt{N}\{\hat Y^\star_r(z')-\bar Y(z')\}-\sqrt{N}\{\hat Y^\star_r(z)-\bar Y(z)\} < \sqrt{N}\{\bar Y(z)-\bar Y(z')\}\right)\\
&\le P\left(\sqrt{N}\{\hat Y^\star_r(z')-\bar Y(z')\}-\sqrt{N}\{\hat Y^\star_r(z)-\bar Y(z)\} < -2\sqrt{N}d^\star_h\right)\\
&= P\left(\sqrt{N}\{\hat Y^\star_r(z')-\bar Y(z')\}-\sqrt{N}\{\hat Y^\star_r(z)-\bar Y(z)\} < -2\sqrt{N}d^\star_h,\ \sqrt{N}\{\hat Y^\star_r(z)-\bar Y(z)\} < \sqrt{N}d^\star_h\right)\\
&\quad + P\left(\sqrt{N}\{\hat Y^\star_r(z')-\bar Y(z')\}-\sqrt{N}\{\hat Y^\star_r(z)-\bar Y(z)\} < -2\sqrt{N}d^\star_h,\ \sqrt{N}\{\hat Y^\star_r(z)-\bar Y(z)\} \ge \sqrt{N}d^\star_h\right)\\
&\le P\left(\sqrt{N}\{\hat Y^\star_r(z')-\bar Y(z')\} < -\sqrt{N}d^\star_h\right) + P\left(\sqrt{N}\{\hat Y^\star_r(z)-\bar Y(z)\} \ge \sqrt{N}d^\star_h\right).
\end{aligned}$$
Using (S28) we have
$$P\left(\hat Y^\star_r(z')-\hat Y^\star_r(z) < 0\right) \le \frac{1}{d^\star_h}\sqrt{\frac{\bar c\Delta|M^\star|}{2\pi N_0Q}}\exp\left(-\frac{N_0Qd^{\star2}_h}{2\bar c\bar s|M^\star|}\right) + \frac{2C\sigma c^{-1}\max_{i\in[N],z\in[Q]}|Y_i(z)-\bar Y(z)|}{c^{-1/2}\{\min_{z\in[Q]}S(z,z)\}^{1/2}}\sqrt{\frac{|M^\star|}{N_0Q}}.$$
Now a union bound over all such pairs gives
$$P\left(\min_{z'\in T_1}\hat Y^\star_r(z') > \max_{z\in T'\setminus T_1}\hat Y^\star_r(z)\right) \ge 1 - |T_1||T'|\left[\frac{1}{d^\star_h}\sqrt{\frac{\bar c\bar s|M^\star|}{2\pi N_0Q}}\exp\left(-\frac{N_0Qd^{\star2}_h}{2\bar c\bar s|M^\star|}\right) + \frac{2C\sigma c^{-1}\max_{i\in[N],z\in[Q]}|Y_i(z)-\bar Y(z)|}{c^{-1/2}\{\min_{z\in[Q]}S(z,z)\}^{1/2}}\sqrt{\frac{|M^\star|}{N_0Q}}\right].$$
Now using that $d^\star_h = \Theta(N^{\delta_1})$, we have $Nd^{\star2}_h = \Theta(N^{1+2\delta_1})$ with $1+2\delta_1 > 0$. The first term in the bracket has order
$$\frac{1}{d^\star_h}\sqrt{\frac{\bar c\bar s|M^\star|}{2\pi N_0Q}}\exp\left(-\frac{N_0Qd^{\star2}_h}{2\bar c\bar s|M^\star|}\right) = \Theta\left(\sqrt{\frac{\bar c\bar s|M^\star|}{N^{1+2\delta_1}}}\exp\left(-\frac{C'N^{1+2\delta_1}}{\bar c\bar s|M^\star|}\right)\right),$$
where $C' > 0$ is a universal constant due to Condition 2. Note that $\delta_2 > \delta_1$. Thus when $N$ is large enough, we have
$$P\left(\min_{z'\in T_1}\hat Y^\star_r(z') > \max_{z\in T'\setminus T_1}\hat Y^\star_r(z)\right) \ge 1 - C|T_1||T'|\left[\sqrt{\frac{\bar c\bar s|M^\star|}{N^{1+2\delta_1}}}\exp\left(-\frac{C'N^{1+2\delta_1}}{\bar c\bar s|M^\star|}\right) + \frac{\sigma c^{-1}\max_{i\in[N],z\in[Q]}|Y_i(z)-\bar Y(z)|}{c^{-1/2}\{\min_{z\in[Q]}S(z,z)\}^{1/2}}\sqrt{\frac{|M^\star|}{N_0Q}}\right]. \quad (S29)$$
Nice separation. Consider the random index
$$\hat z \in \arg\max_{z\in T'}\hat Y^\star_r(z).$$
For any $\bar\epsilon > 0$,
$$\begin{aligned}
P\left(\min_{z\notin T_1}\left|\hat Y^\star_r(z)-\hat Y^\star_r(\hat z)\right|/\eta_N \ge 2\bar\epsilon\right) &\ge P\left(\min_{z\notin T_1,\, z'\in T_1}\left|\hat Y^\star_r(z)-\hat Y^\star_r(z')\right|/\eta_N \ge 2\bar\epsilon,\ \hat z\in T_1\right)\\
&\ge P\left(\min_{z\notin T_1,\, z'\in T_1}\left|\hat Y^\star_r(z)-\hat Y^\star_r(z')\right|/\eta_N \ge 2\bar\epsilon\right) + P\{\hat z\in T_1\} - 1\\
&\ge P\{\hat z\in T_1\} - \sum_{z\notin T_1,\, z'\in T_1} P\left(\left|\hat Y^\star_r(z)-\hat Y^\star_r(z')\right|/\eta_N \le 2\bar\epsilon\right).
\end{aligned} \quad (S30)$$
To proceed, we have the following tail bound:
$$\begin{aligned}
P\left(\left|\hat Y^\star_r(z)-\hat Y^\star_r(z')\right|/\eta_N \le 2\bar\epsilon\right) &= P\left(\left|\{\hat Y^\star_r(z)-\bar Y(z)\}-\{\hat Y^\star_r(z')-\bar Y(z')\}+\{\bar Y(z)-\bar Y(z')\}\right| \le 2\bar\epsilon\eta_N\right)\\
&\le P\left(\left|\bar Y(z)-\bar Y(z')\right| - \left|\hat Y^\star_r(z)-\bar Y(z)\right| - \left|\hat Y^\star_r(z')-\bar Y(z')\right| \le 2\bar\epsilon\eta_N\right)\\
&\le P\left(\left|\hat Y^\star_r(z)-\bar Y(z)\right| + \left|\hat Y^\star_r(z')-\bar Y(z')\right| \ge 2d^\star_h - 2\bar\epsilon\eta_N\right) \quad \text{(because } z\notin T_1 \text{ and } z'\in T_1\text{)}\\
&\le P\left(\left|\hat Y^\star_r(z)-\bar Y(z)\right| \ge d^\star_h - \bar\epsilon\eta_N\right) + P\left(\left|\hat Y^\star_r(z')-\bar Y(z')\right| \ge d^\star_h - \bar\epsilon\eta_N\right)
\end{aligned}$$
+page_content='¯c∆|M⋆| ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='√2πN0Q(d⋆ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='h − ϵηN) · exp ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='−N0Q(d⋆ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='h − ϵηN)2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='2¯c¯s|M⋆| ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='+2Cσc−1 maxi∈[N],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='z∈[Q] |Yi(z) − Y (z)| c−1/2{minz∈[Q] S(z,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' z)}1/2 � |M⋆| N0Q � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' (This is deduced analogously to the proof in the previous part) By the conditions we imposed in the theorem, we know that when N is large enough, d⋆ h − ¯ϵηN > d⋆ h/2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Hence, for N > N(δ1, δ2), we have � z /∈T1,z′∈T1 P � |�Y ⋆ r (z) − �Y ⋆ r (z′)|/ηN ≤ 2¯ϵ � ≤4|T1||T ′| � � 2¯c¯s|M⋆| √πN0Qd⋆ h exp � −N0Qd⋆2 h 8¯c¯s|M⋆| � + 2Cσc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| c−1/2{minz∈[Q] S(z, z)}1/2 � |M⋆| N0Q � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' S25 Combined with (S30), we have: P � min z /∈T1 |�Y ⋆ r (z) − �Y ⋆ r (�z)|/ηN ≥ 2¯ϵ � ≥P { �m ∈ T1} − 4|T1||T ′| � 2¯c¯s|M⋆| √πN0Qd⋆ h exp � −N0Qd⋆2 h 8¯c¯s|M⋆| � � �� � Term I − 4|T1||T ′|2Cσc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| c−1/2{minz∈[Q] S(z, z)}1/2 � |M⋆| N0Q � �� � Term II .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Analogous to the discussion in the previous part, when N is sufficiently large, we can show P � min z /∈T1 |�Y ⋆ r (z) − �Y ⋆ r (�z)|/ηN ≥ 2¯ϵ � ≥P { �m ∈ T1} − C|T1||T ′| �� ¯c¯s|M⋆| N1+2δ2 exp � −C′N1+2δ2 ¯c¯s|M⋆| � + σc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| c−1/2{minz∈[Q] S(z, z)}1/2 � |M⋆| N0Q � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Similarly we can show for any z ∈ T1 and ϵ > 0, P � max z∈T1 |�Y ⋆ r (z) − �Y ⋆ r (�z)|/ηN ≤ 2ϵ � ≥ P {�z ∈ T1} − � z̸=z′∈T1 P � |�Y ⋆ r (z) − �Y ⋆ r (z′)|/ηN > 2ϵ � .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Then we have for z ̸= z′ ∈ T1, P � |�Y ⋆ r (z) − �Y ⋆ r (z′)|/ηN > 2ϵ � ≤ P � |�Y ⋆ r (z) − Y (z)| ≥ ϵηN − dh � + P � |�Y ⋆ r (z′) − Y (z′)| ≥ ϵηN − dh � ≤ 4 � � ¯c¯s|M⋆| √2πN0Q(ϵηN − dh) · exp � −N0Q(ϵηN − dh)2 2¯c¯s|M⋆| � + 2Cσc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| c−1/2{minz∈[Q] S(z, z)}1/2 � |M⋆| N0Q � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' By the scaling of the parameters, when N0 is large enough N > N(δ2, δ3), ϵηN − dh > ϵηN/2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' That being said, P � |�Y ⋆ r (z) − �Y ⋆ r (z′)|/ηN > 2ϵ � ≤4 � � 2¯c¯s|M⋆| √πN0Q(ϵηN) · exp � −N0Q(ϵηN)2 8¯c¯s|M⋆| � + 2Cσc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| c−1/2{minz∈[Q] S(z, z)}1/2 � |M⋆| N0Q � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' S26 Hence we have: P � max z∈T1 |�Y ⋆ r (z) − �Y ⋆ r (�z)|/ηN ≤ 2ϵ � ≥P {�z ∈ T1} − 4|T1||T ′| � 2¯c¯s|M⋆| √πN0Q(ϵηN) · exp � −N0Q(ϵηN)⋆2 8¯c¯s|M⋆| � � �� � Term I − 4|T1||T ′|2Cσc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| c−1/2{minz∈[Q] S(z, z)}1/2 � |M⋆| N0Q � �� � Term II .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Again, by the conditions, we can show P � max z∈T1 |�Y ⋆ r (z) − �Y ⋆ r (�z)|/ηN ≤ 2ϵ � ≥P {�z ∈ T1} − C|T1||T ′| �� ¯c¯s|M⋆| N1+2δ2 exp � −C′N1+2δ2 ¯c¯s|M⋆| � + σc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| c−1/2{minz∈[Q] S(z, z)}1/2 � |M⋆| N0Q � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Applying (S29) we know that P{�zh ∈ T1} ≥1 − C|T ′||T1| �� ¯c¯s|M⋆| N1+2δ2 exp � −C′N1+2δ2 ¯c¯s|M⋆| � + σc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| c−1/2{minz∈[Q] S(z, z)}1/2 � |M⋆| N0Q � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Aggregating parts.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Aggregating all the results above, we can show that, when N is large enough, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=', N > n(δ1, δ2, δ3), P � max z∈T1 |�Y ⋆ r (z) − �Y ⋆ r (�z)| ≤ ϵηN, min z /∈T1 |�Y ⋆ r (z) − �Y ⋆ r (�z)| ≥ ¯ϵηN � ≥1 − C|T ′||T1| �� ¯c¯s|M⋆| N1+2δ2 exp � −C′N1+2δ2 ¯c¯s|M⋆| � + σc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| c−1/2{minz∈[Q] S(z, z)}1/2 � |M⋆| N0Q � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Bounding the factor level combination selection probability.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' From the formulated procedure, we have P � �T1 = T1 � =P � |�Yr(z) − max z∈T ′ �Yr(z)| ≤ ϵηN, for z ∈ T1;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' |�Yr(z) − max z∈T ′ �Yr(z)| > ϵηN, for z /∈ T1 � ≥P � |�Y ⋆ r (z) − max z∈T ′ �Y ⋆ r (z)| ≤ ϵηN, for z ∈ T1;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' S27 |�Y ⋆ r (z) − max z∈T ′ �Y ⋆ r (z)| > ϵηN, for z /∈ T1 � − P{ �M ̸= M⋆} =P � |�Y ⋆ r (z) − �Y ⋆ r (�zh)| ≤ ϵηN, for z ∈ T1;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' |�Y ⋆ r (z) − �Y ⋆ r (�zh)| > ϵηN, for z /∈ T1 � − P{ �M ̸= M⋆} (where we introduce random index �zh to record the position that achieves maximum) ≥P � |�Y ⋆ r (z) − �Y ⋆ r (�zh)| ≤ ϵηN, for z ∈ T1;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' |�Y ⋆ r (z) − �Y ⋆ r (�zh)| > ϵηN, for z /∈ T1 � − P{ �M ̸= M⋆} (simply using the fact that ϵ > ϵ) =P � max z∈T1 |�Y ⋆ r (z) − �Y ⋆ r (�zh)| ≤ ϵηN;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' min z /∈T1 |�Y ⋆ r (z) − �Y ⋆ r (�zh)| > ϵηN � − P{ �M ̸= M⋆} ≥1 − H0 � h=1 � 1 − P � max z∈T1 |�Y ⋆ r (z) − �Y ⋆ r (�zh)| ≤ ϵηN;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' min z /∈T1 |�Y ⋆ r (z) − �Y ⋆ r (�zh)| > ϵηN �� − P{ �M ̸= M⋆} ≥1 − P{ �M ̸= M⋆} −C|T ′||T1| �� ¯c¯s|M⋆| N1+2δ2 exp � −C′N1+2δ2 ¯c¯s|M⋆| � + σc−1 maxi∈[N],z∈[Q] |Yi(z) − Y (z)| c−1/2{minz∈[Q] S(z, z)}1/2 � |M⋆| N0Q � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Lemma S7 suggests that, under the conditions assumed in Theorem 5, we select the first tie set consistently as N → ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/W9FLT4oBgHgl3EQfTS_4/content/2301.12045v1.pdf'} +page_content=' Now Theorem 5 is a direct result of Lemma S5 and Lemma S7.' 
diff --git a/X9E3T4oBgHgl3EQfcApa/content/tmp_files/2301.04521v1.pdf.txt b/X9E3T4oBgHgl3EQfcApa/content/tmp_files/2301.04521v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..36f7c446fac0a0f80f184d645429cd7637015385
--- /dev/null
+++ b/X9E3T4oBgHgl3EQfcApa/content/tmp_files/2301.04521v1.pdf.txt
@@ -0,0 +1,713 @@
The 4th Conference on Innovation and Application of Science and Technology (CIASTECH 2021), Universitas Widyagama Malang, 15 Desember 2021
ISSN Cetak: 2622-1276, ISSN Online: 2622-1284
DETECTING DEPRESSION AND ANXIETY OF TWITTER USERS USING A BIDIRECTIONAL LSTM
Kuncahyo Setyo Nugroho1*), Ismail Akbar2), Affi Nizar Suksmawati3), Istiadi4)
1) Fakultas Ilmu Komputer, Universitas Brawijaya, Malang
2) Fakultas Sains dan Teknologi, UIN Maulana Malik Ibrahim, Malang
3) Fakultas Matematika dan Ilmu Pengetahuan Alam, Universitas Gadjah Mada, Yogyakarta
4) Fakultas Teknik, Universitas Widyagama Malang, Malang
*Corresponding email: ksnugroho26@gmail.com

ABSTRACT
The most common mental disorders experienced by a person in daily life are depression and anxiety. Social stigma causes people with depression and anxiety to be neglected by their surroundings, so they turn to social media such as Twitter for support. Detecting users with potential depression and anxiety disorders from textual data is not easy because they do not explicitly discuss their mental state. A model is needed that can identify users potentially experiencing depression and anxiety from textual data so that they can receive treatment earlier; text classification techniques can achieve this. One approach that can be used is the LSTM, a development of the RNN architecture for dealing with the vanishing gradient problem.
A standard LSTM does not capture enough information because it can only read a sentence in one direction. Bidirectional LSTM (BiLSTM), in contrast, is a two-directional LSTM that can capture information without ignoring the context and meaning of a sentence. The proposed BiLSTM model outperforms all of the traditional machine learning models and the standard LSTM. Based on the test results, the highest accuracy obtained by BiLSTM reached 94.12%. This study has succeeded in developing a model for the detection of depression and anxiety in Twitter users.
Keywords: depression and anxiety, deep learning, RNN, BiLSTM

INTRODUCTION
A mental disorder is defined as a syndrome clinically characterized by disturbed emotion regulation or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning [1]. Mental disorders cause suffering that can hinder a person's activities [2]. Beyond the direct impact of mental disorders, the social stigma that a mental disorder is an incurable mental illness causes sufferers to be ignored by their surroundings and to avoid the treatment they need [3]. The most common mental disorders are depression and anxiety [4]. Early diagnosis and treatment are essential and must be carried out in a timely manner [5]. However, for people with depression and anxiety, seeking proper treatment requires great courage and strength. At the same time, the stigma around mental disorders pushes people with depression and anxiety toward online resources such as the social media platform Twitter to seek support [6]. A model is therefore needed that can automatically recognize a person's potential depression and anxiety, enabling proper diagnosis and treatment at an earlier stage [7].
Detection of depression and anxiety from textual data has previously been performed using a Support Vector Machine (SVM) compared against Bidirectional Encoder Representations from Transformers (BERT) and A Lite BERT (ALBERT) [8]; the best performance was obtained by BERT, with an accuracy of 75%. Another study used Naïve Bayes (NB) and Support Vector Regression (SVR) [9]. Tests on 3,754 tweets showed that SVR achieved a better accuracy than NB, at 79.7%. The results were also compared with K-Means Clustering and SVM; SVM achieved an accuracy of 78.8%, better than NB but still below SVR. Similar work on text classification has been carried out using a Bidirectional LSTM (BiLSTM) [10]; compared with RNN, CNN, LSTM, and NB, the highest precision, recall, and F1-score were obtained by BiLSTM. Although an RNN is effective at extracting semantic information between words, it cannot handle the vanishing gradient problem on long sentences. Long Short-Term Memory (LSTM) can overcome the vanishing gradient problem, but only to a certain extent, because it reads information in a single direction. BiLSTM was therefore proposed to address the vanishing gradient while reading information from two directions [11].
Research on detecting depression and anxiety of Indonesian-language Twitter users has not been carried out before. This study therefore aims to predict depression and anxiety from textual data using BiLSTM. BiLSTM is proposed because it can extract contextual information more effectively with a two-directional approach, so that the meaning and context of a sentence are not lost. To evaluate model performance, BiLSTM is compared with several traditional machine learning methods, namely k-Nearest Neighbor (k-NN), Support Vector Machine (SVM), Decision Tree Classifier (DT), Naïve Bayes (NB), and Multi-Layer Perceptron (MLP). In addition, a standard LSTM architecture is compared with the proposed method.

Figure 1. Research framework (data collection via the Twitter API, a tweet database, text preprocessing, model training, and model performance evaluation, ending in a prediction of either "normal" or "potentially depressed/anxious")

Figure 2. Distribution of tweets by label in the dataset (1,857 "otherwise" tweets versus 894 "anxiety/depression" tweets)

Table 1. Data samples in the dataset
Index | Tweet | Label
5 | ngga enak bgt akhir2 ini rasanya, sering cemas berlebihan | 1
126 | Gak tau kenapa perasaan aku sedih gelisah y | 1
273 | Sedikit cemas banyak rindunya.... | 0
1789 | dulu dipaksa untuk menjadi yang paling cemas, sekarang terpaksa untuk jadi yang paling ikhlas | 0

RESEARCH METHOD
This study consists of four main steps: dataset collection, text preprocessing, model training, and model performance evaluation, as shown in the research framework in Figure 1.

Dataset
This study uses a dataset obtained from the social media platform Twitter that was annotated previously [12]. The dataset contains 2,751 Indonesian-language tweets categorized into two labels. Label 1 indicates that the user's tweet shows potential anxiety, restlessness, or depression, while label 0 indicates the opposite. Label 0 consists of 1,857 tweets and label 1 consists of 894 tweets. The dataset has an imbalanced class distribution, as shown in Figure 2, where label 0 has more tweets than label 1. Data samples for each label are shown in Table 1.

Bidirectional LSTM
Long Short-Term Memory (LSTM) [13] is a development of the Recurrent Neural Network (RNN) architecture [14] that handles the vanishing gradient problem, in which the slope of the loss function decreases exponentially when processing long sequential data [15]. This problem causes RNNs to fail to capture long-term dependencies [16], which can reduce prediction performance [17]. LSTM replaces the RNN layer with a memory cell block that uses a gating mechanism consisting of a forget gate, an input gate, and an output gate [11]. As with an RNN, an LSTM is composed of neurons that are processed recurrently. The structure of a single neuron in an LSTM is shown in Figure 3.

Figure 3. A single neuron in the LSTM architecture (forget, input, and output gates operating on the cell state c(t) and hidden state h(t))
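Before the gates are formalized in equations (1)-(6) below, the following minimal NumPy sketch spells out what the single LSTM neuron of Figure 3 computes at one time step. The hidden size, input size, and random weights are illustrative assumptions only; in the experiments reported later these computations are handled internally by Keras.

```python
# One LSTM cell step, written out with NumPy to mirror Figure 3 and equations (1)-(6).
# Hidden size, input size, and the random weights are illustrative assumptions.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One time step: returns the new hidden state h_t and cell state c_t.

    W is a dict of weight matrices acting on the concatenation [h_prev, x_t];
    b is a dict of bias vectors, one per gate (f, i, o) plus the candidate (c).
    """
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])       # forget gate, eq. (1)
    i_t = sigmoid(W["i"] @ z + b["i"])       # input gate, eq. (2)
    c_hat = np.tanh(W["c"] @ z + b["c"])     # candidate values, eq. (3)
    c_t = f_t * c_prev + i_t * c_hat         # cell-state update, eq. (4)
    o_t = sigmoid(W["o"] @ z + b["o"])       # output gate, eq. (5)
    h_t = o_t * np.tanh(c_t)                 # hidden state, eq. (6)
    return h_t, c_t

# Toy usage with assumed sizes: 4-dimensional inputs, 3-dimensional hidden state.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
W = {k: rng.normal(size=(n_hid, n_hid + n_in)) for k in "fico"}
b = {k: np.zeros(n_hid) for k in "fico"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):         # a length-5 toy sequence
    h, c = lstm_step(x, h, c, W, b)
```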
The forget gate is the first gate in the LSTM and determines which information is kept or discarded from the cell state. This gate receives the inputs h_{t-1} and x_t and produces a value between 0 and 1 for f_t, as described in equation (1). When the forget gate outputs 1, the cell state keeps the information, whereas an output of 0 discards the information from the cell state. Increasing the forget-gate bias b_f can improve LSTM performance [18].

f_t = σ(W_f · [h_{t-1}, x_t] + b_f)   (1)

The input gate is the second gate in the LSTM and determines what information will be stored in the cell state. This gate consists of a sigmoid layer and a tanh layer. The sigmoid layer decides which values will be updated, as described in equation (2). The tanh layer creates a new candidate value C̃_t to be added to the cell state, as described in equation (3). The outputs of these two layers are combined to update the cell-state information.

i_t = σ(W_i · [h_{t-1}, x_t] + b_i)   (2)
C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C)   (3)

The next step updates the old cell state C_{t-1} to C_t by multiplying the old cell state by f_t, which removes the values selected by the forget gate, and then adding i_t · C̃_t as the new values, as described in equation (4).

C_t = f_t · C_{t-1} + i_t · C̃_t   (4)

The output gate is the last gate in the LSTM and determines the output of the cell state. First, a sigmoid layer determines which part of the cell state becomes the output, as described in equation (5). That output is then passed through a tanh layer and multiplied by the sigmoid layer so that the output matches what was decided previously, as described in equation (6).

Figure 4. The proposed BiLSTM architecture (sequential input processed by a forward LSTM and a backward LSTM, their outputs concatenated and passed to a fully-connected layer with a sigmoid output that produces the prediction)

o_t = σ(W_o · [h_{t-1}, x_t] + b_o)   (5)
h_t = o_t · tanh(C_t)   (6)

One weakness of the LSTM is that it does not sufficiently account for information from the last words of a sentence, because it only reads the sentence in one direction, from beginning to end [19]. We therefore use a bidirectional LSTM (BiLSTM) to read sentences from two directions at once: beginning to end and end to beginning. Technically, BiLSTM applies two separate LSTMs, one for the forward direction and one for the backward direction. The two hidden states h_t^forward and h_t^backward are concatenated into the final hidden state h_t^BiLSTM, as described in equation (7). The BiLSTM architecture we propose is presented in Figure 4.
h_t^BiLSTM = h_t^forward ⊕ h_t^backward   (7)

Model Performance Evaluation
A confusion matrix can be used to assess model performance by calculating the ratio of correct and incorrect predictions and by identifying the type of error. A true positive (TP) is a positive class predicted correctly; for example, a user with potential anxiety is predicted as anxious. A true negative (TN) is a negative class predicted correctly; for example, a user without potential anxiety is predicted as not anxious. A false positive (FP) is a negative class predicted as positive; for example, a user without anxiety is predicted as potentially anxious. A false negative (FN) is a positive class predicted as negative; for example, a user with anxiety is predicted as not potentially anxious.
The metric most frequently used to evaluate a model based on the confusion matrix is accuracy. Accuracy is the ratio of correct predictions (TP and TN) to all data, describing how close the predictions are to the true values, as described in equation (8). The problem with an imbalanced data distribution is that there are more negative samples than positive ones. We therefore use two additional metrics, precision and recall. Precision is the ratio of true positive predictions (TP) to all data predicted as positive, as described in equation (9), while recall is the ratio of true positive predictions (TP) to all data that are actually positive, as described in equation (10).

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (8)
Precision = TP / (TP + FP)   (9)
Recall = TP / (TP + FN)   (10)

RESULTS AND DISCUSSION
The available dataset is unstructured, so the first step is text preprocessing, which includes removing numbers, URLs, username mentions, and punctuation. Stemming, stopword removal, and normalization of slang words are not performed, because we do not want to change the meaning and context of a sentence. All experiments were run in a Google Colab¹ environment using Python 3.6 with one Tesla V100-SXM2-16 GB GPU and 27.8 GB of RAM. The dataset is divided into three parts: training, test, and validation data. First, 80% of the full dataset is used for training and the rest for testing; the training data is then split in two to provide validation data during model training.
As baseline models, we use several traditional machine learning methods: k-Nearest Neighbor (k-NN), Support Vector Machine (SVM), Decision Tree Classifier (DT), Naïve Bayes (NB), and Multi-Layer Perceptron (MLP). The parameter values chosen for each baseline model are presented in Table 2. We use Term Frequency-Inverse Document Frequency (TF-IDF) term weighting combined with bi-grams as the feature extraction method, and a 10-fold cross-validation procedure is applied to the training data during the model training phase. The baseline test results are presented in Table 3; a minimal sketch of this baseline setup is given below.

¹ https://colab.research.google.com
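The following scikit-learn sketch makes the baseline configuration concrete: TF-IDF weighting with bi-grams, the MLP parameters from Table 2, an 80/20 split, 10-fold cross-validation, and the test-set metrics of equations (8)-(10). It is an illustrative reconstruction, not the authors' released code; the CSV filename, the tweet/label column names, the random seed, and the reading of "bi-gram combination" as unigrams plus bigrams are assumptions.

```python
# Illustrative baseline sketch (assumptions noted above; not the authors' code).
import re

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def clean(text: str) -> str:
    """Preprocessing described above: remove URLs, mentions, numbers, punctuation."""
    text = re.sub(r"https?://\S+", " ", text)
    text = re.sub(r"@\w+", " ", text)
    text = re.sub(r"\d+", " ", text)
    text = re.sub(r"[^\w\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip().lower()

df = pd.read_csv("depression_anxiety_tweets.csv")      # assumed file name
X = df["tweet"].astype(str).map(clean)                 # assumed column names
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),               # TF-IDF with bi-grams
    MLPClassifier(hidden_layer_sizes=(25,), solver="adam",
                  learning_rate_init=1e-3, max_iter=100),   # Table 2 MLP parameters
)

cv_acc = cross_val_score(model, X_train, y_train, cv=10)     # 10-fold cross-validation
model.fit(X_train, y_train)
print(f"CV accuracy: {cv_acc.mean():.4f} (+/- {cv_acc.std():.4f})")
print(classification_report(y_test, model.predict(X_test)))  # accuracy, precision, recall
```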
Based on Table 3, MLP has the highest accuracy among the baseline models, 0.9850 in the training phase and 0.7422 in the test phase. The highest cross-validation accuracy is also obtained by MLP, 0.76 with a standard deviation of ±0.0628. The lowest test accuracy, 0.6497, is obtained by DT, even though its training accuracy equals that of MLP. Note that the training and test accuracies differ considerably, which may indicate that the models are too naive and overfit the training data; the cross-validation accuracies, in contrast, are much closer to the test accuracies.

Table 2. Parameter values for the baseline models
Baseline Model | Name and Parameter Values
k-NN | n_neighbors=3
SVM | kernel=polynomial, C=1.0, degree=3
DT | criterion=gini, min_samples_split=2, min_samples_leaf=1
NB | alpha=1.0
MLP | hidden_layer_size=25, solver=adam, learning_rate=1e-3, max_iter=100

Table 3. Baseline model test results
Baseline Model | Training Accuracy | Cross-Validation Accuracy | Test Accuracy
k-NN | 0.7995 | 0.6786 (±0.0645) | 0.6588
SVM | 0.9831 | 0.6836 (±0.0688) | 0.6696
DT | 0.9850 | 0.7263 (±0.0322) | 0.6497
NB | 0.8945 | 0.7286 (±0.0633) | 0.6987
MLP | 0.9850 | 0.7600 (±0.0628) | 0.7422

The next experiment applies the BiLSTM architecture shown in Figure 4. The preprocessed dataset is tokenized on whitespace using the tokenizer from the Keras² library. The list of vocabulary tokens is then converted into numeric sequences by replacing each vocabulary entry with an integer index. Each word token is mapped to a vector whose size depends on the number of words in a sentence, and we apply a zero-padding strategy so that all sentences share the same vector dimension, with a maximum length of 1,000. For comparison, we also implement a standard LSTM architecture. The parameters chosen for both LSTM and BiLSTM are presented in Table 4. The number of epochs is set to 25 for each run, and to avoid over-fitting during the training phase we set a dropout value of 0.5.

² https://keras.io/api/preprocessing/text

Table 4. LSTM and BiLSTM parameter settings
Parameter Name | Parameter Value
embedding_size | 200
activation | sigmoid
optimizer | adam
learning_rate | 1e-3
batch_size | 64
regularizer | L2

Table 5. LSTM and BiLSTM test results
Model | Accuracy | Training Loss | Precision | Recall
LSTM | 0.8491 | 0.3707 | 0.7659 | 0.7673
BiLSTM | 0.9412 | 0.1826 | 0.9759 | 0.8386

The test results for LSTM and BiLSTM in Table 5 show that BiLSTM performs better on every evaluation metric. BiLSTM is also superior to all of the traditional machine learning models in Table 3. The highest test accuracy is 0.9412 with a training loss of 0.1826, while the precision and recall obtained are 0.9759 and 0.8386. Based on the training-phase curves in Figure 5 and Figure 6, BiLSTM shows more stable training accuracy and training loss at every epoch; a minimal sketch of this training setup is given below.
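The following Keras sketch is assembled from the configuration reported above (tokenization and zero-padding to length 1,000, embedding size 200, a bidirectional LSTM, dropout 0.5, a sigmoid output with L2 regularization, the Adam optimizer with learning rate 1e-3, batch size 64, and 25 epochs). It is a reconstruction under assumptions rather than the authors' code: the number of LSTM units and the L2 strength are not reported in the paper and are chosen only for illustration, and `texts`/`labels` stand in for the preprocessed tweets and their 0/1 labels.

```python
# BiLSTM training sketch based on the reported hyperparameters (assumption-based).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, regularizers

MAX_LEN = 1000     # zero-padded sequence length
EMBED_DIM = 200    # embedding_size from Table 4

texts = ["ngga enak bgt akhir2 ini rasanya sering cemas berlebihan",
         "sedikit cemas banyak rindunya"]          # placeholder data
labels = np.array([1, 0])

tokenizer = tf.keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(texts)
seqs = tokenizer.texts_to_sequences(texts)
X = tf.keras.preprocessing.sequence.pad_sequences(seqs, maxlen=MAX_LEN, padding="post")
vocab_size = len(tokenizer.word_index) + 1

model = tf.keras.Sequential([
    layers.Embedding(vocab_size, EMBED_DIM),
    layers.Bidirectional(layers.LSTM(64)),          # forward + backward LSTM, concatenated (64 units assumed)
    layers.Dropout(0.5),                            # dropout 0.5 to limit over-fitting
    layers.Dense(1, activation="sigmoid",
                 kernel_regularizer=regularizers.l2(1e-4)),  # L2 regularization (strength assumed)
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="binary_crossentropy",
    metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()],
)

# The training data is split again for validation during training, as described above.
history = model.fit(X, labels, validation_split=0.5, epochs=25, batch_size=64)
```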
The LSTM, by contrast, tends to have low accuracy in the early epochs but improves in each subsequent epoch; similarly, its training loss decreases at every epoch, which means the model was learning during the training phase.
The BiLSTM architecture proposed in this study shows a performance improvement over both the baseline models and the standard LSTM. This also reinforces the observation that deep learning approaches can achieve better performance than traditional machine learning approaches. We also observe that BiLSTM is able to handle the long-term dependency problem. Another advantage in this study is BiLSTM's ability to read information from two directions at once.

Figure 5. Training-phase curves for the LSTM: (a) training and validation accuracy, (b) training and validation loss
Figure 6. Training-phase curves for the BiLSTM: (a) training and validation accuracy, (b) training and validation loss

The drawback of both LSTM and BiLSTM is that they require more data as well as more computation time and cost than the available baseline models. Overall, BiLSTM's ability to read context in two directions at once gives good results for detecting depression and anxiety in Twitter users.

CONCLUSION
In this study we propose a BiLSTM architecture for detecting depression and anxiety of Indonesian-language Twitter users. Based on the test results, our model shows higher performance than all of the traditional machine learning models and the standard LSTM. The highest accuracy obtained using BiLSTM reaches 94.12%. This is achieved because BiLSTM captures information by reading context in two directions at once. However, BiLSTM needs a sufficiently large dataset to avoid over-fitting, and its computation cost and time are also high. In future work, combinations of word embeddings should be applied to produce richer word representations, and hyperparameter tuning should be performed to further improve model performance.

REFERENCES
[1] V. del Barrio, "Diagnostic and Statistical Manual of Mental Disorders," in Encyclopedia of Applied Psychology, Elsevier, 2004, pp. 607-614.
[2] D. Bolton, What is Mental Disorder? Oxford University Press, 2008.
[3] A. Husseini Orabi, P. Buddhitha, M. Husseini Orabi, and D. Inkpen, "Deep Learning for Depression Detection of Twitter Users," in Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, 2018, vol. 19, no. 2, pp. 88-97, doi: 10.18653/v1/W18-0609.
[4] World Health Organization, "World Health Statistics - Monitoring Health For The SDGs," World Heal. Organ., p. 1.121, 2016.
[5] Centers for Disease Control and Prevention, "Suicide: Facts at a Glance" [fact sheet], 2015.
[6] A. Yates, A. Cohan, and N. Goharian, "Depression and Self-Harm Risk Assessment in Online Forums," in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2017, pp. 2968-2978, doi: 10.18653/v1/D17-1322.
[7] M. A. S. Lexis et al., "Prevention of long-term sickness absence and major depression in high-risk employees: a randomised controlled trial," Occup. Environ. Med., vol. 68, no. 6, pp. 400-407, Jun. 2011, doi: 10.1136/oem.2010.057877.
[8] J. Camacho-Collados, L. Espinosa-Anke, and D. Owen, "Towards Preemptive Detection of Depression and Anxiety in Twitter," in Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, 2020, pp. 82-89.
[9] P. Arora and P. Arora, "Mining Twitter Data for Depression Detection," in 2019 International Conference on Signal Processing and Communication (ICSC), Mar. 2019, pp. 186-189, doi: 10.1109/ICSC45622.2019.8938353.
[10] G. Xu, Y. Meng, X. Qiu, Z. Yu, and X. Wu, "Sentiment analysis of comment texts based on BiLSTM," IEEE Access, vol. 7, pp. 51522-51532, 2019, doi: 10.1109/ACCESS.2019.2909919.
[11] F. A. Gers, J. Schmidhuber, and F. Cummins, "Learning to Forget: Continual Prediction with LSTM," Neural Comput., vol. 12, no. 10, pp. 2451-2471, Oct. 2000, doi: 10.1162/089976600300015015.
[12] D. M. R. Rianto, L. P. Wisesa, and S. Hans, "Depression and Anxiety in Twitter (ID)," Kaggle, 2021. https://www.kaggle.com/stevenhans/depression-and-anxiety-in-twitter-id (accessed Nov. 11, 2021).
[13] S. Hochreiter and J. Schmidhuber, "Long Short-Term Memory," Neural Comput., vol. 9, no. 8, pp. 1735-1780, 1997, doi: 10.1162/neco.1997.9.8.1735.
[14] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning Internal Representations by Error Propagation," in Readings in Cognitive Science: A Perspective from Psychology and Artificial Intelligence, MIT Press, 1987, pp. 318-362.
[15] M. Kim and K.-H. Kang, "Comparison of Neural Network Techniques for Text Data Analysis," Int. J. Adv. Cult. Technol., vol. 8, no. 2, pp. 231-238, 2020, doi: 10.17703/IJACT.2020.8.2.231.
[16] A. Saxena and T. R. Sukumar, "Predicting Bitcoin Price Using LSTM and Compare Its Predictability with ARIMA Model," Int. J. Pure Appl. Math., vol. 119, no. 17, pp. 2591-2600, 2018.
[17] Z. Zhao, W. Chen, X. Wu, P. C. Y. Chen, and J. Liu, "LSTM network: A deep learning approach for short-term traffic forecast," IET Intell. Transp. Syst., vol. 11, no. 2, pp. 68-75, Mar. 2017, doi: 10.1049/iet-its.2016.0208.
[18] R. Jozefowicz, W. Zaremba, and I. Sutskever, "An empirical exploration of recurrent network architectures," in 32nd International Conference on Machine Learning (ICML 2015), 2015, vol. 3, pp. 2332-2340.
[19] H. Elfaik and E. H. Nfaoui, "Deep Bidirectional LSTM Network Learning-Based Sentiment Analysis for Arabic Text," J. Intell. Syst., vol. 30, no. 1, pp. 395-412, Jan. 2021, doi: 10.1515/jisys-2020-0021.
diff --git a/X9E3T4oBgHgl3EQfcApa/content/tmp_files/load_file.txt b/X9E3T4oBgHgl3EQfcApa/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ad837c77c7f68f71f952619ce4021c3854be00e1
--- /dev/null
+++ b/X9E3T4oBgHgl3EQfcApa/content/tmp_files/load_file.txt
@@ -0,0 +1,436 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf,len=435
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Oleh karena itu, mereka beralih ke media sosial seperti Twitter untuk mencari dukungan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Mendeteksi pengguna dengan potensi gangguan depresi dan kecemasan melalui data tekstual tidaklah mudah karena mereka tidak secara eksplisit berbicara tentang kondisi mentalnya.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Dibutuhkan pemodelan yang mampu mengenali potensi pengguna yang mengalami depresi dan kecemasan pada data tekstual sehingga mereka mendapatkan penanganan lebih awal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Hal ini dapat dicapai dengan teknik klasifikasi teks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Salah satu pendekatan yang dapat digunakan adalah LSTM sebagai pengembangan aristektur RNN dalam menangani masalah vanishing gradient.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' LSTM standar tidak cukup menangkap informasi karena hanya mampu membaca kalimat dari satu arah.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Sedangkan Bidirectional LSTM (BiLSTM) merupakan LSTM dua arah yang mampu menangkap informasi tanpa mengabaikan konteks dan arti dari suatu kalimat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Model BiLSTM yang diusulkan menunjukkan performa yang lebih tinggi daripada semua model machine learning tradisional dan LSTM standar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Berdasarkan hasil pengujian, akurasi tertinggi yang diperoleh BiLSTM mencapai 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content='12%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Penelitian ini telah berhasil mengembangkan model untuk deteksi depresi dan kecemasan pengguna twitter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Kata kunci: depresi dan kecemasan, deep learning, RNN, BiLSTM ABSTRACT The most common mental disorders experienced by a person in daily life are depression and anxiety.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Social stigma makes people with depression and anxiety neglected by their surroundings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Therefore, they turn to social media like Twitter for support.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Detecting users with potential depression and anxiety disorders through textual data is not easy because they do not explicitly discuss their mental state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' It takes a model that can identify potential users who experience depression and anxiety on textual data to get treatment earlier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Text classification techniques can achieve this.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' One approach that can be used is LSTM as an RNN architecture development in dealing with vanishing gradient problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Standard LSTM does not capture enough information because it can only read sentences from one direction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Meanwhile, Bidirectional LSTM (BiLSTM) is a two-way LSTM that can capture information without ignoring the context and meaning of a sentence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' The proposed BiLSTM model is higher than all traditional machine learning models and standard LSTMs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Based on the test results, the highest accuracy obtained by BiLSTM reached 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content='12%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' This study has succeeded in developing a model for the detection of depression and anxiety in Twitter users.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Keywords: depression and anxiety,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' deep learning,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' RNN,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' BiLSTM PENDAHULUAN Gangguan mental didefinisikan sebagai sindrom yang secara klinis ditandai dengan regulasi emosi atau perilaku yang mencerminkan disfungsi dalam proses psikologis,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' The 4th Conference on Innovation and Application of Science and Technology (CIASTECH 2021) Universitas Widyagama Malang,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' 15 Desember 2021 ISSN Cetak : 2622-1276 ISSN Online : 2622-1284 Seminar Nasional Hasil Riset Prefix - RTR 288 biologis,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' atau perkembangan yang mendasari fungsi mental [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Gangguan mental menyebabkan penderitaan yang dapat menghambat aktivitas seseorang [2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Terlepas dari dampak gangguan mental, adanya stigma sosial tentang gangguan mental merupakan penyakit jiwa yang tidak dapat disembuhkan membuat penderita diabaikan oleh lingkungan disekitarnya dan menghindari menjalani pengobatan yang diperlukan [3].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Gangguan mental yang paling umum adalah depresi dan kecemasan [4].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Diagnosis awal dan pengobatan merupakan hal penting yang harus dilakukan tepat waktu [5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Namun, bagi penderita depresi dan kecemasan dibutuhkan keberanian dan kekuatan besar untuk mencari pengobatan yang tepat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Disisi lain, stigma gangguan mental membuat penderita depresi dan kecemasan beralih pada sumber daya online seperti media sosial Twitter untuk mencari dukungan [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Oleh karena itu, dibutuhkan suatu pemodelan yang mampu secara otomatis mengenali potensi seseorang mengalami depresi dan kecemasan sehingga memungkinkan diagnosis dan pengobatan yang tepat untuk penanganan lebih awal [7].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Deteksi depresi dan kecemasan melalui data tekstual telah dilakukan menggunakan Support Vector Machine (SVM) yang dibandingkan dengan Bidirectional Encoder Representations from Transformers (BERT) dan A Lite BERT (ALBERT) [8].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Performa model tertinggi diperoleh BERT dengan akurasi mencapai 75%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Penelitian lain menggunakan Naïve Bayes (NB) dan Support Vector Regression (SVR) [9].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Hasil pengujian pada 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content='754 tweet menunjukkan SVR memperoleh akurasi lebih baik daripada NB sebesar 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content='7%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Hasil pengujian juga dibandingkan dengan K-Means Clustering dan SVM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' SVM memperoleh akurasi sebesar 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content='8%, di mana SVM lebih baik dari NB tetapi masih dibawah SVR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Penelitian serupa pada klasifikasi teks telah dilakukan menggunakan Bidirectional LSTM (BiLSTM) [10].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Hasil pengujian dibandingkan dengan RNN, CNN, LSTM, dan NB menunjukkan precision, recall, dan F1-score tertinggi diperoleh BiLSTM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Meskipun RNN efektif mengekstrak informasi semantik antar kata, tetapi RNN tidak bisa menangani masalah hilangnya gradien pada kalimat panjang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Sedangkan Long Short-Term Memory (LSTM) dapat mengatasi masalah hilangnya gradien tetapi hanya sampai batas tertentu dengan membaca informasi satu arah.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Oleh karena itu, BiLSTM diusulkan untuk mengatasi hilangnya gradien dengan mempertimbangkan membaca informasi dari dua arah [11].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Penelitian tentang deteksi depresi dan kecemasan pengguna Twitter pada bahasa Indonesia belum pernah dilakukan sebelumnya.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Oleh karena itu, penelitian ini bertujuan melakukan prediksi depresi dan kecemasan pada data tekstual menggunakan BiLSTM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' BiLSTM diusulkan karena mampu mengekstrak informasi kontekstual lebih cepat dengan pendekatan dua arah, sehingga tidak menghilangkan arti dan konteks suatu kalimat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Untuk mengevaluasi kinerja model, BiLSTM dibandingkan dengan beberapa metode machine learning tradisional lainnya seperti k-Nearest Neighbor (k-NN), Support Vector Machine(SVM), Decision Tree Classifier (DT), Naïve Bayes (NB) dan Multi Layer Perceptron (MLP).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Selain itu, arsitektur LSTM umum juga dibandingkan dengan metode yang diusulkan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Gambar 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Kerangka penelitian PengumpulanData (TwitterAPI) Potensi Depresi atau Cemas Text Preprocessing PelatihanModel Evaluasi Kinerja Model BasisData Normal Prediksi ISSN Cetak : 2622-1276 ISSN Online : 2622-1284 The 4th Conference on Innovation and Application of Science and Technology (CIASTECH 2021) Universitas Widyagama Malang, 15 Desember 2021 Seminar Nasional Hasil Riset Prefix RTR 289 Gambar 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Sebaran tweet berdasarkan label pada dataset Tabel 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Sampel data pada dateset Index Tweet Label 5 ngga enak bgt akhir2 ini rasanya, sering cemas berlebihan 1 126 Gak tau kenapa perasaan aku sedih gelisah y 1 273 Sedikit cemas banyak rindunya.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content='. 0 1789 dulu dipaksa untuk menjadi yang paling cemas, sekarang terpaksa untuk jadi yang paling ikhlas � 0 METODE PENELITIAN Penelitian ini terdiri dari empat langkah utama yaitu pengumpulan dataset, text- preprocessing, pelatihan model, dan evaluasi kinerja model, seperti yang ditunjukkan kerangka penelitian pada Gambar 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Dataset Penelitian ini menggunakan dataset yang diperoleh dari media sosial Twitter dan telah dianotasi sebelumnya [12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Dataset memiliki 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content='751 tweet berbahasa Indonesia yang telah dikategorikan ke dalam dua label berbeda.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Label 1 menyiratkan jika tweet pengguna memiliki potensi kecemasan, kegelisahan atau depresi, sedangkan label 0 adalah sebaliknya.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Label 0 terdiri dari 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content='857 tweet dan label 1 terdiri dari 894 tweet.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Dataset memiliki distribusi kelas yang tidak seimbang seperti yang ditunjukkan pada Gambar 2, di mana label 0 memiliki jumlah tweet lebih banyak daripada label 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Sampel data untuk setiap label ditunjukkan pada Tabel 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Bidirectional LSTM Long Short-Term Memory (LSTM) [13] adalah pengembangan arsitektur Recurrent Neural Network (RNN) [14] untuk menangani masalah vanishing gradient, di mana kemiringan fungsi kerugian menurun secara eksponensial pada saat memproses data sekuensial yang panjang [15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Masalah ini menyebabkan RNN gagal menangkap long term dependencies [16] sehingga dapat mengurangi performa prediksi [17].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' LSTM mengganti lapisan RNN dengan blok memory cell menggunakan mekanisme gerbang yang terdiri dari forget gate, input gate, dan output gate [11].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Sama halnya dengan RNN, LSTM tersusun atas neuron yang diproses secara berulang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Struktur neuron tunggal pada LSTM ditunjukkan pada Gambar 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Gambar 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Neuron tunggal pada arsitektur LSTM 1750 1500 1250 1000 750 500 250 otherwise anxiety/depressioninput Gate OutputGate h(t) ForgetGate c(t 1) X c(t) tanh LSTM f(t) it) o(t) LSTM 2(t) tanh.' 
The forget gate is the first gate in the LSTM; it determines which information is retained or discarded from the cell state. This gate receives the inputs h_{t-1} and x_t and produces a value between 0 and 1 for f_t, as described in equation (1). When the forget gate outputs 1, the cell state retains the information, whereas a value of 0 discards the information from the cell state. Increasing the bias b_f of the forget gate can improve LSTM performance [18].

    f_t = σ(W_f · [h_{t-1}, x_t] + b_f)    (1)

The input gate is the second gate in the LSTM; it determines what information is stored in the cell state. This gate consists of a sigmoid layer and a tanh layer. The sigmoid layer decides which values will be updated, as described in equation (2). The tanh layer creates a new candidate value C̃_t to be added to the cell state, as described in equation (3). The outputs of these two layers are combined to update the cell state.

    i_t = σ(W_i · [h_{t-1}, x_t] + b_i)    (2)
    C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C)    (3)

The next step is to update the old cell state C_{t-1} into C_t by multiplying the old cell state by f_t, which drops the values selected by the forget gate, and then adding i_t · C̃_t as the new values, as described in equation (4).

    C_t = f_t · C_{t-1} + i_t · C̃_t    (4)
The output gate is the last gate in the LSTM; it determines the output derived from the cell state. First, a sigmoid layer determines which part of the cell state becomes the output, as described in equation (5). Next, the cell state is passed through a tanh layer and multiplied by the sigmoid output so that only the previously selected part is emitted, as described in equation (6).

    o_t = σ(W_o · [h_{t-1}, x_t] + b_o)    (5)
    h_t = o_t · tanh(C_t)    (6)

One weakness of LSTM is that it does not sufficiently account for information from the last words of a sentence, because it reads the sentence in only one direction, from beginning to end [19]. Therefore, we use a bidirectional LSTM (BiLSTM) to read a sentence from both directions at once, beginning to end and end to beginning. Technically, BiLSTM applies two separate LSTMs, one for the forward direction and one for the backward direction. The two hidden states h_t(forward) and h_t(backward) of the LSTMs are concatenated into the final hidden state h_t(BiLSTM), as described in equation (7). The BiLSTM architecture we propose is presented in Figure 4.

    h_t(BiLSTM) = h_t(forward) ⊕ h_t(backward)    (7)

Figure 4. The proposed BiLSTM architecture
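To make the gate computations in equations (1)-(6) and the bidirectional combination in equation (7) concrete, the following is a minimal NumPy sketch of a single LSTM step and of the forward/backward concatenation used by BiLSTM. The dimensions, random weights, and helper names are illustrative assumptions, not the configuration used in this study.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x_t, h_prev, c_prev, W, b):
        # One LSTM step following equations (1)-(6); W and b hold the weights/biases
        # of the forget (f), input (i), candidate (c) and output (o) gates.
        z = np.concatenate([h_prev, x_t])          # [h_{t-1}, x_t]
        f_t = sigmoid(W["f"] @ z + b["f"])         # forget gate, eq. (1)
        i_t = sigmoid(W["i"] @ z + b["i"])         # input gate, eq. (2)
        c_hat = np.tanh(W["c"] @ z + b["c"])       # candidate values, eq. (3)
        c_t = f_t * c_prev + i_t * c_hat           # new cell state, eq. (4)
        o_t = sigmoid(W["o"] @ z + b["o"])         # output gate, eq. (5)
        h_t = o_t * np.tanh(c_t)                   # hidden state, eq. (6)
        return h_t, c_t

    def run_lstm(xs, hidden, W, b):
        h, c = np.zeros(hidden), np.zeros(hidden)
        states = []
        for x_t in xs:
            h, c = lstm_step(x_t, h, c, W, b)
            states.append(h)
        return states

    def bilstm(xs, hidden, Wf, bf, Wb, bb):
        # Concatenate forward and backward hidden states, eq. (7).
        forward = run_lstm(xs, hidden, Wf, bf)
        backward = run_lstm(xs[::-1], hidden, Wb, bb)[::-1]
        return [np.concatenate([f, bk]) for f, bk in zip(forward, backward)]

    # Toy usage with random weights (illustrative only).
    rng = np.random.default_rng(0)
    emb_dim, hidden, length = 8, 4, 5
    def init_params():
        W = {g: rng.normal(scale=0.1, size=(hidden, hidden + emb_dim)) for g in "fico"}
        b = {g: np.zeros(hidden) for g in "fico"}
        return W, b
    Wf, bf = init_params()
    Wb, bb = init_params()
    sentence = [rng.normal(size=emb_dim) for _ in range(length)]
    states = bilstm(sentence, hidden, Wf, bf, Wb, bb)
    print(len(states), states[0].shape)   # 5 time steps, each of size 2 * hidden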
Model Performance Evaluation
A confusion matrix can be used to assess model performance by calculating the ratio of correct and incorrect predictions and by identifying the type of error. A true positive (TP) is a positive class that is predicted correctly; for example, a user with potential anxiety is predicted to have anxiety. A true negative (TN) is a negative class that is predicted correctly; for example, a user without potential anxiety is predicted not to have anxiety. A false positive (FP) is a negative class that is predicted as positive; for example, a user without anxiety is predicted to have potential anxiety. A false negative (FN) is a positive class that is predicted as negative; for example, a user with anxiety is predicted not to have potential anxiety. The metric most often used to evaluate a model based on the confusion matrix is accuracy. Accuracy is the ratio of correct predictions (TP and TN) to all data, describing how close the predictions are to the true values, as described in equation (8). The problem with an imbalanced data distribution is that there are more negative samples than positive ones. Therefore, we use two additional metrics, precision and recall. Precision is the ratio of true positive predictions (TP) to all data predicted as positive, as described in equation (9), while recall is the ratio of true positive predictions (TP) to all data that are actually positive, as described in equation (10).

    Accuracy = (TP + TN) / (TP + TN + FP + FN)    (8)
    Precision = TP / (TP + FP)    (9)
    Recall = TP / (TP + FN)    (10)
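As a concrete illustration of equations (8)-(10), the short sketch below computes accuracy, precision, and recall from a pair of label vectors; the labels are made-up toy values rather than results from this study.

    def confusion_counts(y_true, y_pred):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        return tp, tn, fp, fn

    def evaluate(y_true, y_pred):
        tp, tn, fp, fn = confusion_counts(y_true, y_pred)
        accuracy = (tp + tn) / (tp + tn + fp + fn)        # eq. (8)
        precision = tp / (tp + fp) if tp + fp else 0.0    # eq. (9)
        recall = tp / (tp + fn) if tp + fn else 0.0       # eq. (10)
        return accuracy, precision, recall

    # Toy example: 1 = potential anxiety/depression, 0 = normal.
    y_true = [1, 0, 0, 1, 1, 0, 0, 1]
    y_pred = [1, 0, 1, 1, 0, 0, 0, 1]
    print(evaluate(y_true, y_pred))   # (0.75, 0.75, 0.75)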
RESULTS AND DISCUSSION
The available dataset is in an unstructured format. Therefore, the first step is text preprocessing, which includes removing numbers, URLs, username mentions, and punctuation. Stemming, stopword removal, and normalization of slang words are not performed, because we do not want to change the meaning and context of a sentence. All experiments were carried out in the Google Colab environment (https://colab.research.google.com) using Python 3.6 with one Tesla V100-SXM2-16GB GPU and 27.8 GB of RAM. The dataset is divided into three parts: training data, test data, and validation data. First, the full dataset is split, with 80% used for training and the remainder for testing. Then, part of the training data is set aside as validation data during model training.
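A minimal sketch of the preprocessing and data split described above is given below (removing numbers, URLs, username mentions, and punctuation, followed by an 80/20 train-test split); the regular expressions and the toy tweets reused from Table 1 are illustrative assumptions.

    import re
    import string
    from sklearn.model_selection import train_test_split

    def clean_tweet(text):
        text = re.sub(r"https?://\S+|www\.\S+", " ", text)   # remove URLs
        text = re.sub(r"@\w+", " ", text)                    # remove username mentions
        text = re.sub(r"\d+", " ", text)                     # remove numbers
        text = text.translate(str.maketrans("", "", string.punctuation))  # remove punctuation
        return re.sub(r"\s+", " ", text).strip()

    # Toy stand-ins for the annotated tweets (label 1 = potential anxiety/depression, 0 = normal).
    tweets = [
        "ngga enak bgt akhir2 ini rasanya, sering cemas berlebihan",
        "Gak tau kenapa perasaan aku sedih gelisah y",
        "Sedikit cemas banyak rindunya... @teman https://t.co/xyz",
        "dulu dipaksa untuk menjadi yang paling cemas, sekarang terpaksa untuk jadi yang paling ikhlas",
    ]
    labels = [1, 1, 0, 0]
    cleaned = [clean_tweet(t) for t in tweets]

    # 80% of the data for training and 20% for testing; validation data is later
    # taken from the training portion during model training.
    X_train, X_test, y_train, y_test = train_test_split(
        cleaned, labels, test_size=0.2, random_state=42)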
As baseline models, we use several traditional machine learning methods, namely k-Nearest Neighbor (k-NN), Support Vector Machine (SVM), Decision Tree Classifier (DT), Naïve Bayes (NB), and Multi-Layer Perceptron (MLP). The parameter values chosen for each baseline model are presented in Table 2. We use the Term Frequency-Inverse Document Frequency (TF-IDF) word weighting scheme combined with bi-grams as the feature extraction method. A 10-fold cross-validation procedure is then applied to the training data during the model training phase. The baseline test results are presented in Table 3. Based on Table 3, MLP achieves the highest accuracy among all models, both in the training phase, at 0.9850, and in the testing phase, at 0.7422. The highest cross-validation accuracy is also obtained by MLP, at 0.76 with a standard deviation of ±0.0628. The lowest test accuracy, 0.6497, is obtained by DT, even though its training accuracy equals that of MLP. Notably, the training and test accuracies differ considerably, which may indicate that the models are too naive. The cross-validation accuracies behave differently, with values that tend to be close to the test accuracies.

Table 2. Parameter values for the baseline models
  Baseline model   Parameter names and values
  k-NN             n_neighbors=3
  SVM              kernel=polynomial, C=1.0, degree=3
  DT               criterion=gini, min_samples_split=2, min_samples_leaf=1
  NB               alpha=1.0
  MLP              hidden_layer_size=25, solver=adam, learning_rate=1e-3, max_iter=100
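The baseline setup described above can be sketched with scikit-learn as follows, using TF-IDF features with bi-grams and the MLP parameters from Table 2; whether unigrams are kept alongside bi-grams and the exact mapping of the listed parameter names onto scikit-learn arguments are assumptions.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import Pipeline
    from sklearn.model_selection import cross_val_score

    # TF-IDF weighting with bi-grams as the feature extraction step.
    mlp_baseline = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("mlp", MLPClassifier(hidden_layer_sizes=(25,), solver="adam",
                              learning_rate_init=1e-3, max_iter=100)),
    ])

    # With the real training split, 10-fold cross-validation would be run as:
    # scores = cross_val_score(mlp_baseline, X_train, y_train, cv=10)
    # mlp_baseline.fit(X_train, y_train); test_acc = mlp_baseline.score(X_test, y_test)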
Table 3. Baseline model test results
  Baseline model   Training accuracy   Cross-validation accuracy   Test accuracy
  k-NN             0.7995              0.6786 (±0.0645)            0.6588
  SVM              0.9831              0.6836 (±0.0688)            0.6696
  DT               0.9850              0.7263 (±0.0322)            0.6497
  NB               0.8945              0.7286 (±0.0633)            0.6987
  MLP              0.9850              0.7600 (±0.0628)            0.7422

The next experiment applies the BiLSTM architecture shown in Figure 4.
The dataset that has gone through text preprocessing is tokenized on whitespace using the tokenizer from the Keras library (https://keras.io/api/preprocessing/text). Next, the list of vocabulary tokens is converted into numeric sequences by replacing each vocabulary entry with its integer index. Each word token is mapped to a vector of length n, where n is the number of words in a sentence, and a zero-padding strategy is applied so that all sentences share the same vector dimension, with a fixed length of 1000. For comparison, we also apply a standard LSTM architecture. The parameters chosen for both LSTM and BiLSTM are presented in Table 4. The number of epochs is set to 25 for every experiment, and to avoid over-fitting during the training phase we set the dropout value to 0.5.

Table 4. LSTM and BiLSTM parameter settings
  Parameter name   Parameter value
  embedding_size   200
  activation       sigmoid
  optimizer        adam
  learning_rate    1e-3
  batch_size       64
  regularizer      L2
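A hedged Keras sketch of the tokenization, zero-padding, and BiLSTM model configured with the Table 4 parameters is shown below; the vocabulary size, the number of LSTM units, and the L2 regularization factor are assumptions where the paper does not state them, and the texts are toy stand-ins.

    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dropout, Dense
    from tensorflow.keras.optimizers import Adam
    from tensorflow.keras.regularizers import l2

    MAX_LEN = 1000         # fixed sentence length after zero-padding (from the text)
    EMBEDDING_SIZE = 200   # Table 4
    LSTM_UNITS = 64        # assumption: the number of LSTM units is not stated

    # Toy stand-in texts; in the experiments these are the preprocessed tweets and 0/1 labels.
    texts = ["contoh tweet pertama", "contoh tweet kedua yang lebih panjang"]
    labels = [1, 0]

    tokenizer = Tokenizer()                                # whitespace-based Keras tokenizer
    tokenizer.fit_on_texts(texts)
    sequences = tokenizer.texts_to_sequences(texts)        # word tokens -> integer indices
    padded = pad_sequences(sequences, maxlen=MAX_LEN)      # zero-padding to a fixed length
    vocab_size = len(tokenizer.word_index) + 1

    model = Sequential([
        Embedding(vocab_size, EMBEDDING_SIZE),
        Bidirectional(LSTM(LSTM_UNITS, kernel_regularizer=l2(1e-4))),  # L2 regularizer (factor assumed)
        Dropout(0.5),                                                  # dropout of 0.5 against over-fitting
        Dense(1, activation="sigmoid"),                                # sigmoid output for the binary label
    ])
    model.compile(optimizer=Adam(learning_rate=1e-3),
                  loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(padded, labels, epochs=25, batch_size=64, validation_split=0.2)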
Table 5. LSTM and BiLSTM test results
  Model    Accuracy   Training loss   Precision   Recall
  LSTM     0.8491     0.3707          0.7659      0.7673
  BiLSTM   0.9412     0.1826          0.9759      0.8386

The LSTM and BiLSTM test results in Table 5 show that BiLSTM performs better on every evaluation metric. BiLSTM is also superior to all of the traditional machine learning models in Table 3. The highest test accuracy is 0.9412 with a training loss of 0.1826, while the precision and recall obtained are 0.9759 and 0.8386, respectively. Based on the training-phase graphs in Figure 5 and Figure 6, BiLSTM shows more stable training accuracy and training loss across epochs. LSTM, by contrast, tends to start with low accuracy in the early epochs but improves in each subsequent epoch; likewise, its training loss decreases in every epoch, which indicates that the model learns during the training phase. The BiLSTM architecture proposed in this study therefore shows improved performance compared with both the baseline models and the standard LSTM.
This also reinforces the observation that deep learning approaches can achieve better performance than traditional machine learning approaches. We also observe that BiLSTM is able to overcome the long-term dependency problem. Another advantage of this approach is BiLSTM's ability to read information from two directions at once.

Figure 5. Graphs of the LSTM training phase: (a) training and validation accuracy, (b) training and validation loss

Figure 6. Graphs of the BiLSTM training phase: (a) training and validation accuracy, (b) training and validation loss

A weakness of both LSTM and BiLSTM is that they require more data as well as more computation time and cost than the existing baseline models. Overall, BiLSTM's ability to read context in two directions at once yields good results for detecting depression and anxiety among Twitter users.
CONCLUSION
In this study we propose a BiLSTM architecture for detecting depression and anxiety among Indonesian-language Twitter users. Based on the test results, our model shows higher performance than all of the traditional machine learning models and the standard LSTM. The highest accuracy obtained with BiLSTM reaches 94.12%.
This is achieved because BiLSTM is able to capture information by reading the context in two directions at once. However, BiLSTM requires a sufficiently large dataset to avoid over-fitting, and the required computation cost and time are also high. In future work, combinations of word embeddings should be applied to produce richer word representations, and hyperparameter tuning should be carried out to further improve model performance.

REFERENCES
[1] V. del Barrio, "Diagnostic and Statistical Manual of Mental Disorders," in Encyclopedia of Applied Psychology, Elsevier, 2004, pp. 607-614.
[2] D. Bolton, What is Mental Disorder? Oxford University Press, 2008.
[3] A. Husseini Orabi, P. Buddhitha, M. Husseini Orabi, and D. Inkpen, "Deep Learning for Depression Detection of Twitter Users," in Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, 2018, vol. 19, no. 2, pp. 88-97, doi: 10.18653/v1/W18-0609.
[4] World Health Organization, "World Health Statistics - Monitoring Health For The SDGs," World Heal. Organ., p. 1.121, 2016.
[5] Centers for Disease Control and Prevention, "Suicide: Facts at a glance" [fact sheet], 2015.
[6] A. Yates, A. Cohan, and N. Goharian, "Depression and Self-Harm Risk Assessment in Online Forums," in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2017, pp. 2968-2978, doi: 10.18653/v1/D17-1322.
[7] M. A. S. Lexis et al., "Prevention of long-term sickness absence and major depression in high-risk employees: a randomised controlled trial," Occup. Environ. Med., vol. 68, no. 6, pp. 400-407, Jun. 2011, doi: 10.1136/oem.2010.057877.
[8] J. Camacho-Collados, L. Espinosa-Anke, and D. Owen, "Towards Preemptive Detection of Depression and Anxiety in Twitter," in Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, 2020, pp. 82-89.
[9] P. Arora and P. Arora, "Mining Twitter Data for Depression Detection," in 2019 International Conference on Signal Processing and Communication (ICSC), Mar. 2019, pp. 186-189, doi: 10.1109/ICSC45622.2019.8938353.
[10] G. Xu, Y. Meng, X. Qiu, Z. Yu, and X. Wu, "Sentiment analysis of comment texts based on BiLSTM," IEEE Access, vol. 7, pp. 51522-51532, 2019, doi: 10.1109/ACCESS.2019.2909919.
[11] F. A. Gers, J. Schmidhuber, and F. Cummins, "Learning to Forget: Continual Prediction with LSTM," Neural Comput., vol. 12, no. 10, pp. 2451-2471, Oct. 2000, doi: 10.1162/089976600300015015.
[12] D. M. R. Rianto, L. P. Wisesa, and S. Hans, "Depression and Anxiety in Twitter (ID)," Kaggle, 2021. https://www.kaggle.com/stevenhans/depression-and-anxiety-in-twitter-id (accessed Nov. 11, 2021).
[13] S. Hochreiter and J. Schmidhuber, "Long Short-Term Memory," Neural Comput., vol. 9, no. 8, pp. 1735-1780, 1997, doi: 10.1162/neco.1997.9.8.1735.
[14] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning Internal Representations by Error Propagation," in Readings in Cognitive Science: A Perspective from Psychology and Artificial Intelligence, MIT Press, 1987, pp. 318-362.
[15] M. Kim and K.-H. Kang, "Comparison of Neural Network Techniques for Text Data Analysis," Int. J. Adv. Cult. Technol., vol. 8, no. 2, pp. 231-238, 2020, doi: 10.17703/IJACT.2020.8.2.231.
[16] A. Saxena and T. R. Sukumar, "Predicting Bitcoin Price Using LSTM and Compare Its Predictability with ARIMA Model," Int. J. Pure Appl. Math., vol. 119, no. 17, pp. 2591-2600, 2018.
[17] Z. Zhao, W. Chen, X. Wu, P. C. Y. Chen, and J. Liu, "LSTM network: A deep learning approach for short-term traffic forecast," IET Intell. Transp. Syst., vol. 11, no. 2, pp. 68-75, Mar. 2017, doi: 10.1049/iet-its.2016.0208.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' 68–75, Mar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' 2017, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content='1049/iet-its.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content='2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content='0208.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' [18] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Jozefowicz, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Zaremba, and I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Sutskever, “An empirical exploration of Recurrent Network architectures,” in 32nd International Conference on Machine Learning, ICML 2015, 2015, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' 3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' 2332–2340.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' [19] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Elfaik and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Nfaoui, “Deep Bidirectional LSTM Network Learning-Based Sentiment Analysis for Arabic Text,” J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Intell.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' Syst.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' 30, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' 395–412, Jan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9E3T4oBgHgl3EQfcApa/content/2301.04521v1.pdf'} +page_content=' 2021, doi: 10.' 
diff --git a/XtFRT4oBgHgl3EQfNzdw/content/tmp_files/2301.13511v1.pdf.txt b/XtFRT4oBgHgl3EQfNzdw/content/tmp_files/2301.13511v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..858b7a5c92b68413fd8e77985e1129fb648d01be
--- /dev/null
+++ b/XtFRT4oBgHgl3EQfNzdw/content/tmp_files/2301.13511v1.pdf.txt
@@ -0,0 +1,832 @@
David C. Wyld et al. (Eds): MLSC, ITCSS, ACSTY, SOFE, NATP, BDAB - 2023
pp. 153-166, 2023. CS & IT - CSCP 2023    DOI: 10.5121/csit.2023.130212

PRIVACY-PRESERVING ONLINE SHARING CHARGING PILE SCHEME WITH DIFFERENT NEEDS MATCHING

Zhiyu Huang1,2
1School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan, China
2Hunan Key Laboratory for Service Computing and Novel Software Technology, Xiangtan, China

ABSTRACT

With the development of electric vehicles, more and more electric vehicles face difficulties in parking and charging. One reason is that the number of charging piles is insufficient to support their energy supply, while a large number of private charging piles sit idle for long periods, so the energy supply problem of electric vehicles can be eased by sharing charging piles. The shared charging pile scheme uses the Paillier encryption scheme and an improved variant to protect user data effectively. The scheme is additively and subtractively homomorphic, so information can be processed without decryption. Because different users have different needs, matching is carried out after evaluating the needs each user submits. The scheme therefore protects users' privacy and provides a matching mechanism for differing requirements, so that users are paired with more suitable charging piles. The final results show that its efficiency is better than the original Paillier scheme while still meeting the security requirements.

KEYWORDS

Private charging pile sharing service, Privacy protection, Demand analysis, Homomorphic encryption, Internet of Things

1. INTRODUCTION

With the advancement of the "carbon peak and carbon neutrality" goals and the development of electric vehicles (EVs), EVs have the potential to effectively reduce the air pollution caused by daily transportation [1]. As the number of EVs on the road increases, so does the demand for charging infrastructure, yet the current prevalence and coverage of charging stations are insufficient to meet it [2-3]. Surveys show that between 2015 and 2020 (Table 1) the numbers of EVs and charging piles grew steadily, with a particularly notable increase in private charging piles, from 8,000 in 2015 to 874,000 in 2020 [16]. Compared with the growth rate of EVs, however, the number of charging piles is still far from adequate.
As a result, for EV users who are unable to install their own charging piles, the difficulty of charging is becoming increasingly apparent. In 2020, the Chinese government proposed to include charging piles among the fields of the nation's "new infrastructure", with an estimated investment of approximately 10 billion to build charging piles. According to international data surveys, by 2030 there are expected to be 5 million EVs on the road in California alone, and 12-24 million private charging piles and 10-20 million public charging piles globally. Charging facilities have become an indispensable part of new energy development planning [4-5]. Considering the high installation cost of charging piles [6], other approaches are needed to make up for the shortage of charging piles.

Table 1. Approximate number of electric vehicles and charging piles in the world (in thousands)

Year                               2015   2016   2017   2018   2019   2020
Number of electric vehicles         570   1280   1840   2740   3890   4840
Number of public charging piles      58    149    240    387    516    807
Number of private charging piles      8     63    232    477    703    874

With the rapid development of Internet of Things (IoT) technology, IoT devices connect everything and have gradually moved toward the Internet of Everything [7-9]. As one application of the IoT, the Internet of Vehicles enables information exchange between vehicles and offers clear research and commercial value. The application of its V2X technology in cloud (edge) computing is a cornerstone of building smart cities and smart transportation [10-12]. At present, research on shared charging piles is still at an early stage. A traditional charging pile sharing scheme generally consists of three entities: the charging pile provider, the electric vehicle, and a matching server. Both buyers and sellers upload their information to the server for matching, and the server returns the matching results to both sides, as shown in Fig. 1. In such schemes, however, the user's information is published or uploaded to the server with at most simple encryption, and the server must decrypt the participants' information to obtain it in plaintext, so the user's privacy can be attacked and leaked. In a traditional charging pile sharing system all information is published directly on the Internet, and one of the biggest problems is that users expose their private information to the public platform as soon as they apply. For example, a malicious user who has used a certain charging pile can mark and record it, and may later occupy it outside the platform once he knows it is unmanaged for a period of time.

At the same time, the shared service platform itself may expose customers' privacy. Because the location information of electric vehicles may reveal workplaces, home addresses, special hospitals or frequently visited entertainment venues, buyers' habits and health status can be leaked, and the privacy of charging pile sellers is likewise seriously threatened.
On the other hand, once the information of buyers and sellers is obtained by malicious attackers, not only can it be used for profitable targeted advertising, but the related work and home addresses are also threatened, which may even endanger personal safety. Therefore, to avoid disclosing customers' private information, a secure service platform must be designed. This paper proposes the use of homomorphic encryption technology to protect users' privacy.

To meet the above challenges, the main contributions of this paper are summarized as follows:

1) We use homomorphic encryption to encrypt user information, process the ciphertexts using the homomorphic properties, and match the processed results in the cloud server. On the public service platform, users' effective information is never exposed, and matching can still be completed efficiently.

2) For users with different needs, we design the demand parameter ω. Through the matching calculation we obtain the matching index parameter W, and by comparing W we obtain the most suitable buyer-seller pairs. This parameter allows users with different requirements to be matched more appropriately.

3) We use the Chinese remainder theorem (CRT) to speed up the modular exponentiation in the cloud server's decryption, converting the computation of a^b from Zn^2 to Zp^2 and Zq^2. We also use the Paillier scheme with optimized parameters, which speeds up the encryption calculation although it loses the homomorphism.

The remainder of this paper is organized as follows. Section 2 introduces homomorphic encryption, the parameter optimization of the Paillier scheme and the Chinese remainder theorem. Section 3 introduces the system model and presents the proposed scheme. Section 4 describes the performance evaluation results. Finally, Section 5 concludes the paper.

2. RELATED WORK

In this section, homomorphic encryption, the parameter optimization of the Paillier scheme and the Chinese remainder theorem are introduced.

2.1. Homomorphic Encryption

Encryption is often used to protect privacy. Homomorphic encryption is a special form of encryption that allows computation, such as addition and multiplication, to be carried out directly on encrypted data without revealing any information about the underlying plaintext during the calculation. The result of the computation is itself encrypted, and decrypting the processed ciphertext with the key yields exactly the result of performing the same processing on the plaintext. The Paillier scheme is additively/subtractively homomorphic: for plaintexts m1 and m2 there is an encryption function E() such that E(m1 + m2) = E(m1) * E(m2). The Paillier scheme satisfies standard semantic security [13], i.e. ciphertext indistinguishability under chosen-plaintext attack (IND-CPA), so the ciphertext leaks no information about the plaintext. Its security rests on the decisional composite residuosity assumption; no polynomial-time algorithm is known to break it, so the Paillier scheme is considered secure. The detailed process consists of the following steps.

KeyGen(): Pick two prime numbers p and q, compute n = p * q and λ = lcm(p-1, q-1).
Choose a random number g such that gcd(L(g^λ mod n^2), n) = 1, and compute μ = (L(g^λ mod n^2))^(-1) mod n, where L(x) = (x-1)/n. The public and private keys are pk = (n, g) and sk = (λ, μ), respectively.

Encrypt(): Input the plaintext message m and select a random number r coprime to n. The ciphertext is

c = g^m * r^n mod n^2,   (1)

Decrypt(): Input the ciphertext c. The plaintext message is recovered as

m = L(c^λ mod n^2) * μ mod n.   (2)

2.2. Parameter Optimization of Paillier Scheme

To simplify computation without affecting the algorithm's correctness, g = n + 1 may be chosen during the key generation phase [14], which simplifies the calculation of g^m during encryption. Expanding g^m = (n+1)^m with the binomial theorem expresses it as a sum of binomial coefficients times powers of n and 1:

(n+1)^m = 1 + mn + C(m,2)*n^2 + ... + n^m,   (3)

Every term beyond the first two is a multiple of n^2, so under the modulo-n^2 operation they are all eliminated; the modular exponentiation therefore reduces to a single modular multiplication, which accelerates the encryption process:

c = (1 + mn) * r^n mod n^2,   (4)

The ciphertext c is decrypted as before:

m = L(c^λ mod n^2) * μ mod n.   (5)

2.3. Chinese Remainder Theorem

The Chinese Remainder Theorem (CRT), also known as the Sunzi Theorem, originates from the ancient Chinese mathematical treatise "Sunzi Suanjing" and describes an isomorphism between two algebraic spaces: a space can be decomposed into several mutually independent subspaces that, taken together, correspond one-to-one to the original space. Concretely, when n = pq with p and q relatively prime, the map a mod n -> (a mod p, a mod q) is an algebraic isomorphism, so operations under mod n can be carried out under mod p and mod q and then recombined. Computing in this form is more efficient, and the property can be used to accelerate modular exponentiation under mod n.
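For concreteness, the following is a minimal Python sketch of the Paillier operations described in Sections 2.1 and 2.2, including the g = n + 1 simplification of formula (4). It is an illustration only, not the implementation used in our experiments; the function names are ours, and the toy primes must be replaced by primes of at least 1024 bits in any real deployment.

import math
import random

def L(x, n):
    # the function L(x) = (x - 1) / n used in decryption
    return (x - 1) // n

def keygen(p, q):
    # n = p*q, lambda = lcm(p-1, q-1); g = n+1 is the parameter-optimized choice
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    mu = pow(L(pow(g, lam, n * n), n), -1, n)   # mu = (L(g^lambda mod n^2))^-1 mod n
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # with g = n+1, g^m mod n^2 collapses to (1 + m*n) mod n^2, as in formula (4)
    return ((1 + m * n) % (n * n)) * pow(r, n, n * n) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return L(pow(c, lam, n * n), n) * mu % n   # formula (2)/(5)

# toy example with small primes
pk, sk = keygen(1009, 1013)
c1, c2 = encrypt(pk, 42), encrypt(pk, 58)
assert decrypt(pk, sk, c1 * c2 % (pk[0] ** 2)) == 100

The final assertion checks the additive homomorphism E(m1) * E(m2) = E(m1 + m2) that the ciphertext processing of Section 3.3 relies on.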
3. THE PROPOSED SCHEME

3.1. System Model

The matching scheme for shared charging piles consists of multiple electric vehicle buyers, multiple charging pile sellers, multiple edge proxy servers, a cloud server and a certificate authority. All entities communicate through the mobile network. Figure 1 depicts the system model.

Electric vehicles (EVs): As the users of shared charging piles, EVs send out a charging request when a charging pile is needed; the EV set is denoted {1, ..., i, ..., I}. After receiving the response, an EV obtains the public key for information encryption, and its Internet of Things terminal encrypts the information to be sent with this public key and sends it to the nearest proxy server.

Private charging piles (PCPs): As the providers of shared charging piles, there are J private charging piles in a given area, denoted {1, ..., j, ..., J}. Each PCP is managed by its owner and is equipped with a socket for EV charging. When a PCP is idle, it issues an application to supply energy; the provided information is encrypted on the Internet of Things terminal with the provided public key, and the encrypted information is sent to the nearest proxy server.

Proxy server: The proxy server has a certain amount of computing power. It is mainly responsible for collecting the encrypted information provided by nearby electric vehicle buyers and charging pile sellers who apply for matching, and for operating on that encrypted information using the homomorphic properties. During this processing the important information remains protected by Paillier encryption, and the edge proxy server learns nothing useful.

Cloud server: The cloud server has powerful computing capabilities and can process the encrypted information sent by the proxy servers. After processing the information, it runs the provided matching scheme to pair buyers and sellers; once the best match is obtained, the next round of matching begins.

Certificate authority (CA): The certificate authority is the only authoritative identity-certification institution and is completely trusted. All user entities must register and be authenticated by the CA. When a user sends an application, the CA's key management center generates the corresponding public and private key pairs and sends them to the corresponding users. The CA's information is kept strictly confidential, and collusion with it is assumed to be impossible.

Figure 1. System model: EVs and charging piles submit their encrypted information Enc(loc, val) together with their preference weights (e.g. favor: loc = 0.7, val = 0.3; loc = 0.5, val = 0.5; loc = 0.3, val = 0.7), the proxy server computes the differential values of Enc(loc, val), and the cloud server performs the matching with the favors and returns the result; the certificate authority issues the keys.

3.2. Requests and Information Encryption

In the shared charging pile matching system, the certificate authority is responsible for managing and issuing public keys and maintaining public key information. Electric vehicle buyers and charging pile sellers provide their information, including location, price and demands. A buyer i provides its location (xi, yi), proposed price ri, farthest acceptable distance dimax, a set of n demand indicators (each taking a value in {0, 1}) and the price it proposes for each demand. A seller j provides its location (xj, yj), price rj, the demand indicators of its charging pile (each in {0, 1}) and the corresponding demand prices. After a user sends a request, the certificate authority sends the public key pk to the user; the electric vehicle buyer and the charging pile seller select random numbers ri and rj respectively and encrypt the provided information mi and mj with pk:

Cmi = Enc_pk(mi) = g^mi * ri^n mod n^2,   (6)

Cmj = Enc_pk(mj) = g^mj * rj^n mod n^2,   (7)

After encryption, the ciphertexts Cm are obtained. The electric vehicle buyer packages Cm together with its preference weights w and sends them to the proxy server; the charging pile seller sends its Cm to the proxy server. Outside the user end, the users' private information exists only in encrypted or processed form.
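As an illustration of Section 3.2, the sketch below shows what an IoT terminal might send. The field names, the fixed-point scaling and the dictionary layout are our own assumptions, and encrypt() and pk refer to the Paillier sketch in Section 2.

SCALE = 100          # Paillier works on integers, so continuous values are fixed-point scaled

def fx(v):
    # fixed-point encode a coordinate or price before encryption
    return int(round(v * SCALE))

# EV buyer i: encrypted location, price and demand fields, plus plaintext preferences
ev_request = {
    "cipher": {
        "x": encrypt(pk, fx(1.25)), "y": encrypt(pk, fx(0.80)),
        "price": encrypt(pk, fx(1.5)),
        "alpha": encrypt(pk, 1),               # demand bit, left unscaled so sums stay 0/1/2
        "alpha_price": encrypt(pk, fx(0.4)),
    },
    "favor": {"loc": 0.7, "val": 0.3},         # preference weights w, sent in the clear
    "d_max": 2.0,                              # farthest acceptable distance, not treated as secret
}

# PCP seller j: the same encrypted fields, without preferences
pcp_offer = {
    "cipher": {
        "x": encrypt(pk, fx(2.10)), "y": encrypt(pk, fx(1.95)),
        "price": encrypt(pk, fx(1.2)),
        "alpha": encrypt(pk, 1),
        "alpha_price": encrypt(pk, fx(0.3)),
    }
}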
3.3. Ciphertext Processing

The proxy server has a certain amount of computing power and can use the additive/subtractive homomorphism of the Paillier encryption scheme to process the ciphertexts as if they were plaintexts. The process is as follows:

Enc(αi) * Enc(αj) mod n^2 = Enc(αi + αj),   (8)

Enc(mi) * Enc(mj)^(-1) mod n^2 = Enc(mi - mj),   (9)

The demands of buyer and seller are combined by homomorphic addition, while the location, price and demand-price information is combined by homomorphic subtraction, yielding the encrypted sum of the demands and the encrypted differences of the other information. The main purpose of (8) is to judge whether the demand of the electric vehicle buyer can be satisfied. The information difference compares what the buyer and the seller provide and reflects how similar the two sides are: the smaller the difference, the closer the seller's offer is to the buyer's preference, so the pair is more suitable and the matching probability is higher; conversely, a larger difference means a smaller matching probability.

3.4. Information Decryption

The cloud server holds the private key sk issued by the CA, including p and q. In the Paillier cryptosystem the main cost of decryption is modular exponentiation under Zn^2; with the private key (the factorization p, q of n), this exponentiation can be converted by the CRT into exponentiations under Zp^2 and Zq^2.

The CRT-optimized functions are Lp(x) = (x-1)/p and Lq(x) = (x-1)/q, and the decryption is split as follows:

hp = (Lp(g^(p-1) mod p^2))^(-1) mod p,   (10)

hq = (Lq(g^(q-1) mod q^2))^(-1) mod q,   (11)

mp = Lp(c^(p-1) mod p^2) * hp mod p,   (12)

mq = Lq(c^(q-1) mod q^2) * hq mod q,   (13)

m = CRT(mp, mq) mod pq,   (14)

CRT(mp, mq mod pq) uses the CRT to evaluate the modular exponentiation. In detail, for a modular exponentiation a^b mod n with n = pq, the CRT converts the computation from Zn to Zp and Zq. The mapping of a^b onto Zp is mp = ap^bp with ap = a mod p and, by Euler's theorem, bp = b mod φ(p), where φ(p) = p - 1 is the Euler function; the mapping mq = aq^bq of a^b onto Zq is computed in the same way. mp and mq are computed separately and then aggregated back:

a^b mod n = (mp * q * (q^(-1) mod p) + mq * p * (p^(-1) mod q)) mod n,   (15)

Because p and q are coprime, (q^(-1) mod p) * q + (p^(-1) mod q) * p = 1; substituting this into formula (15) gives

a^b mod n = (mp + (mq - mp) * (p^(-1) mod q) * p) mod n,   (16)

In the Paillier scheme, the CRT is used in this way to speed up the recovery of the plaintext m. After receiving the processed information, the cloud server uses the private key sk and the Chinese remainder theorem to convert each modular operation under Zn^2 into modular operations under Zp^2 and Zq^2 and then decrypts; for each processed ciphertext C from (8) and (9),

mp = Lp(C^(p-1) mod p^2) * hp mod p,  mq = Lq(C^(q-1) mod q^2) * hq mod q,   (17)

m = CRT(mp, mq) mod pq.   (18)
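The following sketch illustrates the ciphertext processing of formulas (8)-(9) and the CRT-accelerated decryption of formulas (10)-(16). The helper names are ours, the code follows the standard Paillier CRT shortcut rather than our exact implementation, and it reuses pk and encrypt() from the earlier sketch.

def homo_add(pk, c1, c2):          # Enc(m1) * Enc(m2) = Enc(m1 + m2), formula (8)
    n, _ = pk
    return c1 * c2 % (n * n)

def homo_sub(pk, c1, c2):          # Enc(m1) * Enc(m2)^-1 = Enc(m1 - m2), formula (9)
    n, _ = pk
    return c1 * pow(c2, -1, n * n) % (n * n)

def centered(m, n):                # map values in (n/2, n) back to negative differences
    return m - n if m > n // 2 else m

def crt_decrypt(pk, p, q, c):
    # hp, hq could be precomputed once per key; they are recomputed here for clarity
    n, g = pk
    Lp = lambda x: (x - 1) // p
    Lq = lambda x: (x - 1) // q
    hp = pow(Lp(pow(g, p - 1, p * p)), -1, p)          # formula (10)
    hq = pow(Lq(pow(g, q - 1, q * q)), -1, q)          # formula (11)
    mp = Lp(pow(c, p - 1, p * p)) * hp % p             # formula (12), exponentiation only mod p^2
    mq = Lq(pow(c, q - 1, q * q)) * hq % q             # formula (13), exponentiation only mod q^2
    return (mp + p * ((mq - mp) * pow(p, -1, q) % q)) % n   # CRT recombination, formula (16)

# the price difference 1.5 - 1.2 (scaled to 150 - 120) is computed on ciphertexts
c_diff = homo_sub(pk, encrypt(pk, 150), encrypt(pk, 120))
assert centered(crt_decrypt(pk, 1009, 1013, c_diff), pk[0]) == 30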
3.5. System Matching

After decryption with the CRT-optimized scheme, the sums and the differences of the information are obtained. The cloud server first calculates the distance between buyer i and seller j:

ddij = sqrt((xi - xj)^2 + (yi - yj)^2),   (19)

The maximum acceptable distance of the EV buyer i is also encrypted and later decrypted; because it carries no specific location or price information, it is not regarded as sensitive, so the cloud server obtains the same plaintext dimax as the user. To satisfy the matching conditions of buyer i, the direct distance between buyer and seller is compared first: if ddij <= dimax, seller j satisfies the distance condition; if ddij > dimax, the distance between users i and j does not meet the condition and seller j cannot be matched with the buyer. Unqualified sellers are removed by this comparison before proceeding to the next step.

Demand analysis is an interesting part of this paper: different electric vehicle buyers may have different needs. The distance and the price are information that buyer i and seller j must provide; in addition, other optional demands αi and αj can be set, and their sum aαij is obtained through the homomorphically computed information. Three cases arise:

Case 1: aαij = 0, meaning neither user has this demand.

Case 2: aαij = 1, meaning only one of buyer i and seller j has the demand. In Case 1 and Case 2 the corresponding pair (i, j) is removed for this demand, because the demand cannot be provided.

Case 3: aαij = 2, meaning buyer i has the demand and seller j can also provide it. Whether the pair is usable is then judged by the demand price difference drij: i) if drij < 0, the price proposed by i is lower than that proposed by j, and i and j cannot match; ii) if aαij = 2 and drij > 0 hold at the same time, i and j meet the matching conditions and the pair is added to the matching set.

After the matching set satisfying distance and demand has been obtained, the cloud server performs the final price matching. For buyer i, the demand price difference drij with every qualifying seller j is computed through formula (9). The buyer's preferences w contain no information that could leak its location, so they too are treated as non-sensitive values available to the cloud server after decryption with the private key sk. At this point the cloud server holds the location difference ddij, the demand sum aαij, the price difference, the demand price difference drij, the buyer's preference weights w and the number k of sellers in the matching set. For buyer i, each seller j in the matching set is evaluated to obtain a matching index Wij, which formula (20) computes from these decrypted differences weighted by the buyer's preferences w. Wij serves as the evaluation index of how suitable seller j is for buyer i, so the Wij are sorted in ascending order; the smallest value Wijmin indicates the seller j that is currently the most suitable matching object in terms of distance, price and demand, and i is matched with that j.
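A small sketch of the cloud-side filtering and ranking described above follows. Since the exact form of formula (20) cannot be recovered from the extracted text, the index W is computed here as a simple preference-weighted sum with the same intent (smaller W for a closer, cheaper, demand-compatible seller); the data layout is likewise our assumption.

import math

def match_buyer(buyer, sellers):
    candidates = []
    for s in sellers:
        dd = math.dist((buyer["x"], buyer["y"]), (s["x"], s["y"]))    # formula (19)
        if dd > buyer["d_max"]:
            continue                                   # distance condition failed
        a = buyer["alpha"] + s["alpha"]                # demand sum, Cases 0 / 1 / 2
        dr = buyer["alpha_price"] - s["alpha_price"]   # demand price difference
        if buyer["alpha"] and (a != 2 or dr < 0):
            continue                                   # demand unmet or under-priced
        # stand-in for formula (20): preference-weighted distance and price gap
        w = buyer["favor"]["loc"] * dd + buyer["favor"]["val"] * (s["price"] - buyer["price"])
        candidates.append((w, s["id"]))
    return min(candidates, default=None)               # smallest W wins

buyer = {"x": 1.25, "y": 0.80, "price": 1.5, "d_max": 2.0,
         "alpha": 1, "alpha_price": 0.4, "favor": {"loc": 0.7, "val": 0.3}}
sellers = [{"id": "j1", "x": 2.1, "y": 1.9, "price": 1.2, "alpha": 1, "alpha_price": 0.3},
           {"id": "j2", "x": 4.0, "y": 4.0, "price": 1.0, "alpha": 0, "alpha_price": 0.0}]
print(match_buyer(buyer, sellers))   # j2 is filtered out by distance; j1 is matched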
3.6. Matching Result Return

After buyer i's match is determined, the cloud server sends a request to the successfully matched i and j. Users i and j use the parameter-optimized Paillier scheme to generate public keys pki and pkj and send them to the cloud server. The cloud server generates a random number r and encrypts the private key sk with pki and pkj:

Cski = Enc_pki(sk),  Cskj = Enc_pkj(sk),   (21)

The matched result is then packaged and sent to the proxy server, which stores the encrypted address information uploaded by the users. After receiving the packaged result and the encrypted private key, the proxy server looks up the encrypted address information CLoci and CLocj and the seller's encrypted price Crj for the matched pair (i, j). It packages the ciphertexts CLocj and Crj together with the private key sk encrypted under pki and sends them to buyer i, and packages the ciphertext CLoci together with the private key sk encrypted under pkj and sends them to seller j. The buyer i and the seller j use their own private keys to decrypt and obtain sk:

sk = Dec_ski(Cski) at buyer i,  sk = Dec_skj(Cskj) at seller j,   (22)

and then use sk to decrypt the encrypted address information CLoci, CLocj and the price Crj:

loci = Dec_sk(CLoci),  locj = Dec_sk(CLocj),  rj = Dec_sk(Crj),   (23)

At this point buyer i has been matched; the (i+1)-th buyer is matched in the next round, and the matched seller j is removed from the matching set, until the seller set is empty, which indicates that the current round of matching has ended. Users then re-apply to the CA, obtain new public and private key pairs, and start the next round of matching.

4. PERFORMANCE EVALUATION RESULTS

4.1. Number Analysis

In this section we consider a 3 km x 3 km scene with different numbers of buyers I and sellers J. All simulations are implemented in Python and run on a 2.5 GHz Intel Core i5-7300HQ CPU with 32 GB RAM. All results are averaged over 50 simulation runs, which yields consistent results.

The Paillier encryption algorithm is a public-key algorithm based on number theory with good security and time-cost characteristics. The time cost of operating on one piece of data with the original Paillier algorithm is:

Randomly generating the public and private keys: O(1)
Encryption: O(log n)
Decryption: O(log n)
Addition/subtraction of two ciphertexts: O(1)

where n refers to the length of the public key (the number of digits of the modulus).

In our simulation, for I buyers and J sellers the time cost from issuing an application to obtaining the corresponding public key is O(1). For every user the encryption uses the original Paillier scheme, so its cost is O(log n); all users encrypt on their own IoT terminals independently, so this cost is fixed regardless of how many users are being matched, and encrypting k data items costs k*O(log n) per user. Once the IoT terminals have encrypted the information, the proxy server adds/subtracts the ciphertexts at O(1) per operation; every buyer i must be combined with every seller j, so for J sellers and k pieces of information the cost is J*k*O(1), and for I buyers the total proxy-server cost is I*J*k*O(1). Each computed result is decrypted in the cloud server; when all sellers satisfy the requirements there are I*J*k results, each decrypted at a cost of O(log n). After decryption, the matching algorithm computes the matching index wij for the sellers that satisfy buyer i and obtains the minimum wijmin after sorting, at a cost of j*O(1). At this point buyer i and seller j are successfully matched. The time costs of this process are summarized in Table 2.

Table 2. Time cost of the Paillier scheme in this paper

          I buyers     J sellers    k data          Buyer matching    Index ranking
Encrypt   O(log n)     O(log n)     k*O(log n)      /                 /
Process   /            /            k*O(1)          I*J*k*O(1)        j*O(1)
Decrypt   O(log n)     O(log n)     k*O(log n)      /                 /
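To reproduce measurements in the spirit of this subsection, a minimal timing harness such as the following can be used. It is not the benchmark behind the figures that follow; it reuses the sketches from Sections 2 and 3, and with the toy key size shown the absolute numbers are not meaningful, since the roughly one-third decryption time reported later refers to full-size keys.

import time

def bench(label, fn, ciphertexts):
    # time a decryption routine over a batch of ciphertexts
    start = time.perf_counter()
    for c in ciphertexts:
        fn(c)
    print(f"{label}: {(time.perf_counter() - start) * 1000:.1f} ms")

p_, q_ = 1009, 1013                        # toy primes; real keys are far larger
pk_, sk_ = keygen(p_, q_)
batch = [encrypt(pk_, m) for m in range(200)]
bench("decrypt (standard)", lambda c: decrypt(pk_, sk_, c), batch)
bench("decrypt (CRT)", lambda c: crt_decrypt(pk_, p_, q_, c), batch)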
In the decryption process we use the CRT to speed up the calculation. The original Paillier algorithm must perform modular exponentiation under Zn^2; however, when the cloud server knows the private key sk and the corresponding factors p and q, the modular exponentiation under Zn^2 is transformed into exponentiations under Zp^2 and Zq^2, which improves decryption efficiency. The time required to decrypt ciphertexts with the original Paillier scheme and with the CRT-accelerated calculation is shown in Figure 2: after acceleration with the CRT, the decryption time is about 1/3 of that of the Paillier scheme, while the decryption time of the DJN scheme [15] is basically the same as that of the Paillier scheme. Using the CRT to speed up the decryption process is therefore an effective improvement.

Figure 2. Computational overhead of data encryption and decryption with different schemes (decryption overhead in ms against the number of data items, for standard Paillier, CRT-accelerated and DJN decryption).

After the cloud server obtains the matching result, it sends out a successful-matching notification, and buyer i and seller j call the parameter-optimized Paillier algorithm to generate a key pair and send the public key n to the cloud server, while the private key is stored at the user end. Compared with the original Paillier algorithm, the parameter-optimized scheme simplifies the modular exponentiation of encryption into a single modular multiplication, which speeds up the encryption process. Its time efficiency is shown in Figure 3.

Figure 3. Comparison of encryption time cost between the Paillier scheme and the parameter-optimized scheme (computation overhead in ms against the number of data items).

4.2. Correctness Analysis

For a ciphertext c in the Paillier scheme with optimized parameters, correctness is expressed as

L(c^λ mod n^2) * μ mod n = L((1+mn)^λ * r^(nλ) mod n^2) * μ mod n = L(1 + mλn) * μ mod n = mλ * μ mod n = m,   (24)

since r^(nλ) = 1 mod n^2 and μ = λ^(-1) mod n when g = n + 1.

For a ciphertext c in the Paillier scheme decrypted with the CRT optimization, correctness is expressed as

mp = Lp(c^(p-1) mod p^2) * hp mod p = m mod p,  mq = Lq(c^(q-1) mod q^2) * hq mod q = m mod q,  CRT(mp, mq) = m mod pq,   (25)

where hp, hq, mp and mq are obtained from formulas (10), (11), (12) and (13), respectively.

4.3. Security Analysis

First, we assume there are curious buyers and sellers, denoted B, who try to obtain other users' private information from data observed on the network. Before matching is completed, this paper uses the original Paillier encryption scheme; it has been studied extensively, and no polynomial-time algorithm is known to break it, so its security is considered reliable. If B obtains the ciphertexts, or the processed ciphertexts, held by the proxy server through an attack, it cannot extract effective information because it lacks the corresponding private key sk; the information on the proxy server is therefore considered safe and reliable. When the processed information is sent to the cloud server, the cloud server must use the private key sk to decrypt it.
Suppose B obtains, through some special attack, the sums and differences of the decrypted information held in the cloud server; since B also knows its own information, it may try to infer other useful information by comparing its own data with what was obtained in the attack. When B tries to infer another seller's position from the difference between its own position and the obtained information, only a straight-line distance is available, so the inferred information cannot locate the seller's specific position. In summary, even if an attack obtains the encrypted information in the proxy server or the decrypted differences in the cloud server, B cannot infer valid private information; for curious buyers and sellers, our scheme is therefore safe and effective.

Secondly, consider a premeditated attacker C. To prevent C from eavesdropping, all communication between entities in our scheme is encrypted; moreover, the random number r generated for each encryption is different, so the ciphertext of the same message differs every time. Even after attacking the information held by the proxy server and the cloud server, C cannot recover users' location or price information by calculation, and the keys are refreshed after each round of matching. In this case as well, we consider the scheme safe and effective.

5. CONCLUSIONS

In this paper, we address the security problem of shared charging pile schemes with homomorphic encryption. To protect users' location privacy and to provide matching strategies for users with different needs, we design a privacy-preserving shared charging pile scheme oriented to differing user needs. First, the public key is used to encrypt the information on the Internet of Things terminal, which protects private information such as location. Using the homomorphic properties, the ciphertexts needed for matching are processed in the proxy server, and the CRT is used in the cloud server to accelerate decryption. We design the matching rules, compute the matching index W and compare the values to obtain the most suitable matching result. When returning results, the Paillier scheme with optimized parameters is used to speed up the encryption process. Our numerical results show that the decryption time after CRT optimization is about 1/3 of that of the original Paillier scheme and the DJN scheme, and the encryption time after parameter optimization is about 1/3 faster than that of the original Paillier scheme. We also analyzed the security of the scheme: on the public platform it remains secure against both curious users and malicious attackers.

REFERENCES

[1] J. Zhang, H. Yan, N. Ding, J. Zhang, T. Li and S. Su, "Electric Vehicle Charging Network Development Characteristics and Policy Suggestions," 2018 International Symposium on Computer, Consumer and Control (IS3C), 2018, pp. 469-472.
[2] S. Qiao, "Technical Analysis and Research on DC Charging Pile of Electric Vehicle," 2021 International Conference on Smart City and Green Energy (ICSCGE), 2021, pp. 89-93.
[3] Y. Zhang, Y. Wang, F. Li, B. Wu, Y.-Y. Chiang and X.
Zhang, "Efficient Deployment of Electric Vehicle Charging Infrastructure: Simultaneous Optimization of Charging Station Placement and Charging Pile Assignment," IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 10, pp. 6654-6659, Oct. 2021.
[4] A. J. Qarebagh, F. Sabahi and D. Nazarpour, "Optimized Scheduling for Solving Position Allocation Problem in Electric Vehicle Charging Stations," 2019 27th Iranian Conference on Electrical Engineering (ICEE), 2019, pp. 593-597.
[5] H. Hu, S. Ni and L. Zhang, "Analysis of the carrying capacity of charging station based on regional charging demand," 2020 7th International Conference on Information Science and Control Engineering (ICISCE), 2020, pp. 1688-1691.
[6] S. Mallapuram, N. Ngwum, F. Yuan, C. Lu and W. Yu, "Smart city: The state of the art, datasets, and evaluation platforms," 2017 IEEE/ACIS 16th International Conference on Computer and Information Science (ICIS), 2017, pp. 447-452.
[7] I. M. Nafi, S. Tabassum, Q. R. Hassan and F. Abid, "Effect of Electric Vehicle Fast Charging Station on Residential Distribution Network in Bangladesh," 2021 5th International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), 2021, pp. 1-5.
[8] C. Liu, K. T. Chau, D. Wu and S. Gao, "Opportunities and Challenges of Vehicle-to-Home, Vehicle-to-Vehicle, and Vehicle-to-Grid Technologies," Proceedings of the IEEE, vol. 101, no. 11, pp. 2409-2427, Nov. 2013.
[9] Y. Wang, Z. Su and K. Zhang, "A Secure Private Charging Pile Sharing Scheme with Electric Vehicles in Energy Blockchain," 2019 18th IEEE International Conference on Trust, Security and Privacy in Computing and Communications / 13th IEEE International Conference on Big Data Science and Engineering (TrustCom/BigDataSE), 2019, pp. 648-654.
[10] Z. Tong, F. Ye, M. Yan, H. Liu and S. Basodi, "A Survey on Algorithms for Intelligent Computing and Smart City Applications," Big Data Mining and Analytics, vol. 4, no. 3, pp. 155-172, 2021.
[11] T. Anagnostopoulos, C. Luo, J. Ramson, et al., "A multi-agent system for distributed smartphone sensing cycling in smart cities," Journal of Systems and Information Technology, 2020, ahead-of-print.
[12] S. Vidal, "Intelligent transport system in smart cities: aspects and challenges of vehicular networks and cloud," Computing Reviews, no. 7, p. 60, 2019.
[13] P. Paillier, "Public-Key Cryptosystems Based on Composite Degree Residuosity Classes," Advances in Cryptology - EUROCRYPT '99, Prague, Czech Republic, May 2-6, 1999, Springer, Berlin, Heidelberg, 1999.
[14] D. Catalano, R. Gennaro, N. Howgrave-Graham, et al., "Paillier's Cryptosystem Revisited," 2002.
[15] I. Damgård, M. Jurik and J. B. Nielsen, "A generalization of Paillier's public-key system with applications to electronic voting," International Journal of Information Security, vol. 9, no. 6, pp. 371-385, 2010.
[16] https://baijiahao.baidu.com/s?id=1691272446545480912&wfr=spider&for=pc

© 2023 By AIRCC Publishing Corporation. This article is published under the Creative Commons Attribution (CC BY) license.
+ + + diff --git a/XtFRT4oBgHgl3EQfNzdw/content/tmp_files/load_file.txt b/XtFRT4oBgHgl3EQfNzdw/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..b6a96d51c2f9750278987603c65e89d69728c606 --- /dev/null +++ b/XtFRT4oBgHgl3EQfNzdw/content/tmp_files/load_file.txt @@ -0,0 +1,342 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf,len=341 +page_content='David C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' Wyld et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' (Eds): MLSC, ITCSS, ACSTY, SOFE, NATP, BDAB - 2023 pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' 153-166, 2023.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' CS & IT - CSCP 2023 DOI: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content='5121/csit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content='2023.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content='130212 PRIVACY-PRESERVING ONLINE SHARING CHARGING PILE SCHEME WITH DIFFERENT NEEDS MATCHING Zhiyu Huang1,2 1School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan, China 2Hunan Key Laboratory for Service Computing and Novel Software Technology, Xiangtan, China ABSTRACT With the development of electric vehicles, more and more electric vehicles have difficulties in parking and charging.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' One of the reasons is that the number of charging piles is difficult to support the energy supply of electric vehicles, and a large number of private charging piles have a long idle time, so the energy supply problem of electric vehicles can be solved by sharing charging piles.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' The shared charging pile scheme uses Paillier encryption scheme and improved scheme to effectively protect user data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' The scheme has homomorphism of addition and subtraction, and can process information without decryption.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' However, considering that different users have different needs, the matching is carried out after calculating the needs put forward by users.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=" This scheme can effectively protect users' privacy and provide matching mechanisms with different requirements, so that users can better match the appropriate charging piles." 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' The final result shows that its efficiency is better than the original Paillier scheme, and it can also meet the security requirements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' KEYWORDS Private charging pile sharing service, Privacy protection, Demand analysis, Homomorphic encryption, Internet of things 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' INTRODUCTION With the advancement of "carbon peak and carbon neutral" goals and the development of electric vehicles (EVs), EVs have the potential to effectively reduce air pollution caused by daily transportation [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' As the number of EVs on the road increases, the demand for charging infrastructure also increases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' However, the current prevalence and coverage of charging stations are insufficient to meet this demand[2-3].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' Surveys have shown that between 2015 and 2020 in Table 1, the number of EVs and charging stations has been steadily increasing, with a particularly notable increase in the number of private charging stations, from 8,000 in 2015 to 874,000 in 2020[16].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' However, compared to the growth rate of EVs, the number of charging stations is still far from adequate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' As a result, for those EV users who are unable to install charging stations, the problem of charging difficulties is becoming increasingly apparent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' In 2020, the Chinese government proposed to include charging stations as one of the fields of the nation\'s "new infrastructure", with an estimated investment of approximately 10 billion to build charging stations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' According to international data surveys, it is expected that by 2030, there will be 5 million EVs on the road in California alone, and 12-24 million private charging stations and 10- 20 million public charging stations globally.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' Charging facilities have become an indispensable 154 Computer Science & Information Technology (CS & IT) infrastructure in new energy development planning [4-5].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' Considering the high installation cost of charging piles [6], other technologies are needed to make up for the shortcomings of charging piles.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' Table 1 Approximate number of electric vehicles and charging piles in the world Year 2015 2016 2017 2018 2019 2020 Number of electric vehicle 570 1280 1840 2740 3890 4840 Number of public charging piles 58 149 240 387 516 807 Number of private charging piles 8 63 232 477 703 874 With the rapid development of Internet of Things technology, Internet of Things devices connect everything and have gradually entered the mode of Internet of Everything [7-9].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' As one of the applications of the Internet of Things, the Internet of Vehicles can realize the information exchange between vehicles and provide certain research value and commercial value.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' The application of V2X technology of the Internet of Vehicles in cloud (edge) computing is the cornerstone of building a smart city and smart transportation [10-12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' At present, the research on shared charging pile is still in the initial stage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' The traditional charging pile sharing scheme generally consists of three entities, including charging pile provider, electric vehicle and matching server, in which both buyers and sellers upload their own information to the server for matching calculation, and the server returns the matching results to both buyers and sellers, as shown in Fig 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=" However, in the traditional charging pile sharing scheme, the user's information is published or uploaded to the server through simple encryption, and the server needs to decrypt the participants' information to get their plaintext information." metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=" Therefore, in the traditional scheme, the user's privacy may be attacked and leaked." metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' In the traditional charging pile sharing system, all information will be published directly on the Internet.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' One of the biggest problems faced by the system is that users will expose their private information to the public platform when they apply for it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' For example, a malicious user has used a certain charging station.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' The charging pile is marked and recorded, and he may not use it directly through the platform when he knows that the charging pile is unmanaged for a period of time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' At the same time, there is also the possibility that the shared service platform exposes the privacy of customers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=" Because the location information of electric vehicles may include workplaces, home addresses, special hospitals or frequently visited entertainment venues, buyers' hobbies and health status information are leaked, and the privacy of charging pile sellers will also be greatly affected threaten." metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' On the other hand, once the information of buyers and sellers is obtained by malicious attackers, not only will there be profitable and targeted advertisements, but also related work and home addresses will be threatened, and may even lead to personal safety.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' Therefore, in order not to disclose the private information of customers, it is necessary to design a secure service platform.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' This paper proposes the use of homomorphic encryption technology to protect the privacy of users.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' In order to meet the above challenges, the main contributions of this paper are summarized as follows: 1) We use homomorphic encryption technology to encrypt user information, and at the same time use homomorphic characteristics to process ciphertext, and match the obtained results in the cloud server.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=" In the public service platform, users' effective information will not be exposed, and matching can be completed efficiently." metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' Computer Science & Information Technology (CS & IT) 155 2) For users with different needs, we designed the demand parameter $\\omega$.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' Through matching calculation, we can get the matching index parameter W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' By comparing W, we can get the most suitable buyers and sellers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' This requirement parameter can better match users with different requirements.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' 3) We use chinese remainder theorem (CRT) to speed up the modular exponentiation in the decryption process of cloud server, CRT is used to convert ab from Zn^2 to Zp^2 and Zq^2 for calculation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' We use Paillier scheme with optimized parameters, which can speed up the encryption calculation although it loses homomorphism.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' The remainder of this paper is organized as follows: In Section 2, we introduce the homomorphic encryption, parameter optimization of Paillier scheme and china remainder theorem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' In Section 3, we introduce the system model and present the proposed scheme.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' In Section 4, we describe the performance evaluation results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' Finally, we conclude the paper in Section 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' RELATED WORK In this section, homomorphic encryption technology, parameter optimization of Paillier scheme and Chinese remainder theorem are introduced.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' Homomorphic Encryption Encryption technology is often used to protect privacy, among which homomorphic encryption is a special encryption method, which has the characteristics of directly calculating encrypted data, such as addition and multiplication, and will not reveal any information of the original text during the calculation process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' And the calculated result is also encrypted, and the result obtained after decrypting the processed ciphertext with the key is exactly the result obtained after processing the original text.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' Paillier scheme has the homomorphism of addition/subtraction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/XtFRT4oBgHgl3EQfNzdw/content/2301.13511v1.pdf'} +page_content=' For plaintext m1 and m2, there is a function E () that makes E(m1+m2)=E(m1)\\cdot E(m2).' 
The Paillier scheme satisfies the standard semantic security of an encryption scheme [13]: the ciphertext is indistinguishable under chosen-plaintext attack (IND-CPA), so the ciphertext leaks no information about the plaintext. Its security is proved under the decisional composite residuosity assumption. So far no algorithm can break it in polynomial time, so the Paillier encryption scheme is considered secure. The detailed process includes the following steps.
• KeyGen(): Pick two prime numbers p and q, compute n = p·q and λ = lcm(p−1, q−1). Choose a random number g with gcd(L(g^λ mod n^2), n) = 1 and compute μ = (L(g^λ mod n^2))^(−1) mod n, where L(x) = (x−1)/n. The public and private keys are pk = (n, g) and sk = (λ, μ), respectively.
• Encrypt(): Take the plaintext message m and select a random number r. Encrypt the plaintext as
c = g^m · r^n mod n^2 .   (1)
• Decrypt(): Take the ciphertext c and recover the plaintext message as
m = L(c^λ mod n^2) · μ mod n .   (2)
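The following is a minimal Python sketch of the textbook scheme above, written by us purely for illustration (tiny primes, non-cryptographic randomness, and g fixed to n + 1, a common valid choice); it also checks the additive homomorphism E(m1)·E(m2) = E(m1 + m2) from Section 2.1. The function names are our own.

import math, random

def L(x, n):
    return (x - 1) // n                      # L(x) = (x - 1) / n

def keygen(p, q):
    n = p * q
    lam = math.lcm(p - 1, q - 1)             # lambda = lcm(p-1, q-1)
    g = n + 1                                # a valid g; cf. Section 2.2
    mu = pow(L(pow(g, lam, n * n), n), -1, n)
    return (n, g), (lam, mu)                 # pk = (n, g), sk = (lambda, mu)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2          # equation (1)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return L(pow(c, lam, n * n), n) * mu % n           # equation (2)

pk, sk = keygen(1009, 1013)
c1, c2 = encrypt(pk, 42), encrypt(pk, 58)
assert decrypt(pk, sk, c1) == 42
assert decrypt(pk, sk, c1 * c2 % (pk[0] ** 2)) == 100  # E(42) * E(58) decrypts to 100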
2.2. Parameter Optimization of Paillier Scheme
In order to simplify computation without affecting the algorithm's correctness, the algorithm may take g = n + 1 during the key generation phase [14]. This simplifies the calculation of g^m during the encryption process. For g^m = (n+1)^m, the binomial theorem expresses g^m as a sum of binomial coefficients multiplied by the corresponding powers of n, each term of which can be calculated efficiently:
g^m = (n+1)^m = Σ_{k=0}^{m} C(m, k) · n^k ≡ 1 + m·n (mod n^2) .   (3)
Since all terms with k ≥ 2 are multiples of n^2, they are eliminated under the modulo-n^2 operation, so this modular exponentiation ultimately simplifies to a single modular multiplication, which accelerates the encryption process:
c = (1 + m·n) · r^n mod n^2 .   (4)
Decryption of a ciphertext c is unchanged (with g = n + 1 the key component is μ = λ^(−1) mod n):
m = L(c^λ mod n^2) · μ mod n .   (5)
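A short Python check of this optimization (ours, reusing pk, sk, encrypt and decrypt from the sketch above, where g = n + 1 already): the simplified encryption of equation (4) produces exactly the same ciphertext as the generic formula of equation (1).

def encrypt_fast(pk, m, r):
    # with g = n + 1, g^m mod n^2 collapses to 1 + m*n (equation (3)),
    # so only one modular multiplication is needed (equation (4))
    n, _ = pk
    n2 = n * n
    return (1 + m * n) % n2 * pow(r, n, n2) % n2

n, g = pk
n2 = n * n
r = 12345                                              # any r coprime to n
assert encrypt_fast(pk, 77, r) == pow(g, 77, n2) * pow(r, n, n2) % n2
assert decrypt(pk, sk, encrypt_fast(pk, 77, r)) == 77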
2.3. Chinese Remainder Theorem
The Chinese remainder theorem (CRT), also known as the Sunzi theorem, originates from the ancient Chinese mathematical treatise "Sunzi Suanjing" and describes an isomorphism between two algebraic spaces: an algebraic space can be decomposed into several mutually independent subspaces, and the original space corresponds one-to-one to the decomposed space, like two forms of the same space. Specifically, when n = pq with p and q relatively prime, there is the isomorphism Zn ≅ Zp × Zq: an element a mod n corresponds one-to-one to the pair (a mod p, a mod q), so operations under mod n can be transformed into operations under mod p and mod q. In this form the calculation is more efficient, and the property can therefore be used to accelerate modular exponentiation under mod n.
3. THE PROPOSED SCHEME
3.1. System Model
The matching scheme for shared charging piles consists of multiple electric vehicle buyers, multiple charging pile sellers, multiple edge proxy servers, a cloud server and a certificate certification center. All entities communicate through the mobile network. Figure 1 describes our system model.
Electric vehicles (EVs): As users of shared charging piles, EVs send out a charging request when a charging pile is needed; the EV set is expressed as {1, …, i, …, I}. After receiving the response, the EV obtains the public key for information encryption, and its Internet-of-Things terminal equipment encrypts the information to be sent with the public key and sends it to the nearest proxy server.
Private charging piles (PCPs): As providers of shared charging piles, there are J private charging piles in a given area, and the collection of charging piles is denoted {1, …, j, …, J}. Each PCP is managed by its owner and is equipped with a socket for EV charging. When a PCP has free time, it issues an application for energy supply; the provided information is encrypted on the Internet-of-Things terminal equipment with the provided public key, and the encrypted information is then sent to the nearest proxy server.
Proxy server: The proxy server has a certain amount of computing power and is mainly responsible for collecting the encrypted information provided by nearby electric vehicle buyers and charging pile sellers who apply for matching, and for computing on the encrypted information using its homomorphic properties. Throughout the calculation the important information is protected by Paillier encryption, and the edge proxy server obtains no useful information.
Cloud server: The cloud server has powerful computing power and can process the encrypted information sent by the proxy servers. After processing the information, it uses the proposed matching scheme to match buyers and sellers. After the matching, the best matching object is obtained, and then the next round of matching is carried out.
Certification authority (CA): The certification authority is the only authoritative identity certification institution and is completely reliable. All user entities need to be registered and authenticated by the certificate authority; when a user sends an application, the corresponding public and private key pair is generated by the key management center of the certificate authority and sent to the corresponding user. The information of the certification center is absolutely confidential, and there is no possibility of collusion.
Figure 1: System model. (EVs and charging piles submit Enc(loc, val) together with their preference weights, e.g. favor: loc = 0.7, val = 0.3; loc = 0.5, val = 0.5; loc = 0.3, val = 0.7, to the nearest proxy server; the proxy servers compute differential values of the ciphertexts and pass them to the cloud server, which performs the matching with the preferences and returns the result; the certification authority issues the keys.)
3.2. Requests and Information Encryption
In the shared charging pile matching system, the certificate certification center is responsible for managing and issuing public keys and maintaining the public key information. In the system, electric vehicle buyers and charging pile sellers provide information including location, price, demand and other data.
Buyers of electric vehicles need to provide their own location (xi, yi), a proposed price ri, the farthest acceptable distance dimax, a demand vector whose entries are in {0, 1}, and the price proposed for the demand. Sellers of charging piles need to provide the location (xj, yj), the price rj, the demand vector of the charging pile and the corresponding price. After a user sends a request, the certificate certification center sends the public key pk to the user; the electric vehicle buyer and the charging pile seller select random numbers ri and rj respectively, and use pk together with ri and rj to encrypt the provided information. Each item m of the buyer's information is encrypted as
Cm = (1 + m·n) · ri^n mod n^2 ,   (6)
and each item m of the seller's information as
Cm = (1 + m·n) · rj^n mod n^2 .   (7)
After encryption, the ciphertexts Cm are obtained. The electric vehicle buyers package Cm and w and send them to the proxy server; the charging pile sellers send Cm to the proxy server. Outside the user end, the private information of users exists only in encrypted or processed form.
3.3. Ciphertext Processing
The proxy server has a certain amount of computing power and can use the additive/subtractive homomorphism of the Paillier encryption scheme to process the ciphertext just as if it were the plaintext. For an item a provided by the buyer and an item b provided by the seller, the process is as follows:
Enc(a + b) = Ca · Cb mod n^2 ,   (8)
Enc(a − b) = Ca · (Cb)^(−1) mod n^2 .   (9)
The demands of buyers and sellers are combined by homomorphic addition, while information such as location, price and demand price is combined by homomorphic subtraction, yielding the sum of the processed information and the differences of the information. Among these, the main function of (8) is to judge whether the demand of the electric vehicle buyers can be met.
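As an illustration of this ciphertext processing, here is a sketch of ours (reusing pk, sk, encrypt and decrypt from the Paillier sketch in Section 2.1; the helper names are our own) in which the proxy combines ciphertexts without ever decrypting them:

n = pk[0]
n2 = n * n

def he_add(ca, cb):
    return ca * cb % n2                      # Enc(a) * Enc(b) = Enc(a + b), equation (8)

def he_sub(ca, cb):
    return ca * pow(cb, -1, n2) % n2         # Enc(a) * Enc(b)^(-1) = Enc(a - b), equation (9)

# proxy side: encrypted inputs from a buyer and a seller
c_price_i, c_price_j = encrypt(pk, 30), encrypt(pk, 25)
c_demand_i, c_demand_j = encrypt(pk, 1), encrypt(pk, 1)

c_sum = he_add(c_demand_i, c_demand_j)       # encrypted demand sum
c_diff = he_sub(c_price_i, c_price_j)        # encrypted price difference
# note: a negative difference a - b decrypts to n - (b - a) and must be re-centred

# only the cloud server, which holds sk, can read the results
assert decrypt(pk, sk, c_sum) == 2
assert decrypt(pk, sk, c_diff) == 5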
The information difference is a comparison of the differences between the information provided by buyers and sellers, and it reflects the similarity of the two parties' information. The smaller the information difference, the closer the information provided by the charging pile seller is to the preference of the electric vehicle buyer, so the pair is more suitable for matching and has a higher matching probability. Conversely, a larger information difference between the two parties means that the matching probability will be smaller.
3.4. Information Decryption
The cloud server owns the private key sk issued by the CA, including p and q. In the Paillier cryptosystem, the main cost of decryption is the modular exponentiation under Zn^2. With the private key (the factorization p, q of n), the modular exponentiation under Zn^2 can be converted into exponentiations under Zp^2 and Zq^2 by the CRT. The optimization functions for the CRT are Lp(x) = (x−1)/p and Lq(x) = (x−1)/q, and the decryption is split using the following mathematical principles:
mp = Lp(c^(p−1) mod p^2) · hp mod p ,   (10)
mq = Lq(c^(q−1) mod q^2) · hq mod q ,   (11)
hp = (Lp(g^(p−1) mod p^2))^(−1) mod p ,   (12)
hq = (Lq(g^(q−1) mod q^2))^(−1) mod q ,   (13)
m = CRT(mp, mq) mod pq .   (14)
Here CRT(mp, mq mod pq) uses the CRT to evaluate the modular exponentiation, and the detailed process is as follows. For a modular exponentiation a^b mod n with n = pq, the CRT converts a^b from Zn to Zp and Zq. Compute the image of a^b in Zp as mp = ap^bp, where ap = a mod p and, by Euler's theorem, the exponent can be reduced modulo the Euler function φ(p) = p − 1. Compute the image mq = aq^bq in Zq in the same way as mp. Compute mp and mq separately and then aggregate them back:
m = (mp · q · (q^(−1) mod p) + mq · p · (p^(−1) mod q)) mod pq .   (15)
Because p and q are coprime, q · (q^(−1) mod p) + p · (p^(−1) mod q) ≡ 1 (mod pq). Substituting this into formula (15), the recombination can equivalently be computed as
m = mq + q · ((mp − mq) · (q^(−1) mod p) mod p) .   (16)
In the Paillier scheme, the CRT is used in this way to speed up decryption and recover the plaintext m. After receiving the processed information, the cloud server uses the private key sk and the Chinese remainder theorem to convert the modular operations under Zn^2 into modular operations under Zp^2 and Zq^2, and then decrypts:
mp = Lp(c^(p−1) mod p^2) · hp mod p ,  mq = Lq(c^(q−1) mod q^2) · hq mod q ,   (17)
m = CRT(mp, mq) mod pq .   (18)
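A sketch of ours of this CRT-accelerated decryption in Python, reusing pk = (n, g) with g = n + 1, the primes p = 1009, q = 1013 and the functions encrypt and decrypt from the earlier Paillier sketches; it checks that the CRT route recovers the same plaintext as the direct decryption.

p, q = 1009, 1013
n, g = pk

def Lp(x): return (x - 1) // p
def Lq(x): return (x - 1) // q

hp = pow(Lp(pow(g, p - 1, p * p)), -1, p)      # equation (12)
hq = pow(Lq(pow(g, q - 1, q * q)), -1, q)      # equation (13)

def decrypt_crt(c):
    mp = Lp(pow(c, p - 1, p * p)) * hp % p     # equation (10)
    mq = Lq(pow(c, q - 1, q * q)) * hq % q     # equation (11)
    # recombine mp and mq with the CRT, equations (14)-(16)
    return (mq + q * ((mp - mq) * pow(q, -1, p) % p)) % (p * q)

c = encrypt(pk, 4242)
assert decrypt_crt(c) == decrypt(pk, sk, c) == 4242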
3.5. System Matching
After decryption with the CRT-optimized decryption scheme, the sums of the information and the differences between the information are obtained. From the decrypted information, first calculate the distance between buyer i and seller j:
dij = sqrt((xi − xj)^2 + (yi − yj)^2) .   (19)
The maximum acceptable distance of the EV buyer i is also encrypted and decrypted; since the maximum acceptable distance does not carry specific information such as location and price, it is not regarded as important private information, so the cloud server obtains the same plaintext dimax as the user. To meet the matching conditions of buyer i, the direct distance dij between buyer and seller must first be compared with dimax.

We give a smoothed interpretation ⟦M⟧η (for an accuracy coefficient η > 0) of programs in the (new) cartesian closed category VectFr, which generalises smooth manifolds and extends Frölicher spaces (see e.g. [13,35]) with a vector space structure. Intuitively, we replace the Heaviside step-function usually arising in the interpretation of conditionals by smooth approximations. In particular, we interpret the conditional of Example 1 as
⟦if s + θ < 0 then 0 else 1⟧η(θ, s) := ση(s + θ)
where ση is a smooth function. For instance we can choose ση(x) := σ(x/η) where σ(x) := 1/(1 + exp(−x)) is the (logistic) sigmoid function (cf. Fig. 2). Thus, the program M is interpreted by a smooth function ⟦M⟧η, for which the reparameterisation gradient may be estimated unbiasedly. Therefore, we apply stochastic gradient descent on the smoothed program.
[Figure: ELBO against iteration (0 to 10,000) for the SCORE and REPARAM gradient estimators.]
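As a concrete illustration of this smoothing (a minimal NumPy sketch written by us; the helper names are our own), the Heaviside interpretation of the conditional above is replaced by the logistic approximation ση, which approaches the step function as η shrinks:

import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))                  # logistic sigmoid

def sigma_eta(x, eta):
    z = np.clip(x / eta, -60.0, 60.0)                # clip only to avoid overflow in exp
    return sigma(z)                                  # smooth approximation of [x >= 0]

def heaviside_semantics(theta, s):
    return np.where(s + theta < 0.0, 0.0, 1.0)       # standard semantics of: if s + theta < 0 then 0 else 1

def smoothed_semantics(theta, s, eta):
    return sigma_eta(s + theta, eta)                 # eta-smoothed semantics

theta = 0.3
s = np.linspace(-2.0, 2.0, 9)
for eta in (1.0, 0.1, 0.01):
    gap = np.abs(smoothed_semantics(theta, s, eta) - heaviside_semantics(theta, s)).max()
    print(eta, gap)                                  # the gap shrinks as eta decreases,
                                                     # except arbitrarily close to the jump at s = -theta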
+Contributions +The high-level contribution of this paper is laying a theoretical foundation for +correct yet efficient (variational) inference for probabilistic programming. We +employ a smoothed interpretation of programs to obtain unbiased (reparame- +terisation) gradient estimators and establish technical pre-conditions by type +systems. In more detail: +1. We present a simple (higher-order) programming language with conditionals. +We employ trace types to capture precisely the samples drawn in a fully eager +call-by-value evaluation strategy. +2. We endow our language with both a (measurable) denotational value seman- +tics and a smoothed (hence approximate) value semantics. For the latter we +furnish a categorical model based on Frölicher spaces. +3. We develop type systems enforcing vital technical pre-conditions: unbiased- +ness of the reparameterisation gradient estimator and the correctness of +stochastic gradient descent, as well as the uniform convergence of the smooth- +ing to the original problem. Thus, our smoothing approach in principle yields +correct solutions up to arbitrary error tolerances. +4. We conduct an empirical evaluation demonstrating that our approach ex- +hibits a similar convergence to an unbiased correction of the reparameterised +gradient estimator by [23] – our main baseline. However our estimator is sim- +pler and more efficient: it is faster and attains orders of magnitude reduction +in work-normalised variance. +Outline. In the next section we introduce a simple higher-order probabilistic pro- +gramming language, its denotational value semantics and operational semantics; +Optimisation Problem 1 is then stated. Section 3 is devoted to a smoothed deno- +tational value semantics, and we state the Smooth Optimisation Problem 2. In +Sections 4 and 5 we develop annotation based type systems enforcing the correct- +ness of SGD and the convergence of the smoothing, respectively. Related work +is briefly discussed in Section 6 before we present the results of our empirical +evaluation in Section 7. We conclude in Section 8 and discuss future directions. +Notation. We use the following conventions: bold font for vectors and lists, ++ +for concatenation of lists, ∇θ for gradients (w.r.t. θ),[φ] for the Iverson bracket of +a predicate φ and calligraphic font for distributions, in particular N for normal +distributions. Besides, we highlight noteworthy items using red. + +6 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +2 +A Simple Programming Language +In this section, we introduce our programming language, which is the simply- +typed lambda calculus with reals, augmented with conditionals and sampling +from continuous distributions. +2.1 +Syntax +The raw terms of the programming language are defined by the grammar: +M ::= x | θi | r | + | · | − | −1 | exp | log +| if M < 0 then M else M | sample D | λx. M | M M +where x and θi respectively range over (denumerable collections of) variables and +parameters, r ∈ R, and D is a probability distribution over R (potentially with a +support which is a strict subset of R). As is customary we use infix, postfix and +prefix notation: M + N (addition), M · N (multiplication), M −1 (inverse), and +−M (numeric negation). We frequently omit the underline to reduce clutter. +Example 2 (Encoding the ELBO for Variational Inference). +We consider the +example used by [23] in their Prop. 2 to prove the biasedness of the reparam- +eterisation gradient. (In Example 1 we discussed a simplified version thereof.) 
+The joint density is +p(z) := N(z | 0, 1) · +� +N(0 | −2, 1) +if z < 0 +N(0 | 5, 1) +otherwise +and they use a variational family with density qθ(z) := N(z | θ, 1), which is +reparameterised using a standard normal noise distribution and transformation +s �→ s + θ. +First, we define an auxiliary term for the pdf of normals with mean m and +standard derivation s: +N ≡ λx, m, s. +�√ +2π · s +�−1 · exp +� +−0.5 · +� +(x + (−m)) · s−1�2� +Then, we can define +M ≡ +� +λz. log (N z 0 1) + (if z < 0 then log (N 0 (−2) 1) else log (N 0 5 1)) +� +�� +� +log p +− +log (N z θ 1) +� +�� +� +log qθ +� � +sample N + θ +� +2.2 +A Basic Trace-Based Type System +Types are generated from base types (R and R>0, the reals and positive reals) +and trace types (typically Σ, which is a finite list of probability distributions) + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +7 +as well as by a trace-based function space constructor of the form τ • Σ → τ ′. +Formally types are defined by the following grammar: +trace types +Σ ::= [D1, . . . , Dn] +n ≥ 0 +base types +ι ::= R | R>0 +safe types +σ ::= ι | σ • [] → σ +types +τ ::= ι | τ • Σ → τ +where Di are probability distributions. Intuitively a trace type is a description +of the space of execution traces of a probabilistic program. Using trace types, a +distinctive feature of our type system is that a program’s type precisely charac- +terises the space of its possible execution traces [24]. We use list concatenation +notation ++ for trace types, and the shorthand τ1 → τ2 for function types of the +form τ1 • [] → τ2. Intuitively, a term has type τ • Σ → τ ′ if, when given a value +of type τ, it reduces to a value of type τ ′ using all the samples in Σ. +Dual context typing judgements of the form, Γ | Σ ⊢ M : τ, are defined +in Fig. 3b, where Γ = x1 : τ1, · · · , xn : τn, θ1 : τ ′ +1, · · · , θm : τ ′ +m is a finite map +describing a set of variable-type and parameter-type bindings; and the trace type +Σ precisely captures the distributions from which samples are drawn in a (fully +eager) call-by-value evaluation of the term M. +The subtyping of types, as defined in Fig. 3a, is essentially standard; for +contexts, we define Γ ⊑ Γ ′ if for every x : τ in Γ there exists x : τ ′ in Γ ′ such +that τ ′ ⊑ τ. +Trace types are unique (cf. Appendix A.1): +Lemma 1. If Γ | Σ ⊢ M : τ and Γ | Σ′ ⊢ M : τ ′ then Σ = Σ′. +A term has safe type σ if it does not contain sample D or σ is a base type. +Thus, perhaps slightly confusingly, we have +| [D] ⊢ sample D : R, and R +is considered a safe type. Note that we use the metavariable σ to denote safe +types. +Conditionals. The branches of conditionals must have a safe type. Otherwise it +would not be clear how to type terms such as +M ≡ if x < 0 then (λx. sample N ) else (λx. sample E + sample E) +N ≡ (λf. f (f sample N )) M +because the branches draw a different number of samples from different distribu- +tions, and have types R•[N] → R and R•[E, E] → R, respectively. However, for +M ′ ≡ if x < 0 then sample N else sample E + sample E we can (safely) type +x : R | [N, E, E] ⊢ M ′ : R +| [] ⊢ λx. M ′ : R • [N, E, E] → R +| [N, N, E, E, N, E, E] ⊢ (λf. f (f sample N )) (λx. M ′) : R + +8 +Basim Khajwal, C.-H. 
Luke Ong, and Dominik Wagner(�) +ι ⊑ ι +R>0 ⊑ R +τ ′ +1 ⊑ τ1 +τ2 ⊑ τ ′ +2 +(τ1 • Σ → τ2) ⊑ (τ ′ +1 • Σ → τ ′ +2) +(a) Subtyping +Γ | Σ ⊢ M : τ +Γ ′ | Σ ⊢ M : τ ′ Γ ⊑ Γ ′, τ ⊑ τ ′ +x : τ | [] ⊢ x : τ +| [] ⊢ r : R r ∈ R +| [] ⊢ r : R>0 +r ∈ R>0 +| [] ⊢ ◦ : R → R → R ◦ ∈ {+, ·} +| [] ⊢ ◦ : R>0 → R>0 → R>0 +◦ ∈ {+, ·} +| [] ⊢ − : R → R +| [] ⊢ −1 : R>0 → R>0 +| [] ⊢ exp : R → R>0 +| [] ⊢ log : R>0 → R +Γ | Σ ⊢ L : R +Γ | Σ′ ⊢ M : σ +Γ | Σ′′ ⊢ N : σ +Γ | Σ ++ Σ′ ++ Σ′′ ⊢ if L < 0 then M else N : σ +| [D] ⊢ sample D : R +Γ, y : τ1 | Σ ⊢ M : τ2 +Γ | [] ⊢ λy. M : τ1 • Σ → τ2 +Γ | Σ1 ⊢ M : τ1 • Σ3 → τ2 +Γ | Σ2 ⊢ N : τ1 +Γ | Σ1 ++ Σ2 ++ Σ3 ⊢ M N : τ2 +(b) Typing judgments +Fig. 3: A Basic Trace-based Type System +Example 3. Consider the following terms: +L ≡ λx. sample N + sample N +M ≡ if x < 0 then (λy. y + y) sample N else (sample N + sample N ) +We can derive the following typing judgements: +| [] ⊢ L : R>0 • [N, N] → R +x : R>0 | [N, N, N] ⊢ M : R +| [] ⊢ λx. M : R>0 • [N, N, N] → R +| [N, N, N, N] ⊢ (λx. M) sample N : R +| [N, N] ⊢ (λf. f (f 0)) (λx. sample N ) : R +Note that if x < 0 then (λx. sample N ) else (λx. x) is not typable. +2.3 +Denotational Value Semantics +Next, we endow our language with a (measurable) value semantics. It is well- +known that the category of measurable spaces and measurable functions is not +cartesian-closed [1], which means that there is no interpretation of the lambda + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +9 +calculus as measurable functions. These difficulties led [15] to develop the cate- +gory QBS of quasi-Borel spaces. In Appendix A.2 we recall the definition. No- +tably, morphisms can be combined piecewisely, which we need for conditionals. +We interpret our programming language in the category QBS of quasi-Borel +spaces. Types are interpreted as follows: +�R� := (R, MR) +�R>0� := (R>0, MR>0) +�[D1, . . . , Dn]� := (R, MR)n +�τ1 • Σ → τ2� := �τ1� × �Σ� ⇒ �τ2� +where MR is the set of measurable functions R → R; similarly for MR>0. (As for +trace types, we use list notation (and list concatenation) for traces.) +We first define a handy helper function for interpreting application. For f : +�Γ� × Rn1 ⇒ �τ1 • Σ3 → τ2� and g : �Γ� × Rn2 ⇒ �τ1� define +f @ g : �Γ� × Rn1+n2+|Σ3| ⇒ �τ2� +(γ, s1 ++ s2 ++ s3) �→ f(γ, s1)(g(γ, s2), s3) +s1 ∈ Rn1, s2 ∈ Rn2, s3 ∈ R|Σ3| +We interpret terms-in-context, �Γ | Σ ⊢ M : τ� : �Γ�×�Σ� → �τ�, as follows: +�Γ | [D] ⊢ sample D : R�(γ, [s]) := s +�Γ | [] ⊢ λy. M : τ1 • Σ → τ2�(γ, []) := +(v, s) ∈ �τ1� × �Σ� �→ �Γ, x : τ1 | Σ ⊢ M : τ2�((γ, v), s) +�Γ | Σ1 ++ Σ2 ++ Σ3 ⊢ M N : τ� := +�Γ | Σ1 ⊢ M : τ1 • Σ3 → τ2� @ �Γ | Σ2 ⊢ N : τ1� +�Γ | Σ1 ++ Σ2 ++ Σ3 ⊢ if L < 0 then M else N : τ�(γ, s1 ++ s2 ++ s3)) := +� +�Γ | Σ2 ⊢ M : τ�(γ, s2) +if �Γ | Σ1 ⊢ L : R�(γ, s1) < 0 +�Γ | Σ3 ⊢ N : τ�(γ, s3) +otherwise +It is not difficult to see that this interpretation of terms-in-context is well- +defined and total. For the conditional clause, we may assume that the trace type +and the trace are presented as partitions Σ1 ++ Σ2 ++ Σ3 and s1 ++ s2 ++ s3 +respectively. This is justified because it follows from the judgement Γ | Σ1 ++ +Σ2 ++ Σ3 ⊢ if L < 0 then M else N : τ that Γ | Σ1 ⊢ L : R, Γ | Σ2 ⊢ M : σ +and Γ | Σ3 ⊢ N : σ are provable; and we know that each of Σ1, Σ2 and Σ3 is +unique, thanks to Lemma 1; their respective lengths then determine the partition +s1 ++ s2 ++ s3. Similarly for the application clause, the components Σ1 and Σ2 +are determined by Lemma 1, and Σ3 by the type of M. 
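To make the value semantics concrete, the following sketch (ours, in plain NumPy) writes out the denotation of the term M from Example 2 directly as a function of the parameter θ and a one-sample trace [s], and estimates its expectation, the ELBO, by Monte Carlo; the function and variable names are our own.

import numpy as np

LOG_2PI = np.log(2.0 * np.pi)

def log_normal_pdf(x, m, s):
    # log of the auxiliary density term N from Example 2
    return -0.5 * LOG_2PI - np.log(s) - 0.5 * ((x - m) / s) ** 2

def den_M(theta, trace):
    # [[M]](theta, [s]): the latent z is the reparameterised draw s + theta
    s, = trace
    z = s + theta
    log_p = log_normal_pdf(z, 0.0, 1.0) + np.where(
        z < 0.0, log_normal_pdf(0.0, -2.0, 1.0), log_normal_pdf(0.0, 5.0, 1.0))
    log_q = log_normal_pdf(z, theta, 1.0)
    return log_p - log_q

rng = np.random.default_rng(0)
theta = 1.5
samples = rng.standard_normal(100_000)
elbo = den_M(theta, [samples]).mean()   # Monte Carlo estimate of E_s[[[M]](theta, [s])], vectorised over traces
print(theta, elbo)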
+2.4 +Relation to Operational Semantics +We can also endow our language with a big-step CBV sampling-based semantics +similar to [7,26], as defined in Fig. 6 of Appendix A. We write M ⇓s +w V to mean +that M reduces to value V , which is a real constant or an abstraction, using the +execution trace s and accumulating weight w. + +10 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +Based on this, we can define the value- and weight-functions: +valueM(s) := +� +V +if M ⇓s +w V +undef +otherwise +weightM(s) := +� +w +if M ⇓s +w V +0 +otherwise +Our semantics is a bit non-standard in that for conditionals we evaluate +both branches eagerly. The technical advantage is that for every (closed) term- +in-context, | [D1, · · · , Dn] ⊢ M : ι, M reduces to a (unique) value using exactly +the traces of the length encoded in the typing, i.e., n. +So in this sense, the operational semantics is “total”: there is no divergence. +Notice that there is no partiality caused by partial primitives such as 1/x, thanks +to the typing. +Moreover there is a simple connection to our denotational value semantics: +Proposition 1. Let | [D1, . . . , Dn] ⊢ M : ι. Then +1. dom(valueM) = Rn +2. �M� = valueM +3. weightM(s) = �n +j=1 pdfDj(sj) +2.5 +Problem Statement +We are finally ready to formally state our optimisation problem: +Problem 1. +Optimisation +Given: term-in-context, θ1 : ι1, · · · , θm : ιm | [D1, . . . , Dn] ⊢ M : R +Find: +argminθ Es1∼D1,...,sn∼Dn [�M�(θ, s)] +3 +Smoothed Denotational Value Semantics +Now we turn to our smoothed denotational value semantics, which we use to +avoid the bias in the reparameterisation gradient estimator. It is parameterised +by a family of smooth functions ση : R → [0, 1]. Intuitively, we replace the +Heaviside step-function arising in the interpretation of conditionals by smooth +approximations (cf. Fig. 2). In particular, conditionals if z < 0 then 0 else 1 are +interpreted as z �→ ση(z) rather than [z ≥ 0] (using Iverson brackets). +Our primary example is ση(x) := σ( x +η ), where σ is the (logistic) sigmoid +σ(x) := +1 +1+exp(−x), see Fig. 2. Whilst at this stage no further properties other +than smoothness are required, we will later need to restrict ση to have good +properties, in particular to convergence to the Heaviside step function. +As a categorical model we propose vector Frölicher spaces VectFr, which (to +our knowledge) is a new construction, affording a simple and direct interpretation +of the smoothed conditionals. + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +11 +3.1 +Frölicher Spaces +We recall the definition of Frölicher spaces, which generalise smooth spaces4: A +Frölicher space is a triple (X, CX, FX) where X is a set, CX ⊆ Set(R, X) is a +set of curves and FX ⊆ Set(X, R) is a set of functionals. satisfying +1. if c ∈ CX and f ∈ FX then f ◦ c ∈ C∞(R, R) +2. if c : R → X such that for all f ∈ FX, f ◦ c ∈ C∞(R, R) then c ∈ CX +3. if f : X → R such that for all c ∈ CX, f ◦ c ∈ C∞(R, R) then f ∈ FX. +A morphism between Frölicher spaces (X, CX, FX) and (Y, CY , FY ) is a map +φ : X → Y satisfying f ◦ φ ◦ c ∈ C∞(R, R) for all f ∈ FY and c ∈ CX. +Frölicher spaces and their morphisms constitute a category Fr, which is well- +known to be cartesian closed [13,35]. 
+3.2 +Vector Frölicher Spaces +To interpret our programming language smoothly we would like to interpret +conditionals as ση-weighted convex combinations of its branches: +�if L < 0 then M else N�η(γ, s1 ++ s2 ++ s3) := +ση(−�L�η(γ, s1)) · �M�η(γ, s2) + ση(�L�η(γ, s1)) · �N�η(γ, s3) +(4) +By what we have discussed so far, this only makes sense if the branches have +ground type because Frölicher spaces are not equipped with a vector space +structure but we take weighted combinations of morphisms. In particular if +φ1, φ2 : X → Y and α : X → R are morphisms then α φ1 + φ2 ought to be +a morphism too. Therefore, we enrich Frölicher spaces with an additional vector +space structure: +Definition 1. A R-vector Frölicher space is a Frölicher space (X, CX, FX) such +that X is an R-vector space and whenever c, c′ ∈ CX and α ∈ C∞(R, R) then +α c + c′ ∈ CX (defined pointwise). +A morphism between R-vector Frölicher spaces is a morphism between Frölicher +spaces, i.e. φ : (X, CX, FX) → (Y, CY , FY ) is a morphism if for all c ∈ CX and +f ∈ FY , f ◦ φ ◦ c ∈ C∞(R, R). +R-vector Frölicher space and their morphisms constitute a category VectFr. +There is an evident forgetful functor fully faithfully embedding VectFr in Fr. +Note that the above restriction is a bit stronger than requiring that CX is also a +vector space. (α is not necessarily a constant.) The main benefit is the following, +which is crucial for the interpretation of conditionals as in Eq. (4): +Lemma 2. If φ1, φ2 ∈ VectFr(X, Y ) and α ∈ VectFr(X, R) then α φ1 + φ2 ∈ +VectFr(X, Y ) (defined pointwisely). +Proof. Suppose c ∈ CX and f ∈ FY . Then (α1 φ1 + φ2) ◦ c = (α ◦ c) · (φ1 ◦ c) + +(φ2 ◦ c) ∈ CY (defined pointwisely) and the claim follows. +4 C∞(R, R) is the set of smooth functions R → R + +12 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +Similarly as for Frölicher spaces, if X is an R-vector space then any C ⊆ +Set(X, R) generates a R-vector Frölicher space (X, CX, FX), where +FX := {f : X → R | ∀c ∈ C. f ◦ c ∈ C∞(R, R)} +�CX := {c : R → X | ∀f ∈ FX. f ◦ c ∈ C∞(R, R)} +CX := +� n +� +i=1 +αi ci | n ∈ N, ∀i ≤ n. αi ∈ C∞(R, R), ci ∈ �CX +� +Having modified the notion of Frölicher spaces generated by a set of curves, the +proof for cartesian closure carries over (more details are provided in Appendix B) +and we conclude: +Proposition 2. VectFr is cartesian closed. +3.3 +Smoothed Interpretation +We have now discussed all ingredients to interpret our language (smoothly) in +the cartesian closed category VectFr. We call �M�η the η-smoothing of �M� (or +of M, by abuse of language). The interpretation is mostly standard and follows +Section 2.3, except for the case for conditionals. The latter is given by Eq. (4), +for which the additional vector space structure is required. +Finally, we can phrase a smoothed version of our Optimisation Problem 1: +Problem 2. +η-Smoothed Optimisation +Given: term-in-context, θ1 : ι1, · · · , θm : ιm | [D1, . . . , Dn] ⊢ M : R, and +accuracy coefficient η > 0 +Find: +argminθ Es1∼D1,...,sn∼Dn [�M�η(θ, s)] +4 +Correctness of SGD for Smoothed Problem and +Unbiasedness of the Reparameterisation Gradient +Next, we apply stochastic gradient descent (SGD) with the reparameterisation +gradient estimator to the smoothed problem (for the batch size N = 1): +θk+1 := θk − γk · ∇θ�M�η (θk, sk) +sk ∼ D +(5) +where θ | [s ∼ D] ⊢ M : R (slightly abusing notation in the trace type). 
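A minimal sketch of ours of the SGD iteration (5) applied to the η-smoothing of the Example 2 objective, reusing log_normal_pdf and sigma_eta from the earlier sketches. We descend the negated integrand (i.e. maximise the ELBO), and we differentiate with a crude central finite difference purely for illustration, where a real implementation would use automatic differentiation; the step sizes and constants are arbitrary choices of ours.

def den_M_smooth(theta, s, eta):
    # eta-smoothed denotation of M from Example 2: the branch on z < 0 is replaced
    # by a sigma_eta-weighted combination of the two branches (cf. Eq. (4))
    z = s + theta
    log_p = (log_normal_pdf(z, 0.0, 1.0)
             + sigma_eta(-z, eta) * log_normal_pdf(0.0, -2.0, 1.0)
             + sigma_eta(z, eta) * log_normal_pdf(0.0, 5.0, 1.0))
    return log_p - log_normal_pdf(z, theta, 1.0)

def grad_theta(f, theta, s, h=1e-5):
    # reparameterisation gradient of f at (theta, s); finite difference as a stand-in for autodiff
    return (f(theta + h, s) - f(theta - h, s)) / (2.0 * h)

eta, theta = 0.15, 0.0
rng = np.random.default_rng(1)
for k in range(1, 10_001):
    s_k = rng.standard_normal()                        # s_k ~ D
    gamma_k = 0.1 / k                                  # a Theta(1/k) step-size schedule (see below)
    g_k = grad_theta(lambda t, s: -den_M_smooth(t, s, eta), theta, s_k)
    theta = theta - gamma_k * g_k                      # Eq. (5) on the negated smoothed objective
print(theta)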
+A classical choice for the step-size sequence is γk ∈ Θ(1/k), which satisfies +the so-called Robbins-Monro criterion: +� +k∈N +γk = ∞ +� +k∈N +γ2 +k < ∞ +(6) +In this section we wish to establish the correctness of the SGD procedure +applied to the smoothing Eq. (5). + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +13 +4.1 +Desiderata +First, we ought to take a step back and observe that the optimisation problems +we are trying to solve can be ill-defined due to a failure of integrability: take +M ≡ (λx. exp (x · x)) sample N : we have Ez∼N [�M�(z)] = ∞, independently of +parameters. Therefore, we aim to guarantee: +(SGD0) The optimisation problems (both smoothed and unsmoothed) are +well-defined. +Since E[�M�η(θ, s)] (and E[�M�(θ, s)]) may not be a convex function in the +parameters θ, we cannot hope to always find global optima. We seek instead +stationary points, where the gradient w.r.t. the parameters θ vanishes. The fol- +lowing results (whose proof is standard) provide sufficient conditions for the +convergence of SGD to stationary points (see e.g. [3] or [2, Chapter 2]): +Proposition 3 (Convergence). Suppose (γk)k∈N satisfies the Robbins-Monro +criterion Eq. (6) and g(θ) := Es[f(θ, s)] is well-defined. If Θ ⊆ Rm satisfies +(SGD1) Unbiasedness: ∇θg(θ) = Es[∇θf(θ, s)] for all θ ∈ Θ +(SGD2) g is L-Lipschitz smooth on Θ for some L > 0: +∥∇θg(θ) − ∇θg(θ′)∥ ≤ L · ∥θ − θ′∥ +for all θ, θ′ ∈ Θ +(SGD3) Bounded Variance: supθ∈Θ Es[∥∇θfk(θ, s)∥2] < ∞ +then infi∈N E[∥∇g(θi)∥2] = 0 or θi ̸∈ Θ for some i ∈ N. +Unbiasedness (SGD1) requires commuting differentiation and integration. +The validity of this operation can be established by the dominated convergence +theorem [21, Theorem 6.28], see Appendix C.1. To be applicable the partial +derivatives of f w.r.t. the parameters need to be dominated uniformly by an +integrable function. Formally: +Definition 2. Let f : Θ × Rn → R and g : Rn → R. We say that g uniformly +dominates f if for all (θ, s) ∈ Θ × Rn, |f(θ, s)| ≤ g(s). +Also note that for Lipschitz smoothness (SGD2) it suffices to uniformly bound +the second-order partial derivatives. +In the remainder of this section we present two type systems which restrict +the language to guarantee properties (SGD0) to (SGD3). +4.2 +Piecewise Polynomials and Distributions with Finite Moments +As a first illustrative step we consider a type system ⊢poly, which restricts terms +to (piecewise) polynomials, and distributions with finite moments. Recall that a +distribution D has (all) finite moments if for all p ∈ N, Es∼D[|s|p] < ∞. Distri- +butions with finite moments include the following commonly used distributions: +normal, exponential, logistic and gamma distributions. A non-example is the +Cauchy distribution, which famously does not even have an expectation. + +14 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +Definition 3. For a distribution D with finite moments, f : Rn → R has (all) +finite moments if for all p ∈ N, Es∼D[|f(s)|p] < ∞. +Functions with finite moments have good closure properties: +Lemma 3. If f, g : Rn → R have (all) finite moments so do −f, f + g, f · g. +In particular, if a distribution has finite moments then polynomials do, too. +Consequently, intuitively, it is sufficient to simply (the details are explicitly +spelled out in Appendix C.2): +1. require that the distributions D in the sample rule have finite moments: +| [D] ⊢poly sample D : R D has finite moments +2. remove the rules for −1, exp and log from the type system ⊢poly. 
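A quick numerical illustration of ours of the role of finite moments: the running mean of |s| for standard normal samples settles near E|s| = sqrt(2/π) ≈ 0.798, whereas for Cauchy samples, which lack even a first moment, it keeps drifting.

import numpy as np

rng = np.random.default_rng(0)
for name, draw in (("normal", rng.standard_normal), ("cauchy", rng.standard_cauchy)):
    xs = np.abs(draw(1_000_000))
    running_mean = np.cumsum(xs) / np.arange(1, xs.size + 1)
    print(name, running_mean[[999, 99_999, 999_999]])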
+Type Soundness I: Well-Definedness. Henceforth, we fix parameters θ1 : +ι1, . . . , θm : ιm. Intuitively, it is pretty obvious that �M� is a piecewise polynomial +whenever θ | Σ ⊢poly M : ι. Nonetheless, we prove the property formally to +illustrate our proof technique, a variant of logical relations, employed throughout +the rest of the paper. +We define a slightly stronger logical predicate P(n) +τ +on Θ × Rn → �τ�, which +allows us to obtain a uniform upper bound: +1. f ∈ P(n) +ι +if f is uniformly dominated by a function with finite moments +2. f ∈ P(n) +τ1•Σ3→τ2 if for all n2 ∈ N and g ∈ P(n+n2) +τ1 +, f ⊙ g ∈ P(n+n2+|Σ3|) +τ2 +where for f : Θ × Rn1 → �τ1 • Σ3 → τ2� and g : Θ × Rn1+n2 → �τ1� we define +f ⊙ g : Θ × Rn1+n2+|Σ3| → τ2 +(θ, s1 ++ s2 ++ s3) �→ f(θ, s1)(g(θ, s1 ++ s2), s3) +Intuitively, g may depend on the samples in s2 (in addition to s1) and the function +application may consume further samples s3 (as determined by the trace type +Σ3). By induction on safe types we prove the following result, which is important +for conditionals: +Lemma 4. If f ∈ P(n) +ι +and g, h ∈ P(n) +σ +then [f(−) < 0]·g+[f(−) ≥ 0]·h ∈ P(n) +σ . +Proof. For base types it follows from Lemma 3. Hence, suppose σ has the form +σ1•[] → σ2. Let n2 ∈ N and x ∈ Pn+n2 +σ1 +. By definition, (g⊙x), (h⊙x) ∈ P(n+n2) +σ2 +. +Let �f be the extension (ignoring the additional samples) of f to Θ×Rn+n2 → R. +It is easy to see that also �f ∈ P(n+n2) +ι +By the inductive hypothesis, +[ �f(−) < 0] · (g ⊙ x) + [ �f(−) ≥ 0] · (h ⊙ x) ∈ P(n+n2) +σ2 +Finally, by definition, +([f(−) < 0] · g + [f(−) ≥ 0] · h) ⊙ x = [ �f(−) < 0] · (g ⊙ x) + [ �f(−) ≥ 0] · (h ⊙ x) + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +15 +Assumption 1 We assume that Θ ⊆ �ι1� × · · · × �ιm� is compact. +Lemma 5 (Fundamental). +If θ, x1 : τ1, . . . , xℓ : τℓ | Σ ⊢poly M : τ, n ∈ N, +ξ1 ∈ P(n) +τ1 , . . . , ξℓ ∈ P(n) +τℓ +then �M� ∗ ⟨ξ1, . . . , ξℓ⟩ ∈ P(n+|Σ|) +τ +, where +�M� ∗ ⟨ξ1, . . . , ξℓ⟩ : Θ × Rn+|Σ| → �τ� +(θ, s ++ s′) �→ �M�((θ, ξ1(θ, s), . . . , ξℓ(θ, s)), s′) +It is worth noting that, in contrast to more standard fundamental lemmas, here +we need to capture the dependency of the free variables on some number n of +further samples. E.g. in the context of (λx. x) sample N the subterm x depends +on a sample although this is not apparent if we consider x in isolation. +Lemma 5 is proven by structural induction (cf. Appendix C.2 for details). The +most interesting cases include: parameters, primitive operations and condition- +als. In the case for parameters we exploit the compactness of Θ (Assumption 1). +For primitive operations we note that as a consequence of Lemma 3 each P(n) +ι +is closed under negation5, addition and multiplication. Finally, for conditionals +we exploit Lemma 3. +Type Soundness II: Correctness of SGD. Next, we address the integrability +for the smoothed problem as well as (SGD1) to (SGD3). We establish that not +only �M�η but also its partial derivatives up to order 2 are uniformly dominated +by functions with finite moments. For this to possibly hold we require: +Assumption 2 For every η > 0, +sup +x∈R +|ση(x)| < ∞ +sup +x∈R +|σ′ +η(x)| < ∞ +sup +x∈R +|σ′′ +η(x)| < ∞ +Note that, for example, the logistic sigmoid satisfies Assumption 2. +We can then prove a fundamental lemma similar to Lemma 5, mutatis mu- +tandis, using a logical predicate in VectFr. We stipulate f ∈ Q(n) +ι +if its partial +derivatives up to order 2 are uniformly dominated by a function with finite mo- +ments. 
In addition to Lemma 3 we exploit standard rules for differentiation (such +as the sum, product and chain rule) as well as Assumption 2. We conclude: +Proposition 4. If θ | Σ ⊢poly M : R then the partial derivatives up to order 2 +of �M�η are uniformly dominated by a function with all finite moments. +Consequently, the Smoothed Optimisation Problem 2 is not only well-defined +but, by the dominated convergence theorem [21, Theorem 6.28], the reparame- +terisation gradient estimator is unbiased. Furthermore, (SGD1) to (SGD3) are +satisfied and SGD is correct. +5 for ι = R + +16 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +Discussion. The type system ⊢poly is simple yet guarantees correctness of SGD. +However, it is somewhat restrictive; in particular, it does not allow the expression +of many ELBOs arising in variational inference directly as they often have the +form of logarithms of exponential terms (cf. Example 2). +4.3 +A Generic Type System with Annotations +Next, we present a generic type system with annotations. In Section 4.4 we give +an instantiation to make ⊢poly more permissible and in Section 5 we turn towards +a different property: the uniform convergence of the smoothings. +Typing judgements have the form Γ | Σ ⊢? M : τ, where “?” indicates +the property we aim to establish, and we annotate base types. Thus, types are +generated from +trace types +Σ ::= [s1 ∼ D1, . . . , sn ∼ Dn] +base types +ι ::= R | R>0 +safe types +σ ::= ιβ | σ • [] → σ +types +τ ::= ια | τ • Σ → τ +Annotations are drawn from a set and may possibly restricted for safe types. +Secondly, the trace types are now annotated with variables, typically Σ = [s1 ∼ +D1, . . . , sn ∼ Dn] where the variables sj are pairwise distinct. +For the subtyping relation we can constrain the annotations at the base type +level (see Fig. 8a); the extension to higher types is accomplished as before. +The typing rules have the same form but they are extended with the annota- +tions on base types and side conditions possibly constraining them. For example, +the rules for addition, exponentiation and sampling are modified as follows: +| [] ⊢? + : ια1 → ια2 → ια (cond. Add) +| [] ⊢? exp : Rα → Rα′ +>0 +(cond. Exp) +| [sj ∼ D] ⊢? sample D : Rα (cond. Sample) +The rules for subtyping, variables, abstractions and applications do not need to +be changed at all but they use annotated types instead of the types of Section 2.2. +Γ | Σ ⊢? M : τ +Γ ′ | Σ ⊢? M : τ ′ Γ ⊑? Γ ′, τ ⊑? τ ′ +x : τ | [] ⊢? x : τ +Γ, y : τ1 | Σ ⊢? M : τ2 +Γ | [] ⊢? λy. M : τ1 • Σ → τ2 +Γ | Σ2 ⊢? M : τ1 • Σ3 → τ2 +Γ | Σ1 ⊢? N : τ1 +Γ | Σ1 ++ Σ2 ++ Σ3 ⊢? M N : τ2 +The full type system is presented in Appendix C.3. +⊢poly can be considered a special case of ⊢? whereby we use the singleton ∗ +as annotations, a contradictory side condition (such as false) for the undesired +primitives −1, exp and log, and use the side condition “D has finite moments” +for sample as above. + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +17 +Table 1: Overview of type systems in this paper. +property +Section +judgement annotation +totality +Section 2.2 +⊢ +– +correctness SGD +Section 4.2 +⊢poly +none/∗ +Section 4.4 +⊢SGD +0/1 +uniform convergence Section 5.1 +⊢unif +(f, ∆)/(t, ∆) +Table 1 provides an overview of the type systems of this paper and their +purpose. ⊢? and its instantiations refine the basic type system of Section 2.2 in +the sense that if a term-in-context is provable in the annotated type system, +then its erasure (i.e. 
erasure of the annotations of base types and distributions) +is provable in the basic type system. This is straightforward to check. +4.4 +A More Permissible Type System +In this section we discuss another instantiation, ⊢SGD, of the generic type system +system to guarantee (SGD0) to (SGD3), which is more permissible than ⊢poly. +In particular, we would like to support Example 2, which uses logarithms and +densities involving exponentials. Intuitively, we need to ensure that subterms +involving exp are “neutralised” by a corresponding log. To achieve this we an- +notate base types with 0 or 1, ordered discretely. 0 is the only annotation for +safe base types and can be thought of as “integrable”; 1 denotes “needs to be +passed through log”. More precisely, we constrain the typing rules such that if +θ | Σ ⊢SGD M : ι(e) then6 loge ◦�M� and the partial derivatives of loge ◦�M�η +up to order 2 are uniformly dominated by a function with finite moments. +We subtype base types as follows: ι(e1) +1 +⊑SGD ι(e2) +2 +if ι1 ⊑ ι2 (as defined in +Fig. 3a) and e1 = e2, or ι1 = R>0 = ι2 and e1 ≤ e2. The second disjunct may +come as a surprise but we ensure that terms of type R(0) +>0 cannot depend on +samples at all. +In Fig. 4 we list the most important rules; we relegate the full type sys- +tem to Appendix C.4. exp and log increase and decrease the annotation respec- +tively. The rules for the primitive operations and conditionals are motivated by +the closure properties of Lemma 3 and the elementary fact that log ◦(f · g) = +(log ◦f) + (log ◦g) and log ◦(f −1) = − log ◦f for f, g : Θ × Rn → R. +Example 4. θ : R(0) +>0 | [N, N] ⊢SGD log (θ−1 · exp (sample N )) + sample N : R(0) +Note that the branches of conditionals need to have safe type, which rules out +branches with type R(1). This is because logarithms do not behave nicely when +composed with addition as used in the smoothed interpretation of conditionals. +6 using the convention log0 is the identity + +18 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +| [] ⊢SGD exp : R(0) → R(1) +>0 +| [] ⊢SGD log : R(e) +>0 → R(0) +| [] ⊢SGD + : ι(0) → ι(0) → ι(0) +| [] ⊢SGD · : ι(e) → ι(e) → ι(e) +| [] ⊢SGD − : R(0) → R(0) +| [] ⊢SGD +−1 : R(e) +>0 → R(e) +>0 +Γ | Σ ⊢SGD L : ι(0) +Γ | Σ′ ⊢SGD M : σ +Γ | Σ′′ ⊢SGD N : σ +Γ | Σ ++ Σ′ ++ Σ′′ ⊢SGD if L < 0 then M else N : σ +| [sj ∼ D] ⊢SGD sample D : R(0) D has finite moments +Fig. 4: Excerpt of the typing rules (cf. Appendix C.4) for the correctness of SGD. +Besides, observe that in the rules for logarithm and inverses e = 0 is allowed, +which may come as a surprise7. This is e.g. necessary for the typability of the +variational inference Example 2: +Example 5 (Typing for Variational Inference). It holds | [] ⊢ N : R(0) → R(0) → +R(0) +>0 → R(1) +>0 and θ : R(0) | [s1 ∼ N] ⊢ M : R(0). +Type Soundness. To formally establish type soundness, we can use a logical +predicate, which is very similar to the one in Section 4.2 (N.B. the additional +Item 2): in particular f ∈ Q(n) +ι(e) if +1. partial derivatives of loge ◦f up to order 2 are uniformly dominated by a +function with finite moments +2. if ι(e) is R(0) +>0 then f is dominated by a positive constant function +Using this and a similar logical predicate for �(−)� we can show: +Proposition 5. If θ1 : ι(0), . . . , θm : ι(0) +m | Σ ⊢SGD M : ι(0) then +1. all distributions in Σ have finite moments +2. �M� and for each η > 0 the partial derivatives up to order 2 of �M�η are +uniformly dominated by a function with finite moments. 
+Consequently, again the Smoothed Optimisation Problem 2 is not only well- +defined but by the dominated convergence theorem, the reparameterisation gra- +dient estimator is unbiased. Furthermore, (SGD1) to (SGD3) are satisfied and +SGD is correct. +5 +Uniform Convergence +In the preceding section we have shown that SGD with the reparameterisation +gradient can be employed to correctly (in the sense of Proposition 3) solve the +7 Recall that terms of type R(0) +>0 cannot depend on samples. + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +19 +Smoothed Optimisation Problem 2 for any fixed accuracy coefficient. However, +a priori, it is not clear how a solution of the Smoothed Problem 2 can help to +solve the original Problem 1. +The following illustrates the potential for significant discrepancies: +Example 6. Consider M ≡ if 0 < 0 then θ ·θ +1 else (θ −1)·(θ −1). Notice that +the global minimum and the only stationary point of �M�η is at θ = 1 +2 regardless +of η > 0, where �M�η( 1 +2) = 3 +4. On the other hand �M�( 1 +2) = 1 +4 and the global +minimum of �M� is at θ = 1. +In this section we investigate under which conditions the smoothed objective +function converges to the original objective function uniformly in θ ∈ Θ: +(Unif) Es∼D [�M�η(θ, s)] +unif. +−−−→ Es∼D [�M�(θ, s)] as η ↘ 0 for θ ∈ Θ +We design a type system guaranteeing this. +The practical significance of uniform convergence is that before running SGD, +for every error tolerance ϵ > 0 we can find an accuracy coefficient η > 0 such +that the difference between the smoothed and original objective function does +not exceed ϵ, in particular for θ∗ delivered by the SGD run for the η-smoothed +problem. +Discussion of Restrictions. To rule out the pathology of Example 6 we require +that guards are non-0 almost everywhere. +Furthermore, as a consequence of the uniform limit theorem [29], (Unif) +can only possibly hold if the expectation Es∼D [�M�(θ, s)] is continuous (as +a function of the parameters θ). For a straightforward counterexample take +M ≡ if θ < 0 then 0 else 1, we have Es[�M�(θ)] = [θ ≥ 0] which is discontin- +uous, let alone differentiable, at θ = 0. Our approach is to require that guards +do not depend directly on parameters but they may do so, indirectly, via a dif- +feomorphic8 reparameterisation transform; see Example 8. We call such guards +safe. +In summary, our aim, intuitively, is to ensure that guards are the composition +of a diffeomorphic transformation of the random samples (potentially depending +on parameters) and a function which does not vanish almost everywhere. +5.1 +Type System for Guard Safety +In order to enforce this requirement and to make the transformation more ex- +plicit, we introduce syntactic sugar, transform sample D by T, for applications +of the form T sample D. +Example 7. As expressed in Eq. (2), we can obtain samples from N(µ, σ2) via +transform sample N by (λs. s · σ + µ), which is syntactic sugar for the term +(λs. s · σ + µ) sample N . +8 Example 12 in Appendix D illustrates why it is not sufficient to restrict the repa- +rameterisation transform to bijections (rather, we require it to be a diffeomorphism). + +20 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +We propose another instance of the generic type system of Section 4.3, ⊢unif, +where we annotate base types by α = (g, ∆), where g ∈ {f, t} denotes whether +we seek to establish guard safety and ∆ is a finite set of sj capturing possible +dependencies on samples. 
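Referring back to Example 7 (and anticipating Example 8 below), here is a small sketch of ours of the location-scale reparameterisation transform: the sample, and hence any guard built from it, depends on the parameters μ and σ only through the diffeomorphism s ↦ s·σ + μ applied to standard normal noise.

import numpy as np

rng = np.random.default_rng(0)

def transform_sample_normal(mu, sigma, rng):
    # transform sample N by (lambda s. s * sigma + mu):
    # a draw from N(mu, sigma^2) obtained from standard normal noise
    s = rng.standard_normal()
    return s * sigma + mu

mu, sigma = -0.5, 2.0             # sigma > 0, as required for the transform to be diffeomorphic
z = transform_sample_normal(mu, sigma, rng)
branch = 0.0 if z < 0.0 else 1.0  # a conditional whose guard is safe in the above sense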
We subtype base types as follows: ι(g1,∆1) +1 +⊑unif ι(g2,∆2) +2 +if ι1 ⊑ ι2 (as defined in Fig. 3a), ∆1 ⊆ ∆2 and g1 ⪯ g2, where t ⪯ f. This is +motivated by the intuition that we can always drop9 guard safety and add more +dependencies. +The rule for conditionals ensures that only safe guards are used. The unary +operations preserve variable dependencies and guard safety. Parameters and con- +stants are not guard safe and depend on no samples (see Appendix D for the +full type system): +Γ | Σ ⊢unif L : ι(t,∆) +Γ | Σ′ ⊢unif M : σ +Γ | Σ′′ ⊢unif N : σ +Γ | Σ ++ Σ′ ++ Σ′′ ⊢unif if L < 0 then M else N : σ +| [] ⊢unif − : R(g,∆) → R(g,∆) +θi : ι(f,∅) | [] ⊢unif θi : ι(f,∅) +| [] ⊢unif r : ι(f,∅) r ∈ �ι� +θ | [] ⊢unif T : Rα → Rα +θ | [sj ∼ D] ⊢unif transform sample D by T : R(t,{sj}) T diffeomorphic +A term θ | [] ⊢unif T : Rα → Rα is diffeomorphic if �T�(θ, []) = �T�η(θ, []) : +R → R is a diffeomorphism for each θ ∈ Θ, i.e. differentiable and bijective with +differentiable inverse. +First, we can express affine transformations, in particular, the location-scale +transformations as in Example 7: +Example 8 (Location-Scale Transformation). The term-in-context +σ : R(f,∅) +>0 , µ : R(f,∅) | [] ⊢ λs. σ · s + µ : R(f,{s1}) → R(f,{s1}) +is diffeomorphic. (However for σ : R(f,∅) it is not because it admits σ = 0.) +Hence, the reparameterisation transform +G ≡ σ : R(f,∅) +>0 , µ : R(f,∅) | [s1 : D] ⊢ transform sample D by (λs.s·σ+µ) : R(t,{s1}) +which has g-flag t, is admissible as a guard term. Notice that G depends on the +parameters, σ and µ, indirectly through a diffeomorphism, which is permitted +by the type system. +If guard safety is sought to be established for the binary operations, we +require that operands do not share dependencies on samples: +| [] ⊢unif ◦ : ι(f,∆) → ι(f,∆) → ι(f,∆) ◦ ∈ {+, ·} +| [] ⊢unif ◦ : ι(t,∆1) → ι(t,∆2) → ι(t,∆1∪∆2) ◦ ∈ {+, ·}, ∆1 ∩ ∆2 = ∅ +This is designed to address: +9 as long as it is not used in guards + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +21 +Example 9 (Non-Constant Guards). We have | [] ⊢ (λx.x + (−x)) : R(f,{s1}) → +R(f,{s1}), noting that we must use g = f for the + rule; and because R(t,{sj}) ⊑unif +R(f,{sj}), we have +| [] ⊢ (λx.x + (−x)) : R(t,{s1}) → R(f,{s1}). +Now transform sample D by (λy.y) has type R(t,{s1}) with the g-flag necessar- +ily set to t; and so the term +M ≡ +� +λx.x + (−x) +� +transform sample D by (λy.y) +which denotes 0, has type R(f,{s1}), but not R(t,{s1}). It follows that M cannot +be used in guards (notice the side condition of the rule for conditional), which +is as desired: recall Example 6. Similarly consider the term +N ≡ +� +λx.(λy z.if y + (−z) < 0 then M1 else M2) x x +� +(transform sample D by (λy.y)) +(7) +When evaluated, the term y + (−z) in the guard has denotation 0. For the same +reason as above, the term N is not refinement typable. +The type system is however incomplete, in the sense that there are terms-in- +context that satisfy the property (Unif) but which are not typable. +Example 10 (Incompleteness). The following term-in-context denotes the “iden- +tity”: +| [] ⊢ (λx.(2 · x) + (−x)) : R(t,{s1}) → R(f,{s1}) +but it does not have type R(t,{s1}) → R(t,{s1}). Then, using the same reasoning +as Example 9, the term +G ≡ (λx.(2 · x) + (−x)) (transform sample D by (λy.y)) +has type R(f,{s1}), but not R(t,{s1}), and so if G < 0 then 0 else 1 is not typable, +even though G can safely be used in guards. +5.2 +Type Soundness +Henceforth, we fix parameters θ1 : ι(f,∅) +1 +, . . . 
, θm : ι(f,∅) +m +. +Now, we address how to show property (Unif), i.e. that for θ | Σ ⊢unif M : +ι(g,∆), the η-smoothed E[�M�η(θ, s)] converges uniformly for θ ∈ Θ as η ↘ 0. For +this to hold we clearly need to require that ση has good (uniform) convergence +properties (as far as the unavoidable discontinuity at 0 allows for): +Assumption 3 For every δ > 0, ση +unif. +−−−→ [(−) > 0] on (−∞, −δ) ∪ (δ, ∞). +Observe that in general even if M is typable �M�η does not converge uniformly +in both θ and s because �M� may still be discontinuous in s: + +22 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +Example 11. For M ≡ if (transform sample N by (λs. s+θ)) < 0 then 0 else 1, +�M�(θ, s) = [s + θ ≥ 0], which is discontinuous, and �M�η(θ, s) = ση(s + θ). +However, if θ | Σ ⊢ M : ι(g,∆) then �M�η does converge to �M� uniformly +almost uniformly, i.e., uniformly in θ ∈ Θ and almost uniformly in s ∈ Rn. +Formally, we define: +Definition 4. Let f, fη : Θ × Rn → R, µ be a measure on Rn. We say that fη +converges uniformly almost uniformly to f (notation: fη +u.a.u. +−−−−→ f) if there exist +sequences (δk)k∈N, (ϵk)k∈N and (ηk)k∈N such that limk→∞ δk = 0 = limk→∞ ϵk; +and for every k ∈ N and θ ∈ Θ there exists U ⊆ Rn such that +1. µ(U) < δk and +2. for every 0 < η < ηk and s ∈ Rn \ U, |fη(θ, s) − f(θ, s)| < ϵk. +If f, fη are independent of θ this notion coincides with standard almost uniform +convergence. For M from Example 11 �M�η +u.a.u. +−−−→ �M� holds although uniform +convergence fails. +However, uniform almost uniform convergence entails uniform convergence +of expectations: +Lemma 6. Let f, fη : Θ × Rn → R have finite moments. +If fη +u.a.u. +−−−−→ f then Es∼D[fη(θ, s)] +unif. +−−−→ Es∼D[f(θ, s)]. +As a consequence, it suffices to establish �M�η +u.a.u. +−−−→ �M�. We achieve this by +positing an infinitary logical relation between sequences of morphisms in VectFr +(corresponding to the smoothings) and morphisms in QBS (corresponding to +the measurable standard semantics). We then prove a Fundamental Lemma 17 +(details are in Appendix D). Not surprisingly the case for conditionals is most +interesting. This makes use of Assumption 3 and exploits that guards, for which +the typing rules assert the guard safety flag to be t, can only be 0 at sets of +measure 0. We conclude: +Theorem 1. If θ1 : ι(f,∅) +1 +, . . . , θm : ι(f,∅) +m +| Σ ⊢unif M : R(g,∆) then �M�η +u.a.u. +−−−−→ +�M�. In particular, if �M�η and �M� also have finite moments then +Es∼D[�M�η(θ, s)] +unif. +−−−→ Es∼D[�M�(θ, s)] +as η ↘ 0 for θ ∈ Θ +We finally note that ⊢unif can be made more permissible by adding syntactic +sugar for a-fold (for a ∈ N>0) addition a · M ≡ M + · · · + M and multiplication +M a ≡ M · · · · · M. This admits more terms as guards, but safely (see Fig. 10). +6 +Related Work +[23] is both the starting point for our work and the most natural source for +comparison. They correct the (biased) reparameterisation gradient estimator for +non-differentiable models by additional non-trivial boundary terms. They present + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +23 +an efficient method for affine guards only. Besides, they are not concerned with +the convergence of gradient-based optimisation procedures; nor do they discuss +how assumptions they make may be manifested in a programming language. +In the context of the reparameterisation gradient, [25] and [18] relax discrete +random variables in a continuous way, effectively dealing with a specific class of +discontinuous models. 
[41] use a similar smoothing for discontinuous optimisation but they do not consider a full programming language.
Motivated by guaranteeing absolute continuity (which is a necessary but not sufficient criterion for the correctness of e.g. variational inference), [24] use an approach similar to our trace types to track the samples which are drawn. They do not support standard conditionals but their “work-around” is also eager in the sense of combining the traces of both branches. Besides, they do not support a full higher-order language, in which higher-order terms can draw samples. Thus, they do not need to consider function types tracking the samples drawn during evaluation.

7 Empirical Evaluation

We evaluate our smoothed gradient estimator (Smooth) against the biased reparameterisation estimator (Reparam), the unbiased correction of it (Lyy18) due to [23], and the unbiased (Score) estimator [31,40,27]. The experimental setup is based on that of [23]. The implementation is written in Python, using automatic differentiation (provided by the jax library) to implement each of the above estimators for an arbitrary probabilistic program. For each estimator and model, we used the Adam [19] optimiser for 10,000 iterations using a learning rate of 0.001, with the exception of xornet for which we used 0.01. The initial model parameters θ0 were fixed for each model across all runs. In each iteration, we used N = 16 Monte Carlo samples from the gradient estimator. For the Lyy18 estimator, a single subsample for the boundary term was used in each estimate. For our smoothed estimator we use accuracy coefficients η ∈ {0.1, 0.15, 0.2}. Further details are discussed in Appendix E.1.

Compilation for First-Order Programs. All our benchmarks are first-order. We compile a potentially discontinuous program to a smooth program (parameterised by ση) using the compatible closure of

if L < 0 then M else N ⇝ (λw. ση(−w) · M + ση(w) · N) L

Note that the size only increases linearly and that we avoid an exponential blow-up by using abstractions rather than duplicating the guard L. (A short code sketch of this translation is given below, after the model descriptions.)

Models. We include the models from [23], an example from differential privacy [11] and a neural network for which our main competitor, the estimator of [23], is not applicable (see Appendix E.2 for more details).

(a) temperature (b) textmsg (c) influenza (d) cheating (e) xornet
Fig. 5: ELBO trajectories for each model. A single colour is used for each estimator and the accuracy coefficient η = 0.1, 0.15, 0.2 for Smooth is represented by dashed, solid and dotted lines respectively.
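To make the branch-smoothing translation above concrete, the following minimal sketch (our own illustrative code with our own names, not the actual implementation) shows a smoothed conditional and the resulting Monte Carlo reparameterisation gradient estimate in jax, for the single-conditional program if (s + θ) < 0 then 0 else 1 with s ∼ N(0, 1) (cf. Example 11), taking ση(x) = σ(x/η) with σ the logistic sigmoid as a representative choice:

    import jax
    import jax.numpy as jnp

    def sigma_eta(w, eta):
        # smoothed Heaviside: sigma_eta(w) tends to [w > 0] as eta -> 0
        return jax.nn.sigmoid(w / eta)

    def smoothed_program(theta, s, eta):
        # smoothing of:  if (s + theta) < 0 then 0.0 else 1.0
        guard = s + theta
        return sigma_eta(-guard, eta) * 0.0 + sigma_eta(guard, eta) * 1.0

    def reparam_grad_estimate(theta, key, eta=0.15, n=16):
        # n Monte Carlo samples of the gradient of the smoothed program w.r.t. theta
        s = jax.random.normal(key, (n,))
        grads = jax.vmap(jax.grad(smoothed_program), in_axes=(None, 0, None))(theta, s, eta)
        return jnp.mean(grads)

    print(reparam_grad_estimate(0.5, jax.random.PRNGKey(0)))

Because the smoothed program is differentiable in θ, automatic differentiation yields an unbiased estimate of the gradient of the smoothed objective; the guard is evaluated once and reused, mirroring the abstraction in the translation above.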
Analysis of Results

We plot the ELBO trajectories in Fig. 5 and include data on the computational cost and variance in Table 2 in Appendix E.3.

The ELBO graphs for the temperature model in Fig. 5a and the cheating model in Fig. 5d show that the Reparam estimator is biased, converging to suboptimal values when compared to the Smooth and Lyy18 estimators. For the temperature model we can also see from the graph and the data in Table 2a that the Score estimator exhibits extremely high variance, and does not converge.

Finally, the xornet model shows the difficulty of training step-function based neural nets. The Lyy18 estimator is not applicable here since there are non-affine conditionals. In Fig. 5e, the Reparam estimator makes no progress while other estimators manage to converge to close to 0 ELBO, showing that they learn a network that correctly classifies all points. In particular, the Smooth estimator converges the quickest.

In summary, the results reveal where the Reparam estimator is biased and that the Smooth estimator does not have the same limitation. Where the Lyy18 estimator is defined, they converge to roughly the same objective value; and the smoothing approach is generalisable to more complex models such as neural networks with non-linear boundaries. Our proposed Smooth estimator has consistently significantly lower work-normalised variance, up to 3 orders of magnitude.

8 Conclusion and Future Directions

We have discussed a simple probabilistic programming language to formalise an optimisation problem arising e.g. in variational inference for probabilistic programming. We have endowed our language with a denotational (measurable) value semantics and a smoothed approximation of potentially discontinuous programs, which is parameterised by an accuracy coefficient. We have proposed type systems to guarantee pleasing properties in the context of the optimisation problem: for a fixed accuracy coefficient, stochastic gradient descent converges to stationary points even with the reparameterisation gradient (which is unbiased for the smoothed problem). Besides, the smoothed objective function converges uniformly to the true objective as the accuracy is improved.

Our type systems can be used to independently check these two properties to obtain partial theoretical guarantees even if one of the systems suffers from incompleteness. We also stress that SGD and the smoothed unbiased gradient estimator can even be applied to programs which are not typable.

Experiments with our prototype implementation confirm the benefits of reduced variance and unbiasedness. Compared to the unbiased correction of the reparameterised gradient estimator due to [23], our estimator has a similar convergence, but is simpler, faster, and attains orders of magnitude (2 to 3,000 x) reduction in work-normalised variance.

Future Directions. A natural avenue for future research is to make the language and type systems more complete, i.e. to support more well-behaved programs, in particular programs involving recursion.

Furthermore, the choice of accuracy coefficients leaves room for further investigations.
We anticipate it could be fruitful not to fix an accuracy coefficient +upfront but to gradually enhance it during the optimisation either via a pre- +determined schedule (dependent on structural properties of the program), or +adaptively. + +26 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +References +1. Aumann, R.J.: Borel structures for function spaces. Illinois Journal of Mathematics +5 (1961) +2. Bertsekas, D.: Convex optimization algorithms. Athena Scientific (2015) +3. Bertsekas, D.P., Tsitsiklis, J.N.: Gradient convergence in gradient methods with +errors. SIAM J. Optim. 10(3), 627–642 (2000) +4. Bingham, E., Chen, J.P., Jankowiak, M., Obermeyer, F., Pradhan, N., Karaletsos, +T., Singh, R., Szerlip, P.A., Horsfall, P., Goodman, N.D.: Pyro: Deep universal +probabilistic programming. J. Mach. Learn. Res. 20, 28:1–28:6 (2019) +5. Bishop, C.M.: Pattern recognition and machine learning, 5th Edition. Information +science and statistics, Springer (2007) +6. Blei, D.M., Kucukelbir, A., McAuliffe, J.D.: Variational inference: A review for +statisticians. Journal of the American Statistical Association 112(518), 859–877 +(2017). https://doi.org/10.1080/01621459.2017.1285773 +7. Borgström, J., Lago, U.D., Gordon, A.D., Szymczak, M.: A lambda-calculus foun- +dation for universal probabilistic programming. In: Proceedings of the 21st ACM +SIGPLAN International Conference on Functional Programming, ICFP 2016, +Nara, Japan, September 18-22, 2016. pp. 33–46 (2016) +8. Botev, Z., Ridder, A.: Variance Reduction. In: Wiley StatsRef: Statistics Reference +Online, pp. 1–6 (2017) +9. Cusumano-Towner, M.F., Saad, F.A., Lew, A.K., Mansinghka, V.K.: Gen: a +general-purpose probabilistic programming system with programmable inference. +In: McKinley, K.S., Fisher, K. (eds.) Proceedings of the 40th ACM SIGPLAN +Conference on Programming Language Design and Implementation, PLDI 2019, +Phoenix, AZ, USA, June 22-26, 2019. pp. 221–236. ACM (2019). https://doi. +org/10.1145/3314221.3314642, https://doi.org/10.1145/3314221.3314642 +10. Dahlqvist, F., Kozen, D.: Semantics of higher-order probabilistic programs with +conditioning. Proc. ACM Program. Lang. 4(POPL), 57:1–57:29 (2020) +11. Davidson-Pilon, C.: Bayesian Methods for Hackers: Probabilistic Programming and +Bayesian Inference. Addison-Wesley Professional (2015) +12. Ehrhard, T., Tasson, C., Pagani, M.: Probabilistic coherence spaces are fully ab- +stract for probabilistic PCF. In: The 41st Annual ACM SIGPLAN-SIGACT Sym- +posium on Principles of Programming Languages, POPL ’14, San Diego, CA, USA, +January 20-21, 2014. pp. 309–320 (2014) +13. Frölicher, A., Kriegl, A.: Linear Spaces and Differentiation Theory. Interscience, J. +Wiley and Son, New York (1988) +14. Glynn, P.W., Whitt, W.: The asymptotic efficiency of simulation estimators. Op- +erations research 40(3), 505–520 (1992) +15. Heunen, C., Kammar, O., Staton, S., Yang, H.: A convenient category for higher- +order probability theory. Proc. Symposium Logic in Computer Science (2017) +16. Heunen, C., Kammar, O., Staton, S., Yang, H.: A convenient category for higher- +order probability theory. In: 32nd Annual ACM/IEEE Symposium on Logic in +Computer Science, LICS 2017, Reykjavik, Iceland, June 20-23, 2017. pp. 1–12 +(2017) +17. Hur, C., Nori, A.V., Rajamani, S.K., Samuel, S.: A provably correct sampler for +probabilistic programs. In: 35th IARCS Annual Conference on Foundation of Soft- +ware Technology and Theoretical Computer Science, FSTTCS 2015, December +16-18, 2015, Bangalore, India. pp. 
475–488 (2015) + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +27 +18. Jang, E., Gu, S., Poole, B.: Categorical reparameterization with gumbel-softmax. +In: 5th International Conference on Learning Representations, ICLR 2017, Toulon, +France, April 24-26, 2017, Conference Track Proceedings (2017) +19. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: Bengio, Y., +LeCun, Y. (eds.) 3rd International Conference on Learning Representations, ICLR +2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings (2015) +20. Kingma, D.P., Welling, M.: Auto-encoding variational bayes. In: Bengio, Y., Le- +Cun, Y. (eds.) 2nd International Conference on Learning Representations, ICLR +2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings (2014) +21. Klenke, A.: Probability Theory: A Comprehensive Course. Universitext, Springer +London (2014) +22. Lee, W., Yu, H., Rival, X., Yang, H.: Towards verified stochastic variational infer- +ence for probabilistic programs. PACMPL 4(POPL) (2020) +23. Lee, W., Yu, H., Yang, H.: Reparameterization gradient for non-differentiable mod- +els. In: Advances in Neural Information Processing Systems 31: Annual Conference +on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December +2018, Montréal, Canada. pp. 5558–5568 (2018) +24. Lew, A.K., Cusumano-Towner, M.F., Sherman, B., Carbin, M., Mansinghka, V.K.: +Trace types and denotational semantics for sound programmable inference in prob- +abilistic languages. Proc. ACM Program. Lang. 4(POPL), 19:1–19:32 (2020) +25. Maddison, C.J., Mnih, A., Teh, Y.W.: The concrete distribution: A continuous re- +laxation of discrete random variables. In: 5th International Conference on Learning +Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track +Proceedings (2017) +26. Mak, C., Ong, C.L., Paquet, H., Wagner, D.: Densities of almost surely terminating +probabilistic programs are differentiable almost everywhere. In: Yoshida, N. (ed.) +Programming Languages and Systems - 30th European Symposium on Program- +ming, ESOP 2021, Held as Part of the European Joint Conferences on Theory and +Practice of Software, ETAPS 2021, Luxembourg City, Luxembourg, March 27 - +April 1, 2021, Proceedings. Lecture Notes in Computer Science, vol. 12648, pp. +432–461. Springer (2021) +27. Minh, A., Gregor, K.: Neural variational inference and learning in belief networks. +In: Proceedings of the 31th International Conference on Machine Learning, ICML +2014, Beijing, China, 21-26 June 2014. JMLR Workshop and Conference Proceed- +ings, vol. 32, pp. 1791–1799. JMLR.org (2014) +28. Mityagin, B.: The zero set of a real analytic function (2015) +29. Munkres, J.R.: Topology. Prentice Hall, New Delhi„ 2nd. edn. (1999) +30. Murphy, K.P.: Machine Learning: A Probabilististic Perspective. MIT Press (2012) +31. Ranganath, R., Gerrish, S., Blei, D.M.: Black box variational inference. In: Pro- +ceedings of the Seventeenth International Conference on Artificial Intelligence and +Statistics, AISTATS 2014, Reykjavik, Iceland, April 22-25, 2014. pp. 814–822 +(2014) +32. Rezende, D.J., Mohamed, S., Wierstra, D.: Stochastic backpropagation and ap- +proximate inference in deep generative models. In: Proceedings of the 31th In- +ternational Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 +June 2014. JMLR Workshop and Conference Proceedings, vol. 32, pp. 1278–1286. +JMLR.org (2014) +33. 
Shumway, R.H., Stoffer, D.S.: Time Series Analysis and Its Applications. Springer +Texts in Statistics, Springer-Verlag (2005) + +28 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +34. Soudjani, S.E.Z., Majumdar, R., Nagapetyan, T.: Multilevel monte carlo method +for statistical model checking of hybrid systems. In: Bertrand, N., Bortolussi, L. +(eds.) Quantitative Evaluation of Systems - 14th International Conference, QEST +2017, Berlin, Germany, September 5-7, 2017, Proceedings. Lecture Notes in Com- +puter Science, vol. 10503, pp. 351–367. Springer (2017) +35. Stacey, A.: Comparative smootheology. Theory and Applications of Categories +25(4), 64–117 (2011) +36. Staton, S.: Commutative semantics for probabilistic programming. In: Program- +ming Languages and Systems - 26th European Symposium on Programming, ESOP +2017, Held as Part of the European Joint Conferences on Theory and Practice of +Software, ETAPS 2017, Uppsala, Sweden, April 22-29, 2017, Proceedings. pp. 855– +879 (2017) +37. Staton, S., Yang, H., Wood, F.D., Heunen, C., Kammar, O.: Semantics for prob- +abilistic programming: higher-order functions, continuous distributions, and soft +constraints. In: Proceedings of the 31st Annual ACM/IEEE Symposium on Logic +in Computer Science, LICS ’16, New York, NY, USA, July 5-8, 2016. pp. 525–534 +(2016) +38. Titsias, M.K., Lázaro-Gredilla, M.: Doubly stochastic variational bayes for non- +conjugate inference. In: Proceedings of the 31th International Conference on Ma- +chine Learning, ICML 2014, Beijing, China, 21-26 June 2014. pp. 1971–1979 (2014) +39. Vákár, M., Kammar, O., Staton, S.: A domain theory for statistical probabilistic +programming. PACMPL 3(POPL), 36:1–36:29 (2019) +40. Wingate, D., Weber, T.: Automated variational inference in probabilistic program- +ming. CoRR abs/1301.1299 (2013) +41. Zang, I.: Discontinuous optimization by smoothing. Mathematics of Operations +Research 6(1), 140–152 (1981) +42. Zhang, C., Butepage, J., Kjellstrom, H., Mandt, S.: Advances in Variational Infer- +ence. IEEE Trans. Pattern Anal. Mach. Intell. 41(8), 2008–2026 (2019) + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +29 +A +Supplementary Materials for Section 2 +A.1 +Supplementary Materials for Section 2.2 +Lemma 1. If Γ | Σ ⊢ M : τ and Γ | Σ′ ⊢ M : τ ′ then Σ = Σ′. +Proof (sketch). We define an equivalence relation ≈ on types by +1. ι ≈ ι′ +2. (τ1 • Σ → τ2) ≈ (τ ′ +1 • Σ′ → τ ′ +2) iff τ1 ≈ τ ′ +1 implies Σ = Σ′ and τ2 ≈ τ ′ +2 +Intuitively, two types are related by ≈ if for (inductively) related arguments +they draw the same samples and again have related return types. We extend the +relation to contexts: Γ ≈ Γ ′ if for all x : τ in Γ and x : τ ′ in Γ ′, τ ≈ τ ′. +Then we show by induction that if Γ | Σ ⊢ M : τ, Γ ′ | Σ′ ⊢ M : τ ′ and +Γ ≈ Γ ′ then Σ = Σ′ and τ ≈ τ ′. Finally, this strengthened statement allows us +to prove the tricky case of the lemma: application. +A.2 +Supplementary Materials for Section 2.3 +Like measurable space (X, ΣX), a quasi Borel space (QBS) is a pair (X, MX) +where X is a set; but instead of axiomatising the measurable subsets ΣX, QBS +axiomatises the admissible random elements MX. The set MX, which is a col- +lection of functions R → X, must satisfy the following closure properties: +– if α ∈ MX and f : R → R is measurable, then α ◦ f ∈ MX +– if α : R → X is constant then α ∈ MX +– given a countable partition of the reals R = � +i∈N Si where each Si is Borel, +and {αi}i∈N ⊆ MX, the function r �→ αi(r) where r ∈ Si is in MX. 
+The QBS morphisms (X, MX) → (Y, MY ) are functions f : X → Y such that +f ◦ α ∈ MY whenever α ∈ MX. +Lemma 7 (Substitution). Let Γ, x : τ ′ | Σ ⊢ M : τ and Γ | [] ⊢ N : τ ′. +Then �M� +�� +γ, �N�(γ, []) +� +, s +� += �M[N/x]�(γ, s). +A.3 +Supplementary Materials for Section 2.4 +The following can be verified by structural induction on M: +Lemma 8 (Substitution). +If Γ, x : τ ′ | Σ ⊢ M : τ and Γ | [] ⊢ N : τ ′ then +Γ | Σ ⊢ M[N/x] : τ. +Note that it may not necessarily hold that Γ, x : τ ′ | Σ ⊢ M : τ and Γ | Σ′ ⊢ +N : τ ′ imply Γ | Σ ++ Σ′ ⊢ M[N/x] : τ. Take M ≡ x + x and N ≡ sample N . +Then note that +x : R | [] ⊢ M : R +| [N] ⊢ N : R +| [N, N] ⊢ M[N/x]. + +30 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +V ⇓[] +1 V +sample D ⇓[s] +pdfD(s) s +L ⇓s1 +w1 r +M ⇓s2 +w2 V +N ⇓s3 +w3 V ′ +if L < 0 then M else N ⇓s1++s2++s3 +w1·w2·w3 +V +r < 0 +L ⇓s1 +w1 r +M ⇓s2 +w2 V +N ⇓s3 +w3 V ′ +if L < 0 then M else N ⇓s1++s2++s3 +w1·w2·w3 +V ′ r ≥ 0 +M1 ⇓s1 +w1 r1 +M2 ⇓s2 +w2 r2 +M1 ◦ M2 ⇓s1++s2 +w1·w2 r1 ◦ r2 +◦ ∈ {+, ·} +M ⇓s +w r +op M ⇓s +w op(r) op ∈ {−, −1, exp, log}, r ∈ dom(op) +M ⇓s1 +w1 λx. M ′ +N ⇓s2 +w2 V ′ +M ′[V ′/x] ⇓s3 +w3 V +M N ⇓s1++s2++s3 +w1·w2·w3 +V +Fig. 6: Operational big-step sampling-based semantics +Discussion. Lemma 8 is a slightly stronger version of the usual substitution +lemma for a CBV language: if Γ, x : τ ′ | Σ ⊢ M : τ and Γ | Σ′ ⊢ V : τ then +Γ | Σ ++ Σ′ ⊢ M[V/x] : τ; note that Σ′ = [] necessarily, and we also have +Γ | Σ ++ Σ′ ⊢ (λx.M) V : τ. Consequently, subject reduction holds for CBV +β-reduction. +B +Supplementary Materials for Section 3 +Remark 1. Suppose φ : X → Y is a function and (X, CX, FX) and (Y, CY , CX) +are vector Frölicher spaces, where the former is generated by C0 ⊆ Set(R, X). +Then φ is a morphism iff for all f ∈ FY and c ∈ C0, f ◦ φ ◦ c ∈ C∞(R, R) (i.e. it +is not necessary to check c ∈ CX \ C0). +(Note that C ⊆ �CX ⊆ CX. Therefore, if f : X → R is such that for all c ∈ CX ⊇ C, +f ◦ c ∈ C∞(R, R) then f ∈ FX.) +Proposition 2. VectFr is cartesian closed. +Proof. +1. Singleton vector spaces are terminal objects. +2. Suppose (X1, CX1, FX1) and (X2, CX2, FX2) are vector Frölicher spaces. Con- +sider the vector Frölicher space on X1 × X2 generated by {⟨c1, c2⟩ | c1 ∈ +CX1, c2 ∈ CX2}. By construction (X1 × X2, CX1×X2, FX1×X2) is a vector +Frölicher space and πi : (X1 × X2, CX1×X2, FX1×X2) → (Xi, CXi, FXi) are +morphisms. Now, suppose Z and f : Z → X1 and g : Z → X2 are mor- +phisms. Clearly, h := ⟨f, g⟩ is the unique morphism Z → X1 × X2 such that +π1 ◦ h = f and π2 ◦ h = g. + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +31 +3. Finally, suppose (X, CX, FX) and (Y, CY , FY ) are vector Frölicher spaces. +Consider the vector Frölicher space on the hom-set VectFr(X, Y ) generated +by {c : R → VectFr(X, Y ) | ((r, x) �→ c(r)(x)) ∈ Fr(R × X, Y )}. Define +eval : VectFr(X, Y ) × X → Y by eval(f, x) := f(x). To see that this is a +morphism by Remark 1 it suffices to consider c1 : R → CX⇒Y such that +((r, x) �→ c1(r)(x)) ∈ Fr(R × X, Y ), c2 ∈ CX and g ∈ FY . Note that +g ◦ eval ◦ ⟨c1, c2⟩ = g ◦ ((r, x) �→ c1(r)(x)) +� +�� +� +∈Fr(R×X,Y ) +◦ ⟨id, c2⟩ +� �� � +∈CR×X +which is in C∞(R, R) by definition of morphisms. Clearly, this satisfies the +required universal property. 
+C +Supplementary Materials for Section 4 +C.1 +Supplementary Materials for Section 4.1 +The following immediately follows from a well-known result about exchanging +differentiation and integration, which is a consequence of the dominated conver- +gence theorem [21, Theorem 6.28]: +Lemma 9. Let U ⊆ R be open. Suppose g : R × Rn → R satisfies +1. for each x ∈ R, s �→ g(x, s) is integrable +2. g is continuously differentiable everywhere +3. there exists integrable h : Rn → R such that for all x ∈ U and s ∈ Rn, +| ∂g +∂x(x, s)| ≤ h(s). +Then for all x ∈ U, +∂ +∂x +� +g(x, s) ds = +� ∂g +∂x(x, s) ds. +Corollary 1. Let i ∈ {1, . . . , m}, M > 0 and U := BM(0) ⊆ Rm be the open +M-ball. Suppose g : Rm × Rn → R satisfies +1. for each x ∈ Rm, s �→ g(x, s) is integrable +2. g is continuously differentiable everywhere +3. there exists integrable h : Rn → R such that for all x ∈ U and s ∈ Rn, +| ∂g +∂xi (x, s)| ≤ h(s). +Then for all x ∈ U, +∂ +∂xi +� +g(x, s) ds = +� +∂g +∂xi (x, s) ds. +C.2 +Supplementary Materials for Section 4.2 +Lemma 3. If f, g : Rn → R have (all) finite moments so do −f, f + g, f · g. +Proof. For negation it is trivial. For addition it can be checked as follows: +E[|(f + g)(s)|p] ≤ E [|2f(s)|p + |2g(s)|p] +≤ 2p · E [|f(s)|p] + 2p · E [|g(s)|p] < ∞ +For multiplication it follows from Cauchy-Schwarz: +E[|(f · g)(s)|p] = E [|f(s)|p · |g(s)|p] ≤ +� +E [|f(s)|2p] · E [|g(s)|2p] < ∞ + +32 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +Γ | Σ ⊢poly M : τ +Γ ′ | Σ ⊢poly M : τ ′ Γ ⊑poly Γ ′, τ ⊑poly τ ′ +x : τ | [] ⊢poly x : τ +| [] ⊢poly r : R r ∈ R +| [] ⊢poly r : R>0 +r ∈ R>0 +| [] ⊢poly ◦ : ι → ι → ι ◦ ∈ {+, ·} +| [] ⊢poly − : R → R +Γ | Σ ⊢poly L : R +Γ | Σ′ ⊢poly M : σ +Γ | Σ′′ ⊢poly N : σ +Γ | Σ ++ Σ′ ++ Σ′′ ⊢poly if L < 0 then M else N : σ +| [sj ∼ D] ⊢poly sample D : R D has finite moments +Γ, y : τ1 | Σ ⊢poly M : τ2 +Γ | [] ⊢poly λy. M : τ1 • Σ → τ2 +Γ | Σ1 ⊢poly M : τ1 • Σ3 → τ2 +Γ | Σ2 ⊢poly N : τ1 +Γ | Σ1 ++ Σ2 ++ Σ3 ⊢poly M N : τ2 +Fig. 7: Typing judgements for ⊢poly. +Lemma 5 (Fundamental). +If θ, x1 : τ1, . . . , xℓ : τℓ | Σ ⊢poly M : τ, n ∈ N, +ξ1 ∈ P(n) +τ1 , . . . , ξℓ ∈ P(n) +τℓ +then �M� ∗ ⟨ξ1, . . . , ξℓ⟩ ∈ P(n+|Σ|) +τ +, where +�M� ∗ ⟨ξ1, . . . , ξℓ⟩ : Θ × Rn+|Σ| → �τ� +(θ, s ++ s′) �→ �M�((θ, ξ1(θ, s), . . . , ξℓ(θ, s)), s′) +Proof. We prove the claim by induction on M. +1. For constants r and variables xi this is obvious; for parameters θi it is ensured +by Assumption 1. +2. �sample D�((), [s]) = s clearly has finite moments because D does. +3. Next, to show �+� ∈ P(0) +ι→ι→ι (multiplication can be checked analogously) let +n1, n2 ∈ N, f1 ∈ P(n1) +ι +, f2 ∈ P(n1+n2) +ι +. By definition f1 and f2 are uniformly +dominated by some g1 and g2, respectively, with finite moments. By Lemma 3 +g1 + g2 has finite moments to and +|(�+� ⊙ f1 ⊙ f2)(θ, s1 ++ s2)| ≤ |f1(θ, s1)| + |f2(θ, s1 ++ s2)| +≤ g1(s1) + g2(s1 ++ s2) +4. The reasoning for − is straightforward and −1, exp and log cannot occur. +5. The claim for conditionals follows Lemma 4. +6. For applications it follows immediately from the inductive hypothesis and +the definition. +Suppose θ, x1 : τ1, . . . , θ, xℓ : τℓ | Σ1 ++ Σ2 ++ Σ3 : τℓ ⊢poly M N : τ +because θ, x1 : τ1, . . . , θ, xℓ : τℓ | Σ1 : τℓ ⊢poly M : τ ′ • Σ3 → τ and +θ, x1 : τ1, . . . , θ, xℓ : τℓ | Σ2 : τℓ ⊢poly M N : τ ′. +Let n ∈ N and ξ1 ∈ P(n) +τ1 , . . . , ξℓ ∈ P(n) +τℓ . By the inductive hypothesis, +�M� ∗ ⟨ξ1, . . . , ξℓ⟩ ∈ P(n+|Σ1|) +τ ′•Σ3→τ +�N� ∗ ⟨ξ1, . . . 
, ξℓ⟩ ∈ P(n+|Σ1|+|Σ2|) +τ ′ + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +33 +By definition of P(n+|Σ1|) +τ ′•Σ3→τ, +(�M� ∗ ⟨ξ1, . . . , ξℓ⟩) ⊙ (�N� ∗ ⟨ξ1, . . . , ξℓ⟩) ∈ P(n+|Σ1|+|Σ2|+|Σ3|) +τ +and by definition of ⊙ and ∗, +(�M� ∗ ⟨ξ1, . . . , ξℓ⟩) ⊙ (�N� ∗ ⟨ξ1, . . . , ξℓ⟩) = �M N� ∗ ⟨ξ1, . . . , ξℓ⟩ +7. For abstractions suppose θ, x1 : τ1, . . . , xℓ : τℓ | [] ⊢poly λy. M : τ • Σ → τ ′ +because θ, x1 : τ1, . . . , xℓ : τℓ, y : τ | Σ ⊢poly M : τ ′; let n ∈ N and ξ1 ∈ +P(n) +τ1 , . . . , ξℓ ∈ P(n) +τℓ . +To show the claim, suppose n2 ∈ N and g ∈ P(n+n2) +τ +. By definition of the +logical predicate we need to verify (�M� ∗ ⟨ξ1, . . . , ξℓ⟩) ⊙ g ∈ P(n+n2+|Σ|) +τ ′ +. +Call �ξi the extension of ξi to Θ × Rn+n2 → R. By the inductive hypothesis, +�M� ∗ ⟨�ξ1, . . . , �ξℓ, g⟩ ∈ P(n+n2+|Σ|) +τ ′ +Finally it suffices to observe that +(�M� ∗ ⟨ξ1, . . . , ξℓ⟩) ⊙ g = �M� ∗ ⟨�ξ1, . . . , �ξℓ, g⟩ +C.3 +Supplementary Materials for Section 4.3 +See Fig. 8. +C.4 +Supplementary Materials for Section 4.4 +We define the logical predicate Q(n) +τ +on Θ × Rn → �τ� in VectFr: +1. f ∈ Q(n) +ι(e) if +(a) partial derivatives of loge ◦f up to order 2 are uniformly dominated by +a function with finite moments +(b) if ι(e) is R(0) +>0 then f is dominated by a positive constant function +2. f ∈ P(n) +τ1•Σ3→τ2 if for all n2 ∈ N and g ∈ Q(n+n2) +τ1 +, f ⊙ g ∈ Q(n+n2+|Σ3|) +τ2 +. +Lemma 10 (Fundamental). If θ1 : ι(0) +1 , . . . , θm : ι(0) +m , x1 : τ1, . . . , xℓ : τℓ | +Σ ⊢SGD M : τ, n ∈ N, ξ1 ∈ Q(n) +τ1 , . . . , ξℓ ∈ Q(n) +τℓ +then �M�η ∗ ⟨ξ1, . . . , ξℓ⟩ ∈ +Q(n+|Σ|) +τ +. +Proof. Similar to Lemma 5, exploiting standard rules for logarithm and partial +derivatives. + +34 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +ια ⊑? ια′ (cond. subt. 1) +Rα +>0 ⊑? Rα′ (cond. subt. 2) +τ ′ +1 ⊑? τ1 +τ2 ⊑? τ ′ +2 +(τ1 • Σ → τ2) ⊑? (τ ′ +1 • Σ → τ ′ +2) +(a) Subtyping +Γ | Σ ⊢? M : τ +Γ ′ | Σ ⊢? M : τ ′ Γ ⊑? Γ ′, τ ⊑? τ ′ +x : τ | [] ⊢? x : τ +Γ, y : τ1 | Σ ⊢? M : τ2 +Γ | [] ⊢? λy. M : τ1 • Σ → τ2 +Γ | Σ2 ⊢? M : τ1 • Σ3 → τ2 +Γ | Σ1 ⊢? N : τ1 +Γ | Σ1 ++ Σ2 ++ Σ3 ⊢? M N : τ2 +θi : ια | [] ⊢? θi : ια (cond. Para) +| [] ⊢? r : ια (cond. Const), r ∈ �ι� +| [] ⊢? + : ια1 → ια2 → ια (cond. Add) +| [] ⊢? · : ια1 → ια2 → ια (cond. Mul) +| [] ⊢? − : Rα → Rα (cond. Min) +| [] ⊢? +−1 : Rα +>0 → Rα +>0 +(cond. Inv) +| [] ⊢? exp : Rα → Rα′ +>0 +(cond. Exp) +| [] ⊢? log : Rα +>0 → Rα′ (cond. Log) +Γ | Σ ⊢? L : ια +Γ | Σ′ ⊢? M : σ +Γ | Σ′′ ⊢? N : σ +Γ | Σ ++ Σ′ ++ Σ′′ ⊢? if L < 0 then M else N : σ +(cond. If) +| [sj ∼ D] ⊢? sample D : Rα (cond. Sample) +(b) Typing rules for ⊢? +Fig. 8: Generic type system with annotations. + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +35 +Γ | Σ ⊢SGD M : τ +Γ ′ | Σ ⊢SGD M : τ ′ Γ ⊑SGD Γ ′, τ ⊑SGD τ ′ +x : τ | [] ⊢SGD x : τ +Γ, y : τ1 | Σ ⊢SGD M : τ2 +Γ | [] ⊢SGD λy. M : τ1 • Σ → τ2 +Γ | Σ2 ⊢SGD M : τ1 • Σ3 → τ2 +Γ | Σ1 ⊢SGD N : τ1 +Γ | Σ1 ++ Σ2 ++ Σ3 ⊢SGD M N : τ2 +θi : ι(0) | [] ⊢SGD θi : ι(0) +| [] ⊢SGD r : ι(0) r ∈ �ι� +| [] ⊢SGD + : ι(0) → ι(0) → ι(0) +| [] ⊢SGD · : ι(e) → ι(e) → ι(e) +| [] ⊢SGD − : R(0) → R(0) +| [] ⊢SGD +−1 : R(e) +>0 → R(e) +>0 +| [] ⊢SGD exp : R(0) → R(1) +>0 +| [] ⊢SGD log : R(e) +>0 → R(0) +Γ | Σ ⊢SGD L : ι(0) +Γ | Σ′ ⊢SGD M : σ +Γ | Σ′′ ⊢SGD N : σ +Γ | Σ ++ Σ′ ++ Σ′′ ⊢SGD if L < 0 then M else N : σ +| [sj ∼ D] ⊢SGD sample D : R(0) D has finite moments +Fig. 9: Typing rules for ⊢SGD +D +Supplementary Materials for Section 5 +Example 12 (Divergence). Suppose M ≡ if ((λz. z3+θ) sample N ) < 0 then 0 else 1. 
+Let φθ(z) := z3 + θ. Note that despite being bijective, φθ : R → R is not a dif- +feomorphism because φ−1 +θ (α) = +3√ +α − θ is not differentiable at α = θ. Then +Ez∼N [�M�(θ, z)] = +� ∞ +− 3√−θ +N(z | 0, 1) dz +∂ +∂θEz∼N [�M�(θ, z)] = 1 +3 · N(− +3√ +−θ | 0, 1) · θ− 2 +3 +Therefore θ �→ Ez∼N [�M�(θ, z)] is not differentiable at 0. Besides, for θ = 0, +Ez∼N +� ∂ +∂θ�M(θ, z)�η +� += Ez∼N +� +σ′ +η(z3) +� +→ ∞ +D.1 +Properties of Uniform Almost Uniform Convergence +Let µ(U) = Es∼D[[s ∈ U]], where D has finite moments and φθ be a diffeomor- +phism. We continue assuming compactness of Θ. +Lemma 11. limk∈N supθ∈Θ µ(φ−1 +θ (Rn \ Bk(0))) = 0 + +36 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +Γ | Σ ⊢unif M : τ +Γ ′ | Σ ⊢unif M : τ ′ Γ ⊑unif Γ ′, τ ⊑unif τ ′ +x : τ | [] ⊢unif x : τ +Γ, y : τ1 | Σ ⊢unif M : τ2 +Γ | [] ⊢unif λy. M : τ1 • Σ → τ2 +Γ | Σ2 ⊢unif M : τ1 • Σ3 → τ2 +Γ | Σ1 ⊢unif N : τ1 +Γ | Σ1 ++ Σ2 ++ Σ3 ⊢unif M N : τ2 +θi : ι(f,∅) | [] ⊢unif θi : ι(f,∅) +| [] ⊢unif r : ι(f,∅) r ∈ �ι� +| [] ⊢unif ◦ : ι(f,∆) → ι(f,∆) → ι(f,∆) ◦ ∈ {+, ·} +| [] ⊢unif ◦ : ι(t,∆1) → ι(t,∆2) → ι(t,∆1∪∆2) ◦ ∈ {+, ·}, ∆1 ∩ ∆2 = ∅ +| [] ⊢unif − : R(g,∆) → R(g,∆) +| [] ⊢unif +−1 : R(g,∆) +>0 +→ R(g,∆) +>0 +| [] ⊢unif exp : R(g,∆) → R(g,∆) +>0 +| [] ⊢unif log : R(g,∆) +>0 +→ R(g,∆) +Γ | Σ ⊢unif L : ι(t,∆) +Γ | Σ′ ⊢unif M : σ +Γ | Σ′′ ⊢unif N : σ +Γ | Σ ++ Σ′ ++ Σ′′ ⊢unif if L < 0 then M else N : σ +| [sj ∼ D] ⊢unif sample D : R(t,{sj}) +θ | [] ⊢unif T : Rα → Rα +θ | [sj ∼ D] ⊢unif transform sample D by T : R(t,{sj}) T diffeomorphic +Γ | [] ⊢unif M : ι(t,∆) +Γ | [] ⊢unif a · M : ι(t,∆) a ∈ N>0 +Γ | [] ⊢unif M : ι(t,∆) +Γ | [] ⊢unif M a : ι(t,∆) a ∈ N>0 +Fig. 10: Typing rules for ⊢unif. + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +37 +Proof. Let s0 ∈ Rn be arbitrary. +δ(i) +∗ +:= sup +θ∈Θ +|φ(i) +θ (s0)| +d(i) +k +:= sup +θ∈Θ +sup +s∈Bk(s0) +∥∇θφ(i) +θ (s)∥ +thus if s ∈ Bk(s0), +|φθ(s)(i)| ≤ ∥φ(i) +θ (0)∥ + ⟨∇φ(i) +θ (ζ), x⟩ +≤ δi +∗ + ∥∇φ(i) +θ (ζ)∥ · ∥x∥ +≤ δi +∗ + d(i) +k · k +Let +δ(i) +k +:= δi +∗ + d(i) +k · k +δk := √n · +n +max +i=1 δ(i) +k +Note that for s ∈ Rn, if ∥φθ(s)∥ > δk then |φ(i) +θ (s)| > δ(i) +r +for some 1 ≤ i ≤ n +and thus s ∈ Rn \ Bk(s0). As a consequence, φ−1 +θ (Rn \ Bδk(0)) ⊆ Rn \ Bk(s0). +Now, it suffices to observe that limk µ(Rn \ Bk(s0)) = 0. +Lemma 12. For each k ∈ N there exists c > 0 such that µ(φ−1 +θ (U ∩ Bk(0))) ≤ +c · Leb(U ∩ Bk(0)). +Proof. Let f : Rn → R be the density of µ . Then +µ(φ−1 +θ (U ∩ Bk(0))) = +� +φ−1 +θ +(U∩Bk(0)) +f(s) ds += +� +U∩Bk(0) +f(φ−1 +θ (z)) · | det Jφ−1 +θ (z)| dz +Lemma 13. Suppose fη ◦ φ(−)(−) +u.a.u. +−−−−→ f ◦ φ(−)(−) and f ̸= 0 a.e. +Then ση ◦ fη ◦ φ(−)(−) +u.a.u. +−−−−→ [f(φ(−)(−)) > 0]. +Proof. Let δk, ϵk and ηk be witnesses for fη ◦ φ(−)(−) +u.a.u. +−−−→ f ◦ φ(−)(−). +For i ∈ N define Vi := {z ∈ Rn | |f(z)| < 1 +i }. For every k ∈ N there exists +ik ∈ N such that Leb(Vik ∩ Bk(0)) < 1 +k. (This is because Leb((−) ∩ Bk(0)) is a +finite measure and ∩i∈NVi ⊆ f −1(0) and f ̸= 0 a.e.) +Furthermore, for k ∈ N let Kk ∈ N be such that ϵKk < +1 +2ik . By Assumption 3 +there exists 0 < η′ +k < ηKk such that for all 0 < η < η′ +k and y > +1 +2ik , ση(−y) < 1 +k +and ση(y) > 1 − 1 +k. We also define +δ′ +k := δKk + sup +θ∈Θ +µ(φ−1 +θ (Rn \ Bk(0))) + 1 +k +ϵ′ +k := 1 +k + +38 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +By Lemma 11, lim δ′ +k = 0 = lim ϵ′ +k. +Now, suppose θ ∈ Θ and k ∈ N. +Define U ′ := UKk ∪ φ−1 +θ (Vik) where UKk ⊆ Rn is the corresponding set for +[f(φ(−)(−)) > 0], θ and Kk. 
It holds +µ(U ′) ≤ µ(UKk) + µ(φ−1 +θ (Rn \ Bk(0))) + µ(φ−1 +θ (Vik ∩ Bk(0))) +≤ µ(UKk) + µ(φ−1 +θ (Rn \ Bk(0))) + c · Leb(Vik ∩ Bk(0)) +≤ δ′ +k +Besides, for 0 < η < η′ +k and s ∈ Rn \ U ′, |fη(φθ(s)) − f(φθ(s))| < +1 +2ik and +|f(φθ(s))| ≥ 1 +ik thus |fη(φθ(s))| > +1 +2ik . +Consequently, |ση(fη(φθ(s))) − [f(φθ(s)) > 0]| < 1 +k. +Lemma 14. If f : U1 × U2 → R (for open and connected U1, U2 ⊆ R) is contin- +uously differentiable and gη +u.a.u. +−−−−→ g : Θ ×Rn → U1 and hη +u.a.u. +−−−−→ h : Θ ×Rn → +U2, g, h are also bounded on bounded subsets of Rn then f ◦ ⟨gη, hη⟩ +u.a.u. +−−−−→ +f ◦ ⟨g, h⟩ : Θ × Rn → R. . +Proof. First, note that f ◦ ⟨g, h⟩ is bounded on bounded subsets of Rn because +f is continuously differentiable and g and h also satisfies this property. +Let δ(i) +k , ϵ(i) +k +and η(i) +k +(i ∈ {1, 2}) be witnesses for gη +u.a.u. +−−−→ g and hη +u.a.u. +−−−→ h. +W.l.o.g. all ϵ(i) +k +≤ 1. Observe that for k ∈ N, +Mk := +sup +(θ,s)∈Θ×Bk(0) +∥(g(θ, s), h(θ, s))∥ + +√ +2 < ∞ +because g(Θ × Bk(0)) and h(Θ × Bk(0)) are bounded by assumption (also +Assumption 1) and therefore +dk := +sup +(x,y)∈Mk∩(U1×U2) +∥∇f(x, y)∥ < ∞ +is well-defined. For k ∈ N there exists Kk ≥ k such that each +√ +2 · dk · ϵ(i) +Kk < 1 +k. +Define +δk := µ(Rn \ Bk(0)) + δ(1) +Kk + δ(2) +Kk +ϵk := 1 +k +ηk := min{η(1) +Kk, η(2) +Kk} +Note that by Lemma 11, lim δk = 0 = lim ϵk. +Let θ ∈ Θ and k ∈ N. Let V := (Rn \ Bk(0)) ∪ V (1) ∪ V (2), where V (1) (and +V (2), respectively) are the sets for g (and h, respectively), θ and Kk. Note that +µ(V ) ≤ δk. Besides for every 0 < η < ηk and s ∈ Rn \ V , |gη(θ, s)| ≤ |g(θ, s)| + +ϵ(1) +Kk ≤ |g(θ, s)|+1 (similarly for h). Hence, every point between (gη(θ, s), hη(θ, s)) +and (g(θ, s), h(θ, s)) is in BMk(0) ∩ (U1 × U2) and therefore by the mean value + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +39 +theorem, +|f(gη(θ, s), hη(θ, s)) − f(g(θ, s), h(θ, s))| +≤ +sup +ζ∈BMk (0)∩(U1×U2) +|⟨∇f(ζ), (gη(θ, s) − g(θ, s), hη(θ, s) − h(θ, s))⟩| +≤ +sup +ζ∈BMk (0)∩(U1×U2) +∥∇f(ζ)∥ · ∥(gη(θ, s) − g(θ, s), hη(θ, s) − h(θ, s))∥ +< dk · +√ +2 · max{ϵ(1) +Kk, ϵ(2) +Kk} +≤ ϵk +using the Cauchy–Schwarz inequality in the second step. +Lemma 6. Let f, fη : Θ × Rn → R have finite moments. +If fη +u.a.u. +−−−−→ f then Es∼D[fη(θ, s)] +unif. +−−−→ Es∼D[f(θ, s)]. +Proof. It suffices to show the uniform convergence of Es∼D[|fη(θ, s) − f(θ, s)|] +to 0. +By assumption there exists M > 0 such that Es∼D +� +|fη(θ, s) − f(θ, s)|2� +< M +for all η > 0 and θ ∈ Θ. +Let ϵ > 0. By uniform almost uniform convergence of fη to f there exists k +such that δk · M, ϵk < ϵ +2. +Suppose θ ∈ Θ and η < ηk. Let U ⊆ Rn be the witness for almost uniform +convergence of fη, k and θ. In particular, Es∼D[[s ∈ U]] · M < δk · M < ϵ +2 and +for every s ∈ Rn \ U, |fη(θ, s) − f(θ, s)| < ϵk < ϵ +2. +Es∼D[|fη(θ, s) − f(θ, s)|] +≤ Es∼D [[s ∈ U] · |fη(θ, s) − f(θ, s)|] + Es∼D [[s ∈ Rn \ U] · |fη(θ, s) − f(θ, s)|] +≤ Es∼D [[s ∈ U]] · Es∼D +� +|fη(θ, s) − f(θ, s)|2� ++ Es∼D +� +[s ∈ Rn \ U] · ϵ +2 +� +≤ ϵ +D.2 +Type Soundness +In order to aggregate the effect of transformations we employ lists (typically +denoted by Φ) of diffeomorphisms. A list [φ(1) +(−), . . . , φ(n) +(−)] of diffeomorphisms +Θ × R → R defines a diffeomorphism +φ(−) : Θ × Rn → Rn +(θ, [s1, . . . , sn]) �→ +� +φ(1) +θ (s1), . . . , φ(n) +θ (sn) +� +and we use concatentation notation. 
+We posit the following infinitary logical relation RΦ +τ between sequences of +elements Θ × Rn → �τ� in VectFr (corresponding to the smoothings) and +Θ × Rn → �τ� in QBS (corresponding to the measurable standard semantics): + +40 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +1. (fη, f) ∈ RΦ +ι(f,∆) if fη +u.a.u. +−−−→ f +2. (fη, f) ∈ RΦ +ι(t,∆) if fη +u.a.u. +−−−→ f, fη = gη ◦ φ(−) and f = g ◦ φ(−), where +(a) φ is defined by Φ as above +(b) g : Rn → R is piecewise analytic and non-constant +(c) on each piece g may only depend on (transformed) zj if sj ∈ ∆ +3. (fη, f) ∈ RΦ +τ1•Σ3→τ2 iff for all Φ2 and (gη, g) ∈ RΦ++Φ2 +τ1 +, there exists Φ3 such +that |Φ3| = |Σ3| and (fη ⊙ gη, f ⊙ g) ∈ RΦ++Φ2++Φ3 +τ2 +. +Note that Item 2b implies f ̸= 0 a.e. because non-constant analytic functions +vanish on negligible sets [28] and diffeomorphisms preserve negligibility. +Lemma 15. If (fη, f) ∈ RΦ +R(t,∆) and (gη, g), (hη, h) ∈ RΦ +σ then +((ση ◦ (−fη)) · gη + (ση ◦ fη) · hη, [f(−) < 0] · g + [f(−) ≥ 0] · h) ∈ RΦ +σ +Proof. We focus on the argument for the case where σ is the annotated base type, +in particular ι(t,∆), which is most interesting; the extension to higher orders can +be obtained similarly as for Lemma 4. Clearly, Items 2b and 2c are satisfied and +u.a.u. convergence follows from Lemmas 13 and 14. +Intuitively, Φ describes how samples which may have been drawn during +execution are transformed We can add additional samples, which are ignored: +Lemma 16. Let (fη, f) ∈ RΦ,τ and Φ′ be a list of diffeomorphisms. Then +(gη, g) ∈ RΦ++Φ′,τ, where gη(θ, s ++ s′) := fη(θ, s) and g(θ, s ++ s′) := f(θ, s). +Lemma 17 (Fundamental). +If θ1 : ι(f,∅) +1 +, . . . , θm : ι(f,∅) +m +, x1 : τ1, . . . , xℓ : τℓ | +Σ ⊢ M : τ, Φ be a list of diffeomorphisms, (ξ(1) +η , ξ(1)) ∈ RΦ +τ1, . . . , (ξ(ℓ) +η , ξ(ℓ)) ∈ +RΦ +τℓ then there exists a list Φ′ of diffeomorphisms that |Σ| = |Φ′| and (�M�η ∗ +⟨ξ(1) +η , . . . , ξ(ℓ) +η ⟩, �M�∗⟨ξ(1), . . . , ξ(ℓ)⟩) ∈ RΦ++Φ′ +τ +, where ∗ is defined as in Lemma 5. +Proof. The claim is proven by induction on the typing judgements. We focus on +the most interesting cases: +1. For conditionals we exploit the inductive hypothesis and Lemma 15. +2. Suppose θ | [sj ∼ D] ⊢unif transform sample D by T : R(t,{sj}) because T +is diffeomorphic. We define +g(sj) := sj +φθ(s) := �T�(θ, [])(s, []) = �T�η(θ, [])(s, []) +and therefore we can easily see that +�transform sample D by T�η = g ◦ φ(−) = �transform sample D by T� +and (�transform sample D by T�η, �transform sample D by T�) ∈ R +[φ(−)] +R(t,{sj }) +follows immediately. + +Fast and Correct Optimisation for Probabilistic Programming via Smoothing +41 +3. For addition we focus on the interesting case | [] ⊢unif + : ι(t,∆1) → ι(t,∆2) → +ι(t,∆1∪∆2), where ∆1 ∩∆2 = ∅. Let Φ, Φ1 and Φ2 be lists of diffeomorphisms, +(f (1) +η , f (1)) ∈ RΦ++Φ1 +ι(t,∆1) and (f (2) +η , f (2)) ∈ RΦ++Φ1++Φ2 +ι(t,∆2) +. By definition there are +decompositions +f (1) +η += g(1) +η +◦ φ(1) +(−) +f (1) = g(1) ◦ φ(1) +(−) +f (2) +η += g(2) +η +◦ φ(2) +(−) +f (2) = g(2) ◦ φ(2) +(−) +Let � +g(1) +η +and � +g(1) be the extension of g(1) +η +and g(1), respectively, to R|Φ|+|Φ1|+|Φ2| → +R. Note that +�+�η ⊙ f (1) +η +⊙ f (2) +η += (� +g(1) +η ++ g(2) +η ) ◦ φ(2) +θ +�+� ⊙ f (1) ⊙ f (2) = (� +g(1) + g(2)) ◦ φ(2) +θ +Clearly (using Lemma 16), � +g(1) + g(2) is again piecewise analytic and on +each piece depends on (transformed) samples either g(1) or g(2) depends on. 
+Furthermore, on each piece � +g(1) + g(2) is not constant because g(1) and g(2) +are not constant and depend on different variables. +E +Supplementary Materials for Section 7 +E.1 +Experimental Setup +To generate the ELBO trajectories shown in Fig. 5, we separately took 1000 +samples of the ELBO every 100 iterations, taking extra samples to reduce the +variance in the graphs presented. The random samples were the same across +estimators, which leads to the correlation in noise seen in their trajectories. +Table 2 compares the average variance of the estimators, where the average +is taken over a single optimisation trajectory. For each estimator, we took 1000 +Monte Carlo samples of the gradient every 100 iterations to compute the vari- +ance of the estimator at that iteration; we then computed the average of these +variances. Since the gradients are vectors, the variance was measured in two +ways: averaging the component-wise variances and the variance of the L2 norm. +We then separately benchmark each estimator by measuring how many iter- +ations each can complete in a fixed time budget and setting the computational +cost to be the reciprocal of that. This is then used to compute a work-normalised +variance [14,8] that is taken to be the product of the computational cost and +variance. Intuitively, we divide by the relative time taken since we can reduce +the variance by the same factor running the faster estimator more times. +E.2 +Models +We include the models from [23], which are as follows: +– temperature [34] models a controller keeping the temperature of a room +within set bounds. The discontinuity arises from the discrete state of the + +42 +Basim Khajwal, C.-H. Luke Ong, and Dominik Wagner(�) +controller, being either on or off, which disrupts the continuous state rep- +resenting the temperature of the room. Given a set of noisy measurements +of the room temperature, the goal is to infer the controller state at each +of 21 time steps. The model has a 41-dimensional latent variable and 80 +if-statements. +– textmsg [11] models daily text message rates, and the goal is to discover a +change in the rate over the 74-day period of data given. The non-differentiability +arises from the point at which the rate is modelled to change. The model has +a 3-dimensional latent variable (the two rates and the point at which they +change) and 37 if-statements. +– influenza [33] models the US influenza mortality for 1969. In each month, +the mortality rate depends on the dominant virus strain being of type 1 or +type 2, producing a non-differentiablity for each month. Given the mortality +data, the goal is to infer the dominant virus strain in each month. The model +has a 37-dimensional latent variable and 24 if-statements. +Additionally, we introduce the following models: +– cheating [11] simulates a differential privacy setting where students taking +an exam are surveyed to determine the prevalence of cheating without ex- +posing the details for any individual. Students are tasked to toss a coin, on +heads they tell the truth (cheating or not cheating) and on tails they toss a +second coin to determine their answer. The tossing of coins here is a source +of discontinuity. The goal, given the proportion of students who answered +yes, is to predict a posterior on the cheating rate. In this model there are 300 +if-statements and a 301-dimensional latent space, although we only optimise +over a single dimension with the other 300 being sources of randomness. 
– xornet is a simple multi-layer neural network trained to compute the exclusive-or (XOR) function. It has a 2-4-2-1 network architecture with two inputs and one output, and all activation functions are the Heaviside step function, which is traditionally infeasible for gradient-based optimisation because of the discontinuity at 0 and a zero gradient everywhere else. The model has a 25-dimensional latent space (for all the weights and biases) and 28 if-statements. Note that this model is not applicable to the Lyy18 estimator since the branch conditions are not all affine in the latent space.

E.3 Analysis of Results

The ELBO graph for the temperature model in Fig. 5a shows that the Reparam estimator is biased, converging to a suboptimal value when compared to the Smooth and Lyy18 estimators. We can also see from the graph and the data in Table 2a that the Score estimator exhibits extremely high variance, and does not converge.

The textmsg and influenza ELBO graphs in Fig. 5b and Fig. 5c both show all estimators converging towards roughly the same value, with Score exhibiting a larger variance. The work-normalised variance of the Smooth estimator is the lowest for both models under both variance measures.

For the cheating model in Fig. 5d, we have another visual indicator of the bias of the Reparam gradient. Here Smooth outperforms again with the lowest work-normalised variance (ignoring that of Reparam since it is biased).

Finally, the xornet model shows the difficulty of training step-function based neural nets. The Lyy18 estimator is not applicable since the boundary integral has no general efficient estimator for non-affine conditionals, which is the case here. In Fig. 5e, the Reparam estimator makes no progress while other estimators manage to converge to close to 0 ELBO, showing that they learn a network that correctly classifies all points. In particular, the Smooth estimator converges the quickest.

To summarise, the results show cases where the Reparam estimator is biased and how the Smooth estimator does not have the same limitation. Where the Lyy18 estimator is defined, they converge to roughly the same objective value; and the smoothing approach is generalisable to more complex models such as neural networks with non-linear boundaries. Our proposed Smooth estimator has consistently significantly lower work-normalised variance, up to 3 orders of magnitude.
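The work-normalised variances reported in Table 2 below combine the measurements described in Appendix E.1. The following minimal sketch (our own code with hypothetical measurements, not the actual benchmarking script) shows how such entries can be derived from raw gradient samples and timing data:

    import numpy as np

    def variances(grad_samples):
        # grad_samples: array of shape (num_samples, dim) of Monte Carlo gradient estimates
        avg_var = grad_samples.var(axis=0).mean()               # average of component-wise variances
        norm_var = np.linalg.norm(grad_samples, axis=1).var()   # variance of the L2 norm
        return avg_var, norm_var

    def work_normalised(avg_var, norm_var, iterations_in_budget):
        cost = 1.0 / iterations_in_budget   # reciprocal of iterations completed in a fixed time budget
        return cost * avg_var, cost * norm_var

    # hypothetical measurements for two estimators, reported as ratios w.r.t. Score
    smooth = work_normalised(*variances(np.random.randn(1000, 41)), iterations_in_budget=900)
    score = work_normalised(*variances(50.0 * np.random.randn(1000, 41)), iterations_in_budget=1400)
    print(smooth[0] / score[0], smooth[1] / score[1])

Dividing by the number of iterations completed per time budget accounts for the fact that a cheaper estimator can simply be run more often to reduce variance, as explained in Appendix E.1.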
Table 2: Computational cost and work-normalised variances, all given as ratios with respect to the Score estimator (whose data are omitted since they would be a row of 1s). We chose η = 0.15 for Smooth.

(a) temperature
Estimator   Cost       Avg(V(.))   V(∥.∥2)
Smooth      1.62e+00   3.17e-10    2.09e-09
Reparam     1.28e+00   1.48e-08    2.01e-08
Lyy18       9.12e+00   1.22e-06    4.76e-05

(b) textmsg
Estimator   Cost       Avg(V(.))   V(∥.∥2)
Smooth      2.00e+00   2.29e-02    3.79e-02
Reparam     1.18e+00   1.43e-02    2.29e-02
Lyy18       4.00e+00   5.76e-02    8.46e-02

(c) influenza
Estimator   Cost       Avg(V(.))   V(∥.∥2)
Smooth      1.47e+00   9.15e-03    4.58e-03
Reparam     1.17e+00   7.45e-03    3.68e-03
Lyy18       8.30e+00   5.88e-02    2.91e-02

(d) cheating
Estimator   Cost       Avg(V(.))   V(∥.∥2)
Smooth      1.59e+00   3.64e-03    5.94e-03
Reparam     9.66e-01   6.47e-19    1.74e-18
Lyy18       2.51e+00   5.39e-02    1.34e-01

(e) xornet
Estimator   Cost       Avg(V(.))   V(∥.∥2)
Smooth      1.66e+00   9.57e-03    4.46e-02
Reparam     3.51e-01   7.55e-09    2.37e-09

diff --git a/adE1T4oBgHgl3EQfwwXB/content/tmp_files/load_file.txt b/adE1T4oBgHgl3EQfwwXB/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..b2ffcb33c7a56151956d412aaef6f73cea2eb798
--- /dev/null
+++ b/adE1T4oBgHgl3EQfwwXB/content/tmp_files/load_file.txt
@@ -0,0 +1,1677 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf,len=1676
page_content='Fast and Correct Gradient-Based Optimisation for Probabilistic Programming via Smoothing Basim Khajwal1, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'}
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Besides, we can solve the original problem up to any error tolerance by choosing an accuracy coefficient suitably.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Empirically we demonstrate that our approach has a similar convergence as a key competitor, but is simpler, faster, and attains orders of magnitude reduction in work- normalised variance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Keywords: probabilistic programming · variational inference · reparam- eterisation gradient · value semantics · type systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 1 Introduction Probabilistic programming is a programming paradigm which has the vision to make statistical methods, in particular Bayesian inference, accessible to a wide audience.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' This is achieved by a separation of concerns: the domain experts wishing to gain statistical insights focus on modelling, whilst the inference is per- formed automatically.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' (In some recent systems [4,9] users can improve efficiency by writing their own inference code.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=') In essence, probabilistic programming languages extend more traditional pro- gramming languages with constructs such as score or observe (as well as sample ) to define the prior p(z) and likelihood p(x | z).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' The task of infer- ence is to derive the posterior p(z | x), which is in principle governed by Bayes’ law yet usually intractable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Whilst the paradigm was originally conceived in the context of statistics and Bayesian machine learning, probabilistic programming has in recent years arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='03415v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='PL] 9 Jan 2023 2 Basim Khajwal, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Luke Ong, and Dominik Wagner(�) proven to be a very fruitful subject for the programming language community.' 
Researchers have made significant theoretical contributions such as underpinning languages with rigorous (categorical) semantics [37,36,16,39,12,10] and investigating the correctness of inference algorithms [17,7,22]. The latter were mostly designed in the context of "traditional" statistics, and features such as conditionals, which are ubiquitous in programming, pose a major challenge for correctness. Inference algorithms broadly fall into two categories: Markov chain Monte Carlo (MCMC), which yields a sequence of samples asymptotically approaching the true posterior, and variational inference.

Variational Inference. In the variational inference approach to Bayesian statistics [42,30,5,6], the problem of approximating difficult-to-compute posterior probability distributions is transformed into an optimisation problem. The idea is to approximate the posterior probability p(z | x) using a family of "simpler" densities qθ(z) over the latent variables z, parameterised by θ. The optimisation problem is then to find the parameter θ* such that qθ*(z) is "closest" to the true posterior p(z | x). Since the variational family may not contain the true posterior, qθ* is an approximation in general. In practice, variational inference has proven to yield good approximations much faster than MCMC.

Formally, the idea is captured by minimising the KL-divergence [30,5] between the variational approximation and the true posterior. This is equivalent to maximising the ELBO function, which only depends on the joint distribution p(x, z) and not the posterior, which we seek to infer after all:

    ELBO_θ := E_{z∼qθ(z)}[log p(x, z) − log qθ(z)]    (1)
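As a concrete illustration of Eq. (1), the following sketch (our own code, not taken from the paper; the toy model, the observation x = 2 and the sample size are illustrative assumptions) estimates the ELBO by plain Monte Carlo for the conjugate model p(x, z) = N(z | 0, 1) · N(x | z, 1) and the variational family qθ(z) = N(z | θ, 1):

import numpy as np

def log_normal_pdf(v, mean):
    # log N(v | mean, 1)
    return -0.5 * (v - mean) ** 2 - 0.5 * np.log(2.0 * np.pi)

def elbo_estimate(theta, x=2.0, n_samples=50_000, rng=np.random.default_rng(0)):
    # Monte Carlo estimate of Eq. (1): E_{z ~ q_theta}[log p(x, z) - log q_theta(z)]
    z = rng.normal(theta, 1.0, size=n_samples)
    log_joint = log_normal_pdf(z, 0.0) + log_normal_pdf(x, z)
    log_q = log_normal_pdf(z, theta)
    return float(np.mean(log_joint - log_q))

print(elbo_estimate(1.0))  # for x = 2 the exact posterior is N(z | 1, 0.5), so theta = 1 is best
                           # within this unit-variance variational family

Maximising such an estimate over θ is exactly the kind of optimisation problem addressed next.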
Gradient Based Optimisation. In practice, variants of Stochastic Gradient Descent (SGD) are frequently employed to solve optimisation problems of the following form: argmin_θ E_{s∼q(s)}[f(θ, s)]. In its simplest version, SGD follows Monte Carlo estimates of the gradient in each step:

    θ_{k+1} := θ_k − γ_k · (1/N) Σ_{i=1}^N ∇θ f(θ_k, s_k^{(i)})        where s_k^{(i)} ∼ q and γ_k is the step size,

and the average is the gradient estimator. For the correctness of SGD it is crucial that the estimation of the gradient is unbiased, i.e. correct in expectation:

    E_{s^{(1)},...,s^{(N)}∼q}[(1/N) Σ_{i=1}^N ∇θ f(θ, s^{(i)})] = ∇θ E_{s∼q(s)}[f(θ, s)]

This property, which is about commuting differentiation and integration, can be established by the dominated convergence theorem [21, Theorem 6.28].

Note that we cannot directly estimate the gradient of the ELBO in Eq. (1) with Monte Carlo because the distribution with respect to which the expectation is taken also depends on the parameters. However, the so-called log-derivative trick can be used to derive an unbiased estimate, which is known as the Score or REINFORCE estimator [31,40,27].
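To make the log-derivative trick concrete, here is a minimal sketch (our own code and names, not from the paper) of the score estimator for ∇θ E_{z∼qθ}[f(z)] with qθ = N(θ, 1), for which ∇θ log qθ(z) = z − θ:

import numpy as np

def score_gradient(f, theta, n_samples=10_000, rng=np.random.default_rng(0)):
    # Monte Carlo estimate of E_{z ~ q_theta}[ f(z) * grad_theta log q_theta(z) ]
    z = rng.normal(theta, 1.0, size=n_samples)
    return float(np.mean(f(z) * (z - theta)))

# Toy objective f(z) = z**2: E_{z ~ N(theta, 1)}[f] = theta**2 + 1, so the true gradient is 2 * theta.
print(score_gradient(lambda z: z ** 2, theta=1.5))  # roughly 3.0, but the estimate is noisy

The estimator requires no differentiability of f, but its variance can be substantial, which motivates the alternative discussed next.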
Reparameterisation Gradient. Whilst the score estimator has the virtue of being very widely applicable, it unfortunately suffers from high variance, which can cause SGD to yield very poor results (see e.g. Fig. 5a).

The reparameterisation gradient estimator, the dominant approach in variational inference, reparameterises the latent variable z in terms of a base random variable s (viewed as the entropy source) via a diffeomorphic transformation φθ, such as a location-scale transformation or a cumulative distribution function. For example, if the distribution of the latent variable z is a Gaussian N(z | µ, σ²) with parameters θ = {µ, σ}, then the location-scale transformation using the standard normal as the base distribution gives rise to the reparameterisation

    z ∼ N(z | µ, σ²)  ⟺  z = φ_{µ,σ}(s), s ∼ N(0, 1),    (2)

where φ_{µ,σ}(s) := s · σ + µ. The key advantage of this setup (often called the "reparameterisation trick" [20,38,32]) is that we have removed the dependency on θ from the distribution with respect to which the expectation is taken. Therefore, we can now differentiate (by backpropagation) with respect to the parameters θ of the variational distribution using Monte Carlo simulation with draws from the base distribution. Thus, succinctly, we have

    ∇θ E_{z∼qθ(z)}[f(θ, z)] = ∇θ E_{s∼q(s)}[f(θ, φθ(s))] = E_{s∼q(s)}[∇θ f(θ, φθ(s))]

The main benefit of the reparameterisation gradient estimator is that it has a significantly lower variance than the score estimator, resulting in faster convergence.
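The following sketch (again our own illustration; the toy objective and sample size are assumptions) contrasts the reparameterised estimator with the score estimator above, using the location-scale transformation φθ(s) = s + θ of Eq. (2) with σ = 1:

import numpy as np

def reparam_gradient(df, theta, n_samples=1_000, rng=np.random.default_rng(0)):
    # Monte Carlo estimate of E_{s ~ N(0, 1)}[ grad_theta f(phi_theta(s)) ]
    # with phi_theta(s) = s + theta, so grad_theta f(s + theta) = f'(s + theta)
    s = rng.normal(0.0, 1.0, size=n_samples)
    return float(np.mean(df(s + theta)))

# Same toy objective as before: f(z) = z**2, f'(z) = 2 * z, true gradient 2 * theta.
print(reparam_gradient(lambda z: 2.0 * z, theta=1.5))  # close to 3.0; the per-sample variance is
                                                       # far lower than for the score estimator

For smooth integrands both estimators are unbiased; the practical difference is the variance.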
Bias of the Reparameterisation Gradient. Unfortunately, the reparameterisation gradient estimator is biased for non-differentiable models [23], which are readily expressible in programming languages with conditionals:

Example 1. The counterexample in [23, Proposition 2], where the objective function is the ELBO for a non-differentiable model, can be simplified to

    f(θ, s) = −0.5 · θ² + (0 if s + θ < 0, and 1 otherwise)

Observe that (see Fig. 1a):

    ∇θ E_{s∼N(0,1)}[f(θ, s)] = −θ + N(−θ | 0, 1)  ≠  −θ = E_{s∼N(0,1)}[∇θ f(θ, s)]

Fig. 1: Bias of the reparameterisation gradient estimator for Example 1. (a) Dashed red: biased estimator E_{s∼N(0,1)}[∇θ f(θ, s)]; solid green: true gradient ∇θ E_{s∼N(0,1)}[f(θ, s)]. (b) ELBO trajectories (higher means better) obtained with our implementation (cf. Section 7).

Crucially this may compromise convergence to critical points or maximisers: even if we can find a point where the gradient estimator vanishes, it may not be a critical point (let alone an optimum) of the original optimisation problem (cf. Fig. 1b).
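The bias in Example 1 can be checked numerically. The sketch below (our own code; the closed-form gradient merely restates the expression displayed above) compares the naive pathwise estimate, which misses the jump of the step function, with the true gradient:

import numpy as np

def true_gradient(theta):
    # grad_theta E_{s ~ N(0,1)}[f(theta, s)] = -theta + N(-theta | 0, 1), cf. Fig. 1a
    return -theta + np.exp(-0.5 * theta ** 2) / np.sqrt(2.0 * np.pi)

def naive_reparam_gradient(theta, n_samples=100_000, rng=np.random.default_rng(0)):
    # grad_theta f(theta, s) = -theta almost everywhere: the indicator [s + theta >= 0]
    # has zero derivative wherever it is differentiable, so its jump contributes nothing
    s = rng.normal(size=n_samples)
    return float(np.mean(-theta + 0.0 * s))

theta = 0.5
print(true_gradient(theta), naive_reparam_gradient(theta))  # about -0.15 versus -0.5

At θ = 0.5 the two differ by the density term N(−θ | 0, 1) ≈ 0.35, so SGD driven by the naive estimator can settle at a point that is not stationary for the original objective.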
Informal Approach. As our starting point we take a variant of the simply typed lambda calculus with reals, conditionals and a sampling construct. We abstract the optimisation of the ELBO to the following generic optimisation problem:

    argmin_θ E_{s∼D}[⟦M⟧(θ, s)]    (3)

where ⟦M⟧ is the value function [7,26] of a program M and D is independent of the parameters θ; it is determined by the distributions from which M samples. Owing to the presence of conditionals, the function ⟦M⟧ may not be continuous, let alone differentiable.

Fig. 2: (Logistic) sigmoid function ση (dotted: η = 1/3, dashed: η = 1/15) and the Heaviside step function (red, solid).

Example 1 can be expressed as (λz. −0.5 · θ² + (if z < 0 then 0 else 1)) (sample N + θ).

Our approach is based on a denotational semantics ⟦(−)⟧η (for an accuracy coefficient η > 0) of programs in the (new) cartesian closed category VectFr, which generalises smooth manifolds and extends Frölicher spaces (see e.g. [13,35]) with a vector space structure. Intuitively, we replace the Heaviside step function usually arising in the interpretation of conditionals by smooth approximations. In particular, we interpret the conditional of Example 1 as ⟦if s + θ < 0 then 0 else 1⟧η(θ, s) := ση(s + θ), where ση is a smooth function.
For instance we can choose ση(x) := σ(x/η), where σ(x) := 1/(1 + exp(−x)) is the (logistic) sigmoid function (cf. Fig. 2). Thus, the program M is interpreted by a smooth function ⟦M⟧η, for which the reparameterisation gradient may be estimated unbiasedly. Therefore, we apply stochastic gradient descent on the smoothed program.

Contributions. The high-level contribution of this paper is laying a theoretical foundation for correct yet efficient (variational) inference for probabilistic programming. We employ a smoothed interpretation of programs to obtain unbiased (reparameterisation) gradient estimators and establish technical pre-conditions by type systems. In more detail:

1. We present a simple (higher-order) programming language with conditionals. We employ trace types to capture precisely the samples drawn in a fully eager call-by-value evaluation strategy.

2. We endow our language with both a (measurable) denotational value semantics and a smoothed (hence approximate) value semantics. For the latter we furnish a categorical model based on Frölicher spaces.
3. We develop type systems enforcing vital technical pre-conditions: unbiasedness of the reparameterisation gradient estimator and the correctness of stochastic gradient descent, as well as the uniform convergence of the smoothing to the original problem. Thus, our smoothing approach in principle yields correct solutions up to arbitrary error tolerances.

4. We conduct an empirical evaluation demonstrating that our approach exhibits a similar convergence to an unbiased correction of the reparameterised gradient estimator by [23], our main baseline. However, our estimator is simpler and more efficient: it is faster and attains orders of magnitude reduction in work-normalised variance.

Outline. In the next section we introduce a simple higher-order probabilistic programming language, its denotational value semantics and operational semantics; Optimisation Problem 1 is then stated. Section 3 is devoted to a smoothed denotational value semantics, and we state the Smooth Optimisation Problem 2. In Sections 4 and 5 we develop annotation-based type systems enforcing the correctness of SGD and the convergence of the smoothing, respectively. Related work is briefly discussed in Section 6 before we present the results of our empirical evaluation in Section 7. We conclude in Section 8 and discuss future directions.
Notation. We use the following conventions: bold font for vectors and lists, ++ for concatenation of lists, ∇θ for gradients (with respect to θ), [φ] for the Iverson bracket of a predicate φ, and calligraphic font for distributions, in particular N for normal distributions. Besides, we highlight noteworthy items using red.

2 A Simple Programming Language

In this section, we introduce our programming language: the simply typed lambda calculus with reals, augmented with conditionals and sampling from continuous distributions.

2.1 Syntax

The raw terms of the programming language are defined by the grammar:

    M ::= x | θi | r | + | · | − | ⁻¹ | exp | log | if M < 0 then M else M | sample D | λx. M | M M

where x and θi respectively range over (denumerable collections of) variables and parameters, r ∈ R, and D is a probability distribution over R (potentially with a support which is a strict subset of R). As is customary we use infix, postfix and prefix notation: M + N (addition), M · N (multiplication), M⁻¹ (inverse), and −M (numeric negation). We frequently omit the underline to reduce clutter.

Example 2 (Encoding the ELBO for Variational Inference). We consider the example used by [23] in their Prop. 2 to prove the biasedness of the reparameterisation gradient. (In Example 1 we discussed a simplified version thereof.)
The joint density is

    p(z) := N(z | 0, 1) · (N(0 | −2, 1) if z < 0, and N(0 | 5, 1) otherwise)

and they use a variational family with density qθ(z) := N(z | θ, 1), which is reparameterised using a standard normal noise distribution and the transformation s ↦ s + θ. First, we define an auxiliary term for the pdf of normals with mean m and standard deviation s:

    N ≡ λx, m, s. (√(2π) · s)⁻¹ · exp(−0.5 · ((x + (−m)) · s⁻¹)²)

Then, we can define

    M ≡ (λz. log (N z 0 1) + (if z < 0 then log (N 0 (−2) 1) else log (N 0 5 1)) − log (N z θ 1)) (sample N + θ)

where the first two summands of the body give log p and the last gives log qθ.

2.2 A Basic Trace-Based Type System

Types are generated from base types (R and R>0, the reals and positive reals) and trace types (typically Σ, a finite list of probability distributions), as well as by a trace-based function space constructor of the form τ • Σ → τ′. Formally, types are defined by the following grammar:

    trace types   Σ ::= [D1, . . . , Dn]   (n ≥ 0)
    base types    ι ::= R | R>0
    safe types    σ ::= ι | σ • [] → σ
    types         τ ::= ι | τ • Σ → τ

where the Di are probability distributions. Intuitively, a trace type is a description of the space of execution traces of a probabilistic program. Using trace types, a distinctive feature of our type system is that a program's type precisely characterises the space of its possible execution traces [24]. We use list concatenation notation ++ for trace types, and the shorthand τ1 → τ2 for function types of the form τ1 • [] → τ2. Intuitively, a term has type τ • Σ → τ′ if, when given a value of type τ, it reduces to a value of type τ′ using all the samples in Σ.
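For instance, the term M of Example 2 draws exactly one standard-normal sample, so it can be typed with the trace type [N]. The following sketch (our own code, assuming NumPy; all names are ours) spells out the function that M computes on such a one-element trace:

import numpy as np

def log_norm(x, m, sd):
    # logarithm of the auxiliary term N from Example 2, i.e. log of the N(m, sd^2) density at x
    return -0.5 * ((x - m) / sd) ** 2 - np.log(np.sqrt(2.0 * np.pi) * sd)

def value_M(theta, s):
    # value of Example 2's term M on the trace [s]: log p - log q_theta evaluated at z = s + theta
    z = s + theta
    log_p = log_norm(z, 0.0, 1.0) + (log_norm(0.0, -2.0, 1.0) if z < 0 else log_norm(0.0, 5.0, 1.0))
    log_q = log_norm(z, theta, 1.0)
    return log_p - log_q

The ELBO of Example 2 is then the expectation of value_M(θ, s) over s ∼ N(0, 1).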
Dual context typing judgements of the form Γ | Σ ⊢ M : τ are defined in Fig. 3b, where Γ = x1 : τ1, · · · , xn : τn, θ1 : τ′1, · · · , θm : τ′m is a finite map describing a set of variable-type and parameter-type bindings, and the trace type Σ precisely captures the distributions from which samples are drawn in a (fully eager) call-by-value evaluation of the term M. The subtyping of types, as defined in Fig. 3a, is essentially standard; for contexts, we define Γ ⊑ Γ′ if for every x : τ in Γ there exists x : τ′ in Γ′ such that τ′ ⊑ τ.

Trace types are unique (cf. Appendix A.1):

Lemma 1. If Γ | Σ ⊢ M : τ and Γ | Σ′ ⊢ M : τ′ then Σ = Σ′.

A term has safe type σ if it does not contain sample D or σ is a base type. Thus, perhaps slightly confusingly, we have | [D] ⊢ sample D : R, and R is considered a safe type. Note that we use the metavariable σ to denote safe types.

Conditionals. The branches of conditionals must have a safe type. Otherwise it would not be clear how to type terms such as

    M ≡ if x < 0 then (λx. sample N) else (λx. sample E + sample E)
    N ≡ (λf. f (f sample N)) M

because the branches draw a different number of samples from different distributions, and have types R • [N] → R and R • [E, E] → R, respectively.
However, for M′ ≡ if x < 0 then sample N else sample E + sample E we can (safely) type:

    x : R | [N, E, E] ⊢ M′ : R
    | [] ⊢ λx. M′ : R • [N, E, E] → R
    | [N, N, E, E, N, E, E] ⊢ (λf. f (f sample N)) (λx. M′) : R

Fig. 3: A Basic Trace-Based Type System.
(a) Subtyping: ι ⊑ ι;  R>0 ⊑ R;  (τ1 • Σ → τ2) ⊑ (τ′1 • Σ → τ′2) whenever τ′1 ⊑ τ1 and τ2 ⊑ τ′2.
(b) Typing judgments:
    Γ′ | Σ ⊢ M : τ′ whenever Γ | Σ ⊢ M : τ, Γ ⊑ Γ′ and τ ⊑ τ′
    x : τ | [] ⊢ x : τ
    | [] ⊢ r : R (r ∈ R)        | [] ⊢ r : R>0 (r ∈ R>0)
    | [] ⊢ ◦ : R → R → R and | [] ⊢ ◦ : R>0 → R>0 → R>0 for ◦ ∈ {+, ·}
    | [] ⊢ − : R → R        | [] ⊢ ⁻¹ : R>0 → R>0        | [] ⊢ exp : R → R>0        | [] ⊢ log : R>0 → R
    | [D] ⊢ sample D : R
    Γ | Σ ++ Σ′ ++ Σ′′ ⊢ if L < 0 then M else N : σ whenever Γ | Σ ⊢ L : R, Γ | Σ′ ⊢ M : σ and Γ | Σ′′ ⊢ N : σ
    Γ | [] ⊢ λy. M : τ1 • Σ → τ2 whenever Γ, y : τ1 | Σ ⊢ M : τ2
    Γ | Σ1 ++ Σ2 ++ Σ3 ⊢ M N : τ2 whenever Γ | Σ1 ⊢ M : τ1 • Σ3 → τ2 and Γ | Σ2 ⊢ N : τ1

Example 3. Consider the following terms:

    L ≡ λx. sample N + sample N
    M ≡ if x < 0 then (λy. y + y) sample N else (sample N + sample N)
We can derive the following typing judgements:

    | [] ⊢ L : R>0 • [N, N] → R
    x : R>0 | [N, N, N] ⊢ M : R
    | [] ⊢ λx. M : R>0 • [N, N, N] → R
    | [N, N, N, N] ⊢ (λx. M) sample N : R
    | [N, N] ⊢ (λf. f (f 0)) (λx. sample N) : R

Note that if x < 0 then (λx. sample N) else (λx. x) is not typable.

2.3 Denotational Value Semantics

Next, we endow our language with a (measurable) value semantics. It is well known that the category of measurable spaces and measurable functions is not cartesian closed [1], which means that there is no interpretation of the lambda calculus as measurable functions. These difficulties led [15] to develop the category QBS of quasi-Borel spaces. In Appendix A.2 we recall the definition. Notably, morphisms can be combined piecewise, which we need for conditionals. We interpret our programming language in the category QBS of quasi-Borel spaces. Types are interpreted as follows:

    ⟦R⟧ := (R, MR)
    ⟦R>0⟧ := (R>0, MR>0)
    ⟦[D1, . . . , Dn]⟧ := (R, MR)^n
    ⟦τ1 • Σ → τ2⟧ := ⟦τ1⟧ × ⟦Σ⟧ ⇒ ⟦τ2⟧

where MR is the set of measurable functions R → R; similarly for MR>0. (As for trace types, we use list notation and list concatenation for traces.)
We first define a handy helper function for interpreting application. For f : ⟦Γ⟧ × R^{n1} ⇒ ⟦τ1 • Σ3 → τ2⟧ and g : ⟦Γ⟧ × R^{n2} ⇒ ⟦τ1⟧ define

    f @ g : ⟦Γ⟧ × R^{n1+n2+|Σ3|} ⇒ ⟦τ2⟧
    (γ, s1 ++ s2 ++ s3) ↦ f(γ, s1)(g(γ, s2), s3)        for s1 ∈ R^{n1}, s2 ∈ R^{n2}, s3 ∈ R^{|Σ3|}

We interpret terms-in-context, ⟦Γ | Σ ⊢ M : τ⟧ : ⟦Γ⟧ × ⟦Σ⟧ → ⟦τ⟧, as follows:

    ⟦Γ | [D] ⊢ sample D : R⟧(γ, [s]) := s
    ⟦Γ | [] ⊢ λy. M : τ1 • Σ → τ2⟧(γ, []) := (v, s) ∈ ⟦τ1⟧ × ⟦Σ⟧ ↦ ⟦Γ, y : τ1 | Σ ⊢ M : τ2⟧((γ, v), s)
    ⟦Γ | Σ1 ++ Σ2 ++ Σ3 ⊢ M N : τ2⟧ := ⟦Γ | Σ1 ⊢ M : τ1 • Σ3 → τ2⟧ @ ⟦Γ | Σ2 ⊢ N : τ1⟧
    ⟦Γ | Σ1 ++ Σ2 ++ Σ3 ⊢ if L < 0 then M else N : τ⟧(γ, s1 ++ s2 ++ s3) :=
        ⟦Γ | Σ2 ⊢ M : τ⟧(γ, s2)   if ⟦Γ | Σ1 ⊢ L : R⟧(γ, s1) < 0
        ⟦Γ | Σ3 ⊢ N : τ⟧(γ, s3)   otherwise

It is not difficult to see that this interpretation of terms-in-context is well defined and total. For the conditional clause, we may assume that the trace type and the trace are presented as partitions Σ1 ++ Σ2 ++ Σ3 and s1 ++ s2 ++ s3 respectively. This is justified because it follows from the judgement Γ | Σ1 ++ Σ2 ++ Σ3 ⊢ if L < 0 then M else N : τ that Γ | Σ1 ⊢ L : R, Γ | Σ2 ⊢ M : σ and Γ | Σ3 ⊢ N : σ are provable; and we know that each of Σ1, Σ2 and Σ3 is unique, thanks to Lemma 1; their respective lengths then determine the partition s1 ++ s2 ++ s3. Similarly for the application clause, the components Σ1 and Σ2 are determined by Lemma 1, and Σ3 by the type of M.
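As a concrete instance of the conditional clause (our own sketch), recall the term M′ ≡ if x < 0 then sample N else sample E + sample E from Section 2.2, typed x : R | [N, E, E] ⊢ M′ : R. Its denotation consumes the whole trace even though only one branch is returned:

def value_M_prime(x, trace):
    # the trace has type [N, E, E]: the guard x uses no samples (Sigma1 = []),
    # the then-branch owns the N-sample s1, the else-branch owns the E-samples s2 and s3
    s1, s2, s3 = trace
    return s1 if x < 0 else s2 + s3

# value_M_prime(-1.0, [0.3, 0.7, 1.2]) == 0.3 and value_M_prime(1.0, [0.3, 0.7, 1.2]) == 1.9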
2.4 Relation to Operational Semantics

We can also endow our language with a big-step CBV sampling-based semantics similar to [7,26], as defined in Fig. 6 of Appendix A. We write M ⇓s_w V to mean that M reduces to the value V, which is a real constant or an abstraction, using the execution trace s and accumulating weight w. Based on this, we can define the value- and weight-functions:

    valueM(s) := V if M ⇓s_w V, and undefined otherwise
    weightM(s) := w if M ⇓s_w V, and 0 otherwise

Our semantics is a bit non-standard in that for conditionals we evaluate both branches eagerly. The technical advantage is that for every (closed) term-in-context | [D1, · · · , Dn] ⊢ M : ι, M reduces to a (unique) value using exactly the traces of the length encoded in the typing, i.e. n. So in this sense, the operational semantics is "total": there is no divergence. Notice that there is no partiality caused by partial primitives such as 1/x, thanks to the typing.
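For illustration (our own sketch), the term M of Example 2 samples once from the standard normal, so a trace is a single real and its weight is the corresponding density; in general the weight is the product of one density factor per drawn sample:

import numpy as np

def weight_M(trace):
    # weight of a trace of Example 2's term M: one N(0, 1) density factor per sample
    densities = [np.exp(-0.5 * s ** 2) / np.sqrt(2.0 * np.pi) for s in trace]
    return float(np.prod(densities))

print(weight_M([0.3]))  # about 0.381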
Moreover there is a simple connection to our denotational value semantics:

Proposition 1. Let | [D1, . . . , Dn] ⊢ M : ι. Then
1. dom(valueM) = R^n,
2. ⟦M⟧ = valueM, and
3. weightM(s) = ∏_{j=1}^n pdf_{Dj}(sj).

2.5 Problem Statement

We are finally ready to formally state our optimisation problem:

Problem 1 (Optimisation).
Given: a term-in-context θ1 : ι1, · · · , θm : ιm | [D1, . . . , Dn] ⊢ M : R.
Find: argmin_θ E_{s1∼D1,...,sn∼Dn}[⟦M⟧(θ, s)].

3 Smoothed Denotational Value Semantics

Now we turn to our smoothed denotational value semantics, which we use to avoid the bias in the reparameterisation gradient estimator. It is parameterised by a family of smooth functions ση : R → [0, 1]. Intuitively, we replace the Heaviside step function arising in the interpretation of conditionals by smooth approximations (cf. Fig. 2). In particular, conditionals if z < 0 then 0 else 1 are interpreted as z ↦ ση(z) rather than [z ≥ 0] (using Iverson brackets). Our primary example is ση(x) := σ(x/η), where σ is the (logistic) sigmoid σ(x) := 1/(1 + exp(−x)), see Fig. 2. Whilst at this stage no further properties other than smoothness are required, we will later need to restrict ση to have good properties, in particular convergence to the Heaviside step function. As a categorical model we propose vector Frölicher spaces VectFr, which (to our knowledge) is a new construction, affording a simple and direct interpretation of the smoothed conditionals.
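To preview the effect of the smoothing on Example 1 (our own sketch; the value of η and the sample size are illustrative choices), the branch selector [s + θ ≥ 0] is replaced by ση(s + θ) = σ((s + θ)/η), so the smoothed program is differentiable in θ and its pathwise gradient is unbiased for the smoothed objective:

import numpy as np

def sigma_eta(x, eta):
    # smoothed Heaviside step function: logistic sigmoid evaluated at x / eta (cf. Fig. 2)
    return 1.0 / (1.0 + np.exp(-x / eta))

def smoothed_value(theta, s, eta):
    # eta-smoothing of Example 1: -0.5 * theta**2 + sigma_eta(s + theta)
    return -0.5 * theta ** 2 + sigma_eta(s + theta, eta)

def smoothed_reparam_gradient(theta, eta, n_samples=100_000, rng=np.random.default_rng(0)):
    # pathwise derivative of the smoothed program: -theta + sigma_eta'(s + theta)
    s = rng.normal(size=n_samples)
    t = sigma_eta(s + theta, eta)
    return float(np.mean(-theta + t * (1.0 - t) / eta))

print(smoothed_reparam_gradient(0.5, eta=0.05))  # roughly -0.15, close to the exact gradient
                                                 # of the unsmoothed problem at theta = 0.5 (Fig. 1a)

Sections 4 and 5 make precise in what sense this estimator is correct for the smoothed problem and how the smoothed problem approaches the original one as the accuracy coefficient becomes small.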
3.1 Frölicher Spaces

We recall the definition of Frölicher spaces, which generalise smooth spaces. (Here C∞(R, R) denotes the set of smooth functions R → R.) A Frölicher space is a triple (X, CX, FX) where X is a set, CX ⊆ Set(R, X) is a set of curves and FX ⊆ Set(X, R) is a set of functionals, satisfying:

1. if c ∈ CX and f ∈ FX then f ◦ c ∈ C∞(R, R);
2. if c : R → X is such that for all f ∈ FX, f ◦ c ∈ C∞(R, R), then c ∈ CX;
3. if f : X → R is such that for all c ∈ CX, f ◦ c ∈ C∞(R, R), then f ∈ FX.

A morphism between Frölicher spaces (X, CX, FX) and (Y, CY, FY) is a map φ : X → Y satisfying f ◦ φ ◦ c ∈ C∞(R, R) for all f ∈ FY and c ∈ CX. Frölicher spaces and their morphisms constitute a category Fr, which is well known to be cartesian closed [13,35].

3.2 Vector Frölicher Spaces

To interpret our programming language smoothly we would like to interpret conditionals as ση-weighted convex combinations of their branches:

    ⟦if L < 0 then M else N⟧η(γ, s1 ++ s2 ++ s3) := ση(−⟦L⟧η(γ, s1)) · ⟦M⟧η(γ, s2) + ση(⟦L⟧η(γ, s1)) · ⟦N⟧η(γ, s3)    (4)

By what we have discussed so far, this only makes sense if the branches have ground type, because Frölicher spaces are not equipped with a vector space structure, yet we take weighted combinations of morphisms. In particular, if φ1, φ2 : X → Y and α : X → R are morphisms, then α φ1 + φ2 ought to be a morphism too. Therefore, we enrich Frölicher spaces with an additional vector space structure:

Definition 1. An R-vector Frölicher space is a Frölicher space (X, CX, FX) such that X is an R-vector space and whenever c, c′ ∈ CX and α ∈ C∞(R, R), then α c + c′ ∈ CX (defined pointwise).
A morphism between R-vector Frölicher spaces is a morphism between the underlying Frölicher spaces, i.e. φ : (X, CX, FX) → (Y, CY, FY) is a morphism if for all c ∈ CX and f ∈ FY, f ◦ φ ◦ c ∈ C∞(R, R). R-vector Frölicher spaces and their morphisms constitute a category VectFr. There is an evident forgetful functor fully faithfully embedding VectFr in Fr.

Note that the above restriction is a bit stronger than requiring that CX is also a vector space (α is not necessarily a constant). The main benefit is the following, which is crucial for the interpretation of conditionals as in Eq. (4):

Lemma 2. If φ1, φ2 ∈ VectFr(X, Y) and α ∈ VectFr(X, R) then α φ1 + φ2 ∈ VectFr(X, Y) (defined pointwise).

Proof. Suppose c ∈ CX and f ∈ FY. Then (α φ1 + φ2) ◦ c = (α ◦ c) · (φ1 ◦ c) + (φ2 ◦ c) ∈ CY (defined pointwise) and the claim follows.
CX := { α1 c1 + · · · + αn cn | n ∈ N, ∀i ≤ n. αi ∈ C∞(R, R), ci ∈ ĈX }

Having modified the notion of Frölicher spaces generated by a set of curves, the proof of cartesian closure carries over (more details are provided in Appendix B) and we conclude:

Proposition 2. VectFr is cartesian closed.

3.3 Smoothed Interpretation

We have now discussed all ingredients to interpret our language (smoothly) in the cartesian closed category VectFr. We call ⟦M⟧η the η-smoothing of ⟦M⟧ (or of M, by abuse of language). The interpretation is mostly standard and follows Section 2.3, except for the case of conditionals. The latter is given by Eq. (4), for which the additional vector space structure is required. Finally, we can phrase a smoothed version of our Optimisation Problem 1:

Problem 2 (η-Smoothed Optimisation).
Given: a term-in-context θ1 : ι1, . . . , θm : ιm | [D1, . . . , Dn] ⊢ M : R and an accuracy coefficient η > 0
Find: argmin_θ E_{s1∼D1,...,sn∼Dn}[⟦M⟧η(θ, s)]
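To make the smoothing of Eq. (4) concrete, the following is a minimal numerical sketch for a first-order conditional, assuming the logistic sigmoid ση(x) = 1/(1 + e^(−x/η)) as the accuracy-indexed sigmoid (the paper only requires a suitable family ση); the particular program, guard and branch values are hypothetical stand-ins.

```python
import numpy as np

def sigma(x, eta):
    # logistic sigmoid with accuracy coefficient eta, approximating the Heaviside step
    return 1.0 / (1.0 + np.exp(-x / eta))

def smoothed_if(guard, branch_then, branch_else, eta):
    # Eq. (4): sigma_eta-weighted convex combination of the two branches
    # (the standard semantics picks branch_then exactly when guard < 0)
    return sigma(-guard, eta) * branch_then + sigma(guard, eta) * branch_else

# hypothetical program: if s - theta < 0 then 0 else 1, for one fixed sample s
theta, s = 0.3, 0.1
for eta in [1.0, 0.1, 0.01]:
    print(eta, smoothed_if(s - theta, 0.0, 1.0, eta))
# as eta shrinks, the smoothed value approaches the standard (discontinuous) semantics, here 0
```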
4 Correctness of SGD for Smoothed Problem and Unbiasedness of the Reparameterisation Gradient

Next, we apply stochastic gradient descent (SGD) with the reparameterisation gradient estimator to the smoothed problem (for batch size N = 1):

θ_{k+1} := θ_k − γ_k · ∇θ⟦M⟧η(θ_k, s_k)    where s_k ∼ D    (5)

where θ | [s ∼ D] ⊢ M : R (slightly abusing notation in the trace type). A classical choice for the step-size sequence is γ_k ∈ Θ(1/k), which satisfies the so-called Robbins-Monro criterion:

Σ_{k∈N} γ_k = ∞    and    Σ_{k∈N} γ_k² < ∞    (6)

In this section we wish to establish the correctness of the SGD procedure applied to the smoothing, Eq. (5).
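The following is a minimal sketch of the update rule (5) with a Robbins-Monro step-size sequence γ_k = 1/k, assuming a toy objective f(θ, s) = (s · scale + loc − 2)² (a location-scale reparameterisation of a Gaussian sample followed by a squared loss); the objective, its hand-derived gradient and the constant 2 are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_f(theta, s):
    # reparameterisation gradient: differentiate f(theta, s) w.r.t. theta for a fixed sample s
    loc, scale = theta
    r = 2.0 * (s * scale + loc - 2.0)
    return np.array([r, r * s])

theta = np.array([0.0, 1.0])          # (loc, scale)
for k in range(1, 5001):
    s = rng.standard_normal()         # s_k ~ D (here D is the standard normal)
    gamma = 1.0 / k                   # step sizes satisfying the Robbins-Monro criterion (6)
    theta = theta - gamma * grad_f(theta, s)   # update rule (5)

print(theta)  # theta approaches (2, 0), the minimiser of E_s[(s*scale + loc - 2)^2]
```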
4.1 Desiderata

First, we ought to take a step back and observe that the optimisation problems we are trying to solve can be ill-defined due to a failure of integrability: take M ≡ (λx. exp(x · x)) sample N; we have E_{z∼N}[⟦M⟧(z)] = ∞, independently of the parameters. Therefore, we aim to guarantee:

(SGD0) The optimisation problems (both smoothed and unsmoothed) are well-defined.

Since E[⟦M⟧η(θ, s)] (and E[⟦M⟧(θ, s)]) may not be a convex function of the parameters θ, we cannot hope to always find global optima. We seek instead stationary points, where the gradient w.r.t. the parameters θ vanishes. The following result (whose proof is standard) provides sufficient conditions for the convergence of SGD to stationary points (see e.g. [3] or [2, Chapter 2]):

Proposition 3 (Convergence). Suppose (γ_k)_{k∈N} satisfies the Robbins-Monro criterion Eq. (6) and g(θ) := E_s[f(θ, s)] is well-defined. If Θ ⊆ R^m satisfies
(SGD1) Unbiasedness: ∇θ g(θ) = E_s[∇θ f(θ, s)] for all θ ∈ Θ
(SGD2) g is L-Lipschitz smooth on Θ for some L > 0: ∥∇θ g(θ) − ∇θ g(θ′)∥ ≤ L · ∥θ − θ′∥ for all θ, θ′ ∈ Θ
(SGD3) Bounded variance: sup_{θ∈Θ} E_s[∥∇θ f(θ, s)∥²] < ∞
then inf_{i∈N} E[∥∇g(θ_i)∥²] = 0 or θ_i ∉ Θ for some i ∈ N.

Unbiasedness (SGD1) requires commuting differentiation and integration. The validity of this operation can be established by the dominated convergence theorem [21, Theorem 6.28], see Appendix C.1. To be applicable, the partial derivatives of f w.r.t. the parameters need to be dominated uniformly by an integrable function. Formally:

Definition 2. Let f : Θ × R^n → R and g : R^n → R. We say that g uniformly dominates f if for all (θ, s) ∈ Θ × R^n, |f(θ, s)| ≤ g(s).

Also note that for Lipschitz smoothness (SGD2) it suffices to uniformly bound the second-order partial derivatives. In the remainder of this section we present two type systems which restrict the language to guarantee properties (SGD0) to (SGD3).
4.2 Piecewise Polynomials and Distributions with Finite Moments

As a first illustrative step we consider a type system ⊢poly which restricts terms to (piecewise) polynomials, and distributions with finite moments. Recall that a distribution D has (all) finite moments if for all p ∈ N, E_{s∼D}[|s|^p] < ∞. Distributions with finite moments include the following commonly used distributions: normal, exponential, logistic and gamma distributions. A non-example is the Cauchy distribution, which famously does not even have an expectation.
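As a quick numerical illustration of this distinction (not part of the formal development), the sketch below contrasts empirical second moments of a normal and a Cauchy distribution; the normal estimate settles, whereas the Cauchy estimate typically keeps growing because its moments do not exist.

```python
import numpy as np

rng = np.random.default_rng(1)

for n in [10**3, 10**5, 10**7]:
    normal = rng.standard_normal(n)
    cauchy = rng.standard_cauchy(n)
    # empirical second moments: the normal estimate stays near 1,
    # the Cauchy estimate does not stabilise as n grows
    print(n, np.mean(normal**2), np.mean(cauchy**2))
```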
Definition 3. For a distribution D with finite moments, f : R^n → R has (all) finite moments if for all p ∈ N, E_{s∼D}[|f(s)|^p] < ∞.

Functions with finite moments have good closure properties:

Lemma 3. If f, g : R^n → R have (all) finite moments then so do −f, f + g and f · g. In particular, if a distribution has finite moments then polynomials do, too.

Consequently, intuitively, it is sufficient to simply (the details are explicitly spelled out in Appendix C.2):
1. require that the distributions D in the sample rule have finite moments:
   | [D] ⊢poly sample D : R    (D has finite moments)
2. remove the rules for (−)⁻¹, exp and log from the type system ⊢poly.

Type Soundness I: Well-Definedness. Henceforth, we fix parameters θ1 : ι1, . . . , θm : ιm. Intuitively, it is pretty obvious that ⟦M⟧ is a piecewise polynomial whenever θ | Σ ⊢poly M : ι. Nonetheless, we prove the property formally to illustrate our proof technique, a variant of logical relations, employed throughout the rest of the paper. We define a slightly stronger logical predicate P^(n)_τ on Θ × R^n → ⟦τ⟧, which allows us to obtain a uniform upper bound:
1. f ∈ P^(n)_ι if f is uniformly dominated by a function with finite moments
2. f ∈ P^(n)_{τ1•Σ3→τ2} if for all n2 ∈ N and g ∈ P^(n+n2)_{τ1}, f ⊙ g ∈ P^(n+n2+|Σ3|)_{τ2}
where for f : Θ × R^{n1} → ⟦τ1 • Σ3 → τ2⟧ and g : Θ × R^{n1+n2} → ⟦τ1⟧ we define

f ⊙ g : Θ × R^{n1+n2+|Σ3|} → ⟦τ2⟧,    (θ, s1 ++ s2 ++ s3) ↦ f(θ, s1)(g(θ, s1 ++ s2), s3)

Intuitively, g may depend on the samples in s2 (in addition to s1) and the function application may consume further samples s3 (as determined by the trace type Σ3). By induction on safe types we prove the following result, which is important for conditionals:

Lemma 4. If f ∈ P^(n)_ι and g, h ∈ P^(n)_σ then [f(−) < 0] · g + [f(−) ≥ 0] · h ∈ P^(n)_σ.

Proof. For base types it follows from Lemma 3. Hence, suppose σ has the form σ1 • [] → σ2. Let n2 ∈ N and x ∈ P^(n+n2)_{σ1}. By definition, (g ⊙ x), (h ⊙ x) ∈ P^(n+n2)_{σ2}.
Let f̃ be the extension (ignoring the additional samples) of f to Θ × R^{n+n2} → R. It is easy to see that also f̃ ∈ P^(n+n2)_ι. By the inductive hypothesis,

[f̃(−) < 0] · (g ⊙ x) + [f̃(−) ≥ 0] · (h ⊙ x) ∈ P^(n+n2)_{σ2}

Finally, by definition,

([f(−) < 0] · g + [f(−) ≥ 0] · h) ⊙ x = [f̃(−) < 0] · (g ⊙ x) + [f̃(−) ≥ 0] · (h ⊙ x)

Assumption 1. We assume that Θ ⊆ ⟦ι1⟧ × · · · × ⟦ιm⟧ is compact.

Lemma 5 (Fundamental). If θ, x1 : τ1, . . . , xℓ : τℓ | Σ ⊢poly M : τ, n ∈ N and ξ1 ∈ P^(n)_{τ1}, . . . , ξℓ ∈ P^(n)_{τℓ}, then ⟦M⟧ ∗ ⟨ξ1, . . . , ξℓ⟩ ∈ P^(n+|Σ|)_τ, where

⟦M⟧ ∗ ⟨ξ1, . . . , ξℓ⟩ : Θ × R^{n+|Σ|} → ⟦τ⟧,    (θ, s ++ s′) ↦ ⟦M⟧((θ, ξ1(θ, s), . . . , ξℓ(θ, s)), s′)

It is worth noting that, in contrast to more standard fundamental lemmas, here we need to capture the dependency of the free variables on some number n of further samples.
E.g. in the context of (λx. x) sample N the subterm x depends on a sample, although this is not apparent if we consider x in isolation.

Lemma 5 is proven by structural induction (cf. Appendix C.2 for details). The most interesting cases include parameters, primitive operations and conditionals. In the case of parameters we exploit the compactness of Θ (Assumption 1). For primitive operations we note that, as a consequence of Lemma 3, each P^(n)_ι is closed under negation (for ι = R), addition and multiplication. Finally, for conditionals we exploit Lemma 4.

Type Soundness II: Correctness of SGD. Next, we address the integrability for the smoothed problem as well as (SGD1) to (SGD3). We establish that not only ⟦M⟧η but also its partial derivatives up to order 2 are uniformly dominated by functions with finite moments. For this to possibly hold we require:

Assumption 2. For every η > 0,

sup_{x∈R} |ση(x)| < ∞    sup_{x∈R} |σ′η(x)| < ∞    sup_{x∈R} |σ″η(x)| < ∞

Note that, for example, the logistic sigmoid satisfies Assumption 2.
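As a sanity check (not part of the paper's formal development), one can write down the logistic sigmoid ση(x) = 1/(1 + e^(−x/η)) and its first two derivatives in closed form and confirm numerically that all three are bounded on R, as Assumption 2 requires; the grid and the value of η below are arbitrary.

```python
import numpy as np

def sigma(x, eta):
    return 1.0 / (1.0 + np.exp(-x / eta))

def d_sigma(x, eta):
    s = sigma(x, eta)
    return s * (1.0 - s) / eta            # chain rule: logistic derivative, scaled by 1/eta

def dd_sigma(x, eta):
    s = sigma(x, eta)
    return s * (1.0 - s) * (1.0 - 2.0 * s) / eta**2

eta = 0.5
xs = np.linspace(-100, 100, 200001)
print(np.max(np.abs(sigma(xs, eta))),     # bounded by 1
      np.max(np.abs(d_sigma(xs, eta))),   # bounded by 1/(4*eta)
      np.max(np.abs(dd_sigma(xs, eta))))  # bounded by 1/(6*sqrt(3)*eta^2)
```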
We can then prove a fundamental lemma similar to Lemma 5, mutatis mutandis, using a logical predicate in VectFr. We stipulate f ∈ Q^(n)_ι if its partial derivatives up to order 2 are uniformly dominated by a function with finite moments. In addition to Lemma 3 we exploit standard rules for differentiation (such as the sum, product and chain rule) as well as Assumption 2. We conclude:

Proposition 4. If θ | Σ ⊢poly M : R then the partial derivatives up to order 2 of ⟦M⟧η are uniformly dominated by a function with all finite moments.

Consequently, the Smoothed Optimisation Problem 2 is not only well-defined but, by the dominated convergence theorem [21, Theorem 6.28], the reparameterisation gradient estimator is unbiased. Furthermore, (SGD1) to (SGD3) are satisfied and SGD is correct.

Discussion. The type system ⊢poly is simple yet guarantees correctness of SGD. However, it is somewhat restrictive; in particular, it does not allow the direct expression of many ELBOs arising in variational inference, as they often have the form of logarithms of exponential terms (cf. Example 2).

4.3 A Generic Type System with Annotations

Next, we present a generic type system with annotations. In Section 4.4 we give an instantiation to make ⊢poly more permissible, and in Section 5 we turn towards a different property: the uniform convergence of the smoothings.
Typing judgements have the form Γ | Σ ⊢? M : τ, where "?" indicates the property we aim to establish, and we annotate base types. Thus, types are generated from

trace types  Σ ::= [s1 ∼ D1, . . . , sn ∼ Dn]
base types   ι ::= R | R>0
safe types   σ ::= ι^β | σ • [] → σ
types        τ ::= ι^α | τ • Σ → τ

Annotations are drawn from a set and may possibly be restricted for safe types. Secondly, the trace types are now annotated with variables, typically Σ = [s1 ∼ D1, . . . , sn ∼ Dn], where the variables sj are pairwise distinct.
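For readers who prefer code, the annotated type grammar above could be represented along the following lines; this is only an illustrative sketch, and the constructor names and the generic annotation field are ours, not the paper's.

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

# trace types: a list of (sample variable, distribution) entries
TraceType = List[Tuple[str, str]]            # e.g. [("s1", "Normal"), ("s2", "Normal")]

@dataclass
class Base:
    carrier: str          # "R" or "R>0"
    annotation: object    # e.g. None, 0/1 for the SGD instance, or (guard_flag, deps)

@dataclass
class Arrow:
    source: "Type"
    trace: TraceType      # samples consumed by the application ([] for safe types)
    target: "Type"

Type = Union[Base, Arrow]

# the annotated type R^(0) -> R>0^(1) assigned to exp in the SGD instance (cf. Fig. 4)
exp_type = Arrow(Base("R", 0), [], Base("R>0", 1))
```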
For the subtyping relation we can constrain the annotations at the base-type level (see Fig. 8a); the extension to higher types is accomplished as before. The typing rules have the same form but they are extended with the annotations on base types and side conditions possibly constraining them. For example, the rules for addition, exponentiation and sampling are modified as follows:

| [] ⊢? + : ι^(α1) → ι^(α2) → ι^(α)    (cond. Add)
| [] ⊢? exp : R^(α) → R>0^(α′)    (cond. Exp)
| [sj ∼ D] ⊢? sample D : R^(α)    (cond. Sample)

The rules for subtyping, variables, abstractions and applications do not need to be changed at all, but they use annotated types instead of the types of Section 2.2:

Γ | Σ ⊢? M : τ
——————————————————  (Γ ⊑? Γ′, τ ⊑? τ′)
Γ′ | Σ ⊢? M : τ′

x : τ | [] ⊢? x : τ

Γ, y : τ1 | Σ ⊢? M : τ2
————————————————————————
Γ | [] ⊢? λy. M : τ1 • Σ → τ2

Γ | Σ2 ⊢? M : τ1 • Σ3 → τ2    Γ | Σ1 ⊢? N : τ1
————————————————————————————————————————————
Γ | Σ1 ++ Σ2 ++ Σ3 ⊢? M N : τ2

The full type system is presented in Appendix C.3. ⊢poly can be considered a special case of ⊢? whereby we use the singleton ∗ as annotation, a contradictory side condition (such as false) for the undesired primitives (−)⁻¹, exp and log, and the side condition "D has finite moments" for sample as above.

Table 1: Overview of type systems in this paper.
property              Section      judgement  annotation
totality              Section 2.2  ⊢          –
correctness of SGD    Section 4.2  ⊢poly      none/∗
                      Section 4.4  ⊢SGD       0/1
uniform convergence   Section 5.1  ⊢unif      (f, ∆)/(t, ∆)

Table 1 provides an overview of the type systems of this paper and their purpose. ⊢? and its instantiations refine the basic type system of Section 2.2 in the sense that if a term-in-context is provable in the annotated type system, then its erasure (i.e. erasure of the annotations of base types and distributions) is provable in the basic type system. This is straightforward to check.

4.4 A More Permissible Type System

In this section we discuss another instantiation, ⊢SGD, of the generic type system to guarantee (SGD0) to (SGD3), which is more permissible than ⊢poly. In particular, we would like to support Example 2, which uses logarithms and densities involving exponentials. Intuitively, we need to ensure that subterms involving exp are "neutralised" by a corresponding log. To achieve this we annotate base types with 0 or 1, ordered discretely.
0 is the only annotation for safe base types and can be thought of as "integrable"; 1 denotes "needs to be passed through log". More precisely, we constrain the typing rules such that if θ | Σ ⊢SGD M : ι^(e) then log^e ◦ ⟦M⟧ and the partial derivatives of log^e ◦ ⟦M⟧η up to order 2 are uniformly dominated by a function with finite moments (using the convention that log^0 is the identity).

We subtype base types as follows: ι1^(e1) ⊑SGD ι2^(e2) if ι1 ⊑ ι2 (as defined in Fig. 3a) and e1 = e2, or ι1 = R>0 = ι2 and e1 ≤ e2. The second disjunct may come as a surprise, but we ensure that terms of type R>0^(0) cannot depend on samples at all.

In Fig. 4 we list the most important rules; we relegate the full type system to Appendix C.4. exp and log increase and decrease the annotation, respectively. The rules for the primitive operations and conditionals are motivated by the closure properties of Lemma 3 and the elementary fact that log ◦ (f · g) = (log ◦ f) + (log ◦ g) and log ◦ (f⁻¹) = −log ◦ f for f, g : Θ × R^n → R>0.

Example 4. θ : R>0^(0) | [N, N] ⊢SGD log(θ⁻¹ · exp(sample N)) + sample N : R^(0)

Note that the branches of conditionals need to have safe type, which rules out branches with type R^(1). This is because logarithms do not behave nicely when composed with the addition used in the smoothed interpretation of conditionals.
| [] ⊢SGD exp : R^(0) → R>0^(1)           | [] ⊢SGD log : R>0^(e) → R^(0)
| [] ⊢SGD + : ι^(0) → ι^(0) → ι^(0)       | [] ⊢SGD · : ι^(e) → ι^(e) → ι^(e)
| [] ⊢SGD − : R^(0) → R^(0)               | [] ⊢SGD (−)⁻¹ : R>0^(e) → R>0^(e)

Γ | Σ ⊢SGD L : ι^(0)    Γ | Σ′ ⊢SGD M : σ    Γ | Σ″ ⊢SGD N : σ
————————————————————————————————————————————————————————————
Γ | Σ ++ Σ′ ++ Σ″ ⊢SGD if L < 0 then M else N : σ

| [sj ∼ D] ⊢SGD sample D : R^(0)    (D has finite moments)

Fig. 4: Excerpt of the typing rules (cf. Appendix C.4) for the correctness of SGD.
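To see how these rules fit together, here is one way to spell out the derivation behind Example 4 (our reading of the rules above; the paper leaves the derivation implicit):
- sample N : R^(0), since the normal distribution has finite moments;
- exp(sample N) : R>0^(1), by the rule for exp;
- θ : R>0^(0), hence θ⁻¹ : R>0^(0), and by the subtyping clause for ι1 = R>0 = ι2 with e1 ≤ e2 also θ⁻¹ : R>0^(1);
- θ⁻¹ · exp(sample N) : R>0^(1), by the rule for · at annotation e = 1;
- log(θ⁻¹ · exp(sample N)) : R^(0), by the rule for log, which resets the annotation;
- finally, adding the second sample N : R^(0) with the rule for + at annotation 0 yields the stated type R^(0), with trace type [N, N] accounting for the two samples consumed.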
Besides, observe that in the rules for logarithms and inverses e = 0 is allowed, which may come as a surprise (recall that terms of type R>0^(0) cannot depend on samples). This is e.g. necessary for the typability of the variational inference Example 2:

Example 5 (Typing for Variational Inference). It holds that | [] ⊢ N : R^(0) → R^(0) → R>0^(0) → R>0^(1) and θ : R^(0) | [s1 ∼ N] ⊢ M : R^(0).

Type Soundness. To formally establish type soundness, we can use a logical predicate which is very similar to the one in Section 4.2 (N.B. the additional Item 2): in particular, f ∈ Q^(n)_{ι^(e)} if
1. the partial derivatives of log^e ◦ f up to order 2 are uniformly dominated by a function with finite moments
2. if ι^(e) is R>0^(0) then f is dominated by a positive constant function

Using this and a similar logical predicate for ⟦(−)⟧ we can show:

Proposition 5. If θ1 : ι1^(0), . . . , θm : ιm^(0) | Σ ⊢SGD M : ι^(0) then
1. all distributions in Σ have finite moments
2. ⟦M⟧ and, for each η > 0, the partial derivatives up to order 2 of ⟦M⟧η are uniformly dominated by a function with finite moments.

Consequently, again, the Smoothed Optimisation Problem 2 is not only well-defined but, by the dominated convergence theorem, the reparameterisation gradient estimator is unbiased. Furthermore, (SGD1) to (SGD3) are satisfied and SGD is correct.

5 Uniform Convergence

In the preceding section we have shown that SGD with the reparameterisation gradient can be employed to correctly (in the sense of Proposition 3) solve the Smoothed Optimisation Problem 2 for any fixed accuracy coefficient. However, a priori, it is not clear how a solution of the Smoothed Problem 2 can help to solve the original Problem 1. The following illustrates the potential for significant discrepancies:

Example 6. Consider M ≡ if 0 < 0 then θ · θ + 1 else (θ − 1) · (θ − 1). Notice that the global minimum and the only stationary point of ⟦M⟧η is at θ = 1/2 regardless of η > 0, where ⟦M⟧η(1/2) = 3/4. On the other hand, ⟦M⟧(1/2) = 1/4 and the global minimum of ⟦M⟧ is at θ = 1.
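To spell out the computation behind Example 6 (assuming a symmetric sigmoid with ση(0) = 1/2, as for the logistic): by Eq. (4),

⟦M⟧η(θ) = ση(−0) · (θ² + 1) + ση(0) · (θ − 1)² = ½(θ² + 1) + ½(θ − 1)² = θ² − θ + 1,

which is minimised at θ = 1/2 with value 3/4, independently of η. The standard semantics, by contrast, always takes the else-branch, so ⟦M⟧(θ) = (θ − 1)², whose minimum is 0 at θ = 1, and ⟦M⟧(1/2) = 1/4.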
In this section we investigate under which conditions the smoothed objective function converges to the original objective function uniformly in θ ∈ Θ:

(Unif)    E_{s∼D}[⟦M⟧η(θ, s)] → E_{s∼D}[⟦M⟧(θ, s)] uniformly for θ ∈ Θ as η ↘ 0

We design a type system guaranteeing this. The practical significance of uniform convergence is that, before running SGD, for every error tolerance ϵ > 0 we can find an accuracy coefficient η > 0 such that the difference between the smoothed and the original objective function does not exceed ϵ, in particular for the θ∗ delivered by the SGD run for the η-smoothed problem.

Discussion of Restrictions. To rule out the pathology of Example 6 we require that guards are non-zero almost everywhere. Furthermore, as a consequence of the uniform limit theorem [29], (Unif) can only possibly hold if the expectation E_{s∼D}[⟦M⟧(θ, s)] is continuous (as a function of the parameters θ). For a straightforward counterexample take M ≡ if θ < 0 then 0 else 1: we have E_s[⟦M⟧(θ)] = [θ ≥ 0], which is discontinuous, let alone differentiable, at θ = 0. Our approach is to require that guards do not depend directly on parameters, but they may do so indirectly, via a diffeomorphic reparameterisation transform (Example 12 in Appendix D illustrates why it is not sufficient to restrict the reparameterisation transform to bijections; rather, we require it to be a diffeomorphism); see Example 8. We call such guards safe. In summary, our aim, intuitively, is to ensure that guards are the composition of a diffeomorphic transformation of the random samples (potentially depending on parameters) and a function which does not vanish almost everywhere.

5.1 Type System for Guard Safety

In order to enforce this requirement and to make the transformation more explicit, we introduce syntactic sugar, transform sample D by T, for applications of the form T sample D.
Example 7. As expressed in Eq. (2), we can obtain samples from N(µ, σ²) via transform sample N by (λs. s · σ + µ), which is syntactic sugar for the term (λs. s · σ + µ) sample N.

We propose another instance of the generic type system of Section 4.3, ⊢unif, where we annotate base types by α = (g, ∆): g ∈ {f, t} denotes whether we seek to establish guard safety, and ∆ is a finite set of sample variables sj capturing possible dependencies on samples. We subtype base types as follows: ι1^(g1,∆1) ⊑unif ι2^(g2,∆2) if ι1 ⊑ ι2 (as defined in Fig. 3a), ∆1 ⊆ ∆2 and g1 ⪯ g2, where t ⪯ f. This is motivated by the intuition that we can always drop guard safety (as long as the term is not used in guards) and add more dependencies. The rule for conditionals ensures that only safe guards are used. The unary operations preserve variable dependencies and guard safety.
Parameters and constants are not guard safe and depend on no samples (see Appendix D for the full type system):

Γ | Σ ⊢unif L : ι^(t,∆)    Γ | Σ′ ⊢unif M : σ    Γ | Σ″ ⊢unif N : σ
——————————————————————————————————————————————————————————————
Γ | Σ ++ Σ′ ++ Σ″ ⊢unif if L < 0 then M else N : σ

| [] ⊢unif − : R^(g,∆) → R^(g,∆)
θi : ι^(f,∅) | [] ⊢unif θi : ι^(f,∅)
| [] ⊢unif r : ι^(f,∅)    (r ∈ ⟦ι⟧)

θ | [] ⊢unif T : R^α → R^α
——————————————————————————————————————————————————  (T diffeomorphic)
θ | [sj ∼ D] ⊢unif transform sample D by T : R^(t,{sj})

A term θ | [] ⊢unif T : R^α → R^α is diffeomorphic if ⟦T⟧(θ, []) = ⟦T⟧η(θ, []) : R → R is a diffeomorphism for each θ ∈ Θ, i.e. differentiable and bijective with a differentiable inverse.
which has g-flag t, is admissible as a guard term. Notice that G depends on the parameters σ and µ indirectly, through a diffeomorphism, which is permitted by the type system.

If guard safety is sought to be established for the binary operations, we require that the operands do not share dependencies on samples:

  | [] ⊢unif ◦ : ι^(f,∆) → ι^(f,∆) → ι^(f,∆)                 ◦ ∈ {+, ·}
  | [] ⊢unif ◦ : ι^(t,∆1) → ι^(t,∆2) → ι^(t,∆1∪∆2)           ◦ ∈ {+, ·}, ∆1 ∩ ∆2 = ∅

This is designed to address the following:

Example 9 (Non-Constant Guards). We have | [] ⊢ (λx. x + (−x)) : R^(f,{s1}) → R^(f,{s1}), noting that we must use g = f for the + rule; and because R^(t,{sj}) ⊑unif R^(f,{sj}), we have | [] ⊢ (λx. x + (−x)) : R^(t,{s1}) → R^(f,{s1}). Now transform sample D by (λy. y) has type R^(t,{s1}) with the g-flag necessarily set to t; and so the term M ≡ (λx. x + (−x)) (transform sample D by (λy. y)), which denotes 0, has type R^(f,{s1}), but not R^(t,{s1}). It follows that M cannot be used in guards (notice the side condition of the rule for conditionals), which is as desired: recall Example 6. Similarly, consider the term

  N ≡ (λx. (λy z. if y + (−z) < 0 then M1 else M2) x x) (transform sample D by (λy. y))    (7)
When evaluated, the term y + (−z) in the guard has denotation 0. For the same reason as above, the term N is not refinement typable.

The type system is, however, incomplete, in the sense that there are terms-in-context that satisfy the property (Unif) but which are not typable.

Example 10 (Incompleteness). The following term-in-context denotes the "identity": | [] ⊢ (λx. (2 · x) + (−x)) : R^(t,{s1}) → R^(f,{s1}), but it does not have type R^(t,{s1}) → R^(t,{s1}). Then, using the same reasoning as in Example 9, the term G ≡ (λx. (2 · x) + (−x)) (transform sample D by (λy. y)) has type R^(f,{s1}), but not R^(t,{s1}), and so if G < 0 then 0 else 1 is not typable, even though G can safely be used in guards.

5.2 Type Soundness

Henceforth, we fix parameters θ1 : ι1^(f,∅), ..., θm : ιm^(f,∅). Now we address how to show property (Unif), i.e. that for θ | Σ ⊢unif M : ι^(g,∆), the η-smoothed expectation E[⟦M⟧_η(θ, s)] converges uniformly for θ ∈ Θ as η ↘ 0. For this to hold we clearly need to require that σ_η has good (uniform) convergence properties (as far as the unavoidable discontinuity at 0 allows for):

Assumption 3. For every δ > 0, σ_η converges uniformly to [(−) > 0] on (−∞, −δ) ∪ (δ, ∞).
Observe that, in general, even if M is typable, ⟦M⟧_η does not converge uniformly in both θ and s, because ⟦M⟧ may still be discontinuous in s:

Example 11. For M ≡ if (transform sample N by (λs. s + θ)) < 0 then 0 else 1, we have ⟦M⟧(θ, s) = [s + θ ≥ 0], which is discontinuous, and ⟦M⟧_η(θ, s) = σ_η(s + θ).

However, if θ | Σ ⊢ M : ι^(g,∆) then ⟦M⟧_η does converge to ⟦M⟧ uniformly almost uniformly, i.e. uniformly in θ ∈ Θ and almost uniformly in s ∈ R^n. Formally, we define:

Definition 4. Let f, f_η : Θ × R^n → R and let µ be a measure on R^n. We say that f_η converges uniformly almost uniformly to f (notation: f_η →u.a.u. f) if there exist sequences (δ_k)k∈N, (ϵ_k)k∈N and (η_k)k∈N such that lim k→∞ δ_k = 0 = lim k→∞ ϵ_k, and for every k ∈ N and θ ∈ Θ there exists U ⊆ R^n such that
1. µ(U) < δ_k, and
2. for every 0 < η < η_k and s ∈ R^n \ U, |f_η(θ, s) − f(θ, s)| < ϵ_k.
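To make Example 11 and Definition 4 concrete, the following sketch (ours, not the paper's; it assumes the logistic instance σ_η(x) = σ(x/η) purely for illustration) measures the error of the smoothed guard σ_η(s + θ) against the indicator [s + θ ≥ 0]. Near the discontinuity the error stays at 1/2, but once a small set U around s = −θ is excised, the remaining error vanishes as η shrinks:

```python
import numpy as np

def sigma_eta(x, eta):
    # Assumed logistic smoothing sigma(x / eta); the paper only requires
    # that sigma_eta satisfies Assumption 3.
    return 1.0 / (1.0 + np.exp(-x / eta))

def indicator(x):
    return (x >= 0).astype(float)

theta = 0.3
s = np.linspace(-5.0, 5.0, 20001)
delta = 0.5                              # radius of the excised set U
outside_U = np.abs(s + theta) > delta    # s outside U = (-theta - delta, -theta + delta)

for eta in [0.2, 0.15, 0.1, 0.05]:
    err = np.abs(sigma_eta(s + theta, eta) - indicator(s + theta))
    print(eta, err.max(), err[outside_U].max())
```

The first error column stays at 0.5 (the jump at s = −θ is unavoidable), while the second shrinks with η, which is the shape of convergence that Definition 4 captures.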
If f and f_η are independent of θ, this notion coincides with standard almost uniform convergence. For M from Example 11, ⟦M⟧_η →u.a.u. ⟦M⟧ holds although uniform convergence fails. However, uniform almost uniform convergence entails uniform convergence of expectations:

Lemma 6. Let f, f_η : Θ × R^n → R have finite moments. If f_η →u.a.u. f then E_{s∼D}[f_η(θ, s)] converges uniformly to E_{s∼D}[f(θ, s)].

As a consequence, it suffices to establish ⟦M⟧_η →u.a.u. ⟦M⟧. We achieve this by positing an infinitary logical relation between sequences of morphisms in VectFr (corresponding to the smoothings) and morphisms in QBS (corresponding to the measurable standard semantics). We then prove a Fundamental Lemma (Lemma 17; details are in Appendix D). Not surprisingly, the case for conditionals is the most interesting.
This makes use of Assumption 3 and exploits the fact that guards for which the typing rules assert the guard-safety flag t can only be 0 on sets of measure 0. We conclude:

Theorem 1. If θ1 : ι1^(f,∅), ..., θm : ιm^(f,∅) | Σ ⊢unif M : R^(g,∆) then ⟦M⟧_η →u.a.u. ⟦M⟧. In particular, if ⟦M⟧_η and ⟦M⟧ also have finite moments, then E_{s∼D}[⟦M⟧_η(θ, s)] converges uniformly to E_{s∼D}[⟦M⟧(θ, s)] as η ↘ 0 for θ ∈ Θ.

We finally note that ⊢unif can be made more permissive by adding syntactic sugar for a-fold (for a ∈ N>0) addition a · M ≡ M + · · · + M and multiplication M^a ≡ M · · · · · M. This admits more terms as guards, but safely (see Fig. 10).

6 Related Work

[23] is both the starting point for our work and the most natural source for comparison. They correct the (biased) reparameterisation gradient estimator for non-differentiable models by additional non-trivial boundary terms. They present an efficient method for affine guards only.
Besides, they are not concerned with the convergence of gradient-based optimisation procedures, nor do they discuss how the assumptions they make may be manifested in a programming language. In the context of the reparameterisation gradient, [25] and [18] relax discrete random variables in a continuous way, effectively dealing with a specific class of discontinuous models. [41] use a similar smoothing for discontinuous optimisation, but they do not consider a full programming language. Motivated by guaranteeing absolute continuity (which is a necessary but not sufficient criterion for the correctness of, e.g., variational inference), [24] use an approach similar to our trace types to track the samples which are drawn. They do not support standard conditionals, but their "work-around" is also eager in the sense of combining the traces of both branches. Besides, they do not support a full higher-order language in which higher-order terms can draw samples. Thus, they do not need to consider function types tracking the samples drawn during evaluation.

7 Empirical Evaluation

We evaluate our smoothed gradient estimator (Smooth) against the biased reparameterisation estimator (Reparam), its unbiased correction (LYY18) due to [23], and the unbiased Score estimator [31,40,27]. The experimental setup is based on that of [23]. The implementation is written in Python, using automatic differentiation (provided by the jax library) to implement each of the above estimators for an arbitrary probabilistic program. For each estimator and model, we used the Adam [19] optimiser for 10,000 iterations with a learning rate of 0.001, except for xornet, for which we used 0.01.
The initial model parameters θ0 were fixed for each model across all runs. In each iteration, we used N = 16 Monte Carlo samples from the gradient estimator. For the LYY18 estimator, a single subsample for the boundary term was used in each estimate. For our smoothed estimator we use accuracy coefficients η ∈ {0.1, 0.15, 0.2}. Further details are discussed in Appendix E.1.

Compilation for First-Order Programs. All our benchmarks are first-order. We compile a potentially discontinuous program to a smooth program (parameterised by σ_η) using the compatible closure of

  if L < 0 then M else N  ⇝  (λw. σ_η(−w) · M + σ_η(w) · N) L

Note that the size only increases linearly and that we avoid an exponential blow-up by using abstractions rather than duplicating the guard L.

Models. We include the models from [23], an example from differential privacy [11], and a neural network for which our main competitor, the estimator of [23], is not applicable (see Appendix E.2 for more details).
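To illustrate the compilation rule above on a single conditional, here is a minimal sketch of our own (not the paper's implementation; the logistic choice of σ_η and the jax calls are assumptions made only to obtain a runnable example):

```python
import jax
import jax.numpy as jnp

def sigma_eta(w, eta=0.1):
    # Assumed logistic smoothing; any sigma_eta meeting the paper's
    # assumptions could be substituted here.
    return jax.nn.sigmoid(w / eta)

# Source program:  if L < 0 then 0 else 1, with guard L = s + theta.
def hard_program(theta, s):
    L = s + theta
    return jnp.where(L < 0, 0.0, 1.0)

# Compiled program:  (λw. sigma_eta(-w) * M + sigma_eta(w) * N) L
# The guard L is bound once to w, so the term size grows only linearly.
def smooth_program(theta, s, eta=0.1):
    w = s + theta
    return sigma_eta(-w, eta) * 0.0 + sigma_eta(w, eta) * 1.0

print(jax.grad(lambda t: hard_program(t, 0.25))(0.3))    # 0.0: the jump is invisible to AD
print(jax.grad(lambda t: smooth_program(t, 0.25))(0.3))  # nonzero smoothed gradient
```

With every conditional rewritten in this way, the compiled program is differentiable throughout, so the smoothed estimator can be computed directly by automatic differentiation, as in the jax-based implementation described above.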
Fig. 5: ELBO trajectories for each model. Panels: (a) temperature, (b) textmsg, (c) influenza, (d) cheating, (e) xornet. A single colour is used for each estimator; for Smooth, the accuracy coefficients η = 0.1, 0.15, 0.2 are represented by dashed, solid and dotted lines respectively.

Analysis of Results. We plot the ELBO trajectories in Fig. 5 and include data on the computational cost and variance in Table 2 in Appendix E.3. The ELBO graphs for the temperature model in Fig. 5a and the cheating model in Fig. 5d show that the Reparam estimator is biased, converging to suboptimal values when compared to the Smooth and LYY18 estimators. For the temperature model we can also see, from the graph and the data in Table 2a, that the Score estimator exhibits extremely high variance and does not converge.
Finally, the xornet model shows the difficulty of training step-function based neural nets. The LYY18 estimator is not applicable here since there are non-affine conditionals. In Fig. 5e, the Reparam estimator makes no progress, while the other estimators manage to converge close to 0 ELBO, showing that they learn a network that correctly classifies all points. In particular, the Smooth estimator converges the quickest.

In summary, the results reveal where the Reparam estimator is biased and show that the Smooth estimator does not share this limitation. Where the LYY18 estimator is defined, the two converge to roughly the same objective value; and the smoothing approach generalises to more complex models, such as neural networks with non-linear boundaries. Our proposed Smooth estimator has consistently and significantly lower work-normalised variance, up to 3 orders of magnitude.

8 Conclusion and Future Directions

We have discussed a simple probabilistic programming language to formalise an optimisation problem arising, e.g., in variational inference for probabilistic programming.
We have endowed our language with a denotational (measurable) value semantics and a smoothed approximation of potentially discontinuous programs, which is parameterised by an accuracy coefficient. We have proposed type systems to guarantee pleasing properties in the context of the optimisation problem: for a fixed accuracy coefficient, stochastic gradient descent converges to stationary points even with the reparameterisation gradient (which is unbiased); besides, the smoothed objective function converges uniformly to the true objective as the accuracy is improved. Our type systems can be used to check these two properties independently, obtaining partial theoretical guarantees even if one of the systems suffers from incompleteness. We also stress that SGD and the smoothed unbiased gradient estimator can be applied even to programs which are not typable.

Experiments with our prototype implementation confirm the benefits of reduced variance and unbiasedness. Compared to the unbiased correction of the reparameterised gradient estimator due to [23], our estimator has similar convergence, but is simpler, faster, and attains orders of magnitude (2 to 3,000x) reduction in work-normalised variance.

Future Directions. A natural avenue for future research is to make the language and type systems more complete, i.e. to support more well-behaved programs, in particular programs involving recursion. Furthermore, the choice of accuracy coefficients leaves room for further investigation. We anticipate it could be fruitful not to fix an accuracy coefficient upfront, but to gradually enhance it during the optimisation, either via a predetermined schedule (dependent on structural properties of the program) or adaptively.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 26 Basim Khajwal, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Luke Ong, and Dominik Wagner(�) References 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Aumann, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=': Borel structures for function spaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Illinois Journal of Mathematics 5 (1961) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Bertsekas, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=': Convex optimization algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Athena Scientific (2015) 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Bertsekas, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Tsitsiklis, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' : Gradient convergence in gradient methods with errors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' SIAM J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Optim.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 10(3), 627–642 (2000) 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Bingham, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Chen, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Jankowiak, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Obermeyer, F.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Pradhan, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Karaletsos, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Singh, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Szerlip, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Horsfall, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Goodman, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=': Pyro: Deep universal probabilistic programming.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Mach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 20, 28:1–28:6 (2019) 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Bishop, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' : Pattern recognition and machine learning, 5th Edition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Information science and statistics, Springer (2007) 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Blei, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Kucukelbir, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', McAuliffe, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' : Variational inference: A review for statisticians.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Journal of the American Statistical Association 112(518), 859–877 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='1080/01621459.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='1285773 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Borgström, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Lago, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Gordon, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Szymczak, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=': A lambda-calculus foun- dation for universal probabilistic programming.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' In: Proceedings of the 21st ACM SIGPLAN International Conference on Functional Programming, ICFP 2016, Nara, Japan, September 18-22, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 33–46 (2016) 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Botev, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Ridder, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=': Variance Reduction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' In: Wiley StatsRef: Statistics Reference Online, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 1–6 (2017) 9.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Cusumano-Towner, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Saad, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Lew, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Mansinghka, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' : Gen: a general-purpose probabilistic programming system with programmable inference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' In: McKinley, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Fisher, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' (eds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=') Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2019, Phoenix, AZ, USA, June 22-26, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 221–236.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' ACM (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='1145/3314221.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='3314642, https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='1145/3314221.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='3314642 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Dahlqvist, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Kozen, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=': Semantics of higher-order probabilistic programs with conditioning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' ACM Program.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Lang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 4(POPL), 57:1–57:29 (2020) 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Davidson-Pilon, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=': Bayesian Methods for Hackers: Probabilistic Programming and Bayesian Inference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Addison-Wesley Professional (2015) 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Ehrhard, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Tasson, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Pagani, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=': Probabilistic coherence spaces are fully ab- stract for probabilistic PCF.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' In: The 41st Annual ACM SIGPLAN-SIGACT Sym- posium on Principles of Programming Languages, POPL ’14, San Diego, CA, USA, January 20-21, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 309–320 (2014) 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Frölicher, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Kriegl, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=': Linear Spaces and Differentiation Theory.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Interscience, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Wiley and Son, New York (1988) 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Glynn, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Whitt, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=': The asymptotic efficiency of simulation estimators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Op- erations research 40(3), 505–520 (1992) 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Heunen, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Kammar, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Staton, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Yang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=': A convenient category for higher- order probability theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Symposium Logic in Computer Science (2017) 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Heunen, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Kammar, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Staton, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Yang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=': A convenient category for higher- order probability theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' In: 32nd Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2017, Reykjavik, Iceland, June 20-23, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 1–12 (2017) 17.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Hur, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Nori, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Rajamani, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Samuel, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=': A provably correct sampler for probabilistic programs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' In: 35th IARCS Annual Conference on Foundation of Soft- ware Technology and Theoretical Computer Science, FSTTCS 2015, December 16-18, 2015, Bangalore, India.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 475–488 (2015) Fast and Correct Optimisation for Probabilistic Programming via Smoothing 27 18.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Jang, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Gu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Poole, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=': Categorical reparameterization with gumbel-softmax.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' In: 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings (2017) 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Kingma, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', Ba, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=': Adam: A method for stochastic optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' In: Bengio, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=', LeCun, Y.' 
20. Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. In: Bengio, Y., LeCun, Y. (eds.) 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings (2014)
21. Klenke, A.: Probability Theory: A Comprehensive Course. Universitext, Springer London (2014)
22. Lee, W., Yu, H., Rival, X., Yang, H.: Towards verified stochastic variational inference for probabilistic programs. PACMPL 4(POPL) (2020)
23. Lee, W., Yu, H., Yang, H.: Reparameterization gradient for non-differentiable models. In: Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada. pp. 5558–5568 (2018)
24. Lew, A.K., Cusumano-Towner, M.F., Sherman, B., Carbin, M., Mansinghka, V.K.: Trace types and denotational semantics for sound programmable inference in probabilistic languages. Proc. ACM Program. Lang. 4(POPL), 19:1–19:32 (2020)
25. Maddison, C.J., Mnih, A., Teh, Y.W.: The concrete distribution: A continuous relaxation of discrete random variables. In: 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings (2017)
26. Mak, C., Ong, C.L., Paquet, H., Wagner, D.: Densities of almost surely terminating probabilistic programs are differentiable almost everywhere. In: Yoshida, N. (ed.) Programming Languages and Systems - 30th European Symposium on Programming, ESOP 2021, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021, Luxembourg City, Luxembourg, March 27 - April 1, 2021, Proceedings. Lecture Notes in Computer Science, vol. 12648, pp. 432–461. Springer (2021)
27. Mnih, A., Gregor, K.: Neural variational inference and learning in belief networks. In: Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014. JMLR Workshop and Conference Proceedings, vol. 32, pp. 1791–1799. JMLR.org (2014)
28. Mityagin, B.: The zero set of a real analytic function (2015)
29. Munkres, J.R.: Topology. Prentice Hall, New Delhi, 2nd edn. (1999)
30. Murphy, K.P.: Machine Learning: A Probabilistic Perspective. MIT Press (2012)
31. Ranganath, R., Gerrish, S., Blei, D.M.: Black box variational inference. In: Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, AISTATS 2014, Reykjavik, Iceland, April 22-25, 2014. pp. 814–822 (2014)
32. Rezende, D.J., Mohamed, S., Wierstra, D.: Stochastic backpropagation and approximate inference in deep generative models. In: Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014. JMLR Workshop and Conference Proceedings, vol. 32, pp. 1278–1286. JMLR.org (2014)
33. Shumway, R.H., Stoffer, D.S.: Time Series Analysis and Its Applications. Springer Texts in Statistics, Springer-Verlag (2005)
34. Soudjani, S.E.Z., Majumdar, R., Nagapetyan, T.: Multilevel Monte Carlo method for statistical model checking of hybrid systems. In: Bertrand, N., Bortolussi, L. (eds.) Quantitative Evaluation of Systems - 14th International Conference, QEST 2017, Berlin, Germany, September 5-7, 2017, Proceedings. Lecture Notes in Computer Science, vol. 10503, pp. 351–367. Springer (2017)
35. Stacey, A.: Comparative smootheology. Theory and Applications of Categories 25(4), 64–117 (2011)
36. Staton, S.: Commutative semantics for probabilistic programming. In: Programming Languages and Systems - 26th European Symposium on Programming, ESOP 2017, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2017, Uppsala, Sweden, April 22-29, 2017, Proceedings. pp. 855–879 (2017)
37. Staton, S., Yang, H., Wood, F.D., Heunen, C., Kammar, O.: Semantics for probabilistic programming: higher-order functions, continuous distributions, and soft constraints. In: Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science, LICS '16, New York, NY, USA, July 5-8, 2016. pp. 525–534 (2016)
38. Titsias, M.K., Lázaro-Gredilla, M.: Doubly stochastic variational Bayes for non-conjugate inference. In: Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014. pp. 1971–1979 (2014)
39. Vákár, M., Kammar, O., Staton, S.: A domain theory for statistical probabilistic programming. PACMPL 3(POPL), 36:1–36:29 (2019)
40. Wingate, D., Weber, T.: Automated variational inference in probabilistic programming. CoRR abs/1301.1299 (2013)
41. Zang, I.: Discontinuous optimization by smoothing. Mathematics of Operations Research 6(1), 140–152 (1981)
42. Zhang, C., Butepage, J., Kjellstrom, H., Mandt, S.: Advances in Variational Inference. IEEE Trans. Pattern Anal. Mach. Intell. 41(8), 2008–2026 (2019)
A Supplementary Materials for Section 2

A.1 Supplementary Materials for Section 2.2

Lemma 1. If Γ | Σ ⊢ M : τ and Γ | Σ′ ⊢ M : τ′ then Σ = Σ′.

Proof (sketch). We define an equivalence relation ≈ on types by
1. ι ≈ ι′
2. (τ_1 • Σ → τ_2) ≈ (τ′_1 • Σ′ → τ′_2) iff τ_1 ≈ τ′_1 implies Σ = Σ′ and τ_2 ≈ τ′_2.
Intuitively, two types are related by ≈ if for (inductively) related arguments they draw the same samples and again have related return types. We extend the relation to contexts: Γ ≈ Γ′ if for all x : τ in Γ and x : τ′ in Γ′, τ ≈ τ′. Then we show by induction that if Γ | Σ ⊢ M : τ, Γ′ | Σ′ ⊢ M : τ′ and Γ ≈ Γ′ then Σ = Σ′ and τ ≈ τ′. Finally, this strengthened statement allows us to prove the tricky case of the lemma: application.

A.2 Supplementary Materials for Section 2.3

Like a measurable space (X, Σ_X), a quasi-Borel space (QBS) is a pair (X, M_X) where X is a set; but instead of axiomatising the measurable subsets Σ_X, a QBS axiomatises the admissible random elements M_X. The set M_X, which is a collection of functions R → X, must satisfy the following closure properties:
– if α ∈ M_X and f : R → R is measurable, then α ◦ f ∈ M_X
– if α : R → X is constant then α ∈ M_X
– given a countable partition of the reals R = ⋃_{i∈N} S_i where each S_i is Borel, and {α_i}_{i∈N} ⊆ M_X, the function r ↦ α_i(r) where r ∈ S_i is in M_X.
The QBS morphisms (X, M_X) → (Y, M_Y) are functions f : X → Y such that f ◦ α ∈ M_Y whenever α ∈ M_X.
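For orientation, a standard example from the QBS literature (cf. [15]), included here only as an illustration: taking X = R and M_R to be the set of all measurable functions R → R yields a quasi-Borel space, since measurable functions are closed under precomposition with measurable maps, contain all constant functions, and are closed under countable gluing along Borel partitions; with this choice the QBS morphisms (R, M_R) → (R, M_R) are exactly the measurable functions R → R.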
Lemma 7 (Substitution). Let Γ, x : τ′ | Σ ⊢ M : τ and Γ | [] ⊢ N : τ′. Then ⟦M⟧((γ, ⟦N⟧(γ, [])), s) = ⟦M[N/x]⟧(γ, s).

A.3 Supplementary Materials for Section 2.4

The following can be verified by structural induction on M:

Lemma 8 (Substitution). If Γ, x : τ′ | Σ ⊢ M : τ and Γ | [] ⊢ N : τ′ then Γ | Σ ⊢ M[N/x] : τ.

Note that it may not necessarily hold that Γ, x : τ′ | Σ ⊢ M : τ and Γ | Σ′ ⊢ N : τ′ imply Γ | Σ ++ Σ′ ⊢ M[N/x] : τ. Take M ≡ x + x and N ≡ sample 𝒩. Then note that x : R | [] ⊢ M : R and | [𝒩] ⊢ N : R, yet | [𝒩, 𝒩] ⊢ M[N/x] : R.

Fig. 6 below gives the operational big-step sampling-based semantics; we write M ⇓^s_w V for "M evaluates to value V with sample trace s and weight w":
– V ⇓^[]_1 V
– sample D ⇓^[s]_{pdf_D(s)} s
– if L ⇓^{s1}_{w1} r, M ⇓^{s2}_{w2} V and N ⇓^{s3}_{w3} V′, then if L < 0 then M else N ⇓^{s1++s2++s3}_{w1·w2·w3} V when r < 0, and ⇓^{s1++s2++s3}_{w1·w2·w3} V′ when r ≥ 0
– if M_1 ⇓^{s1}_{w1} r_1 and M_2 ⇓^{s2}_{w2} r_2, then M_1 ◦ M_2 ⇓^{s1++s2}_{w1·w2} r_1 ◦ r_2 for ◦ ∈ {+, ·}
– if M ⇓^{s}_{w} r, then op M ⇓^{s}_{w} op(r) for op ∈ {−, ⁻¹, exp, log} and r ∈ dom(op)
– if M ⇓^{s1}_{w1} λx. M′, N ⇓^{s2}_{w2} V′ and M′[V′/x] ⇓^{s3}_{w3} V, then M N ⇓^{s1++s2++s3}_{w1·w2·w3} V

Fig. 6: Operational big-step sampling-based semantics
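For illustration only, the following is a minimal Python sketch of an evaluator in the spirit of these rules for a tiny first-order fragment (constants, +, sample and the guarded conditional). The term encoding, the Normal class and the use of a pseudo-random generator to produce the trace are assumptions made for this example and are not part of the formal development.

import math
import random

# Illustrative term encoding (not the paper's syntax):
#   ('const', r)        -- a real constant
#   ('add', M1, M2)     -- M1 + M2
#   ('sample', dist)    -- draw a sample from dist
#   ('if', L, M, N)     -- if L < 0 then M else N

class Normal:
    """Standard normal distribution: sampler plus its density pdf_D."""
    def sample(self):
        return random.gauss(0.0, 1.0)
    def pdf(self, s):
        return math.exp(-s * s / 2.0) / math.sqrt(2.0 * math.pi)

def evaluate(term):
    """Big-step evaluation: returns (value, trace, weight), mirroring M ⇓^s_w V."""
    tag = term[0]
    if tag == 'const':                      # V ⇓^[]_1 V
        return term[1], [], 1.0
    if tag == 'sample':                     # sample D ⇓^[s]_{pdf_D(s)} s
        dist = term[1]
        s = dist.sample()
        return s, [s], dist.pdf(s)
    if tag == 'add':                        # traces are concatenated, weights multiplied
        r1, s1, w1 = evaluate(term[1])
        r2, s2, w2 = evaluate(term[2])
        return r1 + r2, s1 + s2, w1 * w2
    if tag == 'if':                         # per the rules, both branches are evaluated;
        r, s1, w1 = evaluate(term[1])       # the guard only selects which value is returned
        v_then, s2, w2 = evaluate(term[2])
        v_else, s3, w3 = evaluate(term[3])
        return (v_then if r < 0 else v_else), s1 + s2 + s3, w1 * w2 * w3
    raise ValueError(f'unknown term: {term!r}')

# Example: if (sample N) < 0 then 0 else (sample N + 1)
term = ('if', ('sample', Normal()), ('const', 0.0),
        ('add', ('sample', Normal()), ('const', 1.0)))
value, trace, weight = evaluate(term)
print(value, trace, weight)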
Discussion. Lemma 8 is a slightly stronger version of the usual substitution lemma for a CBV language: if Γ, x : τ′ | Σ ⊢ M : τ and Γ | Σ′ ⊢ V : τ′ then Γ | Σ ++ Σ′ ⊢ M[V/x] : τ; note that Σ′ = [] necessarily, and we also have Γ | Σ ++ Σ′ ⊢ (λx. M) V : τ. Consequently, subject reduction holds for CBV β-reduction.

B Supplementary Materials for Section 3

Remark 1. Suppose φ : X → Y is a function and (X, C_X, F_X) and (Y, C_Y, F_Y) are vector Frölicher spaces, where the former is generated by C_0 ⊆ Set(R, X). Then φ is a morphism iff for all f ∈ F_Y and c ∈ C_0, f ◦ φ ◦ c ∈ C^∞(R, R) (i.e. it is not necessary to check c ∈ C_X \ C_0). (Note that C ⊆ C̃_X ⊆ C_X. Therefore, if f : X → R is such that for all c ∈ C_X ⊇ C, f ◦ c ∈ C^∞(R, R), then f ∈ F_X.)

Proposition 2. VectFr is cartesian closed.

Proof.
1. Singleton vector spaces are terminal objects.
2. Suppose (X_1, C_{X1}, F_{X1}) and (X_2, C_{X2}, F_{X2}) are vector Frölicher spaces. Consider the vector Frölicher space on X_1 × X_2 generated by {⟨c_1, c_2⟩ | c_1 ∈ C_{X1}, c_2 ∈ C_{X2}}. By construction (X_1 × X_2, C_{X1×X2}, F_{X1×X2}) is a vector Frölicher space and π_i : (X_1 × X_2, C_{X1×X2}, F_{X1×X2}) → (X_i, C_{Xi}, F_{Xi}) are morphisms. Now, suppose Z and f : Z → X_1 and g : Z → X_2 are morphisms. Clearly, h := ⟨f, g⟩ is the unique morphism Z → X_1 × X_2 such that π_1 ◦ h = f and π_2 ◦ h = g.
3. Finally, suppose (X, C_X, F_X) and (Y, C_Y, F_Y) are vector Frölicher spaces. Consider the vector Frölicher space on the hom-set VectFr(X, Y) generated by {c : R → VectFr(X, Y) | ((r, x) ↦ c(r)(x)) ∈ Fr(R × X, Y)}. Define eval : VectFr(X, Y) × X → Y by eval(f, x) := f(x). To see that this is a morphism, by Remark 1 it suffices to consider c_1 : R → C_{X⇒Y} such that ((r, x) ↦ c_1(r)(x)) ∈ Fr(R × X, Y), c_2 ∈ C_X and g ∈ F_Y. Note that g ◦ eval ◦ ⟨c_1, c_2⟩ = g ◦ ((r, x) ↦ c_1(r)(x)) ◦ ⟨id, c_2⟩, where ((r, x) ↦ c_1(r)(x)) ∈ Fr(R × X, Y) and ⟨id, c_2⟩ ∈ C_{R×X}, so the composite is in C^∞(R, R) by definition of morphisms. Clearly, this satisfies the required universal property.

C Supplementary Materials for Section 4

C.1 Supplementary Materials for Section 4.1

The following immediately follows from a well-known result about exchanging differentiation and integration, which is a consequence of the dominated convergence theorem [21, Theorem 6.28]:
Lemma 9. Let U ⊆ R be open. Suppose g : R × R^n → R satisfies
1. for each x ∈ R, s ↦ g(x, s) is integrable,
2. g is continuously differentiable everywhere,
3. there exists an integrable h : R^n → R such that for all x ∈ U and s ∈ R^n, |∂g/∂x (x, s)| ≤ h(s).
Then for all x ∈ U, ∂/∂x ∫ g(x, s) ds = ∫ ∂g/∂x (x, s) ds.

Corollary 1. Let i ∈ {1, . . . , m}, M > 0 and U := B_M(0) ⊆ R^m be the open M-ball. Suppose g : R^m × R^n → R satisfies
1. for each x ∈ R^m, s ↦ g(x, s) is integrable,
2. g is continuously differentiable everywhere,
3. there exists an integrable h : R^n → R such that for all x ∈ U and s ∈ R^n, |∂g/∂x_i (x, s)| ≤ h(s).
Then for all x ∈ U, ∂/∂x_i ∫ g(x, s) ds = ∫ ∂g/∂x_i (x, s) ds.
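As a quick, purely illustrative sanity check of Corollary 1 (the integrand below is an assumption chosen only for this example), one can compare a finite-difference derivative of the integral with the integral of the partial derivative for g(x, s) = exp(−s²) sin(xs), whose derivative in x, namely s exp(−s²) cos(xs), is dominated by the integrable function |s| exp(−s²).

import numpy as np

def g(x, s):
    return np.exp(-s**2) * np.sin(x * s)

def dg_dx(x, s):
    return s * np.exp(-s**2) * np.cos(x * s)   # dominated by |s|*exp(-s^2)

# Crude quadrature on a truncated grid (exp(-s^2) makes the tails negligible).
s = np.linspace(-10.0, 10.0, 200001)
ds = s[1] - s[0]

def integral(f, x):
    return np.sum(f(x, s)) * ds

x0, h = 0.7, 1e-5
lhs = (integral(g, x0 + h) - integral(g, x0 - h)) / (2 * h)   # d/dx of the integral
rhs = integral(dg_dx, x0)                                     # integral of the derivative
print(lhs, rhs)   # the two numbers agree to several decimal places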
C.2 Supplementary Materials for Section 4.2

Lemma 3. If f, g : R^n → R have (all) finite moments, so do −f, f + g and f · g.

Proof. For negation it is trivial. For addition it can be checked as follows:
E[|(f + g)(s)|^p] ≤ E[|2f(s)|^p + |2g(s)|^p] ≤ 2^p · E[|f(s)|^p] + 2^p · E[|g(s)|^p] < ∞
For multiplication it follows from the Cauchy-Schwarz inequality:
E[|(f · g)(s)|^p] = E[|f(s)|^p · |g(s)|^p] ≤ √(E[|f(s)|^{2p}] · E[|g(s)|^{2p}]) < ∞

Fig. 7 below gives the typing judgements for ⊢poly:
– Γ | Σ ⊢poly M : τ implies Γ′ | Σ ⊢poly M : τ′, provided Γ ⊑poly Γ′ and τ ⊑poly τ′
– x : τ | [] ⊢poly x : τ
– | [] ⊢poly r : R for r ∈ R, and | [] ⊢poly r : R_{>0} for r ∈ R_{>0}
– | [] ⊢poly ◦ : ι → ι → ι for ◦ ∈ {+, ·}, and | [] ⊢poly − : R → R
– if Γ | Σ ⊢poly L : R, Γ | Σ′ ⊢poly M : σ and Γ | Σ′′ ⊢poly N : σ, then Γ | Σ ++ Σ′ ++ Σ′′ ⊢poly if L < 0 then M else N : σ
– | [s_j ∼ D] ⊢poly sample D : R, provided D has finite moments
– if Γ, y : τ_1 | Σ ⊢poly M : τ_2, then Γ | [] ⊢poly λy. M : τ_1 • Σ → τ_2
– if Γ | Σ_1 ⊢poly M : τ_1 • Σ_3 → τ_2 and Γ | Σ_2 ⊢poly N : τ_1, then Γ | Σ_1 ++ Σ_2 ++ Σ_3 ⊢poly M N : τ_2

Fig. 7: Typing judgements for ⊢poly.
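The way the trace annotation Σ is threaded through these judgements can be illustrated by the following rough Python sketch, which only computes Σ for a first-order fragment (variables, constants, sample, sums and guarded conditionals); contexts, subtyping and the arrow annotation τ_1 • Σ_3 → τ_2 are omitted, so this is a sketch of the bookkeeping under those simplifying assumptions rather than an implementation of ⊢poly.

# Illustrative term encoding (not the paper's syntax):
#   ('var', x) | ('const', r) | ('sample', 'D') | ('add', M, N) | ('if', L, M, N)

def trace_type(term):
    """Return the trace type Σ as a list of distribution names, following the way
    the ⊢poly rules concatenate the Σ's of subterms (first-order fragment only)."""
    tag = term[0]
    if tag in ('var', 'const'):
        return []                                            # variables and constants: Σ = []
    if tag == 'sample':
        return [term[1]]                                     # sample D contributes [D]
    if tag == 'add':
        return trace_type(term[1]) + trace_type(term[2])     # Σ1 ++ Σ2
    if tag == 'if':
        # if L < 0 then M else N gets Σ ++ Σ' ++ Σ''
        return trace_type(term[1]) + trace_type(term[2]) + trace_type(term[3])
    raise ValueError(f'unknown term: {term!r}')

# Example: if (sample N) < 0 then 0 else (sample N + sample N) has trace type [N, N, N]
t = ('if', ('sample', 'N'), ('const', 0.0),
     ('add', ('sample', 'N'), ('sample', 'N')))
print(trace_type(t))   # ['N', 'N', 'N']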
Lemma 5 (Fundamental). If θ, x_1 : τ_1, . . . , x_ℓ : τ_ℓ | Σ ⊢poly M : τ, n ∈ N and ξ_1 ∈ P^(n)_{τ_1}, . . . , ξ_ℓ ∈ P^(n)_{τ_ℓ}, then ⟦M⟧ ∗ ⟨ξ_1, . . . , ξ_ℓ⟩ ∈ P^(n+|Σ|)_τ, where
⟦M⟧ ∗ ⟨ξ_1, . . . , ξ_ℓ⟩ : Θ × R^{n+|Σ|} → ⟦τ⟧
(θ, s ++ s′) ↦ ⟦M⟧((θ, ξ_1(θ, s), . . . , ξ_ℓ(θ, s)), s′)

Proof. We prove the claim by induction on M.
1. For constants r and variables x_i this is obvious; for parameters θ_i it is ensured by Assumption 1.
2. ⟦sample D⟧((), [s]) = s clearly has finite moments because D does.
3. Next, to show ⟦+⟧ ∈ P^(0)_{ι→ι→ι} (multiplication can be checked analogously), let n_1, n_2 ∈ N, f_1 ∈ P^(n_1)_ι and f_2 ∈ P^(n_1+n_2)_ι. By definition f_1 and f_2 are uniformly dominated by some g_1 and g_2, respectively, with finite moments. By Lemma 3, g_1 + g_2 has finite moments too, and
|(⟦+⟧ ⊙ f_1 ⊙ f_2)(θ, s_1 ++ s_2)| ≤ |f_1(θ, s_1)| + |f_2(θ, s_1 ++ s_2)| ≤ g_1(s_1) + g_2(s_1 ++ s_2)
4. The reasoning for − is straightforward, and ⁻¹, exp and log cannot occur.
5. The claim for conditionals follows from Lemma 4.
6. For applications it follows immediately from the inductive hypothesis and the definition. Suppose θ, x_1 : τ_1, . . . , x_ℓ : τ_ℓ | Σ_1 ++ Σ_2 ++ Σ_3 ⊢poly M N : τ because θ, x_1 : τ_1, . . . , x_ℓ : τ_ℓ | Σ_1 ⊢poly M : τ′ • Σ_3 → τ and θ, x_1 : τ_1, . . . , x_ℓ : τ_ℓ | Σ_2 ⊢poly N : τ′. Let n ∈ N and ξ_1 ∈ P^(n)_{τ_1}, . . . , ξ_ℓ ∈ P^(n)_{τ_ℓ}. By the inductive hypothesis,
⟦M⟧ ∗ ⟨ξ_1, . . . , ξ_ℓ⟩ ∈ P^(n+|Σ_1|)_{τ′ • Σ_3 → τ}
⟦N⟧ ∗ ⟨ξ_1, . . . , ξ_ℓ⟩ ∈ P^(n+|Σ_1|+|Σ_2|)_{τ′}
By definition of P^(n+|Σ_1|)_{τ′ • Σ_3 → τ},
(⟦M⟧ ∗ ⟨ξ_1, . . . , ξ_ℓ⟩) ⊙ (⟦N⟧ ∗ ⟨ξ_1, . . . , ξ_ℓ⟩) ∈ P^(n+|Σ_1|+|Σ_2|+|Σ_3|)_τ
and by definition of ⊙ and ∗,
(⟦M⟧ ∗ ⟨ξ_1, . . . , ξ_ℓ⟩) ⊙ (⟦N⟧ ∗ ⟨ξ_1, . . . , ξ_ℓ⟩) = ⟦M N⟧ ∗ ⟨ξ_1, . . . , ξ_ℓ⟩
7. For abstractions suppose θ, x_1 : τ_1, . . . , x_ℓ : τ_ℓ | [] ⊢poly λy. M : τ • Σ → τ′
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' , xℓ : τℓ, y : τ | Σ ⊢poly M : τ ′;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' let n ∈ N and ξ1 ∈ P(n) τ1 , .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' , ξℓ ∈ P(n) τℓ .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' To show the claim, suppose n2 ∈ N and g ∈ P(n+n2) τ .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' By definition of the logical predicate we need to verify (�M� ∗ ⟨ξ1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' , ξℓ⟩) ⊙ g ∈ P(n+n2+|Σ|) τ ′ .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Call �ξi the extension of ξi to Θ × Rn+n2 → R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' By the inductive hypothesis, �M� ∗ ⟨�ξ1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' , �ξℓ, g⟩ ∈ P(n+n2+|Σ|) τ ′ Finally it suffices to observe that (�M� ∗ ⟨ξ1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' , ξℓ⟩) ⊙ g = �M� ∗ ⟨�ξ1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' , �ξℓ, g⟩ C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='3 Supplementary Materials for Section 4.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='3 See Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='4 Supplementary Materials for Section 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='4 We define the logical predicate Q(n) τ on Θ × Rn → �τ� in VectFr: 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' f ∈ Q(n) ι(e) if (a) partial derivatives of loge ◦f up to order 2 are uniformly dominated by a function with finite moments (b) if ι(e) is R(0) >0 then f is dominated by a positive constant function 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' f ∈ P(n) τ1•Σ3→τ2 if for all n2 ∈ N and g ∈ Q(n+n2) τ1 , f ⊙ g ∈ Q(n+n2+|Σ3|) τ2 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Lemma 10 (Fundamental).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' If θ1 : ι(0) 1 , .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' , θm : ι(0) m , x1 : τ1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' , xℓ : τℓ | Σ ⊢SGD M : τ, n ∈ N, ξ1 ∈ Q(n) τ1 , .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' , ξℓ ∈ Q(n) τℓ then �M�η ∗ ⟨ξ1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' , ξℓ⟩ ∈ Q(n+|Σ|) τ .' 
Proof. Similar to Lemma 5, exploiting standard rules for logarithms and partial derivatives.

[Fig. 8: Generic type system with annotations. Part (a) gives the subtyping rules for ι_α ⊑? ι_α′, R^α_{>0} ⊑? R^α′_{>0} and function types τ1 • Σ → τ2; part (b) gives the typing rules for ⊢?, covering subsumption, variables, abstraction, application, parameters (cond. Para), constants (cond. Const), + (cond. Add), · (cond. Mul), − (cond. Min), (−)^{-1} (cond. Inv), exp (cond. Exp), log (cond. Log), conditionals (cond. If) and sample (cond. Sample).]

[Fig. 9: Typing rules for ⊢SGD, instantiating the generic system with the annotations ι(0), ι(e), R(0), R(e)_{>0} and R(1)_{>0}; the rule for sample D additionally requires that D has finite moments.]

D Supplementary Materials for Section 5

Example 12 (Divergence). Suppose M ≡ if ((λz. z³ + θ) sample_N) < 0 then 0 else 1. Let φ_θ(z) := z³ + θ. Note that despite being bijective, φ_θ : R → R is not a diffeomorphism because φ_θ^{-1}(α) = ∛(α − θ) is not differentiable at α = θ. Then

  E_{z∼N}[⟦M⟧(θ, z)] = ∫_{∛(−θ)}^{∞} N(z | 0, 1) dz
  ∂/∂θ E_{z∼N}[⟦M⟧(θ, z)] = (1/3) · N(∛(−θ) | 0, 1) · θ^{−2/3}

Therefore θ ↦ E_{z∼N}[⟦M⟧(θ, z)] is not differentiable at 0. Besides, for θ = 0,

  E_{z∼N}[∂/∂θ ⟦M(θ, z)⟧_η] = E_{z∼N}[σ′_η(z³)] → ∞
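As a purely illustrative aside (not part of the original development), the divergence in Example 12 can be checked numerically: the sketch below, which assumes only NumPy and SciPy, evaluates E_{z∼N}[⟦M⟧(θ, z)] = P(z³ + θ ≥ 0) in closed form via the Gaussian CDF and shows that central finite-difference estimates of its slope blow up as θ → 0, matching the non-differentiability at 0 derived above.

# Illustrative check of Example 12 (not from the paper): for z ~ N(0, 1),
# E[[[M]](theta, z)] = P(z^3 + theta >= 0) = 1 - Phi(cbrt(-theta)); its slope
# diverges as theta -> 0.
import numpy as np
from scipy.stats import norm

def expected_value(theta):
    # [[M]](theta, z) = 1 iff z^3 + theta >= 0, i.e. iff z >= cbrt(-theta)
    return 1.0 - norm.cdf(np.cbrt(-theta))

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    # central finite difference of the expectation around theta = 0
    slope = (expected_value(h) - expected_value(-h)) / (2 * h)
    print(f"h = {h:g}: finite-difference slope ~ {slope:.1f}")
# The slopes grow roughly like h^(-2/3), i.e. without bound, reflecting that
# theta |-> E[[[M]](theta, z)] is not differentiable at theta = 0.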
D.1 Properties of Uniform Almost Uniform Convergence

Let µ(U) := E_{s∼D}[[s ∈ U]], where D has finite moments and φ_θ is a diffeomorphism. We continue assuming compactness of Θ.

Lemma 11. lim_{k∈N} sup_{θ∈Θ} µ(φ_θ^{-1}(R^n \ B_k(0))) = 0

[Fig. 10: Typing rules for ⊢unif, instantiating the generic system with the annotations ι(f,∆) and ι(t,∆); it additionally contains rules for transform sample D by T with T diffeomorphic, and for multiplying a term of type ι(t,∆) by a constant a ∈ N_{>0} or raising it to a power a ∈ N_{>0}.]

Proof. Let s0 ∈ R^n be arbitrary. Define

  δ^(i)_* := sup_{θ∈Θ} |φ^(i)_θ(s0)|
  d^(i)_k := sup_{θ∈Θ} sup_{s∈B_k(s0)} ∥∇φ^(i)_θ(s)∥

Thus, if s ∈ B_k(s0),

  |φ^(i)_θ(s)| ≤ |φ^(i)_θ(s0)| + ⟨∇φ^(i)_θ(ζ), s − s0⟩ ≤ δ^(i)_* + ∥∇φ^(i)_θ(ζ)∥ · ∥s − s0∥ ≤ δ^(i)_* + d^(i)_k · k

Let

  δ^(i)_k := δ^(i)_* + d^(i)_k · k
  δ_k := √n · max_{1≤i≤n} δ^(i)_k

Note that for s ∈ R^n, if ∥φ_θ(s)∥ > δ_k then |φ^(i)_θ(s)| > δ^(i)_k for some 1 ≤ i ≤ n and thus s ∈ R^n \ B_k(s0). As a consequence, φ_θ^{-1}(R^n \ B_{δ_k}(0)) ⊆ R^n \ B_k(s0). Now, it suffices to observe that lim_k µ(R^n \ B_k(s0)) = 0.

Lemma 12. For each k ∈ N there exists c > 0 such that µ(φ_θ^{-1}(U ∩ B_k(0))) ≤ c · Leb(U ∩ B_k(0)).

Proof. Let f : R^n → R be the density of µ.
Then

  µ(φ_θ^{-1}(U ∩ B_k(0))) = ∫_{φ_θ^{-1}(U ∩ B_k(0))} f(s) ds = ∫_{U ∩ B_k(0)} f(φ_θ^{-1}(z)) · |det J_{φ_θ^{-1}}(z)| dz

Lemma 13. Suppose f_η ∘ φ_(−)(−) converges u.a.u. to f ∘ φ_(−)(−) and f ≠ 0 a.e. Then σ_η ∘ f_η ∘ φ_(−)(−) converges u.a.u. to [f(φ_(−)(−)) > 0].

Proof. Let δ_k, ϵ_k and η_k be witnesses for the u.a.u. convergence of f_η ∘ φ_(−)(−) to f ∘ φ_(−)(−). For i ∈ N define V_i := {z ∈ R^n | |f(z)| < 1/i}. For every k ∈ N there exists i_k ∈ N such that Leb(V_{i_k} ∩ B_k(0)) < 1/k. (This is because Leb((−) ∩ B_k(0)) is a finite measure, ∩_{i∈N} V_i ⊆ f^{-1}(0) and f ≠ 0 a.e.) Furthermore, for k ∈ N let K_k ∈ N be such that ϵ_{K_k} < 1/(2 i_k). By Assumption 3 there exists 0 < η′_k < η_{K_k} such that for all 0 < η < η′_k and y > 1/(2 i_k), σ_η(−y) < 1/k and σ_η(y) > 1 − 1/k. We also define

  δ′_k := δ_{K_k} + sup_{θ∈Θ} µ(φ_θ^{-1}(R^n \ B_k(0))) + 1/k
  ϵ′_k := 1/k

By Lemma 11, lim δ′_k = 0 = lim ϵ′_k. Now, suppose θ ∈ Θ and k ∈ N. Define U′ := U_{K_k} ∪ φ_θ^{-1}(V_{i_k}), where U_{K_k} ⊆ R^n is the corresponding set for [f(φ_(−)(−)) > 0], θ and K_k. It holds that

  µ(U′) ≤ µ(U_{K_k}) + µ(φ_θ^{-1}(R^n \ B_k(0))) + µ(φ_θ^{-1}(V_{i_k} ∩ B_k(0)))
        ≤ µ(U_{K_k}) + µ(φ_θ^{-1}(R^n \ B_k(0))) + c · Leb(V_{i_k} ∩ B_k(0))
        ≤ δ′_k

Besides, for 0 < η < η′_k and s ∈ R^n \ U′, |f_η(φ_θ(s)) − f(φ_θ(s))| < 1/(2 i_k) and |f(φ_θ(s))| ≥ 1/i_k, thus |f_η(φ_θ(s))| > 1/(2 i_k). Consequently, |σ_η(f_η(φ_θ(s))) − [f(φ_θ(s)) > 0]| < 1/k.
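As an illustrative aside (not from the paper), the property of σ_η invoked via Assumption 3 above, namely that σ_η(−y) → 0 and σ_η(y) → 1 for every fixed y > 0 as η → 0, is satisfied for instance by the logistic sigmoid with temperature η; the short sketch below demonstrates this numerically, assuming only NumPy.

# Hypothetical instance of a smoothing family sigma_eta with the property used
# in the proof of Lemma 13; the logistic sigmoid with temperature eta is one
# standard choice, not the paper's definition.
import numpy as np

def sigma(eta, y):
    return 1.0 / (1.0 + np.exp(-y / eta))

y = 0.25  # any fixed threshold y > 0
for eta in [1.0, 0.1, 0.01, 0.001]:
    print(f"eta = {eta:g}: sigma(-y) = {sigma(eta, -y):.4f}, sigma(y) = {sigma(eta, y):.4f}")
# As eta -> 0, sigma(-y) -> 0 and sigma(y) -> 1, i.e. sigma_eta approaches the
# step function [(-) > 0] away from 0, which is exactly what the proof exploits.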
Lemma 14. If f : U1 × U2 → R (for open and connected U1, U2 ⊆ R) is continuously differentiable, g_η converges u.a.u. to g : Θ × R^n → U1, h_η converges u.a.u. to h : Θ × R^n → U2, and g, h are also bounded on bounded subsets of R^n, then f ∘ ⟨g_η, h_η⟩ converges u.a.u. to f ∘ ⟨g, h⟩ : Θ × R^n → R.

Proof. First, note that f ∘ ⟨g, h⟩ is bounded on bounded subsets of R^n because f is continuously differentiable and g and h also satisfy this property. Let δ^(i)_k, ϵ^(i)_k and η^(i)_k (i ∈ {1, 2}) be witnesses for the u.a.u. convergence of g_η to g and of h_η to h. W.l.o.g. all ϵ^(i)_k ≤ 1. Observe that for k ∈ N,

  M_k := sup_{(θ,s)∈Θ×B_k(0)} ∥(g(θ, s), h(θ, s))∥ + √2 < ∞

because g(Θ × B_k(0)) and h(Θ × B_k(0)) are bounded by assumption (also Assumption 1), and therefore

  d_k := sup_{(x,y)∈B_{M_k}(0)∩(U1×U2)} ∥∇f(x, y)∥ < ∞

is well-defined. For k ∈ N there exists K_k ≥ k such that √2 · d_k · ϵ^(i)_{K_k} < 1/k for each i. Define

  δ_k := µ(R^n \ B_k(0)) + δ^(1)_{K_k} + δ^(2)_{K_k}
  ϵ_k := 1/k
  η_k := min{η^(1)_{K_k}, η^(2)_{K_k}}

Note that by Lemma 11, lim δ_k = 0 = lim ϵ_k. Let θ ∈ Θ and k ∈ N. Let V := (R^n \ B_k(0)) ∪ V^(1) ∪ V^(2), where V^(1) (and V^(2), respectively) are the sets for g (and h, respectively), θ and K_k. Note that µ(V) ≤ δ_k. Besides, for every 0 < η < η_k and s ∈ R^n \ V, |g_η(θ, s)| ≤ |g(θ, s)| + ϵ^(1)_{K_k} ≤ |g(θ, s)| + 1 (similarly for h). Hence, every point between (g_η(θ, s), h_η(θ, s)) and (g(θ, s), h(θ, s)) is in B_{M_k}(0) ∩ (U1 × U2) and therefore, by the mean value theorem,

  |f(g_η(θ, s), h_η(θ, s)) − f(g(θ, s), h(θ, s))|
    ≤ sup_{ζ∈B_{M_k}(0)∩(U1×U2)} |⟨∇f(ζ), (g_η(θ, s) − g(θ, s), h_η(θ, s) − h(θ, s))⟩|
    ≤ sup_{ζ∈B_{M_k}(0)∩(U1×U2)} ∥∇f(ζ)∥ · ∥(g_η(θ, s) − g(θ, s), h_η(θ, s) − h(θ, s))∥
    < d_k · √2 · max{ϵ^(1)_{K_k}, ϵ^(2)_{K_k}}
    ≤ ϵ_k

using the Cauchy–Schwarz inequality in the second step.

Lemma 6. Let f, f_η : Θ × R^n → R have finite moments. If f_η converges u.a.u. to f, then E_{s∼D}[f_η(θ, s)] converges uniformly to E_{s∼D}[f(θ, s)].

Proof. It suffices to show the uniform convergence of E_{s∼D}[|f_η(θ, s) − f(θ, s)|] to 0. By assumption there exists M > 0 such that E_{s∼D}[|f_η(θ, s) − f(θ, s)|²] < M for all η > 0 and θ ∈ Θ. Let ϵ > 0. By uniform almost uniform convergence of f_η to f there exists k such that δ_k · M, ϵ_k < ϵ/2. Suppose θ ∈ Θ and η < η_k. Let U ⊆ R^n be the witness for almost uniform convergence of f_η, k and θ.
In particular, E_{s∼D}[[s ∈ U]] · M < δ_k · M < ϵ/2, and for every s ∈ R^n \ U, |f_η(θ, s) − f(θ, s)| < ϵ_k < ϵ/2. Therefore

  E_{s∼D}[|f_η(θ, s) − f(θ, s)|]
    ≤ E_{s∼D}[[s ∈ U] · |f_η(θ, s) − f(θ, s)|] + E_{s∼D}[[s ∈ R^n \ U] · |f_η(θ, s) − f(θ, s)|]
    ≤ E_{s∼D}[[s ∈ U]] · E_{s∼D}[|f_η(θ, s) − f(θ, s)|²] + E_{s∼D}[[s ∈ R^n \ U] · ϵ/2]
    ≤ ϵ

D.2 Type Soundness

In order to aggregate the effect of transformations we employ lists (typically denoted by Φ) of diffeomorphisms. A list [φ^(1)_(−), . . . , φ^(n)_(−)] of diffeomorphisms Θ × R → R defines a diffeomorphism φ_(−) : Θ × R^n → R^n,

  (θ, [s1, . . . , sn]) ↦ (φ^(1)_θ(s1), . . . , φ^(n)_θ(sn)),

and we use concatenation notation. We posit the following infinitary logical relation R^Φ_τ between sequences of elements Θ × R^n → ⟦τ⟧ in VectFr (corresponding to the smoothings) and Θ × R^n → ⟦τ⟧ in QBS (corresponding to the measurable standard semantics):

1. (f_η, f) ∈ R^Φ_{ι(f,∆)} if f_η converges u.a.u. to f.
2. (f_η, f) ∈ R^Φ_{ι(t,∆)} if f_η converges u.a.u. to f, f_η = g_η ∘ φ_(−) and f = g ∘ φ_(−), where
   (a) φ is defined by Φ as above,
   (b) g : R^n → R is piecewise analytic and non-constant,
   (c) on each piece g may only depend on (transformed) z_j if s_j ∈ ∆.
3. (f_η, f) ∈ R^Φ_{τ1 • Σ3 → τ2} iff for all Φ2 and (g_η, g) ∈ R^{Φ ++ Φ2}_{τ1}, there exists Φ3 such that |Φ3| = |Σ3| and (f_η ⊙ g_η, f ⊙ g) ∈ R^{Φ ++ Φ2 ++ Φ3}_{τ2}.

Note that Item 2b implies f ≠ 0 a.e. because non-constant analytic functions vanish on negligible sets [28] and diffeomorphisms preserve negligibility.

Lemma 15. If (f_η, f) ∈ R^Φ_{R(t,∆)} and (g_η, g), (h_η, h) ∈ R^Φ_σ then

  ((σ_η ∘ (−f_η)) · g_η + (σ_η ∘ f_η) · h_η, [f(−) < 0] · g + [f(−) ≥ 0] · h) ∈ R^Φ_σ

Proof. We focus on the argument for the case where σ is the annotated base type, in particular ι(t,∆), which is most interesting; the extension to higher orders can be obtained similarly as for Lemma 4. Clearly, Items 2b and 2c are satisfied, and u.a.u. convergence follows from Lemmas 13 and 14.

Intuitively, Φ describes how samples which may have been drawn during execution are transformed. We can add additional samples, which are ignored:

Lemma 16. Let (f_η, f) ∈ R^Φ_τ and Φ′ be a list of diffeomorphisms. Then (g_η, g) ∈ R^{Φ ++ Φ′}_τ, where g_η(θ, s ++ s′) := f_η(θ, s) and g(θ, s ++ s′) := f(θ, s).

Lemma 17 (Fundamental). If θ1 : ι^(f,∅)_1, . . . , θm : ι^(f,∅)_m, x1 : τ1, . . . , xℓ : τℓ | Σ ⊢ M : τ, Φ is a list of diffeomorphisms, and (ξ^(1)_η, ξ^(1)) ∈ R^Φ_{τ1}, . . . , (ξ^(ℓ)_η, ξ^(ℓ)) ∈ R^Φ_{τℓ}, then there exists a list Φ′ of diffeomorphisms such that |Σ| = |Φ′| and

  (⟦M⟧_η ∗ ⟨ξ^(1)_η, . . . , ξ^(ℓ)_η⟩, ⟦M⟧ ∗ ⟨ξ^(1), . . . , ξ^(ℓ)⟩) ∈ R^{Φ ++ Φ′}_τ,

where ∗ is defined as in Lemma 5.

Proof. The claim is proven by induction on the typing judgements.
We focus on the most interesting cases:
1. For conditionals we exploit the inductive hypothesis and Lemma 15.
2. Suppose θ | [s_j ∼ D] ⊢unif transform sample D by T : R(t,{s_j}) because T is diffeomorphic. We define

  g(s_j) := s_j
  φ_θ(s) := ⟦T⟧(θ, [])(s, []) = ⟦T⟧_η(θ, [])(s, [])

and therefore we can easily see that ⟦transform sample D by T⟧_η = g ∘ φ_(−) = ⟦transform sample D by T⟧, and (⟦transform sample D by T⟧_η, ⟦transform sample D by T⟧) ∈ R^{[φ_(−)]}_{R(t,{s_j})} follows immediately.
3. For addition we focus on the interesting case | [] ⊢unif + : ι(t,∆1) → ι(t,∆2) → ι(t,∆1∪∆2), where ∆1 ∩ ∆2 = ∅. Let Φ, Φ1 and Φ2 be lists of diffeomorphisms, (f^(1)_η, f^(1)) ∈ R^{Φ ++ Φ1}_{ι(t,∆1)} and (f^(2)_η, f^(2)) ∈ R^{Φ ++ Φ1 ++ Φ2}_{ι(t,∆2)}. By definition there are decompositions

  f^(1)_η = g^(1)_η ∘ φ^(1)_(−)    f^(1) = g^(1) ∘ φ^(1)_(−)
  f^(2)_η = g^(2)_η ∘ φ^(2)_(−)    f^(2) = g^(2) ∘ φ^(2)_(−)

Let ĝ^(1)_η and ĝ^(1) be the extensions of g^(1)_η and g^(1), respectively, to R^{|Φ|+|Φ1|+|Φ2|} → R. Note that

  ⟦+⟧_η ⊙ f^(1)_η ⊙ f^(2)_η = (ĝ^(1)_η + g^(2)_η) ∘ φ^(2)_θ
  ⟦+⟧ ⊙ f^(1) ⊙ f^(2) = (ĝ^(1) + g^(2)) ∘ φ^(2)_θ

Clearly (using Lemma 16), ĝ^(1) + g^(2) is again piecewise analytic and on each piece depends on (transformed) samples that either g^(1) or g^(2) depends on. Furthermore, on each piece ĝ^(1) + g^(2) is not constant because g^(1) and g^(2) are not constant and depend on different variables.

E Supplementary Materials for Section 7

E.1 Experimental Setup

To generate the ELBO trajectories shown in Fig. 5, we separately took 1000 samples of the ELBO every 100 iterations, taking extra samples to reduce the variance in the graphs presented. The random samples were the same across estimators, which leads to the correlation in noise seen in their trajectories.

Table 2 compares the average variance of the estimators, where the average is taken over a single optimisation trajectory. For each estimator, we took 1000 Monte Carlo samples of the gradient every 100 iterations to compute the variance of the estimator at that iteration; we then computed the average of these variances. Since the gradients are vectors, the variance was measured in two ways: averaging the component-wise variances and taking the variance of the L2 norm. We then separately benchmark each estimator by measuring how many iterations it can complete in a fixed time budget and set the computational cost to be the reciprocal of that. This is then used to compute a work-normalised variance [14,8], taken to be the product of the computational cost and the variance. Intuitively, we divide by the relative time taken since we can reduce the variance by the same factor by running the faster estimator more times.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' The model has a 41-dimensional latent variable and 80 if-statements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' – textmsg [11] models daily text message rates, and the goal is to discover a change in the rate over the 74-day period of data given.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' The non-differentiability arises from the point at which the rate is modelled to change.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' The model has a 3-dimensional latent variable (the two rates and the point at which they change) and 37 if-statements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' – influenza [33] models the US influenza mortality for 1969.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' In each month, the mortality rate depends on the dominant virus strain being of type 1 or type 2, producing a non-differentiablity for each month.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Given the mortality data, the goal is to infer the dominant virus strain in each month.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' The model has a 37-dimensional latent variable and 24 if-statements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Additionally, we introduce the following models: – cheating [11] simulates a differential privacy setting where students taking an exam are surveyed to determine the prevalence of cheating without ex- posing the details for any individual.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Students are tasked to toss a coin, on heads they tell the truth (cheating or not cheating) and on tails they toss a second coin to determine their answer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' The tossing of coins here is a source of discontinuity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' The goal, given the proportion of students who answered yes, is to predict a posterior on the cheating rate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' In this model there are 300 if-statements and a 301-dimensional latent space, although we only optimise over a single dimension with the other 300 being sources of randomness.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' – xornet is a simple multi-layer neural network trained to compute the exclusive- or (XOR) function.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' It has a 2-4-2-1 network architecture with two inputs and one output, and all activation functions being the Heaviside step func- tion which is traditionally infeasible for gradient-based optimisation because of the discontinuity at 0 and a zero gradient everywhere else.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' The model has a 25-dimensional latent space (for all the weights and biases) and 28 if-statements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Note that this model is not applicable to the Lyy18 estimator since the branch conditions are not all affine in the latent space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='3 Analysis of Results The ELBO graph for the temperature model in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 5a shows that the Reparam estimator is biased, converging to a suboptimal value when compared to the Smooth and Lyy18 estimators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' We can also see from the graph and the data in Table 2a that the Score estimator exhibits extremely high variance, and does not converge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' The textmsg and influenza ELBO graphs in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 5b and Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 5c both show all estimators converging towards roughly the same value, with Score exhibiting a larger variance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' The work-normalised variance of the Smooth estimator across both model is the lowest across both variance measures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Fast and Correct Optimisation for Probabilistic Programming via Smoothing 43 For the cheating model in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 5d, we have another visual indicator of the bias of the Reparam gradient.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Here Smooth outperforms again with the lowest work-normalised variance (ignoring that of Reparam since it is biased).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Finally, the xornet model shows the difficulty of training step-function based neural nets.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' The Lyy18 estimator is not applicable here since the boundary integral has no general efficient estimator for non-affine conditionals, which is the case here.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' In Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 5e, the Reparam estimator makes no progress while other estimators manage to converge to close to 0 ELBO, showing that they learn a network that correctly classifies all points.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' In particular, the Smooth estimator converges the quickest.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' To summarise, the results show cases where the Reparam estimator is biased and how the Smooth estimator do not have the same limitation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Where the Lyy18 estimator is defined, they converge to roughly the same objective value;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' and the smoothing approach is generalisable to more complex models such as neural networks with non-linear boundaries.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Our proposed Smooth estimator has consistently significantly lower work-normalised variance, up to 3 orders of magnitude.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' 44 Basim Khajwal, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' Luke Ong, and Dominik Wagner(�) Table 2: Computational cost and work-normalised variances, all given as ratios with respect to the Score estimator (whose data are omitted since they would be a row of 1s).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' We chose η = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='15 for Smooth.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=' (a) temperature Estimator Cost Avg(V (.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=')) V (∥.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='∥2) Smooth 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='62e+00 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='17e-10 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='09e-09 Reparam 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='28e+00 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='48e-08 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='01e-08 Lyy18 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='12e+00 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='22e-06 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='76e-05 (b) textmsg Estimator Cost Avg(V (.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=')) V (∥.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='∥2) Smooth 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='00e+00 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='29e-02 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='79e-02 Reparam 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='18e+00 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='43e-02 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='29e-02 Lyy18 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='00e+00 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='76e-02 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='46e-02 (c) influenza Estimator Cost Avg(V (.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=')) V (∥.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='∥2) Smooth 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='47e+00 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='15e-03 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='58e-03 Reparam 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='17e+00 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='45e-03 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='68e-03 Lyy18 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='30e+00 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='88e-02 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='91e-02 (d) cheating Estimator Cost Avg(V (.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=')) V (∥.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='∥2) Smooth 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='59e+00 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='64e-03 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='94e-03 Reparam 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='66e-01 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='47e-19 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='74e-18 Lyy18 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='51e+00 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='39e-02 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='34e-01 (e) xornet Estimator Cost Avg(V (.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content=')) V (∥.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='∥2) Smooth 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='66e+00 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='57e-03 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='46e-02 Reparam 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='51e-01 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adE1T4oBgHgl3EQfwwXB/content/2301.03415v1.pdf'} +page_content='55e-09 2.' 
diff --git a/btAzT4oBgHgl3EQfLfs5/vector_store/index.pkl b/btAzT4oBgHgl3EQfLfs5/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..3b22293856b032fc7484379c67571da35da04142 --- /dev/null +++ b/btAzT4oBgHgl3EQfLfs5/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:90ed09aef06a2ca9ccd6589fa8f8e18c2b2084ddec2d62722917dd31c871a635 +size 351791 diff --git a/ctE3T4oBgHgl3EQfeAqc/content/2301.04540v1.pdf b/ctE3T4oBgHgl3EQfeAqc/content/2301.04540v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7f63b2ce6cee304719929d8794873bd7aa5f927c --- /dev/null +++ b/ctE3T4oBgHgl3EQfeAqc/content/2301.04540v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7bc9cbed4d9f0613c0ef4cd6903b067dfffabd737d826caa7384caa8be5d5d86 +size 8811008 diff --git a/e9E1T4oBgHgl3EQfegRi/content/tmp_files/2301.03207v1.pdf.txt b/e9E1T4oBgHgl3EQfegRi/content/tmp_files/2301.03207v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..a30fad2df0ef444a50dcb522b0b0060923f3895b --- /dev/null +++ b/e9E1T4oBgHgl3EQfegRi/content/tmp_files/2301.03207v1.pdf.txt @@ -0,0 +1,1900 @@ +Negative Results of Fusing Code and +Documentation for Learning to Accurately Identify +Sensitive Source and Sink Methods +An Application to the Android Framework for Data Leak Detection +Jordan Samhi∗, Maria Kober‡§, Abdoul Kader Kabore∗, Steven Arzt†, Tegawendé F. Bissyandé∗, Jacques Klein∗ +∗ SnT, University of Luxembourg, firstname.lastname@uni.lu +† Fraunhofer Institute for Secure Information Technology, Darmstadt, Hessen, Germany, steven.arzt@sit.fraunhofer.de +‡ mariakober.research@gmx.com +Abstract—Apps on mobile phones manipulate all sorts of data, +including sensitive data, leading to privacy-related concerns. +Recent regulations like the European GDPR provide rules for +the processing of personal and sensitive data, like that no such +data may be leaked without the consent of the user. +Researchers have proposed sophisticated approaches to track +sensitive data within mobile apps, all of which rely on specific +lists of sensitive SOURCE and SINK API methods. The data flow +analysis results greatly depend on these lists' quality. Previous +approaches either used incomplete hand-written lists that quickly +became outdated or relied on machine learning. The latter, +however, leads to numerous false positives, as we show. +This paper introduces CODOC, a tool that aims to revive +the machine-learning approach to precisely identify privacy- +related SOURCE and SINK API methods. In contrast to previous +approaches, CODOC uses deep learning techniques and combines +the source code with the documentation of API methods. Firstly, +we propose novel definitions that clarify the concepts of sensitive +SOURCE and SINK methods. Secondly, based on these definitions, +we build a new ground truth of Android methods representing +sensitive SOURCE, SINK, and NEITHER (i.e., no source or sink) +methods that will be used to train our classifier. +We evaluate CODOC and show that, on our validation dataset, +it achieves a precision, recall, and F1 score of 91% in 10-fold +cross-validation, outperforming the state-of-the-art SUSI when +used on the same dataset.
However, similarly to existing tools, +we show that in the wild, i.e., with unseen data, CODOC performs +poorly and generates many false positive results. Our findings, +together with time-tested results of previous approaches, suggest +that machine-learning models for abstract concepts such as +privacy fail in practice despite good lab results. To encourage +future research, we release all our artifacts to the community. +I. INTRODUCTION +Given the ubiquity of mobile devices nowadays and the +proliferation of apps installed and used by end users, Android +apps’ analysis has become a common topic in software engi- +neering research. Numerous approaches have been proposed +to check security properties, detect malicious code, and detect +program bugs. These approaches leverage techniques such as +dynamic analysis [56], [40], [14], static analysis [31], [48], +[23], [58], [38], [45], [4], [20], [46], or both (i.e., hybrid +analyses) [8], [7]. +§Most of the work was completed while Maria Kober was present at +Fraunhofer SIT +The previously mentioned analysis approaches usually con- +sist of several specific techniques that are applied to apps. One +of them is taint analysis, which checks whether data obtained +from a given SOURCE method (or any kind of data derived +from it, e.g., after some computation) is passed to a SINK +method. In the context of Android-based privacy research, +a SOURCE is an API method that provides privacy-sensitive +data. A SINK is an API method that writes data to the outside, +e.g., via the network. +The need for sensitive SOURCE and SINK lists is ubiquitous +in taint analysis. Indeed, the literature is full of approaches and +techniques that set up privacy strategies building on sensitive +API methods that allow retrieving sensitive data and/or API +methods allowing to expose this kind of data. The range of +approaches relying on sensitive SOURCE and SINK methods +that could benefit from a complete and precise list of SOURCE +and SINK methods is large: sensitive data leak detection [31], +[19], [14], [4], [25], [47], [49], Android component leak +detection [27], dynamic policy enforcement [41], malware +detection [67], [51], [43], hidden behavior detection [39], +[66], inter-app communication analysis [10], [29], component +hijacking vulnerabilities detection [34], [65], the uncovering +of run-time sensitive values [42], as well as GDPR compliance +checks [17], [16]. +In its public-facing API – i.e., methods contained in the +official online documentation for Android –, Android 111 +contains more than 34 000 methods. However, it has been +shown [30] that developers have access to many more methods +inside the Android framework that are not directly available +in the public-facing API (e.g., hidden to developers using the +annotation ”@hide”). In Android 11, for example, more than +210 000 methods are available to developers in total, e.g., +through the use of reflection2. These large numbers render +a manual classification of sources and sinks infeasible. Fur- +thermore, additional methods are added in every new release. +New frameworks, such as Android Auto or Chromecast, bring +1API level 30, which was released in September 2020 +2Note that reflection can also be used to make private methods accessible +since Android does not provide a Java security manager. +arXiv:2301.03207v1 [cs.CR] 9 Jan 2023 + +in new methods, and thus potentially new sources and sinks as +well. Therefore, automatic approaches for identifying sources +and sinks are needed. 
Several approaches have been proposed +in the literature to solve this problem [19], [36], [14]. SUSI [3], +which is based on supervised machine-learning, is currently +one of the most popular approaches and the state-of-the-art [4], +[60], [39]. To the best of our knowledge, it is also the most +comprehensive and state-of-the-art approach for deriving lists +of sources and sinks from frameworks like Android. +However, the sources list generated by SUSI is neither +precise, nor specific for privacy analysis. As we show in +Section IV, SUSI classifies methods as sources, even though +they are clearly irrelevant for privacy analysis. Secondly, SUSI +relies on technical categories (network information, unique +identifiers, etc.) to structure its output. Selecting all categories +that could be relevant for privacy analysis leads to a large +number of irrelevant APIs being selected as well. These +observations are not surprising because SUSI makes no upfront +assumptions on the sensitivity of the sources yielded. +Further, some methods are misclassified entirely by SUSI, +e.g., the method getScrollIndicatorBounds of the +android.view.ViewGroup class is categorized as a +source in the ”SMS MMS” category. As we show in Sec- +tion VI-E, these issues lead to a profusion of false positives +for any data flow tracker that relies on SUSI’s source/sink lists. +Several works in the literature [57], [24], [35] have come to +similar conclusions. Luo et al. show that SUSI’s sources list +leads to a false positive rate of almost 80% while trying to +detect sensitive data leaks [35]. Further, in accordance with our +own findings, they state the following regarding the sources +yielded by SUSI: +the root cause of these false positives is that their +sources [...] are actually inappropriate, i.e., they do +not return sensitive data. +We note that SUSI’s evaluation in the original paper [3] does +not highlight these issues, and that SUSI performs well on its +training data and select examples. However, in the real-world, +the false positive rate is much higher. +To the best of our knowledge, no other approach has +tackled the problem of automatically identifying sources and +sinks in Android since the release of SUSI in 2014. In other +words, SUSI is still the most relevant approach despite its +shortcomings. Since the problem remains highly relevant and +unsolved regarding sensitive data, we attempted to improve +the SUSI approach. The SUSI features for the supervised +machine learning rely entirely on static code analysis on the +Android bytecode implementation (the Android platform JAR +file). SUSI considers individual properties as features, such +as method names, parameter types, or method/class modifiers +which do not capture the entire semantic of the code. +Our approach, that we named CODOC, on the other hand, +captures the entire semantics of a given method by taking +its’ complete source code into account. Additionally, CODOC +also considers the JavaDoc documentation of the Android +API. We observed that the Android documentation, which is +fairly extensive for most classes and methods, provides enough +guidance to the developer to correctly use the API. We then +assumed that analyzing this documentation would also help +in more precisely discovering sensitive SOURCE and SINK +methods in the Android API. +Lastly, CODOC is an attempt to incorporate the ground- +breaking advances that have been made in text and code +embedding [2], [44] and, thus, machine learning since 2014 +when SUSI was published. 
+While our evaluation shows that CODOC outperforms SUSI +in the lab, CODOC’s real-world performance, likewise, is +still lacking. Manually inspecting the SOURCE and SINK +methods identified in a set of previously unseen API methods +from the Android framework reveals many false positives. +We, therefore, argue that even adding documentation and +improving the machine learning techniques do not solve the +problem of accurately identifying privacy-related sources and +sinks in Android. Even with more and better training data and +careful optimization of the training, the overall goal remains +elusive. We argue that the semantic gap between an individual +API method (code or documentation) and an abstract concept +such as user privacy are unlikely to be closed by supervised +machine learning. Instead, novel approaches are necessary. +Further, we call for a more careful evaluation of machine +learning results. In the lab studies based on 10-fold cross- +validation, SUSI is sufficient, and CODOC is even better. Still, +on real-world data, i.e., previously unseen methods from the +Android framework, both fail to meet expectations. +Overall, we make the following contributions: +• we propose CODOC: a novel, fully-automated, deep- +learning-based approach to detect sensitive SOURCE and +SINK methods in the Android framework based on API +method source code and documentation; +• we release a new ground-truth of methods labeled as +sensitive SOURCE, SINK, or NEITHER; +• we evaluate CODOC and show that it outperforms the +state-of-the-art SUSI on a small evaluation dataset, reach- +ing a precision, recall, and F1 score of 91% in the lab; +• we apply CODOC on public methods from the Android +framework and show that, likewise SUSI, it yields a high +rate of false positives; +• We release our open-source prototype CODOC to the +community and all the artifacts used in our study at: +https://github.com/JordanSamhi/CoDoC +II. BACKGROUND +In this section, we provide the reader with context for the +work presented in this paper. +A. Taint Analysis +Taint analysis is a particular dataflow analysis that tracks +data through the control flow graph of a program. If a variable +V is assigned the return value of a specific function, like a +SOURCE method, it becomes tainted. If a tainted value is as- +signed to some other variable W, this variable W gets tainted +as well. The same applies if W is assigned the result of some +operation on an already tainted variable V . In other words: +2 + +the taint is propagated. When a tainted variable is passed to a +SINK function as a parameter, a leak is reported, as the value +derived from the SOURCE reached a SINK. In the case of data +leak detection in the context of privacy analysis, an example +of a SOURCE in the Android Framework is getImei() and +an example of a SINK is sendTextMessage(). +B. Text Embedding +Our work relies on methods’ source code and documenta- +tion to train a machine learning model and infer SOURCE +and SINK methods. In order to be processed by machine +learning algorithms, these textual representations need to be +transformed (embedded) into numerical representations, i.e., +numerical vectors. In this section, we briefly describe two +state-of-the-art techniques for this transformation, namely +Sentence-BERT [44] and Code2Vec [2]. +1) Sentence-BERT: Method documentation embedding re- +quires +efficient +natural +language +processing +techniques. 
+SENTENCE-BERT is a modified and more computationally ef- +ficient version of the well-known BERT neural network [13]. +It relies on siamese and triplet network structures to obtain +meaningful sentence embeddings. +2) CODE2VEC: Similarly to natural language embedding, +making predictions from source code requires code embedding +to have a homogeneous representation of different source code +inputs. CODE2VEC embeds Java methods to predict method +names. Methods are transformed into ASTs (Abstract Syntax +Trees) to construct path representations between different leaf +nodes. Then, using the attention mechanism [5], bag of path- +contexts are aggregated into a single vector that represents the +method body. +III. DEFINITIONS +In the literature, there is no consensus on the definitions +of sensitive SOURCE and SINK methods, which leads to a +lack of clarity in papers related to taint analysis. As described +in Section II-A, taint analysis tracks the flow of data from +a given SOURCE to a given SINK, no matter the type of +data. However, in most of the papers, the authors mix sensitive +SOURCE with SOURCE, which makes taint analysis appear as +tracking sensitive data, which is not always the case. Tracking +sensitive data is an instance of the more general task of +tracking data. +To cope with this problem and provide state-of-the-art +approaches that aim at tracking sensitive data with clear terms, +we propose the following definitions: +Definition 1 (Data). Any value or reference to a value. +Definition 2 (Composite Data). Any structure composed of +multiple Data (e.g., an object). +Definition 3 (Sensitive Data). Any data or composite data that +holds private value(s) that: +• can identify users, i.e., usernames and personally identi- +fying data like email address or name +• can identify the device, i.e., unique device identifiers +• are related data to personal information (of the phone +user), e.g., photographs and files, phone calls, SMS +• represent data owned by users holding information about +other users, e.g., contacts and phone lists, emails, etc. +• represent environment and sensor information, including +geolocation data, camera, and microphone. +Definition 4 (Sensitive SOURCE). A function that returns a +Sensitive Data. Note that functions that return constant values +are never sensitive sources. +Definition 5 (SINK). A function that sends out one or more +Data, Composite Data, or values derived from a Data or a +Composite Data from the application memory space. There is +no notion of sensitivity for sinks. The nature of the data (more +precisely: the SOURCE from which the data was originally +obtained) passed to the sink determines whether a leak of +sensitive data occurs. +IV. MOTIVATION +Tracking sensitive data within Android apps is of high +interest since it is used in numerous security-related ap- +proaches [62], [50], [61], [4], [31] and part of legal compli- +ance, e.g., according to the GDPR. Therefore, there is a need +to provide analysts and researchers with sources and sinks lists +that precisely enclose sensitive data (cf. Definition 6). +Android API: As briefly explained in Section I, the number +of public and documented API methods intended for use +by Android developers amounts to tens of thousands and +increases with every new API version (see Figure 1). 
Still, +even identifying all sources and sinks in these documented +public API methods is not sufficient, as developers also call +methods not intended for direct use, yet present in the Android +framework [30]. With tens of thousands of public API meth- +ods and hundreds of thousands of overall methods, manual +classification for every new release is obviously infeasible. +Therefore, automated solutions are needed to produce sen- +sitive sources and sinks lists for every new release. In the +following, we explain why the existing state of the art is +inappropriate for this task. +0 +5 +10 +15 +20 +25 +30 +API level +0 +5000 +10000 +15000 +20000 +25000 +30000 +35000 +40000 +Number of methods in the Android API +All methods +Public methods +Fig. 1: Number of methods in the public-facing Android API +by API level +Problem with the existing state of the art: The state-of- +the-art approach SUSI [3] uses machine learning to automati- +cally classify Android SOURCE and SINK methods. However, +it has been shown several times [57], [24], [35] that SUSI’s +lists are inappropriate since it is not specific to a particular +3 + +14 +16 +18 +20 +22 +24 +26 +28 +30 +API level +0 +50000 +100000 +150000 +200000 +250000 +300000 +350000 +Number of methods in the Android framework +All methods +Public methods +Fig. 2: Number of methods in the entire Android framework +code by API level +1 +public void method() { +2 +int p = 7; +3 +int q = 4; +4 +Rational r = new Rational(p, q); +5 +int value = r.intValue(); +6 +SmsManager s = SmsManager.getDefault(); +7 +s.sendTextMessage("0", null, value, null, null); +8 +} +Listing 1: Example of SUSI non-sensitive data leak +analysis like tracking sensitive data. Therefore it produces +many false positives and forces analysts to manually select +appropriate SOURCE and SINK methods. +Consider the example in Listing 1. In line 4, a Rational +object is created from two integers. In line 5, the integer +representation of the Rational is retrieved using the method +intValue() and stored in variable value. Eventually, this +value is sent out of the device via SMS. Since SUSI wrongly +considers the method intValue() as a SOURCE and the +method sendTextMessage() as a SINK, a taint analysis +based on SUSI will report a leak. This leak, however, is +irrelevant in the context of security and privacy as the SOURCE +is not sensitive. Thus, analysts will consider it a false positive +when aiming to detect sensitive data leaks in Android apps. +We aim to improve upon the state of the art by producing +a more adequate ground truth to train an improved machine +learning model based on method documentation and source +code, unexplored until now. +V. APPROACH +In this paper, we aim to automatically identify sensitive +SOURCE and SINK methods in the Android framework among +all API methods available to developers (i.e., > 210 000 in +Android 11) using supervised machine learning. Figure 3 +shows an overview of our approach. +Similar to SUSI, we build our training data by manually +labeling Android methods. We consider a method as a sensitive +SOURCE if it matches Definition 43, a SINK if it matches +Definition 5, and NEITHER otherwise. In contrast to SUSI, +our approach then uses features extracted from the code as +well as the documentation to train a machine-learning model +on our ground truth. 
This is a key difference to SUSI, which +only uses distinct properties extracted from parts of the code, +3in the following, when we refer to SOURCE, we mean ”Sensitive” SOURCE +such as method names and parameters or class and method +modifiers. Further, SUSI completely disregards the method’s +documentation, which CODOC includes. Moreover, as we rely +on the entire source code of a method, we are able to capture +the entire semantic of it. +We finally use our generated model to predict new sensitive +SOURCE and SINK methods from the Android framework +methods. We explain the individual steps in the following +sections. First, we give details about our manual labeling of +Android methods in Section V-A. Then, in Section V-B, we +explain what features were chosen for training our models. +Lastly, in Section V-C, we explicit on what machine learning +models our approach builds upon. +A. Manual Labeling +Since our approach relies on supervised machine learning +algorithms, labeled data is needed to train our model. However, +manual labeling is a challenging and time-consuming task, +especially if we randomly chose methods from the Android +framework to label one by one. Further, finding a SOURCE or +a SINK through random picking is highly unlikely as most +methods in the Android framework are neither. Therefore, we +opted for a better strategy divided into three phases: +Phase 1: The authors first constituted a golden dataset based +on well-known methods that return sensitive data described +in the literature [18], [4], [14], [39], [15], [64]. These meth- +ods span across classes such as: TelephonyManager, +AccountManager, LocationManager, SmsMessage, +or SensorManager. This step yielded an initial set of 39 +SOURCE and 35 SINK methods. +Phase 2: As explained in Section IV, SUSI can generate +lists of sources and sinks (from its own definition, i.e., not +restricted to sensitive methods). We applied SUSI on Android +API version 30 to generate additional pre-selected input that +we hand-labeled as training data for CODOC. As described +previously, randomly picking methods from the Android API +would mostly lead to methods that are neither sources nor +sinks. Therefore, we opted to focus hand-labeling on methods +that are more likely to be relevant. We concatenated the list of +sources and the list of sinks computed by SUSI to obtain a full +list of methods M that SUSI considers relevant. Note that we +did not manually post-process methods that SUSI classified as +neither a source or a sink. Two of the authors then applied +manual post-processing as follows. +One author started from the top of each list manually +classify each method in the respective list. The other author +started from the bottom of each list with same task. For +each method m ∈ M, the authors independently read the +documentation and the source code to be able to classify it +as a SOURCE, a SINK, or NEITHER based on the definitions +described in Section III. +This step leads to three lists per author: a x SOURCE list; +a y SINK list; and z a NEITHER list. +Phase 3: The third phase aimed at calculating the inter-rater +agreement between the data labeled by both authors in phase +2. 
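Cohen's kappa, which the following paragraph uses to quantify this inter-rater agreement, can be computed directly from the two annotators' label lists. The sketch below uses scikit-learn with hypothetical annotations for illustration only; the actual labels are those in the released ground truth.

from sklearn.metrics import cohen_kappa_score

# Hypothetical labels assigned to the same five methods by the two raters.
rater_1 = ["source", "sink", "neither", "neither", "source"]
rater_2 = ["source", "sink", "neither", "source", "source"]

# Kappa corrects raw agreement for agreement expected by chance:
# 1.0 means perfect agreement, 0.0 means chance-level agreement.
print(cohen_kappa_score(rater_1, rater_2))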
We use inter-author agreement as a quality measure for our hand-labeled dataset, which is later used for training the CODOC classifier. To do so, both authors alternately verified each other's results and noted the agreement. Eventually, a Cohen's Kappa coefficient [11] was computed to evaluate the level of agreement. Due to the clear definitions given in Section III, both authors reached a perfect agreement level of 1. In phase 2, both authors classified 192 methods as sources, resulting in a total of 231 SOURCE methods from phase 1 and phase 2; 95 methods as sinks, resulting in a total of 130 SINK methods; as well as 654 NEITHER methods. In total, we have a set of 1015 API methods for model training.
Fig. 3: Overview of the CODOC approach (pipeline stages: public methods extraction, documented methods filter, documentation and source code extraction, documentation and source code embedding, training on the reference set of sources, sinks, and neither, and classification of new method vectors as source, sink, or neither).
B. Data collection and representation
Our approach relies on two different types of input: (i) the documentation of a method; and (ii) the source code of a method. This section explains how these data were gathered and transformed into numerical value vectors.
Data Collection: As an open-source project, the Android source code is directly available on the Internet (https://android.googlesource.com/). We downloaded and parsed it using JAVAPARSER [55] to extract public methods that are documented and implemented (i.e., concrete methods). A method is considered documented if either (i) the method itself is documented, or (ii) it is documented in one of its parent classes/interfaces. For each method in the so-derived dataset, we extracted (i) its source code and (ii) its documentation. Eventually, our dataset consists of 46 034 methods from the Android framework.
Source Code Representation: Source code must be preprocessed as a piece of textual information before it can serve as input for machine-learning algorithms. In our work, we rely on CODE2VEC [2] (see Section II). Since the samples have different sizes, they must be transformed into fixed-size numerical vectors. CODE2VEC relies on a neural network that needs to be trained in order to generate source code vectors. As the source code of the Android framework is Java code, and the original pre-trained model available in the CODE2VEC project repository (https://github.com/tech-srl/code2vec) has been trained on Java source code as well, we could have used this model. However, since Android code contains platform-specific semantic tokens that cannot be found in regular Java source code (e.g., Activity, BroadcastReceiver, etc.), we decided against this approach. Instead, we trained the model with the source code from the Android framework to ensure that our model properly captures the platform-specific tokens prevalent in Android.
Fig. 4: Distribution of the number of words in the documentation collected per Android method (word counts range from 0 to about 1000).
After training the CODE2VEC model with Android framework data, we fed the model with the 46 034 Android methods previously extracted to generate their numerical value vectors. Eventually, 46 034 vectors of size 384 were generated.
Documentation representation: In the same way as the source code, the documentation has to be embedded into fixed-size numerical value vectors to be fed into machine learning algorithms. We relied on SENTENCE-BERT [44] to generate those vectors. We leave experimentation with other models such as BERT [13] or RoBERTa [32] to future work.
We used SENTENCE-BERT with the "paraphrase-mpnet-base-v2" pre-trained model, which is the one achieving the best performance (https://www.sbert.net/docs/pretrained_models.html) at the time of writing. Eventually, the documentation of all 46 034 previously gathered methods was converted into 768-value-long vectors. The distribution of the number of words in the documentation collected is available in Figure 4. Note that, on average, the number of words in the documentation collected is 56, and the median is 32. In future work, we will investigate the effect of text summarization, i.e., learning from more compact texts.
C. Deep Learning Architecture
Our deep learning model architecture is straightforward and aims at combining documentation and source code vectors into a single representation. The overall architecture is shown in Figure 5. Since we are working on two different inputs (i.e., the documentation and the source code) of two different sizes, we decided to rely on two parallel and identical sub-neural networks and to combine their outputs into a single vector that, in turn, is used for a classification task. Each of these two parallel networks is built using a stack of three dense [22] layers with ReLU [6] as the activation function. They are used for extracting fixed-size features from the two inputs. Thus, the first and the second sub-networks take as input the 768-long documentation vector and the 384-long source code vector, respectively, and provide as output two vectors of size 128 each. These outputs are combined using a concatenation layer that produces a single 256-long vector. We use this vector for a classification task carried out in 3 additional dense layers. A softmax [52] output is used in the last dense layer in order to perform a multi-class classification, resulting in a classification as SOURCE, SINK, or NEITHER. An illustrative code sketch of this two-branch design is given below, after the list of research questions.
Fig. 5: CODOC neural network architecture (the documentation vector and the source code vector each pass through parallel stacks of dense layers, are merged by a concatenation layer, and further dense layers with a softmax output yield Psource, Psink, and Pneither).
VI. EVALUATION
To evaluate CODOC, we address the following research questions:
RQ1: Do documentation and source code features provide complementary input for classification?
RQ2: How does CODOC perform in 10-fold cross-validation and how does it compare with SUSI?
RQ3: Can CODOC make better predictions than SUSI in the wild, i.e., with unseen data?
RQ4: How does CODOC perform on previously unseen methods?
RQ5: How do the source and sink lists created by CODOC and by SUSI compare in data flow analysis?
A. RQ1: Features complementarity
Objective: In this section, we aim to evaluate to what extent both the source code and the documentation are needed to predict sensitive SOURCE and SINK methods. Intuitively, the source code and the documentation should contribute complementary pieces of semantic information.
+Experimental Setup: To experimentally evaluate this hypoth- +esis, we run and compare 4 configurations of CODOC: +1) Binary classification with SOURCE and ¬SOURCE +a) Only documentation +b) Only source code +2) Binary classification with SINK and ¬SINK +a) Only documentation +b) Only source code +Note that we tested these configurations on multiple clas- +sifiers, i.e., we exchanged the dense classification layer in +Figure 5 with other classifiers. We did so to ensure that RQ1 +is answered in-depth and not dependent on a single clas- +sification approach. For each binary classification described +above, we performed a stratified 10-fold cross-validation [26], +[54], and for each iteration we retained the methods that +were mis-classified considering only the documentation and +well-classified using only the source code, and vice-versa. +The results using only code resp. only documentation are +available in Table I. We notice that predictions using only +documentation yield better results both for source and sink +predictions. +25 +33 +110 +Source Code +Documentation +Number of sources correctly predicted (KNN) +(a) Sources prediction +4 +14 +64 +Source Code +Documentation +Number of sinks correctly predicted (KNN) +(b) Sinks prediction +Fig. 6: Number of sources and sinks correctly predicted with +the KNN algorithm. +Further, we computed the overall number of methods that +were mis-classified with the documentation but well-classified +with the source code, and vice-versa for sources and sinks +and for each classifier. Figures 6a and 6b illustrate the results +of this experiment for the KNN algorithm. There we can see +that classification with the source code can correctly predict +25 sources that classification with documentation cannot, and +33 vice-versa. In the same way, classification with the source +code can correctly predict 4 sinks that classification with +documentation cannot, and 14 vice-versa. This shows that, +although classification with documentation is better, it misses +some samples that classification with source code does not. +This shows the need to use both features for our work to +improve our model capabilities. +RQ1 answer: Both the documentation and the code inde- +pendently bring additional information with regard to the +other, i.e., several sources/sinks could only be found with +the documentation, or only with code features. +B. RQ2: 10-fold cross validation on CODOC’s model and +comparison with the state-of-the-art SUSI +Objective: In this section, we investigate whether our ap- +proach can better identify sensitive SOURCE and SINK meth- +ods than the state-of-the-art SUSI approach. We use CODOC +with source code and documentation as input. 
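Both RQ1 and RQ2 rest on stratified 10-fold cross-validation and per-class precision, recall, and F1. The sketch below shows one way to compute these scores with scikit-learn; the pre-computed embedding files, the label encoding, and the use of a K-nearest-neighbours classifier over concatenated features as a stand-in for the full two-branch network are assumptions made purely for illustration (KNN is one of the classifiers tested in RQ1).

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

# Hypothetical pre-computed embeddings and labels (0 = source, 1 = sink, 2 = neither).
X_doc = np.load("doc_vectors.npy")    # shape (n, 768)
X_code = np.load("code_vectors.npy")  # shape (n, 384)
y = np.load("labels.npy")
X = np.hstack([X_doc, X_code])        # simple concatenation as a stand-in fusion

all_true, all_pred = [], []
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for train_idx, test_idx in skf.split(X, y):
    clf = KNeighborsClassifier().fit(X[train_idx], y[train_idx])
    all_pred.extend(clf.predict(X[test_idx]))
    all_true.extend(y[test_idx])

# Per-class and averaged precision, recall, and F1 aggregated over the ten folds.
print(classification_report(all_true, all_pred,
                            target_names=["source", "sink", "neither"]))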
Note that classi- +fying SOURCE and SINK methods is not a binary classification +6 + +SOURCE prediction +SINK prediction +Code +Documentation +Code +Documentation +A +P +R +F +K +A +P +R +F +K +A +P +R +F +K +A +P +R +F +K +XGB +0.86 +0.74 +0.62 +0.67 +0.58 +0.91 +0.87 +0.72 +0.78 +0.70 +0.94 +0.91 +0.58 +0.70 +0.66 +0.95 +0.91 +0.67 +0.76 +0.68 +SVC +0.82 +0.61 +0.64 +0.62 +0.49 +0.90 +0.75 +0.86 +0.80 +0.72 +0.88 +0.55 +0.69 +0.60 +0.54 +0.94 +0.75 +0.83 +0.78 +0.73 +DT +0.80 +0.56 +0.60 +0.57 +0.49 +0.85 +0.66 +0.66 +0.66 +0.52 +0.88 +0.53 +0.54 +0.53 +0.46 +0.87 +0.50 +0.50 +0.48 +0.50 +RF +0.85 +0.74 +0.56 +0.62 +0.58 +0.88 +0.92 +0.55 +0.68 +0.58 +0.93 +0.94 +0.51 +0.65 +0.57 +0.92 +0.93 +0.46 +0.60 +0.51 +GNB +0.81 +0.55 +0.82 +0.65 +0.53 +0.87 +0.66 +0.85 +0.74 +0.65 +0.82 +0.40 +0.77 +0.52 +0.42 +0.88 +0.54 +0.76 +0.62 +0.56 +SGD +0.82 +0.61 +0.57 +0.58 +0.50 +0.91 +0.83 +0.79 +0.80 +0.70 +0.90 +0.60 +0.65 +0.61 +0.52 +0.93 +0.75 +0.74 +0.73 +0.70 +KNN +0.85 +0.73 +0.56 +0.63 +0.56 +0.89 +0.82 +0.68 +0.74 +0.67 +0.91 +0.66 +0.64 +0.64 +0.60 +0.92 +0.67 +0.82 +0.73 +0.69 +BDT1 +0.85 +0.74 +0.50 +0.59 +0.51 +0.88 +0.86 +0.58 +0.68 +0.58 +0.92 +0.84 +0.49 +0.61 +0.54 +0.91 +0.73 +0.45 +0.55 +0.49 +BDT2 +0.85 +0.74 +0.50 +0.59 +0.51 +0.88 +0.86 +0.58 +0.68 +0.58 +0.92 +0.84 +0.49 +0.61 +0.54 +0.91 +0.73 +0.45 +0.55 +0.49 +ET +0.86 +0.75 +0.56 +0.63 +0.58 +0.87 +0.89 +0.51 +0.64 +0.54 +0.94 +0.95 +0.60 +0.73 +0.67 +0.93 +0.93 +0.49 +0.62 +0.57 +ADA1 +0.84 +0.66 +0.62 +0.63 +0.50 +0.89 +0.78 +0.70 +0.73 +0.70 +0.91 +0.74 +0.58 +0.63 +0.55 +0.94 +0.78 +0.73 +0.74 +0.70 +ADA2 +0.84 +0.66 +0.62 +0.63 +0.50 +0.89 +0.78 +0.70 +0.73 +0.70 +0.91 +0.74 +0.58 +0.63 +0.55 +0.94 +0.78 +0.73 +0.74 +0.70 +GB +0.86 +0.72 +0.62 +0.66 +0.57 +0.90 +0.83 +0.68 +0.75 +0.69 +0.93 +0.90 +0.52 +0.64 +0.59 +0.92 +0.85 +0.53 +0.64 +0.59 +NN +0.84 +0.69 +0.56 +0.60 +0.50 +0.90 +0.86 +0.68 +0.75 +0.64 +0.90 +0.77 +0.34 +0.45 +0.33 +0.87 +0.00 +0.00 +0.00 +0.00 +TABLE I: Results of binary classification on multiple classifiers with code and documentation. (A = Accuracy, P = Precision, +R = Recall, F = F1 score, K = Kappa score) +Precision +Recall +F1 score +SOURCE +0.82 +0.88 +0.85 +SINK +0.93 +0.87 +0.90 +NEITHER +0.95 +0.93 +0.94 +Macro Average +0.90 +0.89 +0.89 +Weighted Average +0.91 +0.91 +0.91 +TABLE II: CODOC performances +Precision +Recall +F1 score +SOURCE +0.83 +0.85 +0.84 +SINK +0.86 +0.71 +0.78 +NEITHER +0.89 +0.91 +0.90 +Macro Average +0.86 +0.82 +0.84 +Weighted Average +0.87 +0.87 +0.87 +TABLE III: SUSI performances with our ground-truth +problem since a method can either be: x a SOURCE; y +a SINK; or z NEITHER. Hence, our model relies on a +multiclass classification. +CODOC performances: To evaluate our approach, we apply +a stratified 10-fold cross-validation and compute the following +metrics: x precision ( +|T P | +|T P |+|F P |); y recall ( +|T P | +|T P |+|F N|); and +z F1 score (2 × precision×recall +precision+recall), with TP = True Positive, +FP = False Positive, and FN = False Negative. +In Table II, we present the results of CODOC. Note that we +retain the activation function that yielded the best results for +our neural network, i.e., ReLU [1] (we tested the following +activation functions: ReLU, sigmoid, tangent, and several +combinations of these functions). +Comparison with SUSI: SUSI is not intended to generate +privacy-sensitive SOURCE and SINK methods, as explained +in Section I. 
Comparison with SUSI: SUSI is not intended to generate privacy-sensitive SOURCE and SINK methods, as explained in Section I. A direct comparison between our approach CODOC and the pre-trained SUSI would therefore be unfair [12]. We thus trained SUSI on our own ground truth in order to evaluate it and compare it with CODOC.
Table III shows the results of a 10-fold cross-validation. First, we notice that both CODOC and SUSI independently yield better results for the NEITHER class than for the SOURCE and SINK classes. This is expected since the training data set is highly imbalanced towards NEITHER methods: most of the Android API is not a source or sink. While this imbalance may be relieved using over-/undersampling techniques or class weights, it does not affect the comparative performance of the tools.

TABLE IV: Number of SOURCE and SINK methods in the lists generated by CODOC and SUSI

Lists            # SOURCE   # SINK
CODOC            15 105     1061
SUSI ORIGINAL    25 369     5913
SUSI NEW         12 082     1010

Second, CODOC consistently yields slightly better performance than SUSI when classifying sensitive SOURCE and SINK methods: CODOC outperforms SUSI by 4 percentage points in precision, recall, and F1 score.
RQ2 answer: Our approach CODOC achieves an F1 score of 91% in identifying SOURCE and SINK methods in the Android framework. Furthermore, on the same training set, CODOC achieves a slightly better score than the state-of-the-art SOURCE and SINK classifier SUSI.
C. RQ3: CODOC and SUSI comparison in the wild
Objective: This research question aims to compare which methods are identified as sensitive SOURCE or SINK methods by CODOC, and which ones by SUSI. To do so, we check the (non-)overlap to judge the performance of both approaches outside of the 10-fold cross-validation. We apply SUSI and CODOC on the full Android SDK, which contains mostly non-labeled methods.
Experimental setup: To compare CODOC against SUSI, we generate the following lists of SOURCE and SINK methods (sizes shown in Table IV):
1) CODOC: the lists generated by CODOC on Android API level 30 (i.e., Android 11), trained on our ground truth.
2) SUSI ORIGINAL: the lists generated by SUSI on a more recent version of Android (i.e., API level 30), trained on SUSI's original ground truth.
3) SUSI NEW: the lists generated by SUSI on a more recent version of Android (i.e., API level 30), trained on our ground truth.
We first analyze the overlap between SUSI's and CODOC's lists in Figure 7. We notice that for both SUSI ORIGINAL and SUSI NEW, and for both SOURCE and SINK methods, CODOC and SUSI do not have much in common.
We manually curated six data sets for further inspection:
1) We select 100 non-sensitive SOURCE methods classified by SUSI ORIGINAL and compare them with CODOC's predictions.
2) We select 100 misclassified SINK methods predicted by SUSI ORIGINAL and compare them with CODOC's predictions.
3) We select 100 non-sensitive SOURCE methods classified by SUSI NEW and compare them with CODOC's predictions.
4) We select 100 misclassified SINK methods predicted by SUSI NEW and compare them with CODOC's predictions.
5) We select 100 misclassified SOURCE methods predicted by CODOC and compare them with both SUSI ORIGINAL and SUSI NEW.
6) We select 100 misclassified SINK methods predicted by CODOC and compare them with both SUSI ORIGINAL and SUSI NEW.
Methods selection: Regarding the sources, the authors randomly browsed the SOURCE methods yielded by SUSI ORIGINAL, SUSI NEW, and CODOC, and consulted the source code and the documentation to determine whether the methods are actually sensitive. Methods that did not return sensitive values were retained for this research question. The authors stopped after 100 SOURCE methods, which is statistically significant at a 95% confidence level with a confidence interval of ±10% for the 25 369 sources of SUSI ORIGINAL, the 12 082 of SUSI NEW, and the 15 105 of CODOC.
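The sample size of 100 can be justified with the standard sample-size formula for a finite population. The following minimal sketch is our own illustration of that calculation (the paper's artifacts may compute it differently); it shows that 100 inspections suffice for each of the source and sink list sizes of Table IV at a 95% confidence level with a ±10% confidence interval.

# Cochran's sample-size formula with finite-population correction.
import math

def required_sample_size(population, z=1.96, margin=0.10, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)          # sample size for an infinite population
    return math.ceil(n0 / (1 + (n0 - 1) / population))   # finite-population correction

for name, size in [("SUSI ORIGINAL sources", 25369), ("SUSI NEW sources", 12082),
                   ("CODOC sources", 15105), ("SUSI ORIGINAL sinks", 5913),
                   ("SUSI NEW sinks", 1010), ("CODOC sinks", 1061)]:
    print(name, required_sample_size(size))   # every value is below 100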
Regarding the sinks, the same procedure was applied as for the sources, except that the authors checked whether the methods could send data out of the app (100 sinks are statistically significant at a 95% confidence level with a confidence interval of ±10% for the 5913 sinks of SUSI ORIGINAL, the 1010 of SUSI NEW, and the 1061 of CODOC). An example of a misclassified non-sensitive SOURCE method is android.bluetooth.BluetoothCodecStatus.describeContents(), shown in Listing 2. Indeed, the documentation and the source code are very explicit: this method only returns 0. An example of a misclassified SINK method is com.android.internal.util.FastMath.round(float), whose source code and documentation are available in Listing 3. Indeed, this method is only intended to round a float number; it does not write any value outside the app.
In the end, we gathered six data sets with 100 methods each.
Results: The overall results are available in Table V. From the 100 non-sensitive SOURCE methods classified as sources by SUSI, CODOC only classified 13 as sensitive SOURCE methods. From the 100 non-SINK methods classified as sinks by SUSI, CODOC only classified 4 as SINK methods.
Discussion: These results, in the wild, do not confirm that CODOC is better than SUSI (contrary to what is shown in Section VI-B), nor the other way around, due to the very low overlap between the SOURCE and SINK methods predicted by both tools. Tables V and VI do not show that either approach is stronger. In fact, taken independently, Table V suggests that CODOC is preferable, while Table VI suggests that SUSI is preferable. Hence, even though relying on both the source code and the documentation intuitively promises better results, in practice this is not the case according to our experiments and observations.

[Fig. 7: Overlap between the SOURCE and SINK method lists of SUSI and CODOC. (a) Sources, CODOC vs. SUSI NEW: 10 507 only in CODOC, 7484 only in SUSI NEW, 4598 in both. (b) Sinks, CODOC vs. SUSI NEW: 982 only in CODOC, 931 only in SUSI NEW, 79 in both. (c) Sources, CODOC vs. SUSI ORIGINAL: 8158 only in CODOC, 18 422 only in SUSI ORIGINAL, 6947 in both. (d) Sinks, CODOC vs. SUSI ORIGINAL: 858 only in CODOC, 5710 only in SUSI ORIGINAL, 203 in both.]

/**
 * Always returns 0
 *
 * @return 0
 * @hide
 */
public int describeContents() {
    return 0;
}
Listing 2: Source code of method "describeContents" of class "android.bluetooth.BluetoothCodecStatus"

/**
 * Fast round from float to int. This is faster than Math.round()
 * thought it may return slightly different results. It does not try to
 * handle (in any meaningful way) NaN or infinities.
 */
public static int round(float value) {
    long lx = (long) (value * (65536 * 256f));
    return (int) ((lx + 0x800000) >> 24);
}
Listing 3: Source code of method "round" of class "com.android.internal.util.FastMath"

TABLE V: CODOC performance on SUSI's misclassified SOURCE and SINK methods

Prediction   SUSI ORIGINAL   CODOC   SUSI NEW   CODOC
SOURCE       100             13      100        26
SINK         100             4       100        18

TABLE VI: SUSI performance on CODOC's misclassified SOURCE and SINK methods

Prediction   CODOC   SUSI ORIGINAL   CODOC   SUSI NEW
SOURCE       100     32              100     6
SINK         100     3               100     0

TABLE VII: False positive and true positive rates of the SOURCE and SINK methods predicted by CODOC

          False positives   True positives
SOURCE    90                10
SINK      56                44

RQ3 answer: While the outputs of CODOC and SUSI differ, neither is strictly superior to the other. Although CODOC performs well on SUSI's misclassified methods, the reverse also holds.
D. RQ4: Real-world performance of CODOC
Objective: In this section, we aim to qualitatively evaluate CODOC's predicted lists of SOURCE and SINK methods, which is the main goal of this approach, i.e., generating lists of SOURCE and SINK methods that are actionable for other tools, e.g., data leak detectors. To do so, we randomly selected 100 SOURCE and 100 SINK methods from the lists generated by CODOC, which is statistically significant for the data set at a 95% confidence level with a confidence interval of ±10%, and inspected each method based on two criteria: (i) the sensitiveness of SOURCE methods for user privacy; and (ii) whether SINK methods can actually make data leave the application space.
Results: The results of our manual analyses are available in Table VII. Although, as seen in Section VI-B, CODOC achieves a precision of 82% in the lab when classifying SOURCE methods, in the wild it reaches a false positive rate of 90%. The same holds for SINK methods: although CODOC achieves a precision of 93% in the lab, in the wild it reaches a false positive rate of 56%.
RQ4 answer: Results indicate that although CODOC achieves high performance scores when assessing its underlying deep learning model, in the wild it performs poorly, with a false positive rate of 90% for SOURCE methods and 56% for SINK methods.
E. RQ5: False positive measurement in sensitive data leak detection
To evaluate the effect of the false positives generated by CODOC on real-world applications, we utilize FLOWDROID [4] to find data leaks in Android apps. Intuitively, a false positive in a list of sources or sinks is only relevant if it leads to spurious leaks in the data flow analysis. A method that is never used in an app might be on a source or sink list, but does not have any negative effect in practice.
For this evaluation, we randomly selected 500 popular apps from Google Play in 2022 across all available categories. For each app analysis, we set FLOWDROID timeouts to 5 min for data flow analysis and 3 min for callback collection, and configured the JVM with a maximum heap size of 768 GB. We ran the analysis on a system with 144 logical cores backed by four physical Intel Xeon Gold 6254 CPUs. Note that we focus on the quality of sources and sinks and not on the performance of the data flow analysis. We therefore opted for a system with sufficient resources to scale to large apps.
We configure FLOWDROID with three different lists of sources and sinks:
1) CODOC: The list generated by CODOC on Android version 30.
2) SUSI NEW: The list generated by SUSI, where SUSI was trained on the ground truth presented in this paper and classified methods of Android version 30.
3) SUSI ORIGINAL: The list generated by SUSI, where SUSI was trained on the original SUSI training data and classified methods of Android version 30.
For each of these lists (Src_NN, Snk_NN with NN ∈ {CODOC, SUSI NEW, SUSI ORIGINAL}), FLOWDROID yields a set of data flows Flow_NN. We then use the data flows Flow_NN to remove from the lists all sources and sinks that are not used in at least one data flow. This leads to reduced lists of sources and sinks, denoted Src'_NN and Snk'_NN. We validate these lists by hand and count the number of methods that lead to leaks but that are not actually privacy-sensitive.
For CODOC, we find StringBuilder.toString(), which is clearly a false positive, to be the most commonly used "source" in the data flow analysis (72% of all flows). The second most common source was StringBuffer.toString(), with around 8% of all flows. The sinks are more reasonable, with the Android log methods being the most prevalent ones (21% of all flows).
The SUSI NEW list leads to far fewer flows (3410 instead of 71 211 for CODOC). The used sources and sinks are more widely distributed, i.e., the top source only accounts for 10% of all sources. Still, the used sources and sinks are mostly false positives.
For SUSI ORIGINAL, we find the most flows (153 558). The structure of the used sources and sinks resembles SUSI NEW, i.e., a wide variety of methods, most of which are false positives.
RQ5 answer: The results show that the false positives generated by SUSI and CODOC have a major negative impact on the precision of the data flow analysis that uses these lists of sources and sinks.
VII. DISCUSSION
We designed our study and approach under the hypothesis that adding more semantics, i.e., using code and documentation together, would provide better results than the current state of the art for classifying SOURCE and SINK methods. Further, we integrated recent advances in machine learning. Unfortunately, the empirical results show that CODOC performs poorly in practice. More precisely, our investigations show that:
• Although code and documentation are complementary for predicting SOURCE and SINK methods, CODOC performs poorly in the wild.
• Although CODOC achieves good lab results, and better lab results than SUSI (precision, recall, and F1 score of 91% compared to 87%), it performs poorly in the wild.
• CODOC generates many false positives when applied to Android framework methods, i.e., it classifies as SOURCE and SINK methods that are not SOURCE and SINK methods.
• The false positives in the SOURCE and SINK lists lead to false positives in the data flow analysis, which renders the lists unfit for real-world data leak detection scenarios.
VIII. LIMITATIONS AND THREATS TO VALIDITY
CODOC relies on two inputs to make its predictions, namely the source code and the documentation of Android methods. We acknowledge that the Android framework contains undocumented methods which cannot be taken into account by CODOC. The lack of method documentation makes CODOC miss some methods to classify; hence, some sensitive SOURCE and SINK methods are certainly missed. However, the proportion of missed methods is too low (i.e., 18.6%) to fully explain the poor real-world performance of the approach.
CODOC relies on supervised machine learning techniques which, by definition, need labeled data. Therefore, we performed manual labeling, based on our expertise, to label Android framework methods as SOURCE, SINK, or NEITHER. Consequently, even though we followed a strict and consistent procedure, our labels can be influenced by human subjectivity. Nonetheless, we make all of our artifacts public to the research community to mitigate this threat to validity.
Our training set is limited to 1015 samples across three classes. This might not be enough training data. We will explore the use of data augmentation in future work.
Sensitiveness is a concept that is not well defined, especially for technical frameworks, and it is exposed to human subjectivity since there is no formal definition of what it is. Also, the authors noticed during manual labeling that there is sometimes only a fine line between a sensitive value and a non-sensitive one. Therefore, the choices regarding sensitiveness can be biased by human subjectivity.
As already described and motivated in Section V-A, our manual labeling process was performed without taking into account SUSI's NEITHER methods list. However, we note that since SUSI yields very good results on the NEITHER category [3], there is a high chance that this list contained correctly classified samples, hence being more representative than the ones misclassified as SOURCE and SINK methods.
IX. RELATED WORK
In this section, we present the related work available in the literature that is closest to ours.
Taint analysis, which requires proper lists of sources and sinks, is used for a variety of purposes: vulnerability detection [33], [9], [37], sensitive data leak detection [4], [28], [45], [14], [58], [20], [61], hidden behavior detection [66], [53], malware detection [51], or bad practice detection [63]. All of these works require lists of SOURCE and SINK methods that have to be defined in advance. If these lists are not complete, the approaches may miss important data flows. Therefore, automated techniques were proposed to catch as many SOURCE and SINK methods as possible, aiming to reach completeness. In 2012, Gibler et al. [19] proposed an approach to automatically detect SOURCE and SINK methods based on mappings between methods and the permissions needed to invoke those methods. Methods requiring sensitive permissions were considered sources. Methods requiring the INTERNET permission were considered sinks. However, not all sensitive methods need permissions in the Android framework [21]. Thus, permission-based approaches miss relevant sources and sinks. Our work, on the other hand, considers all methods in the Android framework regardless of required permissions.
Two years later, Arzt et al. [3] proposed SUSI, an automated approach that relies on machine learning to classify SOURCE and SINK methods in the Android framework. SUSI relies on features based on the method signature (e.g., the method name, the parameter types, the return value, the modifiers, etc.) and on dataflow features. In contrast to SUSI, our approach CODOC relies on a multi-input deep-learning classifier based on: (i) the source code, and (ii) the documentation of a method.
More recently, Wongwiwatchai et al. [59] proposed an approach to detect privacy leaks in Android apps. The authors did not rely on existing lists of SOURCE and SINK methods. Rather, they defined their own lists.
To do so, the authors study +well-known frameworks on data protection (e.g., the GDPR) +and constitute a list of personal information commonly defined +in these regulations. The process of mapping personal infor- +mation (e.g., an age) to Android APIs is opaque in the paper. +Hence it is difficult to judge the approach’s comprehensiveness +and the rate of false positives. In contrast, our approach aims to +automatically and systematically map sensitive API methods +to sensitive data with machine learning techniques. +X. CONCLUSION +As described in Section VI-D, CODOC, likewise SUSI, does +not provide actionable results in the wild. Indeed, although +we have shown in Section VI-B that CODOC outperforms +SUSI with a score of 91% on our ground truth, our manual +evaluations have shown that CODOC performs poorly on the +Android framework methods. Hence, the resulting SOURCE +and SINK methods’ list produced cannot be relied upon in +real-world data leak detection scenarios. This negative result +and the literature [57], [24], [35] show: x the problem of +classifying SOURCE and SINK methods is not trivial; and y +there is an urgent need of a community effort to produce an +actionable list of SOURCE and SINK methods for sensitive +data leak detection in Android apps carrying highly sensitive +data about end users. +10 + +XI. DATA AVAILABILITY +For the sake of Open Science, we provide to the community +all the artifacts used in our study. In particular, we make +available the datasets used during our experimentation, the +source code of our prototype as well as the scripts to execute +CODOC, our manually labeled datasets, the vector represen- +tation of source code and documentation used, and SUSI +related artifacts. The project’s repository including all artifacts +is available at: https://github.com/JordanSamhi/CoDoC +XII. ACKNOWLEDGMENT +This research work has been funded by the German Federal +Ministry of Education and Research and the Hessian Ministry +of Higher Education, Research, Science and the Arts within +their joint support of the National Research Center for Applied +Cybersecurity ATHENE. Additionally, this work was partly +supported by the Luxembourg National Research Fund (FNR), +under projects Reprocess C21/IS/16344458 and the AFR grant +14596679. +11 + +REFERENCES +[1] Abien Fred Agarap. Deep learning using rectified linear units (relu). +arXiv preprint arXiv:1803.08375, 2018. +[2] Uri Alon, Meital Zilberstein, Omer Levy, and Eran Yahav. Code2vec: +Learning distributed representations of code. +Proc. ACM Program. +Lang., 3(POPL), January 2019. +[3] Steven Arzt, Siegfried Rasthofer, and Eric Bodden. Susi: A tool for the +fully automated classification and categorization of android sources and +sinks. University of Darmstadt, Tech. Rep. TUDCS-2013-0114, 2013. +[4] Steven Arzt, Siegfried Rasthofer, Christian Fritz, Eric Bodden, Alexan- +dre Bartel, Jacques Klein, Yves Le Traon, Damien Octeau, and Patrick +McDaniel. +Flowdroid: Precise context, flow, field, object-sensitive +and lifecycle-aware taint analysis for android apps. +SIGPLAN Not., +49(6):259–269, June 2014. +[5] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. +Neural +machine translation by jointly learning to align and translate. +arXiv +preprint arXiv:1409.0473, 2014. +[6] Chaity Banerjee, Tathagata Mukherjee, and Eduardo Pasiliao Jr. +An +empirical study on generalizations of the relu activation function. In +Proceedings of the 2019 ACM Southeast Conference, pages 164–167, +2019. +[7] Luciano Bello and Marco Pistoia. 
Ares: triggering payload of evasive +android malware. In 2018 IEEE/ACM 5th International Conference on +Mobile Software Engineering and Systems (MOBILESoft), pages 2–12. +IEEE, 2018. +[8] David Brumley, Cody Hartwig, Zhenkai Liang, James Newsome, Dawn +Song, and Heng Yin. Automatically identifying trigger-based behavior +in malware. In Botnet Detection, pages 65–88. Springer, 2008. +[9] Jun Cai, Peng Zou, Jinxin Ma, and Jun He. Sworddta: A dynamic taint +analysis tool for software vulnerability detection. +Wuhan University +Journal of Natural Sciences, 21(1):10–20, Feb 2016. +[10] Erika Chin, Adrienne Porter Felt, Kate Greenwood, and David Wagner. +Analyzing inter-application communication in android. In Proceedings +of the 9th International Conference on Mobile Systems, Applications, +and Services, MobiSys ’11, page 239–252, New York, NY, USA, 2011. +Association for Computing Machinery. +[11] Jacob Cohen. A coefficient of agreement for nominal scales. Educational +and Psychological Measurement, 20(1):37–46, 1960. +[12] Janez Demˇsar. Statistical comparisons of classifiers over multiple data +sets. J. Mach. Learn. Res., 7:1–30, December 2006. +[13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. +Bert: Pre-training of deep bidirectional transformers for language un- +derstanding. arXiv preprint arXiv:1810.04805, 2018. +[14] William Enck, Peter Gilbert, Seungyeop Han, Vasant Tendulkar, Byung- +Gon Chun, Landon P Cox, Jaeyeon Jung, Patrick McDaniel, and +Anmol N Sheth. Taintdroid: an information-flow tracking system for +realtime privacy monitoring on smartphones. +ACM Transactions on +Computer Systems (TOCS), 32(2):1–29, 2014. +[15] William Enck, Damien Octeau, Patrick D McDaniel, and Swarat Chaud- +huri. +A study of android application security. +In USENIX security +symposium, volume 2, 2011. +[16] Ming Fan, Le Yu, Sen Chen, Hao Zhou, Xiapu Luo, Shuyue Li, Yang +Liu, Jun Liu, and Ting Liu. An empirical evaluation of gdpr compliance +violations in android mhealth apps. In 2020 IEEE 31st International +Symposium on Software Reliability Engineering (ISSRE), pages 253– +264, 2020. +[17] Pietro Ferrara and Fausto Spoto. Static analysis for gdpr compliance. +In ITASEC, 2018. +[18] Y. Fratantonio, A. Bianchi, W. Robertson, E. Kirda, C. Kruegel, and +G. Vigna. +Triggerscope: Towards detecting logic bombs in android +applications. In 2016 IEEE Symposium on Security and Privacy (SP), +pages 377–396, 2016. +[19] Clint Gibler, Jonathan Crussell, Jeremy Erickson, and Hao Chen. An- +droidleaks: Automatically detecting potential privacy leaks in android +applications on a large scale. In Stefan Katzenbeisser, Edgar Weippl, +L. Jean Camp, Melanie Volkamer, Mike Reiter, and Xinwen Zhang, +editors, Trust and Trustworthy Computing, pages 291–307, Berlin, +Heidelberg, 2012. Springer Berlin Heidelberg. +[20] Michael I Gordon, Deokhwan Kim, Jeff H Perkins, Limei Gilham, +Nguyen Nguyen, and Martin C Rinard. Information flow analysis of +android applications in droidsafe. In NDSS, volume 15, page 110, 2015. +[21] Sigmund Albert Gorski, Benjamin Andow, Adwait Nadkarni, Sunil Man- +andhar, William Enck, Eric Bodden, and Alexandre Bartel. Acminer: +Extraction and analysis of authorization checks in android’s middleware. +In Proceedings of the Ninth ACM Conference on Data and Application +Security and Privacy, CODASPY ’19, page 25–36, New York, NY, USA, +2019. Association for Computing Machinery. +[22] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q +Weinberger. 
Densely connected convolutional networks. In Proceedings +of the IEEE conference on computer vision and pattern recognition, +pages 4700–4708, 2017. +[23] Hao Jiang, Hongli Yang, Shengchao Qin, Zhendong Su, Jian Zhang, and +Jun Yan. Detecting energy bugs in android apps using static analysis. +In Zhenhua Duan and Luke Ong, editors, Formal Methods and Soft- +ware Engineering, pages 192–208, Cham, 2017. Springer International +Publishing. +[24] Mohsin Junaid, Donggang Liu, and David Kung. Dexteroid: Detecting +malicious behaviors in android apps using reverse-engineered life cycle +models. Computers & Security, 59:92–117, 2016. +[25] Jinyung Kim, Yongho Yoon, Kwangkeun Yi, Junbum Shin, and SWRD +Center. Scandal: Static analyzer for detecting privacy leaks in android +applications. MoST, 12:1, 2012. +[26] Ron Kohavi et al. A study of cross-validation and bootstrap for accuracy +estimation and model selection. In Ijcai, volume 14, pages 1137–1145. +Montreal, Canada, 1995. +[27] Li Li, Kevin Allix, Daoyuan Li, Alexandre Bartel, Tegawend´e F. +Bissyand´e, and Jacques Klein. Potential component leaks in android +apps: An investigation into a new feature set for malware detection. In +2015 IEEE International Conference on Software Quality, Reliability +and Security, pages 195–200, 2015. +[28] Li Li, Alexandre Bartel, Tegawend´e F Bissyand´e, Jacques Klein, Yves +Le Traon, Steven Arzt, Siegfried Rasthofer, Eric Bodden, Damien +Octeau, and Patrick McDaniel. Iccta: Detecting inter-component privacy +leaks in android apps. +In 2015 IEEE/ACM 37th IEEE International +Conference on Software Engineering, volume 1, pages 280–291. IEEE, +2015. +[29] Li Li, Alexandre Bartel, Tegawend´e F. Bissyand´e, Jacques Klein, and +Yves Le Traon. +Apkcombiner: Combining multiple android apps to +support inter-app analysis. In Hannes Federrath and Dieter Gollmann, +editors, ICT Systems Security and Privacy Protection, pages 513–527, +Cham, 2015. Springer International Publishing. +[30] Li Li, Tegawend´e F. Bissyand´e, Yves Le Traon, and Jacques Klein. +Accessing inaccessible android apis: An empirical study. +In 2016 +IEEE International Conference on Software Maintenance and Evolution +(ICSME), pages 411–422, 2016. +[31] Li Li, Tegawend´e F. Bissyand´e, Mike Papadakis, Siegfried Rasthofer, +Alexandre Bartel, Damien Octeau, Jacques Klein, and Le Traon. Static +analysis of android apps: A systematic literature review. Information +and Software Technology, 88:67 – 95, 2017. +[32] Zhuang Liu, Wayne Lin, Ya Shi, and Jun Zhao. A robustly optimized +bert pre-training approach with post-training. In Sheng Li, Maosong +Sun, Yang Liu, Hua Wu, Liu Kang, Wanxiang Che, Shizhu He, and +Gaoqi Rao, editors, Chinese Computational Linguistics, pages 471–484, +Cham, 2021. Springer International Publishing. +[33] V. Benjamin Livshits and Monica S. Lam. Finding security vulnerabili- +ties in java applications with static analysis. In Proceedings of the 14th +Conference on USENIX Security Symposium - Volume 14, SSYM’05, +page 18, USA, 2005. USENIX Association. +[34] Long Lu, Zhichun Li, Zhenyu Wu, Wenke Lee, and Guofei Jiang. Chex: +Statically vetting android apps for component hijacking vulnerabilities. +In Proceedings of the 2012 ACM Conference on Computer and Com- +munications Security, CCS ’12, page 229–240, New York, NY, USA, +2012. Association for Computing Machinery. +[35] Linghui Luo, Eric Bodden, and Johannes Sp¨ath. A qualitative analysis +of android taint-analysis results. 
In 2019 34th IEEE/ACM International +Conference on Automated Software Engineering (ASE), pages 102–114, +2019. +[36] Yuhong Nan, Zhemin Yang, Xiaofeng Wang, Yuan Zhang, Donglai +Zhu, and Min Yang. Finding clues for your secrets: Semantics-driven, +learning-based privacy discovery in mobile apps. In NDSS, 2018. +[37] James Newsome and Dawn Xiaodong Song. +Dynamic taint analysis +for automatic detection, analysis, and signaturegeneration of exploits on +commodity software. In NDSS, volume 5, pages 3–4. Citeseer, 2005. +[38] Damien Octeau, Daniel Luchaup, Matthew Dering, Somesh Jha, and +Patrick McDaniel. +Composite constant propagation: Application to +12 + +android inter-component communication analysis. In 2015 IEEE/ACM +37th IEEE International Conference on Software Engineering, volume 1, +pages 77–88, 2015. +[39] Xiaorui Pan, Xueqiang Wang, Yue Duan, XiaoFeng Wang, and Heng +Yin. +Dark hazard: Learning-based, large-scale discovery of hidden +sensitive operations in android apps. In NDSS, 2017. +[40] Thanasis Petsas, Giannis Voyatzis, Elias Athanasopoulos, Michalis Poly- +chronakis, and Sotiris Ioannidis. +Rage against the virtual machine: +Hindering dynamic analysis of android malware. +In Proceedings of +the Seventh European Workshop on System Security, EuroSec ’14, New +York, NY, USA, 2014. Association for Computing Machinery. +[41] Siegfried Rasthofer, Steven Arzt, Enrico Lovat, and Eric Bodden. Droid- +force: Enforcing complex, data-centric, system-wide policies in android. +In 2014 Ninth International Conference on Availability, Reliability and +Security, pages 40–49. IEEE, 2014. +[42] Siegfried Rasthofer, Steven Arzt, Marc Miltenberger, and Eric Bodden. +Harvesting runtime values in android applications that feature anti- +analysis techniques. In NDSS, 2016. +[43] Dhruv Rathi and Rajni Jindal. Droidmark: A tool for android malware +detection using taint analysis and bayesian network. +arXiv preprint +arXiv:1805.06620, 2018. +[44] Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings +using siamese bert-networks. arXiv preprint arXiv:1908.10084, 2019. +[45] J. Samhi, A. Bartel, T. F. Bissyande, and J. Klein. Raicc: Revealing +atypical inter-component communication in android apps. +In 2021 +IEEE/ACM 43rd International Conference on Software Engineering +(ICSE), pages 1398–1409, Los Alamitos, CA, USA, may 2021. IEEE +Computer Society. +[46] J. Samhi, L. Li, T. F. Bissyande, and J. Klein. Difuzer: Uncovering sus- +picious hidden sensitive operations in android apps. In 2022 IEEE/ACM +44th International Conference on Software Engineering (ICSE), pages +723–735, Los Alamitos, CA, USA, May 2022. IEEE Computer Society. +[47] Jordan Samhi, Kevin Allix, Tegawend´e F. Bissyand´e, and Jacques Klein. +A first look at android applications in google play related to covid-19. +Empirical Software Engineering, 26(4):57, April 2021. +[48] Jordan Samhi and Alexandre Bartel. On the (in)effectiveness of static +logic bomb detector for android apps. IEEE Transactions on Dependable +and Secure Computing, pages 1–1, August 2021. +[49] Jordan Samhi, Jun Gao, Nadia Daoudi, Pierre Graux, Henri Hoyez, +Xiaoyu Sun, Kevin Allix, Tegawend´e F Bissyand´e, and Jacques Klein. +Jucify: A step towards android code unification for enhanced static +analysis. In 2022 IEEE/ACM 44th International Conference on Software +Engineering (ICSE), pages 1232–1244, Los Alamitos, CA, USA, May +2022. IEEE Computer Society. +[50] Golam Sarwar, Olivier Mehani, Roksana Boreli, and Mohamed Ali +Kaafar. 
On the effectiveness of dynamic taint analysis for protecting +against private information leaks on android-based devices. +In SE- +CRYPT, volume 96435, 2013. +[51] Venkatesh Gauri Shankar, Gaurav Somani, Manoj Singh Gaur, Vijay +Laxmi, and Mauro Conti. +Androtaint: An efficient android malware +detection framework using dynamic taint analysis. In 2017 ISEA Asia +Security and Privacy (ISEASP), pages 1–13, 2017. +[52] Sagar Sharma, Simone Sharma, and Anidhya Athaiya. +Activation +functions in neural networks. +towards data science, 6(12):310–316, +2017. +[53] Dawei Shi, Xiucun Tang, and Zhibin Ye. +Detecting environment- +sensitive malware based on taint analysis. +In 2017 8th IEEE In- +ternational Conference on Software Engineering and Service Science +(ICSESS), pages 322–327, 2017. +[54] M. Stone. Cross-validatory choice and assessment of statistical predic- +tions. Journal of the Royal Statistical Society. Series B (Methodological), +36(2):111–147, 1974. +[55] F. Tomassetti. Javaparser, https://github.com/javaparser/javaparser. Ac- +cessed August 2021. +[56] Victor Van Der Veen, Herbert Bos, and Christian Rossow. Dynamic +analysis of android malware. Internet & Web Technology Master thesis, +VU University Amsterdam, 2013. +[57] Weiping Wang, Jianjian Wei, Shigeng Zhang, and Xi Luo. Lscdroid: +Malware detection based on local sensitive api invocation sequences. +IEEE Transactions on Reliability, 69(1):174–187, 2020. +[58] Fengguo Wei, Sankardas Roy, Xinming Ou, and Robby. Amandroid: +A precise and general inter-component data flow analysis framework +for security vetting of android apps. In Proceedings of the 2014 ACM +SIGSAC Conference on Computer and Communications Security, CCS +’14, page 1329–1341, New York, NY, USA, 2014. Association for +Computing Machinery. +[59] Nattanon Wongwiwatchai, Phannawhat Pongkham, and Kunwadee Sri- +panidkulchai. +Comprehensive detection of vulnerable personal infor- +mation leaks in android applications. +In IEEE INFOCOM 2020 - +IEEE Conference on Computer Communications Workshops (INFOCOM +WKSHPS), pages 121–126, 2020. +[60] Songyang Wu, Pan Wang, Xun Li, and Yong Zhang. Effective detection +of android malware based on the usage of data flow apis and machine +learning. Information and software technology, 75:17–25, 2016. +[61] Z. Yang and M. Yang. +Leakminer: Detect information leakage on +android with static taint analysis. In 2012 Third World Congress on +Software Engineering, pages 101–104, 2012. +[62] Zhemin Yang, Min Yang, Yuan Zhang, Guofei Gu, Peng Ning, and +X Sean Wang. +Appintent: Analyzing sensitive data transmission in +android for privacy leakage detection. In Proceedings of the 2013 ACM +SIGSAC conference on Computer & communications security, pages +1043–1054, 2013. +[63] Sergio Yovine and Gonzalo Winniczuk. +Checkdroid: A tool for au- +tomated detection of bad practices in android applications using taint +analysis. In 2017 IEEE/ACM 4th International Conference on Mobile +Software Engineering and Systems (MOBILESoft), pages 175–176, 2017. +[64] Mu Zhang, Yue Duan, Heng Yin, and Zhiruo Zhao. Semantics-aware +android malware classification using weighted contextual api depen- +dency graphs. In Proceedings of the 2014 ACM SIGSAC Conference +on Computer and Communications Security, CCS ’14, page 1105–1116, +New York, NY, USA, 2014. Association for Computing Machinery. +[65] Mu Zhang and Heng Yin. +Appsealer: Automatic generation of +vulnerability-specific patches for preventing component hijacking at- +tacks in android applications. In NDSS. 
Citeseer, 2014.
[66] Qingchuan Zhao, Chaoshun Zuo, Brendan Dolan-Gavitt, Giancarlo Pellegrino, and Zhiqiang Lin. Automatic uncovering of hidden behaviors from input validation in mobile apps. In 2020 IEEE Symposium on Security and Privacy (SP), pages 1106–1120. IEEE, 2020.
[67] Hao Zhou, Wei Zhang, Fengqiong Wei, and Yunfang Chen. Analysis of android malware family characteristic based on isomorphism of sensitive api call graph. In 2017 IEEE Second International Conference on Data Science in Cyberspace (DSC), pages 319–327, 2017.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Previous approaches either used incomplete hand-written lists that quickly became outdated or relied on machine learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' The latter, however, leads to numerous false positives, as we show.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' This paper introduces CODOC, a tool that aims to revive the machine-learning approach to precisely identify privacy- related SOURCE and SINK API methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In contrast to previous approaches, CODOC uses deep learning techniques and combines the source code with the documentation of API methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Firstly, we propose novel definitions that clarify the concepts of sensitive SOURCE and SINK methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Secondly, based on these definitions, we build a new ground truth of Android methods representing sensitive SOURCE, SINK, and NEITHER (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=', no source or sink) methods that will be used to train our classifier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' We evaluate CODOC and show that, on our validation dataset, it achieves a precision, recall, and F1 score of 91% in 10-fold cross-validation, outperforming the state-of-the-art SUSI when used on the same dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' However, similarly to existing tools, we show that in the wild, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=', with unseen data, CODOC performs poorly and generates many false positive results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Our findings, together with time-tested results of previous approaches, suggest that machine-learning models for abstract concepts such as privacy fail in practice despite good lab results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' To encourage future research, we release all our artifacts to the community.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' I.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' INTRODUCTION Given the ubiquity of mobile devices nowadays and the proliferation of apps installed and used by end users, Android apps’ analysis has become a common topic in software engi- neering research.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Numerous approaches have been proposed to check security properties, detect malicious code, and detect program bugs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' These approaches leverage techniques such as dynamic analysis [56], [40], [14], static analysis [31], [48], [23], [58], [38], [45], [4], [20], [46], or both (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=', hybrid analyses) [8], [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' §Most of the work was completed while Maria Kober was present at Fraunhofer SIT The previously mentioned analysis approaches usually con- sist of several specific techniques that are applied to apps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' One of them is taint analysis, which checks whether data obtained from a given SOURCE method (or any kind of data derived from it, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=', after some computation) is passed to a SINK method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In the context of Android-based privacy research, a SOURCE is an API method that provides privacy-sensitive data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' A SINK is an API method that writes data to the outside, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=', via the network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' The need for sensitive SOURCE and SINK lists is ubiquitous in taint analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Indeed, the literature is full of approaches and techniques that set up privacy strategies building on sensitive API methods that allow retrieving sensitive data and/or API methods allowing to expose this kind of data.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' The range of approaches relying on sensitive SOURCE and SINK methods that could benefit from a complete and precise list of SOURCE and SINK methods is large: sensitive data leak detection [31],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [19],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [14],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [4],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [25],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [47],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [49],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Android component leak detection [27],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' dynamic policy enforcement [41],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' malware detection [67],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [51],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [43],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' hidden behavior detection [39],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [66],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' inter-app communication analysis [10],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [29],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' component hijacking vulnerabilities detection [34],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [65],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' the uncovering of run-time sensitive values [42],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' as well as GDPR compliance checks [17],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [16].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In its public-facing API – i.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=', methods contained in the official online documentation for Android –, Android 111 contains more than 34 000 methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' However, it has been shown [30] that developers have access to many more methods inside the Android framework that are not directly available in the public-facing API (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=', hidden to developers using the annotation ”@hide”).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In Android 11, for example, more than 210 000 methods are available to developers in total, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=', through the use of reflection2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' These large numbers render a manual classification of sources and sinks infeasible.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Fur- thermore, additional methods are added in every new release.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' New frameworks, such as Android Auto or Chromecast, bring 1API level 30, which was released in September 2020 2Note that reflection can also be used to make private methods accessible since Android does not provide a Java security manager.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='03207v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='CR] 9 Jan 2023 in new methods, and thus potentially new sources and sinks as well.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Therefore, automatic approaches for identifying sources and sinks are needed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Several approaches have been proposed in the literature to solve this problem [19], [36], [14].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' SUSI [3], which is based on supervised machine-learning, is currently one of the most popular approaches and the state-of-the-art [4], [60], [39].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' To the best of our knowledge, it is also the most comprehensive and state-of-the-art approach for deriving lists of sources and sinks from frameworks like Android.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' However, the sources list generated by SUSI is neither precise, nor specific for privacy analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' As we show in Section IV, SUSI classifies methods as sources, even though they are clearly irrelevant for privacy analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Secondly, SUSI relies on technical categories (network information, unique identifiers, etc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=') to structure its output.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Selecting all categories that could be relevant for privacy analysis leads to a large number of irrelevant APIs being selected as well.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' These observations are not surprising because SUSI makes no upfront assumptions on the sensitivity of the sources yielded.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Further, some methods are misclassified entirely by SUSI, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=', the method getScrollIndicatorBounds of the android.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='view.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='ViewGroup class is categorized as a source in the ”SMS MMS” category.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' As we show in Sec- tion VI-E, these issues lead to a profusion of false positives for any data flow tracker that relies on SUSI’s source/sink lists.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Several works in the literature [57], [24], [35] have come to similar conclusions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Luo et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' show that SUSI’s sources list leads to a false positive rate of almost 80% while trying to detect sensitive data leaks [35].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Further, in accordance with our own findings, they state the following regarding the sources yielded by SUSI: the root cause of these false positives is that their sources [.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='] are actually inappropriate, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=', they do not return sensitive data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' We note that SUSI’s evaluation in the original paper [3] does not highlight these issues, and that SUSI performs well on its training data and select examples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' However, in the real-world, the false positive rate is much higher.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' To the best of our knowledge, no other approach has tackled the problem of automatically identifying sources and sinks in Android since the release of SUSI in 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In other words, SUSI is still the most relevant approach despite its shortcomings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Since the problem remains highly relevant and unsolved regarding sensitive data, we attempted to improve the SUSI approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' The SUSI features for the supervised machine learning rely entirely on static code analysis on the Android bytecode implementation (the Android platform JAR file).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' SUSI considers individual properties as features, such as method names, parameter types, or method/class modifiers which do not capture the entire semantic of the code.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Our approach, that we named CODOC, on the other hand, captures the entire semantics of a given method by taking its’ complete source code into account.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Additionally, CODOC also considers the JavaDoc documentation of the Android API.' 
We observed that the Android documentation, which is fairly extensive for most classes and methods, provides enough guidance to the developer to correctly use the API. We then assumed that analyzing this documentation would also help in more precisely discovering sensitive SOURCE and SINK methods in the Android API. Lastly, CODOC is an attempt to incorporate the ground-breaking advances that have been made in text and code embedding [2], [44], and thus machine learning, since 2014 when SUSI was published. While our evaluation shows that CODOC outperforms SUSI in the lab, CODOC's real-world performance, likewise, is still lacking. Manually inspecting the SOURCE and SINK methods identified in a set of previously unseen API methods from the Android framework reveals many false positives. We therefore argue that even adding documentation and improving the machine learning techniques does not solve the problem of accurately identifying privacy-related sources and sinks in Android. Even with more and better training data and careful optimization of the training, the overall goal remains elusive. We argue that the semantic gap between an individual API method (code or documentation) and an abstract concept such as user privacy is unlikely to be closed by supervised machine learning. Instead, novel approaches are necessary. Further, we call for a more careful evaluation of machine learning results. In lab studies based on 10-fold cross-validation, SUSI is sufficient, and CODOC is even better. Still, on real-world data, i.e., previously unseen methods from the Android framework, both fail to meet expectations.
Overall, we make the following contributions:
- We propose CODOC: a novel, fully-automated, deep-learning-based approach to detect sensitive SOURCE and SINK methods in the Android framework based on API method source code and documentation;
- we release a new ground-truth of methods labeled as sensitive SOURCE, SINK, or NEITHER;
- we evaluate CODOC and show that it outperforms the state-of-the-art SUSI on a small evaluation dataset, reaching a precision, recall, and F1 score of 91% in the lab;
- we apply CODOC on public methods from the Android framework and show that, like SUSI, it yields a high rate of false positives.
We release our open-source prototype CODOC to the community and all the artifacts used in our study at: https://github.com/JordanSamhi/CoDoC

II. BACKGROUND
In this section, we provide the reader with context for the work presented in this paper.

A. Taint Analysis
Taint analysis is a particular dataflow analysis that tracks data through the control flow graph of a program. If a variable V is assigned the return value of a specific function, like a SOURCE method, it becomes tainted. If a tainted value is assigned to some other variable W, this variable W gets tainted as well. The same applies if W is assigned the result of some operation on an already tainted variable V. In other words: the taint is propagated. When a tainted variable is passed to a SINK function as a parameter, a leak is reported, as the value derived from the SOURCE reached a SINK.
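To make the propagation rules above concrete, the following toy sketch in Python mimics them with an explicit wrapper type; the class and function names (Tainted, source, propagate, sink) are invented for illustration and are not part of any real taint-analysis tool.

# Toy illustration of taint propagation: values read from a SOURCE are
# wrapped as "tainted"; operations on tainted values stay tainted; passing
# a tainted value to a SINK is reported as a leak.

class Tainted:
    """Wrapper marking a value as derived from a SOURCE."""
    def __init__(self, value):
        self.value = value

def source():
    # Stand-in for a SOURCE method (e.g., one returning device data).
    return Tainted("secret-value")

def propagate(op, *args):
    # Any operation involving a tainted argument yields a tainted result.
    raw = [a.value if isinstance(a, Tainted) else a for a in args]
    result = op(*raw)
    if any(isinstance(a, Tainted) for a in args):
        return Tainted(result)
    return result

def sink(arg):
    # Stand-in for a SINK method: report a leak if the argument is tainted.
    if isinstance(arg, Tainted):
        print("LEAK: tainted value reached a sink:", arg.value)
    else:
        print("ok: untainted value sent:", arg)

v = source()                             # V is tainted
w = propagate(lambda x: x.upper(), v)    # W, derived from V, is tainted too
sink(w)                                  # reported as a leak
sink("constant")                         # not reported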
In the case of data leak detection in the context of privacy analysis, an example of a SOURCE in the Android Framework is getImei() and an example of a SINK is sendTextMessage().

B. Text Embedding
Our work relies on methods' source code and documentation to train a machine learning model and infer SOURCE and SINK methods. In order to be processed by machine learning algorithms, these textual representations need to be transformed (embedded) into numerical representations, i.e., numerical vectors. In this section, we briefly describe two state-of-the-art techniques for this transformation, namely Sentence-BERT [44] and Code2Vec [2].
1) Sentence-BERT: Method documentation embedding requires efficient natural language processing techniques. SENTENCE-BERT is a modified and more computationally efficient version of the well-known BERT neural network [13]. It relies on siamese and triplet network structures to obtain meaningful sentence embeddings.
2) CODE2VEC: Similarly to natural language embedding, making predictions from source code requires code embedding to obtain a homogeneous representation of different source code inputs. CODE2VEC embeds Java methods to predict method names. Methods are transformed into ASTs (Abstract Syntax Trees) to construct path representations between different leaf nodes. Then, using the attention mechanism [5], the bag of path-contexts is aggregated into a single vector that represents the method body.
III. DEFINITIONS
In the literature, there is no consensus on the definitions of sensitive SOURCE and SINK methods, which leads to a lack of clarity in papers related to taint analysis. As described in Section II-A, taint analysis tracks the flow of data from a given SOURCE to a given SINK, no matter the type of data. However, in most of the papers, the authors mix sensitive SOURCE with SOURCE, which makes taint analysis appear as tracking sensitive data, which is not always the case. Tracking sensitive data is an instance of the more general task of tracking data. To cope with this problem and provide state-of-the-art approaches that aim at tracking sensitive data with clear terms, we propose the following definitions:

Definition 1 (Data). Any value or reference to a value.

Definition 2 (Composite Data). Any structure composed of multiple Data (e.g., an object).
Definition 3 (Sensitive Data). Any data or composite data that holds private value(s) that:
- can identify users, i.e., usernames and personally identifying data like email address or name;
- can identify the device, i.e., unique device identifiers;
- are data related to personal information (of the phone user), e.g., photographs and files, phone calls, SMS;
- represent data owned by users holding information about other users, e.g., contacts and phone lists, emails, etc.;
- represent environment and sensor information, including geolocation data, camera, and microphone.

Definition 4 (Sensitive SOURCE). A function that returns a Sensitive Data. Note that functions that return constant values are never sensitive sources.

Definition 5 (SINK). A function that sends out one or more Data, Composite Data, or values derived from a Data or a Composite Data from the application memory space. There is no notion of sensitivity for sinks. The nature of the data (more precisely: the SOURCE from which the data was originally obtained) passed to the sink determines whether a leak of sensitive data occurs.
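To relate these definitions to concrete Android methods, the snippet below sketches a tiny, hypothetical labeled fragment in Python, using the SOURCE / SINK / NEITHER classes from our contributions and methods mentioned elsewhere in this paper; the dictionary layout and the fully qualified names are illustrative assumptions, not the released ground truth.

# Hypothetical ground-truth fragment using the three classes.
# The labels follow the examples discussed in this paper: getImei() returns
# sensitive data, sendTextMessage() sends data out of the app, and
# getScrollIndicatorBounds() is irrelevant for privacy analysis.
labels = {
    "android.telephony.TelephonyManager#getImei": "SOURCE",
    "android.telephony.SmsManager#sendTextMessage": "SINK",
    "android.view.ViewGroup#getScrollIndicatorBounds": "NEITHER",
}

for method, label in labels.items():
    print(f"{label:8} {method}")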
IV. MOTIVATION
Tracking sensitive data within Android apps is of high interest since it is used in numerous security-related approaches [62], [50], [61], [4], [31] and is part of legal compliance, e.g., according to the GDPR. Therefore, there is a need to provide analysts and researchers with sources and sinks lists that precisely enclose sensitive data (cf. Definition 6).

Android API: As briefly explained in Section I, the number of public and documented API methods intended for use by Android developers amounts to tens of thousands and increases with every new API version (see Figure 1). Still, even identifying all sources and sinks in these documented public API methods is not sufficient, as developers also call methods not intended for direct use, yet present in the Android framework [30]. With tens of thousands of public API methods and hundreds of thousands of overall methods, manual classification for every new release is obviously infeasible. Therefore, automated solutions are needed to produce sensitive sources and sinks lists for every new release. In the following, we explain why the existing state of the art is inappropriate for this task.

Fig. 1: Number of methods in the public-facing Android API by API level

Fig. 2: Number of methods in the entire Android framework code by API level

Problem with the existing state of the art: The state-of-the-art approach SUSI [3] uses machine learning to automatically classify Android SOURCE and SINK methods.
However, it has been shown several times [57], [24], [35] that SUSI's lists are inappropriate since it is not specific to a particular analysis like tracking sensitive data. Therefore it produces many false positives and forces analysts to manually select appropriate SOURCE and SINK methods. Consider the example in Listing 1.

1 public void method() {
2   int p = 7;
3   int q = 4;
4   Rational r = new Rational(p, q);
5   int value = r.intValue();
6   SmsManager s = SmsManager.getDefault();
7   s.sendTextMessage("0", null, value, null, null);
8 }
Listing 1: Example of SUSI non-sensitive data leak

In line 4, a Rational object is created from two integers. In line 5, the integer representation of the Rational is retrieved using the method intValue() and stored in variable value. Eventually, this value is sent out of the device via SMS. Since SUSI wrongly considers the method intValue() as a SOURCE and the method sendTextMessage() as a SINK, a taint analysis based on SUSI will report a leak. This leak, however, is irrelevant in the context of security and privacy as the SOURCE is not sensitive. Thus, analysts will consider it a false positive when aiming to detect sensitive data leaks in Android apps. We aim to improve upon the state of the art by producing a more adequate ground truth to train an improved machine learning model based on method documentation and source code, unexplored until now.
V. APPROACH
In this paper, we aim to automatically identify sensitive SOURCE and SINK methods in the Android framework among all API methods available to developers (i.e., > 210 000 in Android 11) using supervised machine learning. Figure 3 shows an overview of our approach. Similar to SUSI, we build our training data by manually labeling Android methods. We consider a method as a sensitive SOURCE if it matches Definition 4 (in the following, when we refer to SOURCE, we mean a "Sensitive" SOURCE), a SINK if it matches Definition 5, and NEITHER otherwise. In contrast to SUSI, our approach then uses features extracted from the code as well as the documentation to train a machine-learning model on our ground truth. This is a key difference to SUSI, which only uses distinct properties extracted from parts of the code, such as method names and parameters or class and method modifiers. Further, SUSI completely disregards the method's documentation, which CODOC includes. Moreover, as we rely on the entire source code of a method, we are able to capture its entire semantics. We finally use our generated model to predict new sensitive SOURCE and SINK methods from the Android framework methods. We explain the individual steps in the following sections. First, we give details about our manual labeling of Android methods in Section V-A. Then, in Section V-B, we explain what features were chosen for training our models. Lastly, in Section V-C, we explain which machine learning models our approach builds upon.
A. Manual Labeling
Since our approach relies on supervised machine learning algorithms, labeled data is needed to train our model. However, manual labeling is a challenging and time-consuming task, especially if we were to randomly choose methods from the Android framework and label them one by one. Further, finding a SOURCE or a SINK through random picking is highly unlikely as most methods in the Android framework are neither. Therefore, we opted for a better strategy divided into three phases:

Phase 1: The authors first constituted a golden dataset based on well-known methods that return sensitive data described in the literature [18], [4], [14], [39], [15], [64]. These methods span across classes such as TelephonyManager, AccountManager, LocationManager, SmsMessage, or SensorManager. This step yielded an initial set of 39 SOURCE and 35 SINK methods.

Phase 2: As explained in Section IV, SUSI can generate lists of sources and sinks (from its own definition, i.e., not restricted to sensitive methods). We applied SUSI on Android API version 30 to generate additional pre-selected input that we hand-labeled as training data for CODOC. As described previously, randomly picking methods from the Android API would mostly lead to methods that are neither sources nor sinks. Therefore, we opted to focus hand-labeling on methods that are more likely to be relevant. We concatenated the list of sources and the list of sinks computed by SUSI to obtain a full list of methods M that SUSI considers relevant.
Note that we did not manually post-process methods that SUSI classified as neither a source nor a sink. Two of the authors then applied manual post-processing as follows. One author started from the top of each list, manually classifying each method in the respective list. The other author started from the bottom of each list with the same task. For each method m ∈ M, the authors independently read the documentation and the source code to be able to classify it as a SOURCE, a SINK, or NEITHER based on the definitions described in Section III. This step leads to three lists per author: (1) a SOURCE list; (2) a SINK list; and (3) a NEITHER list.

Fig. 3: Overview of the CODOC approach

Phase 3: The third phase aimed at calculating the inter-rater agreement between the data labeled by both authors in phase 2. We use inter-author agreement as a quality measure for our hand-labeled dataset, which is later used for training the CODOC classifier. To do so, both authors together alternately verified the results of each other and noted the agreement. Eventually, a Cohen's Kappa coefficient [11] was computed to evaluate the level of agreement. Due to the clear definitions given in Section III, both authors reached a perfect agreement level of 1.
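For reference, this kind of inter-rater agreement can be computed with standard tooling; the sketch below uses scikit-learn's cohen_kappa_score on two invented annotation lists, purely to illustrate the computation rather than the actual annotations.

from sklearn.metrics import cohen_kappa_score

# Hypothetical labels assigned independently by two annotators to the
# same five methods (SOURCE / SINK / NEITHER).
rater_1 = ["SOURCE", "SINK", "NEITHER", "NEITHER", "SOURCE"]
rater_2 = ["SOURCE", "SINK", "NEITHER", "NEITHER", "SOURCE"]

# Identical annotations yield a Cohen's Kappa of 1.0 (perfect agreement).
kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's Kappa: {kappa:.2f}")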
In phase 2, both authors classified 192 methods as sources, resulting in a total of 231 SOURCE methods from phase 1 and phase 2; 95 methods as sinks, resulting in a total of 130 SINK methods; as well as 654 NEITHER methods. In total, we have a set of 1015 API methods for model training.

B. Data collection and representation
Our approach relies on two different types of input: (1) the documentation of a method; and (2) the source code of a method. This section explains how these data were gathered and transformed into numerical value vectors.

Data Collection: As an open-source project, the Android source code is directly available on the Internet (https://android.googlesource.com/). We downloaded and parsed it using JAVAPARSER [55] to extract public methods that are documented and implemented (i.e., concrete methods). A method being documented means either that: (1) the method itself is documented; or (2) it is documented in one of its parent classes/interfaces. For each method in the dataset derived this way, we extracted (1) its source code and (2) its documentation. Eventually, our dataset consists of 46 034 methods from the Android framework.
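The extraction itself is performed with JAVAPARSER [55], as described above; purely as an illustration, the selection criteria (public, documented, concrete) can be sketched in Python as follows, where every name (ParsedMethod, its fields, the example entries) is an invented placeholder rather than part of the actual pipeline.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ParsedMethod:
    qualified_name: str
    is_public: bool
    javadoc: Optional[str]   # None if neither the method nor a parent documents it
    body: Optional[str]      # None for abstract or native methods

def keep(method: ParsedMethod) -> bool:
    """Selection used for the dataset: public, documented, and implemented."""
    return method.is_public and method.javadoc is not None and method.body is not None

methods = [
    ParsedMethod("android.telephony.TelephonyManager#getImei", True, "/** ... */", "{...}"),
    ParsedMethod("com.example.Demo#hidden", False, "/** ... */", "{...}"),    # not public
    ParsedMethod("com.example.Demo#abstractOp", True, "/** ... */", None),    # not concrete
]
print([m.qualified_name for m in methods if keep(m)])
# ['android.telephony.TelephonyManager#getImei']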
Source Code Representation: Source code must be pre-processed as a piece of textual information before it can serve as input for machine-learning algorithms. In our work, we rely on CODE2VEC [2] (see also Section II). Since the samples have different sizes, they must be transformed into fixed-size numerical vectors. CODE2VEC relies on a neural network that needs to be trained in order to generate source code vectors. As the source code of the Android framework is Java code and the original pre-trained model available in the CODE2VEC project repository (https://github.com/tech-srl/code2vec) has been trained on Java source code as well, we could have used this model. However, since Android code contains platform-specific semantic tokens that cannot be found in regular Java source code (e.g., Activity, BroadcastReceiver, etc.), we decided against this approach. Instead, we trained the model with the source code from the Android framework to ensure that our model properly captures the platform-specific tokens prevalent in Android. After training the CODE2VEC model with Android framework data, we fed the model with the 46 034 Android methods previously extracted to generate their numerical value vectors.
Eventually, 46 034 vectors of size 384 were generated.

Documentation Representation: In the same way as the source code, the documentation has to be embedded into fixed-size numerical value vectors to be fed into machine learning algorithms. We relied on SENTENCE-BERT [44] to generate those vectors. We leave experimentation with other models such as BERT [13] or RoBERTa [32] to future work. We used SENTENCE-BERT with the "paraphrase-mpnet-base-v2" pre-trained model, which is the one achieving the best performance (https://www.sbert.net/docs/pretrained_models.html) at the time of writing. Eventually, the documentation of all 46 034 previously gathered methods was converted into 768-value-long vectors. The distribution of the number of words in the collected documentation is shown in Figure 4. Note that, on average, the number of words in the collected documentation is 56, and the median is 32.

Fig. 4: Distribution of the number of words in the documentation collected per Android method

In future work, we will investigate the effect of text summarization, i.e., learning from more compact texts.
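As an illustration of this embedding step, the pre-trained model named above can be loaded with the sentence-transformers library; the short sketch below (with an invented example sentence) shows that each documentation string is mapped to a 768-dimensional vector.

from sentence_transformers import SentenceTransformer

# Load the pre-trained paraphrase model used for documentation embedding.
model = SentenceTransformer("paraphrase-mpnet-base-v2")

docs = [
    "Returns the unique device identifier, e.g., the IMEI for GSM phones.",  # invented example
]
embeddings = model.encode(docs)   # numpy array of shape (len(docs), 768)
print(embeddings.shape)           # (1, 768)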
C. Deep Learning Architecture
Our deep learning model architecture is straightforward and aims at combining documentation and source code vectors into a single representation. The overall architecture is shown in Figure 5.

Fig. 5: CODOC neural network architecture

Since we are working on two different inputs (i.e., the documentation and the source code) of two different sizes, we decided to rely on two parallel and identical sub-neural networks and to combine their outputs into a single vector that, in turn, is used for a classification task. Each of those two parallel networks is built using a stack of three dense [22] layers with ReLU [6] as the activation function. They are used for extracting fixed-size features from the two inputs. Thus, the first and the second sub-networks take as input the 768-long documentation vector and the 384-long source code vector, respectively, and provide as output two vectors of size 128 each. Those outputs are combined using a concatenation layer that produces a unique 256-long vector. We use this vector for a classification task carried out in 3 additional dense layers. A softmax [52] activation is used in the last dense layer in order to perform a multi-class classification, resulting in a classification as SOURCE, SINK, or NEITHER.
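A minimal sketch of such a two-branch network is given below using Keras; the framework choice and the hidden-layer widths other than the 768/384 inputs, the 128-dimensional branch outputs, and the 3-way softmax are assumptions made only for illustration.

from tensorflow import keras
from tensorflow.keras import layers

def branch(input_dim: int, name: str) -> keras.Model:
    """One sub-network: three dense layers with ReLU, ending in a 128-d vector."""
    inp = keras.Input(shape=(input_dim,), name=f"{name}_input")
    x = layers.Dense(256, activation="relu")(inp)   # hidden widths are assumptions
    x = layers.Dense(192, activation="relu")(x)
    x = layers.Dense(128, activation="relu")(x)     # 128-d branch output
    return keras.Model(inp, x, name=name)

doc_branch = branch(768, "documentation")   # 768-d documentation vectors
code_branch = branch(384, "source_code")    # 384-d code vectors

merged = layers.Concatenate()([doc_branch.output, code_branch.output])   # 256-d vector
x = layers.Dense(128, activation="relu")(merged)    # three classification layers;
x = layers.Dense(64, activation="relu")(x)          # widths again are assumptions
out = layers.Dense(3, activation="softmax", name="source_sink_neither")(x)

model = keras.Model([doc_branch.input, code_branch.input], out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()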
VI. EVALUATION
To evaluate CODOC, we address the following research questions:
RQ1: Do documentation and source code features provide complementary input for classification?
RQ2: How does CODOC perform in 10-fold cross-validation and how does it compare with SUSI?
RQ3: Can CODOC make better predictions than SUSI in the wild, i.e., with unseen data?
RQ4: How does CODOC perform on previously unseen methods?
RQ5: How do the source and sink lists created by CODOC and by SUSI compare in data flow analysis?

A. RQ1: Features complementarity
Objective: In this section, we aim at evaluating to what extent both the source code and the documentation are needed to predict sensitive SOURCE and SINK methods. Intuitively, the source code and the documentation should contribute complementary pieces of semantic information.
Experimental Setup: To experimentally evaluate this hypothesis, we run and compare 4 configurations of CODOC:
1) Binary classification with SOURCE and ¬SOURCE
   a) Only documentation
   b) Only source code
2) Binary classification with SINK and ¬SINK
   a) Only documentation
   b) Only source code
Note that we tested these configurations on multiple classifiers, i.e., we exchanged the dense classification layer in Figure 5 with other classifiers. We did so to ensure that RQ1 is answered in depth and is not dependent on a single classification approach. For each binary classification described above, we performed a stratified 10-fold cross-validation [26], [54], and for each iteration we retained the methods that were mis-classified considering only the documentation and well-classified using only the source code, and vice-versa. The results using only code and only documentation, respectively, are available in Table I.
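As a rough illustration of this protocol, the sketch below runs a stratified 10-fold cross-validation on placeholder data with a KNN classifier (one of the classifiers considered below); the feature matrix, labels, and classifier settings are invented and do not reproduce the actual experiments.

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: rows are methods, columns are documentation-embedding
# features; y marks SOURCE (1) vs. not-SOURCE (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 768))
y = rng.integers(0, 2, size=200)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in skf.split(X, y):
    clf = KNeighborsClassifier()
    clf.fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean accuracy over 10 folds: {np.mean(scores):.2f}")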
We notice that predictions using only documentation yield better results for both source and sink predictions. Further, we computed the overall number of methods that were mis-classified with the documentation but well-classified with the source code, and vice-versa, for sources and sinks and for each classifier. Figures 6a and 6b illustrate the results of this experiment for the KNN algorithm. There we can see that classification with the source code can correctly predict 25 sources that classification with documentation cannot, and 33 vice-versa. In the same way, classification with the source code can correctly predict 4 sinks that classification with documentation cannot, and 14 vice-versa. This shows that, although classification with documentation is better, it misses some samples that classification with source code does not. This shows the need to use both features in our work to improve our model's capabilities.

Fig. 6: Number of sources and sinks correctly predicted with the KNN algorithm: (a) sources prediction; (b) sinks prediction

RQ1 answer: Both the documentation and the code independently bring additional information with regard to the other, i.e., several sources/sinks could only be found with the documentation, or only with code features.
B. RQ2: 10-fold cross validation on CODOC's model and comparison with the state-of-the-art SUSI

Objective: In this section, we investigate whether our approach can better identify sensitive SOURCE and SINK methods than the state-of-the-art SUSI approach. We use CODOC with source code and documentation as input. Note that classifying SOURCE and SINK methods is not a binary classification problem, since a method can either be: (i) a SOURCE; (ii) a SINK; or (iii) NEITHER. Hence, our model relies on a multiclass classification.

TABLE I: Results of binary classification on multiple classifiers with code and documentation (A = Accuracy, P = Precision, R = Recall, F = F1 score, K = Kappa score).

             | SOURCE / Code            | SOURCE / Documentation   | SINK / Code              | SINK / Documentation
             | A    P    R    F    K    | A    P    R    F    K    | A    P    R    F    K    | A    P    R    F    K
XGB          | 0.86 0.74 0.62 0.67 0.58 | 0.91 0.87 0.72 0.78 0.70 | 0.94 0.91 0.58 0.70 0.66 | 0.95 0.91 0.67 0.76 0.68
SVC          | 0.82 0.61 0.64 0.62 0.49 | 0.90 0.75 0.86 0.80 0.72 | 0.88 0.55 0.69 0.60 0.54 | 0.94 0.75 0.83 0.78 0.73
DT           | 0.80 0.56 0.60 0.57 0.49 | 0.85 0.66 0.66 0.66 0.52 | 0.88 0.53 0.54 0.53 0.46 | 0.87 0.50 0.50 0.48 0.50
RF           | 0.85 0.74 0.56 0.62 0.58 | 0.88 0.92 0.55 0.68 0.58 | 0.93 0.94 0.51 0.65 0.57 | 0.92 0.93 0.46 0.60 0.51
GNB          | 0.81 0.55 0.82 0.65 0.53 | 0.87 0.66 0.85 0.74 0.65 | 0.82 0.40 0.77 0.52 0.42 | 0.88 0.54 0.76 0.62 0.56
SGD          | 0.82 0.61 0.57 0.58 0.50 | 0.91 0.83 0.79 0.80 0.70 | 0.90 0.60 0.65 0.61 0.52 | 0.93 0.75 0.74 0.73 0.70
KNN          | 0.85 0.73 0.56 0.63 0.56 | 0.89 0.82 0.68 0.74 0.67 | 0.91 0.66 0.64 0.64 0.60 | 0.92 0.67 0.82 0.73 0.69
BDT1         | 0.85 0.74 0.50 0.59 0.51 | 0.88 0.86 0.58 0.68 0.58 | 0.92 0.84 0.49 0.61 0.54 | 0.91 0.73 0.45 0.55 0.49
BDT2         | 0.85 0.74 0.50 0.59 0.51 | 0.88 0.86 0.58 0.68 0.58 | 0.92 0.84 0.49 0.61 0.54 | 0.91 0.73 0.45 0.55 0.49
ET           | 0.86 0.75 0.56 0.63 0.58 | 0.87 0.89 0.51 0.64 0.54 | 0.94 0.95 0.60 0.73 0.67 | 0.93 0.93 0.49 0.62 0.57
ADA1         | 0.84 0.66 0.62 0.63 0.50 | 0.89 0.78 0.70 0.73 0.70 | 0.91 0.74 0.58 0.63 0.55 | 0.94 0.78 0.73 0.74 0.70
ADA2         | 0.84 0.66 0.62 0.63 0.50 | 0.89 0.78 0.70 0.73 0.70 | 0.91 0.74 0.58 0.63 0.55 | 0.94 0.78 0.73 0.74 0.70
GB           | 0.86 0.72 0.62 0.66 0.57 | 0.90 0.83 0.68 0.75 0.69 | 0.93 0.90 0.52 0.64 0.59 | 0.92 0.85 0.53 0.64 0.59
NN           | 0.84 0.69 0.56 0.60 0.50 | 0.90 0.86 0.68 0.75 0.64 | 0.90 0.77 0.34 0.45 0.33 | 0.87 0.00 0.00 0.00 0.00

TABLE II: CODOC performances.

                  Precision   Recall   F1 score
SOURCE            0.82        0.88     0.85
SINK              0.93        0.87     0.90
NEITHER           0.95        0.93     0.94
Macro Average     0.90        0.89     0.89
Weighted Average  0.91        0.91     0.91

TABLE III: SUSI performances with our ground truth.

                  Precision   Recall   F1 score
SOURCE            0.83        0.85     0.84
SINK              0.86        0.71     0.78
NEITHER           0.89        0.91     0.90
Macro Average     0.86        0.82     0.84
Weighted Average  0.87        0.87     0.87
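The evaluation protocol behind Tables II and III (a stratified 10-fold cross-validation with per-class precision, recall, and F1, described in the next paragraph) can be sketched as follows; the feature matrix, labels, and classifier here are placeholders, not CODOC's actual representation or model.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import StratifiedKFold, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))                       # hypothetical method feature vectors
y = rng.choice(["SOURCE", "SINK", "NEITHER"], 300)   # multiclass ground truth

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
pred = cross_val_predict(RandomForestClassifier(random_state=0), X, y, cv=cv)

# Per-class precision/recall/F1 plus the macro and weighted averages,
# i.e., the same kind of figures reported in Tables II and III.
print(classification_report(y, pred, digits=2))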
CODOC performances: To evaluate our approach, we apply a stratified 10-fold cross-validation and compute the following metrics: (i) precision (|TP| / (|TP| + |FP|)); (ii) recall (|TP| / (|TP| + |FN|)); and (iii) F1 score (2 × precision × recall / (precision + recall)), with TP = True Positive, FP = False Positive, and FN = False Negative. In Table II, we present the results of CODOC. Note that we retain the activation function that yielded the best results for our neural network, i.e., ReLU [1] (we tested the following activation functions: ReLU, sigmoid, tangent, and several combinations of these functions).

Comparison with SUSI: SUSI is not intended to generate privacy-sensitive SOURCE and SINK methods, as explained in Section I. Therefore, a direct comparison between our approach CODOC and the pre-trained SUSI would be unfair [12]. We therefore trained SUSI on our own ground truth to evaluate it and compare it with CODOC. Table III shows the results of a 10-fold cross validation. First, we notice that both CODOC and SUSI independently yield better results for the NEITHER class compared to the SOURCE and SINK classes. This is expected since the training data set is highly imbalanced towards NEITHER methods: most of the Android API is not a source or sink. While this imbalance may be relieved using over-/undersampling techniques or class weights, it does not affect the comparative performance of the tools.

TABLE IV: Number of SOURCE and SINK methods in the lists generated by CODOC and SUSI.

Lists           # SOURCE   # SINK
CODOC             15 105     1061
SUSI ORIGINAL     25 369     5913
SUSI NEW          12 082     1010
Second, CODOC yields slightly better performance than SUSI in classifying sensitive SOURCE and SINK methods. CODOC outperforms SUSI by 4 points in precision, recall, and F1 score.

RQ2 answer: Our approach CODOC achieves an F1 score of 91% in identifying SOURCE and SINK methods in the Android framework. Furthermore, on the same training set, CODOC achieves a slightly better score than the state-of-the-art SOURCE and SINK classifier SUSI.

C. RQ3: CODOC and SUSI comparison in the wild

Objective: This research question aims to compare which methods are identified as sensitive SOURCE or SINK methods by CODOC, and which ones by SUSI. To do so, we check the (non-)overlap to judge the performance of both approaches outside of the 10-fold cross-validation. We apply SUSI and CODOC on the full Android SDK, containing mostly non-labeled methods.

Experimental setup: To compare CODOC against SUSI, we generate the following lists of SOURCE and SINK methods (sizes shown in Table IV):
1) CODOC: the lists generated by CODOC on Android 30 (API version 11), trained on our ground truth.
2) SUSI ORIGINAL: the lists generated by SUSI on a more recent version of Android (i.e., 30), trained on SUSI's original ground truth.
3) SUSI NEW: the lists generated by SUSI on a more recent version of Android (i.e., 30), trained on our ground truth.
We first analyze the overlap between SUSI's and CODOC's lists in Figure 7. We notice that for both SUSI ORIGINAL and SUSI NEW, and for both SOURCE and SINK methods, CODOC and SUSI do not have much in common. We manually curated 6 data sets for further inspection:
1) We select 100 non-sensitive SOURCE methods classified by SUSI ORIGINAL and compare CODOC's predictions;
2) We select 100 misclassified SINK methods predicted by SUSI ORIGINAL and compare CODOC's predictions;
3) We select 100 non-sensitive SOURCE methods classified by SUSI NEW and compare CODOC's predictions;
4) We select 100 misclassified SINK methods predicted by SUSI NEW and compare CODOC's predictions;
5) We select 100 misclassified SOURCE methods predicted by CODOC and compare with both SUSI ORIGINAL and SUSI NEW;
6) We select 100 misclassified SINK methods predicted by CODOC and compare with both SUSI ORIGINAL and SUSI NEW.
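As an aside, the overlap counts summarized in Figure 7 come down to set operations over the generated method lists. A minimal sketch, with hypothetical file names holding one method signature per line:

def load_list(path):
    # Read one method signature per line into a set.
    with open(path) as fh:
        return {line.strip() for line in fh if line.strip()}

codoc = load_list("codoc_sources.txt")        # hypothetical CODOC source list
susi_new = load_list("susi_new_sources.txt")  # hypothetical SUSI NEW source list

print("CODOC only:", len(codoc - susi_new))
print("SUSI NEW only:", len(susi_new - codoc))
print("common to both:", len(codoc & susi_new))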
Methods selection: Regarding the sources, the authors randomly browsed the SOURCE methods yielded by SUSI ORIGINAL, SUSI NEW, and CODOC, and consulted the source code and the documentation to check the sensitiveness of each method. If a method did not return sensitive values, it was retained for this research question. The authors stopped after 100 SOURCE methods (a sample that is statistically significant at a 95% confidence level with a ±10% confidence interval, given the 25 369 sources of SUSI ORIGINAL, the 12 082 of SUSI NEW, and the 15 105 of CODOC). Regarding the sinks, the same procedure was applied as for the sources, except that the authors checked whether the methods could send data out of the app (100 sinks are statistically significant at a 95% confidence level with a ±10% confidence interval, given the 5913 sinks of SUSI ORIGINAL, the 1010 of SUSI NEW, and the 1061 of CODOC). An example of a misclassified non-sensitive SOURCE method is "android.bluetooth.BluetoothCodecStatus.describeContents()", which can be seen in Listing 2. Indeed, the documentation and the source code are very explicit: this method only returns 0. An example of a misclassified SINK method is "com.android.internal.util.FastMath.round(float)", the source code and documentation of which are available in Listing 3. Indeed, this method is only intended to round a float number. It does not write any value outside the app.

/**
 * Always returns 0
 *
 * @return 0
 * @hide
 */
public int describeContents() {
    return 0;
}

Listing 2: Source code of method "describeContents" of class "android.bluetooth.BluetoothCodecStatus".

/**
 * Fast round from float to int. This is faster than Math.round()
 * though it may return slightly different results. It does not try to
 * handle (in any meaningful way) NaN or infinities.
 */
public static int round(float value) {
    long lx = (long) (value * (65536 * 256f));
    return (int) ((lx + 0x800000) >> 24);
}

Listing 3: Source code of method "round" of class "com.android.internal.util.FastMath".
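As a sanity check on the sample size used above, the number of methods needed for a 95% confidence level and a ±10% confidence interval can be computed with the standard formula, assuming simple random sampling and the worst-case proportion p = 0.5; a minimal sketch:

import math

def required_sample(population, z=1.96, margin=0.10, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / margin ** 2            # infinite-population estimate (about 96)
    return math.ceil(n0 / (1 + (n0 - 1) / population))   # finite-population correction

for name, size in [("SUSI ORIGINAL sources", 25369), ("SUSI NEW sources", 12082),
                   ("CODOC sources", 15105), ("SUSI ORIGINAL sinks", 5913),
                   ("SUSI NEW sinks", 1010), ("CODOC sinks", 1061)]:
    print(name, required_sample(size))

Each of the list sizes above requires roughly 88 to 96 samples, so inspecting 100 methods per list is consistent with the stated confidence level.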
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='FastMath.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='round(float)”, the source code and documentation of which are available in Listing 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Indeed, this method is only intended to round a float number.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' It does not write any value outside the app.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Eventually, we gathered 6 datasets with 100 methods each.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Results: The overall results are available in Table V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' From the 100 non-sensitive SOURCE methods classified as sources by SUSI, CODOC only classified 13 as sensitive SOURCE methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' From the 100 non-SINK methods classified as sinks by SUSI, CODOC only classified 4 as SINK methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Discussion: These results, in the wild, do not confirm that CODOC is better than SUSI (contrary to what is shown in Section VI-B), nor the other way around, due to the very low overlap between the SOURCE and SINK methods predicted by both tools.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Tables V and VI do not show that any approach 10507 7484 4598 CoDoC SuSi_New (a) Sources prediction 982 931 79 CoDoC SuSi_New (b) Sinks prediction 8158 18422 6947 CoDoC SuSi_Original (c) Sources prediction 858 5710 203 CoDoC SuSi_Original (d) Sinks prediction Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' 7: Overlap between SOURCE and SINK methods lists of SUSI and CODOC.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' 1 /** 2 Always returns 0 3 4 @return 0 5 @hide 6 / 7 public int describeContents() { 8 return 0;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' 9 } Listing 2: Source code of method ”describeContents” of class ”android.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='bluetooth.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='BluetoothCodecStatus” is stronger.' 
Moreover, taken independently, Table V shows that CODOC is preferable, while Table VI shows that SUSI is preferable. Hence, even if, intuitively, better results are expected by relying on both the source code and the documentation, in practice this is not true according to our experiments and observations.

    /**
     * Fast round from float to int. This is faster than Math.round()
     * thought it may return slightly different results. It does not try to
     * handle (in any meaningful way) NaN or infinities.
     */
    public static int round(float value) {
        long lx = (long) (value * (65536 * 256f));
        return (int) ((lx + 0x800000) >> 24);
    }

Listing 3: Source code of method round() of class com.android.internal.util.FastMath.
Prediction   SUSI ORIGINAL   CODOC   SUSI NEW   CODOC
SOURCE       100             13      100        26
SINK         100             4       100        18

TABLE V: CODOC performance on SUSI's misclassified SOURCE and SINK methods.
Prediction   CODOC   SUSI ORIGINAL   CODOC   SUSI NEW
SOURCE       100     32              100     6
SINK         100     3               100     0

TABLE VI: SUSI performance on CODOC's misclassified SOURCE and SINK methods.

             False positives   True positives
SOURCE       90                10
SINK         56                44

TABLE VII: False and true positive rates of the SOURCE and SINK methods predicted by CODOC.

RQ3 answer: While the outputs of CODOC and SUSI differ, neither is strictly superior over the other.
Though CODOC excels on SUSI's misclassified results, the converse also holds.

D. RQ4: Real-world performance of CODOC

Objective: In this section, we aim to qualitatively evaluate CODOC's predicted lists of SOURCE and SINK methods, since the main goal of this approach is to generate lists of SOURCE and SINK methods that are actionable for other tools, e.g., data leak detectors. To do so, we randomly selected 100 SOURCE and 100 SINK methods from the lists generated by CODOC, which is statistically significant for the dataset at a 95% confidence level with a confidence interval of ± 10%, and inspected each method based on two criteria: (1) the sensitiveness of SOURCE methods for user privacy; and (2) whether SINK methods can actually make data leave the application space.

Results: The results of our manual analyses are available in Table VII. Although, as seen in Section VI-B, CODOC achieves 82% precision in the lab when classifying SOURCE methods, in the wild it reaches a false positive rate of 90%. The conclusion is the same for SINK methods: although CODOC achieves a precision of 93% when classifying SINK methods, in the wild it reaches a false positive rate of 56%.

RQ4 answer: Although CODOC achieves high performance scores when its underlying deep learning model is assessed, in the wild it performs poorly, with a false positive rate of 90% for SOURCE methods and 56% for SINK methods.
E. RQ5: False positive measurement in sensitive data leak detection

To evaluate the effect of the false positives generated by CODOC on real-world applications, we utilize FLOWDROID [4] to find data leaks in Android apps. Intuitively, a false positive in a list of sources or sinks is only relevant if it leads to spurious leaks in the data flow analysis. A method that is never used in an app might be on a source or sink list, but does not have any negative effect in practice. For this evaluation, we randomly selected 500 popular apps from the 2022 GOOGLE PLAY across all available categories. For each app analysis, we set the FLOWDROID timeouts to 5 min for data flow analysis and 3 min for callback collection, and configured the JVM with a 768 GB maximum heap size. We ran the analysis on a system with 144 logical cores backed by four physical Intel Xeon Gold 6254 CPUs. Note that we focus on the quality of sources and sinks and not on the performance of the data flow analysis; we therefore opted for a system with sufficient resources to scale to large apps. We configure FLOWDROID with three different lists of sources and sinks: 1) CODOC: the list generated by CODOC on Android version 30; 2) SUSI NEW: the list generated by SUSI, where SUSI was trained on the ground truth presented in this paper and classified methods of Android version 30; 3) SUSI ORIGINAL: the list generated by SUSI, where SUSI was trained on the original SUSI training data and classified methods of Android version 30. For each of these lists (SrcNN, SnkNN with NN ∈ {SUSI ORIGINAL, SUSI NEW, CODOC}), FLOWDROID yields a set of data flows FlowNN. We then use the data flows FlowNN to remove all sources and sinks from the lists that are not used in at least one data flow.
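A minimal sketch of this pruning step is given below. The types and names are hypothetical (this is not FLOWDROID's API); it only illustrates the idea of keeping the sources and sinks that occur in at least one reported flow.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Hypothetical representation of one taint flow reported by the analysis.
    class Flow {
        final String source; // signature of the flow's source method
        final String sink;   // signature of the flow's sink method
        Flow(String source, String sink) { this.source = source; this.sink = sink; }
    }

    class SourceSinkPruner {
        // Keeps only the candidate sources (or sinks) that appear in at least one flow.
        static Set<String> usedOnly(Set<String> candidates, List<Flow> flows, boolean asSource) {
            Set<String> used = new HashSet<>();
            for (Flow f : flows) {
                String method = asSource ? f.source : f.sink;
                if (candidates.contains(method)) {
                    used.add(method);
                }
            }
            return used;
        }
    }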
This leads to a reduced list of sources and sinks for each configuration, i.e., only the methods that are involved in at least one flow. We validate these lists by hand and count the number of methods that lead to leaks but that are not actually privacy-sensitive. For CODOC, we find StringBuilder.toString(), which is clearly a false positive, to be the most commonly used "source" in the data flow analysis (72% of all flows); a minimal illustration of such a spurious flow is given after this paragraph. The second most common source was StringBuffer.toString(), with around 8% of all flows. The sinks are more reasonable, with the Android log methods being the most prevalent ones (21% of all flows). The SUSI NEW results lead to far fewer flows (3410 instead of 71 211 for CODOC). The used sources and sinks are more widely distributed, i.e., the top source only accounts for 10% of all sources. Still, the used sources and sinks are mostly false positives. For SUSI ORIGINAL, we find the most flows (153 558). The structure of the used sources and sinks resembles SUSI NEW, i.e., a wide variety of methods, most of which are false positives.
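The following hypothetical Android snippet illustrates why such entries are harmful: with StringBuilder.toString() on the source list and the Android log methods on the sink list, the flow below is reported as a sensitive data leak even though no private data is involved.

    // Hypothetical app code; nothing private is read or written here.
    public class SpuriousFlowExample {
        public void logGreeting() {
            StringBuilder sb = new StringBuilder();
            sb.append("hello ").append("world");
            String message = sb.toString();       // treated as a "source" by such a list
            android.util.Log.d("demo", message);  // treated as a "sink" by such a list
        }
    }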
RQ5 answer: The results show that the false positives generated by SUSI and CODOC have a major negative impact on the precision of any data flow analysis that uses these lists of sources and sinks.

VII. DISCUSSION

We designed our study and approach under the hypothesis that adding more semantics, i.e., using code and documentation together, would provide better results than the current state of the art for classifying SOURCE and SINK methods. Further, we integrated recent advances in machine learning. Unfortunately, empirical results show that CODOC performs poorly in practice. More precisely, our investigations show that: (1) although code and documentation are complementary for predicting SOURCE and SINK methods, CODOC performs poorly in the wild; (2) although CODOC achieves good lab results, better than SUSI's (precision, recall, and F1 score of 91% compared to 87%), it performs poorly in the wild; (3) CODOC generates many false positives when applied to Android framework methods, i.e., it classifies as SOURCE or SINK methods that are neither; and (4) the false positives in the SOURCE and SINK lists lead to false positives in the data flow analysis, which renders the lists unfit for real-world data leak detection scenarios.
VIII. LIMITATIONS AND THREATS TO VALIDITY

CODOC relies on two inputs to make its prediction, namely the source code and the documentation of Android methods. We acknowledge that the Android framework contains undocumented methods which cannot be taken into account by CODOC. The lack of method documentation makes CODOC miss some methods to classify; hence, sensitive SOURCE and SINK methods are certainly missed. However, the proportion of missed methods is too low (i.e., 18.6%) to fully explain the poor real-world performance of the approach. CODOC relies on supervised machine learning techniques which, by definition, need labeled data. Therefore, we performed manual labeling based on our expertise to label Android framework methods as SOURCE, SINK, or NEITHER. Consequently, even though we observed a strict and consistent procedure, our labels can be influenced by human subjectivity. Nonetheless, we make all of our artifacts public to the research community to mitigate this threat to validity. Our training set is limited to 1015 samples across three classes, which might not be enough training data; we will explore the use of data augmentation in future work. Sensitiveness is a concept that is not well defined, especially for technical frameworks, and it is exposed to human subjectivity since there is no formal definition of what it is.
Also, the authors noticed during manual labeling that sometimes there is only a fine line between a sensitive value and a non-sensitive one. Therefore, the choices regarding sensitiveness can be biased by human subjectivity. As already described and motivated in Section V-A, our manual labeling process was performed without taking into account SUSI's NEITHER methods list. However, we note that since SUSI yields very good results on the NEITHER category [3], there is a high chance that this list contained well-classified samples, hence being more representative than the ones misclassified as SOURCE and SINK methods.

IX. RELATED WORK

In this section, we present the works available in the literature that are closest to ours. Taint analysis, which requires proper lists of sources and sinks, is used for a variety of purposes: vulnerability detection [33], [9], [37], sensitive data leak detection [4], [28], [45], [14], [58], [20], [61], hidden behavior detection [66], [53], malware detection [51], and bad practice detection [63]. All of these works require lists of SOURCE and SINK methods that have to be defined in advance. If these lists are not complete, the approaches may miss important data flows. Therefore, automated techniques were proposed to catch as many SOURCE and SINK methods as possible, aiming to reach completeness. In 2012, Gibler et al. [19] proposed an approach to automatically detect SOURCE and SINK methods based on mappings between methods and the permissions needed to invoke those methods. Methods requiring sensitive permissions were considered sources.
Methods requiring the INTERNET permission were considered sinks. However, not all sensitive methods need permissions in the Android framework [21]; thus, permission-based approaches miss relevant sources and sinks. Our work, on the other hand, considers all methods in the Android framework regardless of the required permissions. Two years later, Arzt et al. [3] proposed SUSI, an automated approach that relies on machine learning to classify SOURCE and SINK methods in the Android framework. SUSI relies on features based on the method signature (e.g., the method name, the parameter types, the return value, the modifiers, etc.) and on dataflow features. In contrast to SUSI, our approach CODOC relies on a multi-input deep-learning classifier based on (1) the source code and (2) the documentation of a method.
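To make the distinction concrete, the toy sketch below derives a few signature-level attributes of the kind that SUSI-style classifiers operate on; it is an illustration of ours, not SUSI's actual feature extraction, and it omits the dataflow features entirely.

    import java.lang.reflect.Method;
    import java.lang.reflect.Modifier;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class SignatureFeatures {
        // Derives simple lexical/structural attributes from a method signature.
        static Map<String, Object> featuresOf(Method m) {
            Map<String, Object> features = new LinkedHashMap<>();
            features.put("name", m.getName());
            features.put("startsWithGet", m.getName().startsWith("get"));
            features.put("startsWithSet", m.getName().startsWith("set"));
            features.put("returnType", m.getReturnType().getName());
            features.put("parameterCount", m.getParameterCount());
            features.put("isStatic", Modifier.isStatic(m.getModifiers()));
            return features;
        }

        public static void main(String[] args) throws NoSuchMethodException {
            Method toStr = StringBuilder.class.getMethod("toString");
            System.out.println(featuresOf(toStr));
        }
    }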
More recently, Wongwiwatchai et al. [59] proposed an approach to detect privacy leaks in Android apps. The authors did not rely on existing lists of SOURCE and SINK methods; rather, they defined their own lists. To do so, the authors studied well-known data protection frameworks (e.g., the GDPR) and constituted a list of personal information commonly defined in these regulations. The process of mapping personal information (e.g., an age) to Android APIs is opaque in the paper; hence, it is difficult to judge the approach's comprehensiveness and its rate of false positives. In contrast, our approach aims to automatically and systematically map sensitive API methods to sensitive data with machine learning techniques.

X. CONCLUSION

As described in Section VI-D, CODOC, like SUSI, does not provide actionable results in the wild. Indeed, although we have shown in Section VI-B that CODOC outperforms SUSI with a score of 91% on our ground truth, our manual evaluations have shown that CODOC performs poorly on Android framework methods. Hence, the resulting lists of SOURCE and SINK methods cannot be relied upon in real-world data leak detection scenarios. This negative result and the literature [57], [24], [35] show that: (1) the problem of classifying SOURCE and SINK methods is not trivial; and (2) there is an urgent need for a community effort to produce an actionable list of SOURCE and SINK methods for sensitive data leak detection in Android apps carrying highly sensitive data about end users.

XI. DATA AVAILABILITY

For the sake of Open Science, we provide to the community all the artifacts used in our study.
In particular, we make available the datasets used during our experimentation, the source code of our prototype as well as the scripts to execute CODOC, our manually labeled datasets, the vector representations of source code and documentation used, and SUSI-related artifacts. The project's repository, including all artifacts, is available at: https://github.com/JordanSamhi/CoDoC

XII. ACKNOWLEDGMENT

This research work has been funded by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE. Additionally, this work was partly supported by the Luxembourg National Research Fund (FNR), under projects Reprocess C21/IS/16344458 and the AFR grant 14596679.

REFERENCES
[1] Abien Fred Agarap. Deep learning using rectified linear units (ReLU). arXiv preprint arXiv:1803.08375, 2018.
[2] Uri Alon, Meital Zilberstein, Omer Levy, and Eran Yahav. code2vec: Learning distributed representations of code. Proc. ACM Program. Lang., 3(POPL), January 2019.
[3] Steven Arzt, Siegfried Rasthofer, and Eric Bodden. SuSi: A tool for the fully automated classification and categorization of Android sources and sinks. University of Darmstadt, Tech. Rep. TUDCS-2013-0114, 2013.
[4] Steven Arzt, Siegfried Rasthofer, Christian Fritz, Eric Bodden, Alexandre Bartel, Jacques Klein, Yves Le Traon, Damien Octeau, and Patrick McDaniel. FlowDroid: Precise context, flow, field, object-sensitive and lifecycle-aware taint analysis for Android apps. SIGPLAN Not., 49(6):259–269, June 2014.
[5] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[6] Chaity Banerjee, Tathagata Mukherjee, and Eduardo Pasiliao Jr. An empirical study on generalizations of the ReLU activation function. In Proceedings of the 2019 ACM Southeast Conference, pages 164–167, 2019.
[7] Luciano Bello and Marco Pistoia. Ares: Triggering payload of evasive Android malware. In 2018 IEEE/ACM 5th International Conference on Mobile Software Engineering and Systems (MOBILESoft), pages 2–12. IEEE, 2018.
[8] David Brumley, Cody Hartwig, Zhenkai Liang, James Newsome, Dawn Song, and Heng Yin. Automatically identifying trigger-based behavior in malware. In Botnet Detection, pages 65–88. Springer, 2008.
[9] Jun Cai, Peng Zou, Jinxin Ma, and Jun He. SwordDTA: A dynamic taint analysis tool for software vulnerability detection. Wuhan University Journal of Natural Sciences, 21(1):10–20, February 2016.
[10] Erika Chin, Adrienne Porter Felt, Kate Greenwood, and David Wagner. Analyzing inter-application communication in Android. In Proceedings of the 9th International Conference on Mobile Systems, Applications, and Services, MobiSys '11, pages 239–252, New York, NY, USA, 2011. Association for Computing Machinery.
[11] Jacob Cohen. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37–46, 1960.
[12] Janez Demšar. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res., 7:1–30, December 2006.
[13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[14] William Enck, Peter Gilbert, Seungyeop Han, Vasant Tendulkar, Byung-Gon Chun, Landon P. Cox, Jaeyeon Jung, Patrick McDaniel, and Anmol N. Sheth. TaintDroid: An information-flow tracking system for realtime privacy monitoring on smartphones. ACM Transactions on Computer Systems (TOCS), 32(2):1–29, 2014.
[15] William Enck, Damien Octeau, Patrick D. McDaniel, and Swarat Chaudhuri. A study of Android application security. In USENIX Security Symposium, volume 2, 2011.
[16] Ming Fan, Le Yu, Sen Chen, Hao Zhou, Xiapu Luo, Shuyue Li, Yang Liu, Jun Liu, and Ting Liu. An empirical evaluation of GDPR compliance violations in Android mHealth apps. In 2020 IEEE 31st International Symposium on Software Reliability Engineering (ISSRE), pages 253–264, 2020.
[17] Pietro Ferrara and Fausto Spoto. Static analysis for GDPR compliance. In ITASEC, 2018.
[18] Y. Fratantonio, A. Bianchi, W. Robertson, E. Kirda, C. Kruegel, and G. Vigna. TriggerScope: Towards detecting logic bombs in Android applications. In 2016 IEEE Symposium on Security and Privacy (SP), pages 377–396, 2016.
[19] Clint Gibler, Jonathan Crussell, Jeremy Erickson, and Hao Chen. AndroidLeaks: Automatically detecting potential privacy leaks in Android applications on a large scale. In Stefan Katzenbeisser, Edgar Weippl, L. Jean Camp, Melanie Volkamer, Mike Reiter, and Xinwen Zhang, editors, Trust and Trustworthy Computing, pages 291–307, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg.
[20] Michael I. Gordon, Deokhwan Kim, Jeff H. Perkins, Limei Gilham, Nguyen Nguyen, and Martin C. Rinard. Information flow analysis of Android applications in DroidSafe. In NDSS, volume 15, page 110, 2015.
[21] Sigmund Albert Gorski, Benjamin Andow, Adwait Nadkarni, Sunil Manandhar, William Enck, Eric Bodden, and Alexandre Bartel. ACMiner: Extraction and analysis of authorization checks in Android's middleware. In Proceedings of the Ninth ACM Conference on Data and Application Security and Privacy, CODASPY '19, pages 25–36, New York, NY, USA, 2019. Association for Computing Machinery.
[22] Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700–4708, 2017.
[23] Hao Jiang, Hongli Yang, Shengchao Qin, Zhendong Su, Jian Zhang, and Jun Yan. Detecting energy bugs in Android apps using static analysis. In Zhenhua Duan and Luke Ong, editors, Formal Methods and Software Engineering, pages 192–208, Cham, 2017. Springer International Publishing.
[24] Mohsin Junaid, Donggang Liu, and David Kung. Dexteroid: Detecting malicious behaviors in Android apps using reverse-engineered life cycle models. Computers & Security, 59:92–117, 2016.
[25] Jinyung Kim, Yongho Yoon, Kwangkeun Yi, Junbum Shin, and SWRD Center. ScanDal: Static analyzer for detecting privacy leaks in Android applications. MoST, 12:1, 2012.
[26] Ron Kohavi et al. A study of cross-validation and bootstrap for accuracy estimation and model selection. In IJCAI, volume 14, pages 1137–1145. Montreal, Canada, 1995.
[27] Li Li, Kevin Allix, Daoyuan Li, Alexandre Bartel, Tegawendé F. Bissyandé, and Jacques Klein. Potential component leaks in Android apps: An investigation into a new feature set for malware detection. In 2015 IEEE International Conference on Software Quality, Reliability and Security, pages 195–200, 2015.
[28] Li Li, Alexandre Bartel, Tegawendé F. Bissyandé, Jacques Klein, Yves Le Traon, Steven Arzt, Siegfried Rasthofer, Eric Bodden, Damien Octeau, and Patrick McDaniel. IccTA: Detecting inter-component privacy leaks in Android apps. In 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, volume 1, pages 280–291. IEEE, 2015.
[29] Li Li, Alexandre Bartel, Tegawendé F. Bissyandé, Jacques Klein, and Yves Le Traon. ApkCombiner: Combining multiple Android apps to support inter-app analysis. In Hannes Federrath and Dieter Gollmann, editors, ICT Systems Security and Privacy Protection, pages 513–527, Cham, 2015. Springer International Publishing.
[30] Li Li, Tegawendé F. Bissyandé, Yves Le Traon, and Jacques Klein. Accessing inaccessible Android APIs: An empirical study. In 2016 IEEE International Conference on Software Maintenance and Evolution (ICSME), pages 411–422, 2016.
[31] Li Li, Tegawendé F. Bissyandé, Mike Papadakis, Siegfried Rasthofer, Alexandre Bartel, Damien Octeau, Jacques Klein, and Le Traon. Static analysis of Android apps: A systematic literature review. Information and Software Technology, 88:67–95, 2017.
[32] Zhuang Liu, Wayne Lin, Ya Shi, and Jun Zhao. A robustly optimized BERT pre-training approach with post-training. In Sheng Li, Maosong Sun, Yang Liu, Hua Wu, Liu Kang, Wanxiang Che, Shizhu He, and Gaoqi Rao, editors, Chinese Computational Linguistics, pages 471–484, Cham, 2021. Springer International Publishing.
[33] V. Benjamin Livshits and Monica S. Lam. Finding security vulnerabilities in Java applications with static analysis. In Proceedings of the 14th Conference on USENIX Security Symposium - Volume 14, SSYM'05, page 18, USA, 2005. USENIX Association.
[34] Long Lu, Zhichun Li, Zhenyu Wu, Wenke Lee, and Guofei Jiang.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Chex: Statically vetting android apps for component hijacking vulnerabilities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In Proceedings of the 2012 ACM Conference on Computer and Com- munications Security, CCS ’12, page 229–240, New York, NY, USA, 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Association for Computing Machinery.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [35] Linghui Luo, Eric Bodden, and Johannes Sp¨ath.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' A qualitative analysis of android taint-analysis results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 102–114, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [36] Yuhong Nan, Zhemin Yang, Xiaofeng Wang, Yuan Zhang, Donglai Zhu, and Min Yang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Finding clues for your secrets: Semantics-driven, learning-based privacy discovery in mobile apps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In NDSS, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [37] James Newsome and Dawn Xiaodong Song.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Dynamic taint analysis for automatic detection, analysis, and signaturegeneration of exploits on commodity software.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In NDSS, volume 5, pages 3–4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Citeseer, 2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [38] Damien Octeau, Daniel Luchaup, Matthew Dering, Somesh Jha, and Patrick McDaniel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Composite constant propagation: Application to 12 android inter-component communication analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, volume 1, pages 77–88, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [39] Xiaorui Pan, Xueqiang Wang, Yue Duan, XiaoFeng Wang, and Heng Yin.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Dark hazard: Learning-based, large-scale discovery of hidden sensitive operations in android apps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In NDSS, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [40] Thanasis Petsas, Giannis Voyatzis, Elias Athanasopoulos, Michalis Poly- chronakis, and Sotiris Ioannidis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Rage against the virtual machine: Hindering dynamic analysis of android malware.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In Proceedings of the Seventh European Workshop on System Security, EuroSec ’14, New York, NY, USA, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Association for Computing Machinery.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [41] Siegfried Rasthofer, Steven Arzt, Enrico Lovat, and Eric Bodden.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Droid- force: Enforcing complex, data-centric, system-wide policies in android.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In 2014 Ninth International Conference on Availability, Reliability and Security, pages 40–49.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' IEEE, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [42] Siegfried Rasthofer, Steven Arzt, Marc Miltenberger, and Eric Bodden.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Harvesting runtime values in android applications that feature anti- analysis techniques.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In NDSS, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [43] Dhruv Rathi and Rajni Jindal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Droidmark: A tool for android malware detection using taint analysis and bayesian network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' arXiv preprint arXiv:1805.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='06620, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [44] Nils Reimers and Iryna Gurevych.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Sentence-bert: Sentence embeddings using siamese bert-networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' arXiv preprint arXiv:1908.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='10084, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [45] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Samhi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Bartel, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Bissyande, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Klein.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Raicc: Revealing atypical inter-component communication in android apps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), pages 1398–1409, Los Alamitos, CA, USA, may 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' IEEE Computer Society.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [46] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Samhi, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Li, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Bissyande, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Klein.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Difuzer: Uncovering sus- picious hidden sensitive operations in android apps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In 2022 IEEE/ACM 44th International Conference on Software Engineering (ICSE), pages 723–735, Los Alamitos, CA, USA, May 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' IEEE Computer Society.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [47] Jordan Samhi, Kevin Allix, Tegawend´e F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Bissyand´e, and Jacques Klein.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' A first look at android applications in google play related to covid-19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Empirical Software Engineering, 26(4):57, April 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [48] Jordan Samhi and Alexandre Bartel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' On the (in)effectiveness of static logic bomb detector for android apps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' IEEE Transactions on Dependable and Secure Computing, pages 1–1, August 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [49] Jordan Samhi, Jun Gao, Nadia Daoudi, Pierre Graux, Henri Hoyez, Xiaoyu Sun, Kevin Allix, Tegawend´e F Bissyand´e, and Jacques Klein.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Jucify: A step towards android code unification for enhanced static analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In 2022 IEEE/ACM 44th International Conference on Software Engineering (ICSE), pages 1232–1244, Los Alamitos, CA, USA, May 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' IEEE Computer Society.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [50] Golam Sarwar, Olivier Mehani, Roksana Boreli, and Mohamed Ali Kaafar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' On the effectiveness of dynamic taint analysis for protecting against private information leaks on android-based devices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In SE- CRYPT, volume 96435, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [51] Venkatesh Gauri Shankar, Gaurav Somani, Manoj Singh Gaur, Vijay Laxmi, and Mauro Conti.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Androtaint: An efficient android malware detection framework using dynamic taint analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In 2017 ISEA Asia Security and Privacy (ISEASP), pages 1–13, 2017.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [52] Sagar Sharma, Simone Sharma, and Anidhya Athaiya.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Activation functions in neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' towards data science, 6(12):310–316, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [53] Dawei Shi, Xiucun Tang, and Zhibin Ye.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Detecting environment- sensitive malware based on taint analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In 2017 8th IEEE In- ternational Conference on Software Engineering and Service Science (ICSESS), pages 322–327, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [54] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Stone.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Cross-validatory choice and assessment of statistical predic- tions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Journal of the Royal Statistical Society.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Series B (Methodological), 36(2):111–147, 1974.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [55] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Tomassetti.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Javaparser, https://github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content='com/javaparser/javaparser.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Ac- cessed August 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [56] Victor Van Der Veen, Herbert Bos, and Christian Rossow.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Dynamic analysis of android malware.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Internet & Web Technology Master thesis, VU University Amsterdam, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [57] Weiping Wang, Jianjian Wei, Shigeng Zhang, and Xi Luo.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Lscdroid: Malware detection based on local sensitive api invocation sequences.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' IEEE Transactions on Reliability, 69(1):174–187, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [58] Fengguo Wei, Sankardas Roy, Xinming Ou, and Robby.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Amandroid: A precise and general inter-component data flow analysis framework for security vetting of android apps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, CCS ’14, page 1329–1341, New York, NY, USA, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Association for Computing Machinery.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [59] Nattanon Wongwiwatchai, Phannawhat Pongkham, and Kunwadee Sri- panidkulchai.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Comprehensive detection of vulnerable personal infor- mation leaks in android applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pages 121–126, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [60] Songyang Wu, Pan Wang, Xun Li, and Yong Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Effective detection of android malware based on the usage of data flow apis and machine learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Information and software technology, 75:17–25, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [61] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Yang and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Yang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Leakminer: Detect information leakage on android with static taint analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In 2012 Third World Congress on Software Engineering, pages 101–104, 2012.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [62] Zhemin Yang, Min Yang, Yuan Zhang, Guofei Gu, Peng Ning, and X Sean Wang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Appintent: Analyzing sensitive data transmission in android for privacy leakage detection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In Proceedings of the 2013 ACM SIGSAC conference on Computer & communications security, pages 1043–1054, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [63] Sergio Yovine and Gonzalo Winniczuk.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Checkdroid: A tool for au- tomated detection of bad practices in android applications using taint analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In 2017 IEEE/ACM 4th International Conference on Mobile Software Engineering and Systems (MOBILESoft), pages 175–176, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [64] Mu Zhang, Yue Duan, Heng Yin, and Zhiruo Zhao.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Semantics-aware android malware classification using weighted contextual api depen- dency graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, CCS ’14, page 1105–1116, New York, NY, USA, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Association for Computing Machinery.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [65] Mu Zhang and Heng Yin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Appsealer: Automatic generation of vulnerability-specific patches for preventing component hijacking at- tacks in android applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' In NDSS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Citeseer, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' [66] Qingchuan Zhao, Chaoshun Zuo, Brendan Dolan-Gavitt, Giancarlo Pellegrino, and Zhiqiang Lin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/e9E1T4oBgHgl3EQfegRi/content/2301.03207v1.pdf'} +page_content=' Automatic uncovering of hidden behaviors from input validation in mobile apps.' 
diff --git a/eNE0T4oBgHgl3EQfWwDh/content/2301.02284v1.pdf b/eNE0T4oBgHgl3EQfWwDh/content/2301.02284v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e058dfdfb149ee8faa468d24f5197cf73d06d011
--- /dev/null
+++ b/eNE0T4oBgHgl3EQfWwDh/content/2301.02284v1.pdf
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a66b21629e6c03f2968f6c4d4257e623743cc0857de3dcc9dd44fce3aa5819c2
size 443911
diff --git a/eNE0T4oBgHgl3EQfWwDh/vector_store/index.faiss b/eNE0T4oBgHgl3EQfWwDh/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..7f850c1e50438b1c7e8ef89f99a7813bc88b9bdc
--- /dev/null
+++ b/eNE0T4oBgHgl3EQfWwDh/vector_store/index.faiss
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:488eba5298a6c5633de3fea2642c05269cc961708450f54ccad06be8c1dae47a
size 4194349
diff --git a/eNE0T4oBgHgl3EQfWwDh/vector_store/index.pkl b/eNE0T4oBgHgl3EQfWwDh/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..87b6f01e9a6a2f1fd487858bdc6bd76fc3c45a9a
--- /dev/null
+++ b/eNE0T4oBgHgl3EQfWwDh/vector_store/index.pkl
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ff3019021a75f3c0fa0a607e6393ac7606cabbd9636e7a477d85c4576f4ba63a
size 138554
diff --git a/fdE_T4oBgHgl3EQf2RwE/content/tmp_files/2301.08339v1.pdf.txt b/fdE_T4oBgHgl3EQf2RwE/content/tmp_files/2301.08339v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..3bb03c7dff981f5030bc1ab1d73f65cf8a9833a5
--- /dev/null
+++ b/fdE_T4oBgHgl3EQf2RwE/content/tmp_files/2301.08339v1.pdf.txt
@@ -0,0 +1,335 @@
Radiation Shielding Analysis for the PIP-II Linac at Fermilab
FERMILAB-CONF-22-709-AD
Igor Rakhno,* Nikolai Mokhov,# Igor Tropin,§ Sergei Striganov,Δ† Yury Eidelman†§
*Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, Illinois 60510-5011, rakhno@fnal.gov
#Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, Illinois 60510-5011, mokhov@fnal.gov
§Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, Illinois 60510-5011, tropin@fnal.gov
†Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, Illinois 60510-5011, strigano@fnal.gov
†§Euclid Techlabs, LLC, Solon, Ohio 44139, eidelyur@fnal.gov
ΔDeceased

INTRODUCTION

The Proton Improvement Plan-II (PIP-II) [1] has been developed at Fermilab to provide powerful proton beams to the laboratory's experiments. An 800-MeV superconducting linear accelerator, the centerpiece of the project, is currently under construction in Batavia, Illinois (USA). After completion, the superconducting linac will be the starting point for the 1.2-MW (Phase 1) and 2.4-MW (Phase 2) proton beam needed for the Long-Baseline Neutrino Facility (LBNF) at Fermilab [2]. Because a fraction of the beam is unavoidably lost in the accelerator components, a certain level of radiation will be generated in the accelerator tunnel both during normal operation and during accidents. This work deals with the radiation shielding design for the accelerator facility.

MARS15 Model of the Accelerator and Beam

A detailed computation model of the entire PIP-II Linac, the Linac-to-Booster transfer line and the corresponding shielding has been developed with the MARS15 Monte Carlo code [3]. Several parts of this model are shown in Figs. 1 through 6. The accelerator model, based on engineering design and CAD geometry models, comprises the major beamline components, including quadrupole and dipole magnets, solenoid magnets, superconducting accelerating cavities and cryomodules. Such a detailed model allows us to predict three-dimensional distributions of prompt and residual dose rate with a high level of accuracy. The MARS15 model is based on a built-in three-dimensional MAD-X based beamline builder [4] and ROOT geometry [5], which provides great flexibility when building complicated geometry structures. Electromagnetic field distributions in the magnets and accelerating cavities were accounted for as well. Transport of charged secondary particles scattered back into the aperture of these elements is performed by means of the solvers provided by the ODEINT package of the Boost C++ library. As an extra verification step, individual trajectories were compared with the analytical solution for the energy gain along the design trajectory of the linac [6] and with trajectories generated by the TraceWin code [7]. For the energy gain on the design trajectory, the comparison between our code and TraceWin revealed perfect agreement; for distant trajectories, an acceptable agreement was observed.

The accelerator shielding is a bulk permanent structure with approximately one hundred penetrations for both equipment and personnel. The current shielding design is based on a design developed during the initial stage of the project using conservative assumptions and simplified analytical methods. In fact, the presented shielding analysis is a shielding optimization study.

The PIP-II Linac is designed for negatively charged hydrogen ions (H-) in order to mitigate the space-charge effects inevitable for high-power beams. At the end of the transfer line, two electrons will be stripped off each ion using the standard stripping-foil technique, which ultimately produces a proton beam. In order to properly describe the beam transport in electromagnetic fields and the beam interactions with matter, two new particles, H- and H0, have been introduced into the MARS15 code. Interactions of these H- and H0 particles with matter are simulated using a model based on experimental data.
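As an illustration of the trajectory-integration step described above, a minimal sketch using a Boost.Odeint stepper is given below. This is not the MARS15 implementation: the field map, particle energy, state layout and step size are assumptions chosen only to show the structure of the calculation (relativistic Lorentz-force equations of motion pushed with a fixed-step Runge-Kutta integrator).

// Minimal sketch: push a proton through a placeholder static EM field with Boost.Odeint.
// All field values and numerical settings are illustrative assumptions, not PIP-II parameters.
#include <array>
#include <cmath>
#include <cstdio>
#include <boost/numeric/odeint.hpp>

using State = std::array<double, 6>;              // x, y, z [m], px, py, pz [kg m/s]

constexpr double q = 1.602176634e-19;             // elementary charge [C]
constexpr double m = 1.67262192e-27;              // proton mass [kg]
constexpr double c = 2.99792458e8;                // speed of light [m/s]

// Placeholder field: a uniform longitudinal B plus a constant accelerating E (assumed values).
void field(const State&, double E[3], double B[3]) {
    E[0] = 0.0; E[1] = 0.0; E[2] = 1.0e6;         // 1 MV/m (assumed)
    B[0] = 0.0; B[1] = 0.0; B[2] = 0.5;           // 0.5 T (assumed)
}

// Equations of motion: dr/dt = p/(gamma*m), dp/dt = q*(E + v x B).
void lorentz(const State& s, State& dsdt, double /*t*/) {
    double E[3], B[3];
    field(s, E, B);
    const double p2    = s[3]*s[3] + s[4]*s[4] + s[5]*s[5];
    const double gamma = std::sqrt(1.0 + p2 / (m*m*c*c));
    const double vx = s[3]/(gamma*m), vy = s[4]/(gamma*m), vz = s[5]/(gamma*m);
    dsdt[0] = vx; dsdt[1] = vy; dsdt[2] = vz;
    dsdt[3] = q * (E[0] + vy*B[2] - vz*B[1]);
    dsdt[4] = q * (E[1] + vz*B[0] - vx*B[2]);
    dsdt[5] = q * (E[2] + vx*B[1] - vy*B[0]);
}

int main() {
    // 2-MeV proton starting on axis (an assumed, injection-scale kinetic energy).
    const double T  = 2.0e6 * q;                              // kinetic energy [J]
    const double pz = std::sqrt(T*T + 2.0*T*m*c*c) / c;       // longitudinal momentum
    State s{0.0, 0.0, 0.0, 0.0, 0.0, pz};

    boost::numeric::odeint::runge_kutta4<State> stepper;
    const double dt = 1.0e-11;                                // 10 ps step (assumed)
    for (int i = 0; i < 10000; ++i)
        stepper.do_step(lorentz, s, i*dt, dt);

    std::printf("z = %.3f m, pz = %.3e kg m/s\n", s[2], s[5]);
    return 0;
}

In the actual study the fields come from the magnet and cavity field distributions mentioned above and the solvers are the ODEINT ones referred to in the text; the sketch only mirrors the form of the equations being solved.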
Fig. 1. A plan view of a major fragment of the model, showing the Linac with the Front End Building and the initial part of the Linac-to-Booster transfer line with shielding. The light blue, grey and light green colors correspond to air, concrete and soil, respectively.

Fig. 2. A 3D view of the beamline components in the transition region from the Linac to the transfer line.

Fig. 3. A 3D view of several beamline components that belong to a single cryomodule.

Fig. 4. A sample set of H- trajectories in the Linac. The region with s from 20 to 80 m contains beamline components with apertures less than 20 mm.

Fig. 5. A schematic engineering rendering of the RF waveguides and cable penetrations along the Linac.

Fig. 6. A fragment of the model showing a cross section of the Linac tunnel, the klystron gallery and the penetrations for the RF waveguides. The beige color corresponds to a crushed rock layer around the tunnel.

Beam Loss

The widely adopted 1 W/m rule of uniform beam loss rate during normal operation, derived at the brainstorming workshop [8] from hands-on maintenance conditions for proton energies above 200 MeV, is used in this study as the overall normalization in the corresponding sections of the Linac. For normal operation, our goal is to make sure the prompt dose rate outside the Linac shielding does not exceed 0.5 µSv/hr.

As the worst-case beam accident scenario, we follow the approach used for the ESS linear accelerator [6]. In this case, the misbehaved beam of full intensity is assumed to hit the beam pipe upstream of the corrector doublet in the last Linac section, which corresponds to the highest particle energies. The duration of the accident is assumed to be 3 seconds, and the angle of incidence is 2.5 milliradian. Our goal is to make sure that, due to the accident, the prompt dose rate both atop the Linac shielding and in the klystron gallery will not exceed 0.01 mSv/hr.

RESULTS

Various prompt dose rate distributions have been calculated: along the Linac and the Linac-to-Booster transfer line, on the berm, and in the klystron gallery. A sophisticated combination of splitting and Russian roulette has been used in order to deal with the deep-penetration problem (i.e., the thick shielding above the accelerator tunnel). A comparison between this accurate approach and the simplified one mentioned above [6] confirmed that the latter represents a reasonable approximation to the accurate solution. Figures 7-8 and 9-10 show calculated dose distributions for normal operation and for the accident scenario, respectively. It is worth mentioning that the exponential fitting works well not only for normal operation, when relatively long flat regions can be present (see Fig. 7, z from 210 to 240 m), but for localized accidents as well (see Figs. 9-10).

Fig. 7. A calculated distribution of prompt dose (elevation view) along the Linac at normal operation. The irregularities in the distribution along the z axis are due to the essentially heterogeneous structure of the beamline model introduced by the cryomodules and to accounting for the electromagnetic fields in the accelerating SRF cavities of the cryomodules.

Fig. 8. The calculated prompt dose distribution above the Linac tunnel at normal operation, averaged along the z axis from 210 to 240 m (see Fig. 7), and an exponential fitting function. [Figure legend: MARS15 result and fitting curve A*exp(-x/λ) with A = 15.68 ± 0.06 and λ = 41.85 ± 0.1; axes: prompt dose (mSv/hr) vs. soil thickness above the tunnel with a crushed rock layer, x (cm).]
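Two of the numbers used above can be reproduced with a short, self-contained estimate: the 1 W/m loss normalization expressed as a proton loss rate at 800 MeV, and the soil thickness implied by an exponential fit of the form A*exp(-x/λ). The A and λ values below are the ones read off the Fig. 8 legend and are used for illustration only; they are not the reviewed design numbers.

// Back-of-the-envelope sketch for the loss normalization and the exponential attenuation.
#include <cmath>
#include <cstdio>

int main() {
    // 1 W/m of 800-MeV protons expressed as protons lost per metre per second.
    const double eV   = 1.602176634e-19;          // J per eV
    const double Ek   = 800.0e6 * eV;             // kinetic energy per proton [J]
    const double rate = 1.0 / Ek;                 // protons / (m s) for 1 W/m
    std::printf("1 W/m at 800 MeV ~ %.2e protons/m/s\n", rate);

    // Dose on the berm vs. soil thickness x: D(x) = A * exp(-x / lambda).
    const double A      = 15.68;                  // value read off the Fig. 8 legend (assumed mSv/hr at x = 0)
    const double lambda = 41.85;                  // attenuation length from the Fig. 8 legend [cm]
    const double target = 0.5e-3;                 // normal-operation goal of 0.5 uSv/hr, in mSv/hr
    const double x_req  = lambda * std::log(A / target);
    std::printf("soil thickness for %.1e mSv/hr: ~%.0f cm\n", target, x_req);
    return 0;
}

With these read-off fit constants the sketch gives roughly 7.8e9 protons/m/s for the 1 W/m rule and a soil thickness on the order of 4.3 m for the 0.5 µSv/hr goal, which is consistent with the scale of the distributions shown in Fig. 8.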
Fig. 9. A calculated distribution of prompt dose (elevation view) along the Linac at the accident, assuming one accident per hour.

Prompt dose distributions along the multiple penetrations are a separate topic in this study. Calculations revealed that the round cable penetrations are much less important than the larger rectangular penetrations for the RF waveguides (see Fig. 5) from the standpoint of enhanced radiation streaming. Detailed calculations also revealed that the goal of not exceeding 0.01 mSv/hr in the klystron gallery at the accident can be achieved at a relatively modest price tag, namely by using a concrete lid as thick as 90 cm in the RF vault (see Figs. 6, 11 and 12).

Other distributions essential from the radiological standpoint have been calculated as well: surface water activation, beamline component activation, residual dose, air activation, as well as a skyshine contribution.

Fig. 10. The calculated prompt dose distribution above the Linac tunnel at the accident, averaged along the z axis from 222 to 228 m (see Fig. 9), and an exponential fitting function. [Figure legend: MARS15 result for a full beam accident and fitting curve A*exp(-x/λ) with A = 2948.4 ± 5.0 and λ = 41.24 ± 0.05; horizontal axis: soil thickness above the tunnel with a crushed rock layer, x (cm).]

Fig. 11. A calculated distribution of prompt dose around the Linac (cross section at z = 227 m) at the accident, assuming one accident per hour.

ENDNOTES

This work is supported by Fermi Research Alliance, LLC under contract No. DE-AC02-07CH11359 with the U.S. Department of Energy.

This research used, in part, an ALCC allocation at the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.

Fig. 12. The calculated prompt dose distribution in the RF vault (see Figs. 6 and 11). Basic configuration means using a lid as thick as 30 cm. [Figure legend: basic configuration; RF vault lid 3 ft, accident plane at 30 deg; RF vault lid 3 ft, vertical accident plane. Axes: prompt dose (mSv/accident) vs. elevation above beamline (cm).]

REFERENCES

1. https://pip2.fnal.gov.
2. https://www.fnal.gov/pub/science/lbnf-dune/index.html.
3. N. V. MOKHOV and C. C. JAMES, "The MARS Code System User's Guide, Version 15 (2019)," Fermilab-FN-1058-APC (2017); https://mars.fnal.gov.
4. N. V. MOKHOV and I. S. TROPIN, "MARS15-Based System for Beam Loss and Collimation Studies," Proc. ICFA Mini-Workshop on Tracking for Collimation in Particle Accelerators, CERN, Geneva, Switzerland, October 30, 2018, Vol. 2/2018, p. 57, CERN (2018).
5. R. BRUN and F. RADEMAKERS, "ROOT – An Object-Oriented Data Analysis Framework," Nucl. Instrum. Meth. Phys. Res. A 389, 81 (1997).
6. N. MOKHOV, I. TROPIN, I. RAKHNO, Yu. EIDELMAN, L. TCHELIDZE, "ESS Accelerator Prompt Radiation Shielding Design Assessment," ESS-0052477 (2016).
7. D. URIOT, N. PICHOFF, "Status of TraceWin Code," Proc. 6th Int. Particle Accel. Conf., Richmond, Virginia, USA, May 3-8, 2015, MOPWA008 (2000); http://accelconf.web.cern.ch/AccelConf/IPAC2015/papers/proceed1.pdf.
8. N. V. MOKHOV in J. ALONSO, "Beam Loss Working Group Report," Proc. 7th ICFA Mini-Workshop on High Intensity High Brightness Hadron Beams, Interlaken Resort on Lake Como, Wisconsin, USA, September 13-15, 1999, p. 51, Fermilab (2000).
diff --git a/fdE_T4oBgHgl3EQf2RwE/content/tmp_files/load_file.txt b/fdE_T4oBgHgl3EQf2RwE/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..47b2f01b5b8eced6f7f3a250ebdbbcc0de3a5604
--- /dev/null
+++ b/fdE_T4oBgHgl3EQf2RwE/content/tmp_files/load_file.txt
@@ -0,0 +1,244 @@
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' A comparison between this accurate approach and a simplified one mentioned above [6] confirmed that the latter represents a reasonable approximation to the accurate solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Figures 7-8 and 9-10 show calculated dose distributions for normal operation and the accident scenario, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' It is worth mentioning that the exponential fitting works well not only for normal operation when relatively long flat regions can be present (see Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 7, z from 210 to 240 m), but for localized accidents as well (see Figs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 9-10).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' A calculated distribution of prompt dose (elevation view) along the Linac at normal operation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' The irregularities in the distribution along z axis are due to essentially heterogeneous structure of the beamline model introduced by cryomodules and accounting for electromagnetic fields in accelerating SRF cavities in the cryomodules.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' The calculated prompt dose distribution above the Linac tunnel at normal operation, averaged along z axis from 210 to 240 m (see Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 7), and an exponential fitting function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' A calculated distribution of prompt dose (elevation view) along the Linac at the accident assuming one accident per hour.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Prompt dose distributions along the multiple penetrations is a separate topic in this study.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Calculations revealed that the round cable penetrations are much less important than larger rectangular penetrations for the RF waveguides (see Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 5) from the standpoint of enhanced radiation streaming.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Also, detailed calculations revealed that the goal of not exceeding 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='01 mSv/hr in the klystron gallery at the accident can be achieved at a relatively modest price tag, namely using a concrete lid as thick as 90 cm in the RF vault (see Figs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 6, 11 and 12).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Other distributions, essential from the radiological standpoint, have been calculated as well: surface water y(m) 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 z(m) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 100.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 150.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 200.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 250.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content="1e+05 10610410210°10′210'410-610-8 Prompt dase fmSv/hrNormal operation MARS15 Fitting curve 10° 10 10 Fitting curve A*exp(-x/2) 103 A 15." metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='68 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='06 2 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='85 ± 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='1 10° 0 100 200 300 400 500 Soil thickness above tunnel with a crashed rock layer, X (cm)y(m) 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 (w)z 160.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 200.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 240.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 280.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='3e+08 y 10°10610410²10°10-210-410-610-8 Prompt dase fmSvfhr far a full beam accidentactivation, beam line component activation, residual dose, air activation as well as a skyshine contribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' The calculated prompt dose distribution above the Linac tunnel at the accident, averaged along z axis from 222 to 228 m (see Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 9), and an exponential fitting function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' A calculated distribution of prompt dose around the Linac (cross section at z = 227 m) at the accident assuming one accident per hour.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' ENDNOTES This work is supported by Fermi Research Alliance, LLC under contract No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' DE-AC02-07CH11359 with the U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Department of Energy.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' This research used, in part, an ALCC allocation at the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' The calculated prompt dose distribution in the RF vault (see Figs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 6 and 11).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Basic configuration means using a lid as thick as 30 cm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' REFERENCES 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' https://pip2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='fnal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='gov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='fnal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='gov/pub/science/lbnf-dune/index.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='html.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' MOKHOV and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' JAMES, “The MARS code system User’s Guide, Version 15 (2019),” Fermilab-FN- 1058-APC (2017);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' https://mars.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='fnal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='gov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' MOKHOV and I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' TROPIN, “MARS15-Based System for Beam Loss and Collimation Studies,” Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' ICFA Mini-Workshop on Tracking for Collimation in Particle Accelerators, CERN, Geneva, Switzerland, October 30, 2018, Vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 2/2018, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 57, CERN (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' BRUN and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' RADEMAKERS, “ROOT – An Object- Oriented Data Analysis Framework,” Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Inst.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' & Meth.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' In Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=', A 389, 81 (1997).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' MOKHOV, I.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' TROPIN, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' RAKHNO, Yu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' EIDELMAN, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' TCHELIDZE, “ESS accelerator prompt radiation shielding design assessment,” ESS-0052477 (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' URIOT, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' PICHOFF, “Status of TraceWin Code,” Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 6th Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Particle Accel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=', Richmond, Virginia, USA, May 3-8, 2015, MOPWA008 (2000);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' http://accelconf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='web.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='cern.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='ch/AccelConf/IPAC2015/papers/p roceed1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='pdf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Mokhov in J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' ALONSO, “Beam Loss Working Group Report,” Proc.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 7th ICFA Mini-Workshop on High Intensity High Brightness Hadron Beams, Interlaken Resort on lake Como, Wisconsin, USA, September 13-15, 1999, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' 51, Fermilab (2000).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content=' Full beam accident MARS15 103 Fitting curve 10° l0 TITTT 109 10 Fitting curve A*exp(-x/2) A 2948.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='4 ± 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='24 ±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='05 10°2 0 100 200 300 400 500 Soil thickness above tunnel with a crashed rock layer, X (cm)y(m) 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 (w)x 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='0 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='1e+07 y 10610410²10°10-210′410-610-8 10 Promnnt dose fnsvihr-Basicconfiguration 10 =RFvault lid3ft,accidentplaneat30deg .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='. RF vault lid 3 ft.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} +page_content='vertical accident plane Prompt dose (mSv/accident) 10° 10° 104 200 300 400 500 600 Elevation above beamline (cm)' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/fdE_T4oBgHgl3EQf2RwE/content/2301.08339v1.pdf'} diff --git a/gNE0T4oBgHgl3EQfpAFc/vector_store/index.faiss b/gNE0T4oBgHgl3EQfpAFc/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..5c863a4922a7695c79e93538a474f95b6f9cb3f2 --- /dev/null +++ b/gNE0T4oBgHgl3EQfpAFc/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f629b420ad747d88af44c493781de78f7dd4a8c7d74074f74ccaea3e053f4692 +size 3801133 diff --git a/gtE1T4oBgHgl3EQfzAWY/content/2301.03440v1.pdf b/gtE1T4oBgHgl3EQfzAWY/content/2301.03440v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d108a438aa0e220cb8c56fe5e613bd129cedc093 --- /dev/null +++ b/gtE1T4oBgHgl3EQfzAWY/content/2301.03440v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2a5778edf6792171c49056de38ab705d873929f250be758889ee151c500d9c3 +size 2158779 diff --git a/gtE1T4oBgHgl3EQfzAWY/vector_store/index.faiss b/gtE1T4oBgHgl3EQfzAWY/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..5ed882f7c4b2976a25e3960bd8fcb17d46264aec --- /dev/null +++ b/gtE1T4oBgHgl3EQfzAWY/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:876e783dc6d1d2a28f39d5003e61146dc0b3bcb0a9360119e93343ca51f1eba1 +size 3473453 diff --git a/htE0T4oBgHgl3EQfXwCw/content/2301.02298v1.pdf b/htE0T4oBgHgl3EQfXwCw/content/2301.02298v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d8f9090aa585dce0752178dd1c5388fef7419f32 --- /dev/null +++ b/htE0T4oBgHgl3EQfXwCw/content/2301.02298v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c5d8fc4abf37344aa4fdd868ab62167e94d74ef6bfc21a56fce3cdbf3436929 +size 861456 diff --git a/htE0T4oBgHgl3EQfXwCw/vector_store/index.faiss b/htE0T4oBgHgl3EQfXwCw/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..aaa3064a5d99f06630d2390d03134fb23deaa896 --- /dev/null +++ b/htE0T4oBgHgl3EQfXwCw/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:27f6600a257282220c931d17de285224b22ca86382cd6f2cbaaf37ce47e51937 +size 3211309 diff --git a/htE0T4oBgHgl3EQfXwCw/vector_store/index.pkl b/htE0T4oBgHgl3EQfXwCw/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..a8a46acd341b6ba6f0d2822332f8d46ed52efd76 --- /dev/null +++ b/htE0T4oBgHgl3EQfXwCw/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b953bde8309b24a5f96bda492c03471000958e104a5f2735a8cec7504cb1e43 +size 109228 diff --git a/i9AzT4oBgHgl3EQfM_vp/content/2301.01143v1.pdf b/i9AzT4oBgHgl3EQfM_vp/content/2301.01143v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ae9c22ace7bccf3f070912ddc54f8f59d3ff59e9 --- /dev/null +++ b/i9AzT4oBgHgl3EQfM_vp/content/2301.01143v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7bdb52430eadacf660df01a53341373b1af7f63c3501fa8bd90d0c4bb40648e5 +size 735781 diff --git a/i9AzT4oBgHgl3EQfM_vp/vector_store/index.pkl b/i9AzT4oBgHgl3EQfM_vp/vector_store/index.pkl new file mode 100644 index 
0000000000000000000000000000000000000000..ae43f42ec75906c324f5cc956e32bdac8a71f322 --- /dev/null +++ b/i9AzT4oBgHgl3EQfM_vp/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70b9c5f6ae24b1ec2f639fbe188da22ac2a92f07ec7f198fb9d6ed314b2535a7 +size 130657 diff --git a/i9E0T4oBgHgl3EQfYQAP/content/2301.02303v1.pdf b/i9E0T4oBgHgl3EQfYQAP/content/2301.02303v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7b80e25bcda7f54d724f3cd6fc4dd6a6ddbc945e --- /dev/null +++ b/i9E0T4oBgHgl3EQfYQAP/content/2301.02303v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d26001e5552a20c98f5c26f6b95f9e4bfc6d1f4a5c79dd6b2cadcd60e7b6cad +size 1592290 diff --git a/i9E0T4oBgHgl3EQfYQAP/vector_store/index.faiss b/i9E0T4oBgHgl3EQfYQAP/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..32b43275ead37188bc544630a2b64e9ac2f6d43e --- /dev/null +++ b/i9E0T4oBgHgl3EQfYQAP/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3466ca08bd8ee261d15a7ac37cfc62733d0d1a39adbe657d933badf12f743da8 +size 6684717 diff --git a/i9E0T4oBgHgl3EQfYQAP/vector_store/index.pkl b/i9E0T4oBgHgl3EQfYQAP/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..220fb61ea9d852ec3de101949d589d6d86fade38 --- /dev/null +++ b/i9E0T4oBgHgl3EQfYQAP/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:222abfaa5eb289a6d44f6528861865c35dcc68cca7e96d35e3fcef1ee75296ef +size 215209 diff --git a/iNE2T4oBgHgl3EQfHwaf/vector_store/index.faiss b/iNE2T4oBgHgl3EQfHwaf/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..27db7ecd1a66005ef7fd2e7e23f6478fa1e93ab9 --- /dev/null +++ b/iNE2T4oBgHgl3EQfHwaf/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f0cda2d647f11a058cf333dcc6489a886242814c681b22b6b59748e77532a7b +size 7602221 diff --git a/jNAzT4oBgHgl3EQf4_5S/content/tmp_files/2301.01852v1.pdf.txt b/jNAzT4oBgHgl3EQf4_5S/content/tmp_files/2301.01852v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..e602d88058e60601e1db3ae9e279f640cfb618ac --- /dev/null +++ b/jNAzT4oBgHgl3EQf4_5S/content/tmp_files/2301.01852v1.pdf.txt @@ -0,0 +1,2832 @@ +arXiv:2301.01852v1 [stat.ME] 4 Jan 2023 +Censored Regression with Serially Correlated +Errors: a Bayesian approach +Rodney Sousa∗ +Isabel Pereira† +Maria Eduarda Silva‡ +Brendan McCabe§ +Abstract +The problem of estimating censored linear regression models with au- +tocorrelated errors arises in many environmental and social studies. The +present work proposes a Bayesian approach to estimate censored regres- +sion models with AR(p) errors. The algorithm developed here considers +the Gibbs sampler with data augmentation (GDA), in which, at each it- +eration, both the model parameters and the latent variables are sampled. +The data augmentation is achieved from multiple sampling of the latent +variables from the corresponding conditional distributions. +A suitable +variable transformation allows the full likelihood to be obtained. A sim- +ulation study indicates that the proposed approach produces estimates +with a high accuracy even in scenarios where the proportion of censored +observations is large. +The method is further illustrated in a real data of cloud ceiling height, +including model checking and selection for censored time series data. 
keywords: Censored Data, Linear Regression, Autocorrelation, Bayesian Analysis, Gibbs sampler, Data augmentation

1 Introduction

Censored observations arise when explicit limits are placed on the observed data and occur in several fields including environmental monitoring, economics, medical and social sciences. The censoring may be due to measuring device limitations, such as detection limits in air pollution or mineral concentration in water (Hopke et al., 2001). In economics, censoring occurs when constraints or regulations are imposed, such as on observations in international trade where exports and imports are subject to trade barriers (Zangari and Tsurumi, 1996).

∗Corresponding author: rodney@ua.pt, University of Aveiro, Portugal & CIDMA
†isabel.pereira@ua.pt, Departamento de Matemática, Universidade de Aveiro, Portugal & CIDMA
‡mesilva@fep.up.pt, Faculdade de Economia, Universidade do Porto, Portugal & LIAAD-INESC
§Brendan.Mccabe@liverpool.ac.uk, Management School, Chatham Building, Chatham Street, University of Liverpool, L69 7ZH

Since the work of Buckley and James (1979), an extensive body of literature on regression analysis with censored responses has been developed. In addition to censoring, the data often exhibit serial correlation, leading to the adoption of dynamic censored models.

In the time series regression context, censoring has been addressed by several authors. The first methodological approach to the estimation of censored regressions with autocorrelated errors was proposed by Zeger and Brookmeyer (1986), who presented the exact likelihood function for this model. The likelihood is constructed from blocks of data of variable dimensions. As the block size usually increases with the censoring rate, maximum likelihood quickly becomes numerically intractable. Acknowledging this issue, the authors suggest an approximate approach based on a pseudo-likelihood. Park et al. (2007) introduced an imputation method to estimate an ARMA model from a censored time series. The potentially censored values are imputed with random values simulated from their conditional distribution given the observed data and the censoring information. The resulting time series is considered complete and may be analysed with the usual time series methods. Mohammad (2014) proposed a quasi-EM algorithm to fit ARMA models in the presence of censoring, with the particularity of treating missing data as a special case of censoring. Schumacher et al. (2017) suggest using a Stochastic Approximation of the EM technique (SAEM) based on the unconditional likelihood function of linear regression models with AR(p) errors. These authors have shown via simulations that their method yields consistent estimates even when the proportion of censored values is large (≈ 40%). Houseman and Virji (2017) proposed a Bayesian approach to handle exposure time series data subject to left censoring, where the autocorrelation is modelled by a spline-based method in order to account for non-stationary autocorrelation. Wang and Chan (2018) suggested a quasi-likelihood method based on a system of equations and performed model checking based on simulated residuals (Gourieroux et al., 1987).

The problem of estimating regression models with autocorrelated errors from censored observations has also been addressed in a Bayesian framework. Zangari and Tsurumi (1996) considered three Bayesian procedures for censored regression models with AR(1) errors. The authors derive posterior densities for the parameters of the model, building on the work of Zeger and Brookmeyer (1986), using Laplace approximations, a Gibbs sampler with data augmentation and a quadrature numerical integration procedure. However, the authors found that the Gibbs sampler using a data augmentation algorithm failed to converge for moderate censoring percentages (10-15%) and strongly correlated disturbances. Wei and Tanner (1990) considered a censored autoregression of order p with exogenous variables (censored ARX(p)) and developed a sampling scheme for the conditional posterior distributions of the censored data, successfully applying the Gibbs sampler with data augmentation. This procedure also builds on the Zeger and Brookmeyer (1986) decomposition of the likelihood.

The present work proposes a Bayesian approach to estimate censored regression models with AR(p) errors, as it is acknowledged that the coefficients of these models have the usual interpretation and are therefore easier to explicate than those of ARX models. The algorithm developed here considers the Gibbs sampler with data augmentation (GDA), in which, at each iteration, both the model parameters and the latent variables are sampled. The data augmentation is achieved by multiple sampling of the latent variables from the corresponding conditional distributions. The censored observations are thus replaced by the mean of multiple samples, leading to faster convergence of the algorithm and more accurate estimates. Under data augmentation, the computation of the likelihood function reduces to that of the likelihood of a multivariate Gaussian sample. In time series analysis it is usual to resort to the conditional likelihood; however, in the current situation a suitable variable transformation allows the full likelihood to be obtained. Additionally, a procedure for model selection and model assessment in this Bayesian framework based on data augmentation is proposed. The relative performance of competing models can be assessed using Bayes factors, based on the ratio of the normalising constants under each model, referred to as the evidence; a review of some commonly used methods of estimating the model evidence is given in Friel and Wyse (2012). The current paper further contributes to the literature by showing that GDA is useful for model selection using measures of predictive performance, traditionally named information criteria, allowing for forecast evaluation through leave-one-out cross-validation suitable for time series data. Empirical experiments with synthetic and real data sets indicate that the proposed approach overcomes the bias introduced by the censoring even when the censoring rate is high (40%).

Finally, note that attention here is restricted to left censoring in the development of the procedure. This is, however, easily adapted and extended to the right censoring case, as shown in its application to a time series of cloud ceiling heights, thus demonstrating the flexibility of the procedure.

The paper is organized as follows: Section 2 defines the model under study; Section 3 describes the proposed Bayesian approach with data augmentation, detailing all the required steps, and illustrates the performance of the method under three different censorship scenarios using synthetic data sets.
Section 4 discusses model assessment when using censored data, and Section 5 analyses a time series of cloud ceiling heights, previously analysed by Park et al. (2007) and by Schumacher et al. (2017), which was originally collected by the National Center for Atmospheric Research (NCAR). The data consist of 716 hourly observations in San Francisco, during the month of March 1989, of which 41.7% are censored. Some final remarks and possible future extensions are given in the conclusion.

2 Censored Linear Regression with Autocorrelated Errors

A latent variable w is said to be left censored at L if only the values above L are recorded, while the values less than or equal to this limit are reported as L. The observed variable y is then defined as

\[ y = \begin{cases} w, & \text{if } w > L \\ L, & \text{if } w \le L \end{cases} \tag{1} \]

or, equivalently, y = max(w, L). Similarly, if w is right censored, the recorded values will be y = min(w, L). L may be thought of as a detection limit.

Now consider the classic linear regression model with serially correlated errors defined as an AR(p) process, denoted LR-AR. The discrete time representation of this model for the response variable w_t at time t is given by

\[ w_t = x_t\beta + u_t, \qquad u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + \dots + \rho_p u_{t-p} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \sigma^2_{\varepsilon}) \tag{2} \]

where x_t = (1, x_{t2}, ..., x_{tk}) is a 1 × k vector of explanatory variables or features, β = (β_0, β_1, ..., β_{k−1}) is the k-vector of regression coefficients, and u_t is a stationary AR(p) process with Gaussian innovations ε_t and AR coefficients ρ = (ρ_1, ..., ρ_p) satisfying the usual stationarity conditions.

Assume now that we observe possibly censored values y_t = max(w_t, L), where L is a known censoring limit. Then we write the Censored Linear Regression model with AR errors, CLR-AR, as

\[ y_t = \max(w_t, L), \qquad w_t = x_t\beta + u_t, \qquad u_t = \rho_1 u_{t-1} + \dots + \rho_p u_{t-p} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \sigma^2_{\varepsilon}). \tag{3} \]

Henceforward, y = (y_1, ..., y_T) represents the actual recorded values, which are possibly censored, while w = (w_1, ..., w_T) denotes the corresponding latent process; X represents the T × k matrix of the regressors and θ = (β, ρ, σ²_ε) the parameter vector. The history of a process ω_t up to time t is represented by ω^t = (ω_1, ..., ω_t).

3 Bayesian inference with data augmentation

A natural approach to inference for censored data in the Bayesian framework is based on the Gibbs sampler (Gelfand and Smith, 1990; Casella and George, 1992) with data augmentation, GDA (Tanner and Wong, 1978; Fridley and Dixon, 2007; Chib, 1992). This approach can be described as a two-step procedure within each iteration: (i) the (possibly) censored observations are imputed with values generated from a truncated conditional distribution, thus originating an augmented data set that is considered complete; (ii) the model parameters are generated from their full conditional distributions.

First, we describe the data augmentation procedure.

3.1 Data augmentation

Usually the data augmentation step relies on the simulation of a single value for each censored observation at each iteration, a procedure that does not account for the variance of the truncated distribution (Hopke et al., 2001). In order to overcome this problem, this work proposes a new approach to the GDA algorithm in which each censored observation is imputed with the mean of several, say m, values simulated from the truncated distribution.
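To make this imputation step concrete, the following R sketch draws from the Gaussian distribution truncated to (−∞, L] by the inverse-CDF method and replaces each censored observation by the mean of m such draws, in the spirit of equations (4)-(6) and of the GDA-MSM algorithm of Section 3.4. It is a minimal illustration for the AR(1) case, not the authors' code: the function names rtnorm_left and impute_censored are ours, the regression coefficients, AR coefficient and innovation standard deviation are assumed to come from the current Gibbs iteration, and the first observation is simply kept at its previous imputed value.

```r
## Sketch (not the authors' implementation): mean-of-m truncated-normal imputation, AR(1) case
rtnorm_left <- function(m, mu, sigma, L) {
  # inverse-CDF sampling from N(mu, sigma^2) truncated to (-Inf, L]
  u <- runif(m, 0, pnorm(L, mean = mu, sd = sigma))
  qnorm(u, mean = mu, sd = sigma)
}

impute_censored <- function(y, x, L, beta, rho, sigma, m = 5, z_prev) {
  # y: observed (possibly censored) series; x: T x k design matrix;
  # beta, rho, sigma: current parameter values; z_prev: augmented series from the previous sweep
  z <- z_prev
  for (t in 2:length(y)) {                  # t = 1 kept at its previous value for simplicity
    if (y[t] <= L) {
      mu_t <- rho * z[t - 1] + as.numeric((x[t, ] - rho * x[t - 1, ]) %*% beta)  # eq. (6), p = 1
      z[t] <- mean(rtnorm_left(m, mu_t, sigma, L))   # mean of m truncated draws
    } else {
      z[t] <- y[t]
    }
  }
  z
}
```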
+Numerical studies +with synthetic data, see Section 3.5, show that this approach, denoted by GDA- +MSM, leads to posterior distributions for the parameters with good location +and dispersion properties. +The other important issue relates to the truncated distribution from which +we impute the (possibly) censored observations. Given a data set y = (y1, ..., yT ) +possibly with censored observations the augmented data set is defined as defined +as follows, +zt = +� +yt, if yt > L +zt ∼ F(zt|y, θ, zt ≤ L), if yt = L, +(4) +where F(zt|y, θ, zt ≤ L) is the truncated distribution corresponding to the cen- +sored values of the latent variable, with support in ] − ∞, L]. Specifically, under +the Gaussian assumption +f(zt|y, θ, zt ≤ L) = 1 +σ × φ +� zt−µt|zt−1 +σ +� +Φ +� L−µt|zt−1 +σ +� × I(−∞,L)(zt), +(5) +with φ(.) and Φ(.) denoting, respectively, the pdf and cdf of the standard normal +distribution and +µt| zt−1 =ρ1zt−1 + ρ2zt−2 + ... + ρpzt−p +(6) ++ (xt − ρ1xt−1 − ρ2xt−2 − ...ρpxt−p)β. +The resulting vector of augmented data z = (z1, . . . , zT ) is regarded as a hy- +pothetical observations of the latent variable which satisfy the model expressed +in equation (2) and is the object of ensuing Bayesian analysis. The following +sections introduce the elements required for Bayesian analysis: likelihood and +full conditional distributions. +3.2 +Complete Likelihood +To compute the complete likelihood function +L(z|y, θ) = f(z1, . . . , zT|y, θ) +(7) +consider the following variable transform +5 + +z∗ = Qz +(8) +where the T × T matrix Q is such that Q′Q is proportional to the inverse of +the variance-covariance matrix of u. In fact Q is a matrix of the form +Q = + + +q11 +0 +· · · +0 +0 +0 +· · · +0 +· · · +0 +0 +q21 +q22 +· · · +0 +0 +0 +· · · +0 +· · · +0 +0 +· · · +· · · +· · · +· · · +· · · +· · · +· · · +· · · +· · · +· · · +· · · +qp1 +qp2 +· · · +qpp +0 +0 +· · · +0 +· · · +0 +0 +−ρp +−ρp−1 +· · · +−ρ1 +1 +0 +· · · +0 +· · · +0 +0 +0 +−ρp +· · · +−ρ2 +−ρ1 +1 +· · · +0 +· · · +0 +0 +· · · +· · · +· · · +· · · +· · · +· · · +· · · +· · · +· · · +· · · +· · · +0 +0 +· · · +0 +0 +0 +· · · +−ρp−1 +· · · +1 +0 +0 +0 +· · · +0 +0 +0 +· · · +−ρp +· · · +−ρ1 +1 + + +(9) +with the elements qij obtained under the restriction expressed in equation (11). +In fact, this transform is induced by the following relationship between ε = +(ε1, ε2, ..., εT ) and u = (u1, u2, ..., uT) in model (2) +ε = Qu. +(10) +Let Σε = σ2 +εI and Σu denote the variance-covariance matrices of ε and u, +respectively and I is the identity matrix. Since Σε = σ2 +εI = QΣuQ′ then Q +must satisfy +σ2 +ε(Q′Q)−1 = Σu +(11) +Therefore +z∗ +t = + + + + + + + + + + + + + + + +q11z1, +t = 1 +q21z1 + q22z2, +t = 2 +... +qp1z1 + . . . + qppzp, +t = p +zt − ρ1zt−1 − . . . − ρpzt−p, +t = p + 1, ..., T +(12) +Define X∗ = QX which results in +x∗ +tj = + + + + + + + + + + + + + + + +q11x1j, +t = 1 +q21x1j + q22x2j, +t = 2 +... +qp1x1j + . . . + qppxpj, +t = p +xtj − ρ1xt−1,j − . . . − ρpxt−p,j, +t = p + 1, ..., T +(13) +for j = 2, . . . k and x∗ +t1 = 1, +t = 1, . . . , T. 
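For readers wishing to reproduce a data set of the kind used in Section 3.5, the following R sketch simulates one left-censored CLR-AR(1) series. It is an illustrative sketch under stated assumptions rather than the authors' simulation code: the parameters mirror model M2 of Table 1 (β_0 = 2, β_1 = 1, σ² = 2) with ρ_1 = 0.48, and the censoring limit L is taken as the empirical 20% quantile of the latent series so that roughly 20% of the observations are censored.

```r
## Sketch: simulate a left-censored CLR-AR(1) series (model M2, ~20% censoring)
set.seed(123)
T_len  <- 500
beta   <- c(2, 1)                                   # (beta_0, beta_1)
rho1   <- 0.48
sigma2 <- 2

x <- cbind(1, rnorm(T_len))                         # intercept plus one regressor
u <- as.numeric(arima.sim(list(ar = rho1), n = T_len, sd = sqrt(sigma2)))  # AR(1) errors
w <- as.numeric(x %*% beta) + u                     # latent process, equation (2)

L <- as.numeric(quantile(w, 0.20))                  # censoring limit giving ~20% censoring
y <- pmax(w, L)                                     # observed series, equation (3)
mean(w <= L)                                        # realised censoring proportion
```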
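The parameter-updating steps 4-6 of Algorithm 1 can be sketched as follows for p = 1, in which case the transform of Section 3.2 reduces to q_11 = sqrt(1 − ρ²) and z*_t = z_t − ρ z_{t−1} for t ≥ 2. The sketch uses MASS::mvrnorm and invgamma::rinvgamma, the samplers mentioned in Section 3.5; the random-walk proposal standard deviation prop_sd and the function name gibbs_sweep_ar1 are illustrative choices of ours, not part of the paper.

```r
## Sketch: one parameter-updating sweep of Algorithm 1 for the CLR-AR(1) model
library(MASS)
library(invgamma)

gibbs_sweep_ar1 <- function(z, x, beta, sigma2, rho, prop_sd = 0.05) {
  T_len <- length(z)
  q11 <- sqrt(1 - rho^2)
  zs  <- c(q11 * z[1], z[-1] - rho * z[-T_len])              # z*, equation (12)
  xs  <- rbind(q11 * x[1, ], x[-1, ] - rho * x[-T_len, ])    # X*, equation (13)

  XtX      <- crossprod(xs)
  beta_hat <- drop(solve(XtX, crossprod(xs, zs)))            # FGLS estimator
  beta     <- as.numeric(mvrnorm(1, beta_hat, sigma2 * solve(XtX)))        # draw from (21)
  sigma2   <- rinvgamma(1, shape = T_len / 2,
                        rate = 0.5 * sum((zs - xs %*% beta)^2))            # draw from (22)

  # random-walk Metropolis step for rho based on (20); |Q| reduces to q11 when p = 1
  log_target <- function(r) {
    if (abs(r) >= 1) return(-Inf)                            # stationarity region S_rho
    q  <- sqrt(1 - r^2)
    zr <- c(q * z[1], z[-1] - r * z[-T_len])
    xr <- rbind(q * x[1, ], x[-1, ] - r * x[-T_len, ])
    log(q) - sum((zr - xr %*% beta)^2) / (2 * sigma2)
  }
  rho_prop <- rho + rnorm(1, 0, prop_sd)
  if (log(runif(1)) < log_target(rho_prop) - log_target(rho)) rho <- rho_prop

  list(beta = beta, sigma2 = sigma2, rho = rho)
}
```

Combined with an imputation step such as the one sketched after Section 3.1, iterating this sweep N times, discarding the burn-in and thinning the chain reproduces the structure of the GDA-MSM sampler described in the text.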
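The two-step procedure of Section 4 can be organised as in the sketch below, again for a censored AR(1) without covariates (x_t ≡ 1). The wrapper fit_gda is hypothetical: it stands for a run of the GDA-MSM sampler on (y_1, ..., y_t) returning M retained draws of β_0, ρ and σ² together with the corresponding augmented series, and z_full denotes the augmented series produced from the full sample in Step 1.

```r
## Sketch: LFO-CV one-step-ahead residuals, equations (23)-(26), AR(1) without covariates
lfo_residuals <- function(y, z_full, L, n, M = 1000, fit_gda) {
  T_len <- length(y)
  d <- rep(NA_real_, T_len)
  for (t in n:(T_len - 1)) {
    fit <- fit_gda(y[1:t], L, M)          # Step 2.1: refit the model on y_1, ..., y_t
    # one predictive draw from (24) per retained MCMC draw theta^(j)
    mu     <- fit$rho * fit$z[t] + (1 - fit$rho) * fit$beta0    # equation (6) with x_t = 1
    z_pred <- rnorm(M, mean = mu, sd = sqrt(fit$sigma2))
    d[t + 1] <- (z_full[t + 1] - mean(z_pred)) / sd(z_pred)     # equations (23), (25), (26)
  }
  d
}

## Competing models can then be compared through, e.g., sum(d^2, na.rm = TRUE):
## smaller values indicate better one-step-ahead predictive performance.
```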
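Given the matrix of pointwise log-likelihood values evaluated at the augmented data, the penalty term p_w of equation (27) is straightforward to compute. The sketch below assumes loglik is a T × M matrix whose (t, j) entry equals ln f(z_t | θ^{(j)}); the function name is ours.

```r
## Sketch: the WAIC penalty of equation (27) from a T x M pointwise log-likelihood matrix
waic_penalty <- function(loglik) {
  # for numerical stability one would normally subtract the row maximum before exp()
  log_mean_f <- apply(loglik, 1, function(l) log(mean(exp(l))))  # ln((1/M) sum_j f)
  mean_log_f <- rowMeans(loglik)                                  # (1/M) sum_j ln f
  -2 * sum(mean_log_f - log_mean_f)
}
```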
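Finally, the adaptation to the right-censored cloud ceiling series of Section 5 only changes the truncation side of the imputation step: draws are taken from the normal distribution truncated to [L, ∞), with the observed series being y_t = min(w_t, L) and L equal to the detection limit expressed on the log scale of the analysed data. The helper below mirrors the left-censoring sketch and is likewise illustrative only.

```r
## Sketch: truncated-normal sampling for right censoring (y_t = min(w_t, L))
rtnorm_right <- function(m, mu, sigma, L) {
  u <- runif(m, pnorm(L, mean = mu, sd = sigma), 1)  # inverse-CDF draw from [L, Inf)
  qnorm(u, mean = mu, sd = sigma)
}
```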
+Then likelihood function (7) is equivalent to +6 + +L(z∗|y, θ) = |Q| +� +1 +2πσ2 +� T +2 +exp +� +− +1 +2σ2 +T +� +t=1 +(z∗ +t − x∗ +t β)2� +(14) +3.3 +Full Conditional Distributions +Bayesian analysis involves formal consideration of prior information and infer- +ences about the model parameters are obtained from the posterior distribution, +π(θ|y), defined by +π(θ|y) ∝ L(y|θ) × π(θ), +(15) +where θ is the parameters vector, L(y|θ) is the likelihood function of the ob- +served data and π(θ) represents the joint prior distribution of the parameters. In +the absence of prior information, noninformative prior distributions are consid- +ered, assumming that β, σ2 and ρ are independent variables with the following +prior specifications +π(β) ∝ c1, +π(σ2) ∝ 1 +σ2 , +π(ρ) ∝ c2 × Iρ∈Sρ, +(16) +where c1, c2 are constants, Sρ is the region of stationarity of the process ut and +I(·) denotes the indicator function. +By combining (14) and (16), the posterior distribution with the augmented +data is written as follows: +π(θ|y, z) ∝ |Q| +� 1 +σ2 +� T +2 +1 +exp +� +− +1 +2σ2 +T +� +t=1 +(z∗ +t − x∗ +t β)2� +× Iρ∈Sρ. +(17) +From (17) it follows that the full conditional distributions for the model +parameters are given by +π(β|σ2, ρ, y, z) ∝ exp +� +− 1 +2(β − ˆβ)′ 1 +σ2 (X∗′X∗)(β − ˆβ) +� +, +(18) +π(σ2|β, ρ, y, z) ∝ +� 1 +σ2 +� T +2 +1 +exp +� +− +1 +2σ2 +T +� +t=1 +(z∗ +t − x∗ +t β)2� +, +(19) +π(ρ|β, σ2, y, z) ∝ |Q| exp +� +− +1 +2σ2 +T +� +t=1 +(z∗ +t − x∗ +t β)2� +× Iρ∈Sρ, +(20) +where ˆβ = (X∗′X∗)−1X∗′z∗ is the Feasible Generalized Least Squares (FGLS) +estimator. The functional forms of (18) and (19) show that +β|σ2, ρ, y, z ∼ N( ˆβ, σ2(X∗′X∗) +−1), +(21) +σ2|β, ρ, y, z ∼ IG +�T +2 , 1 +2(z∗ − X∗β)′(z∗ − X∗β) +� +. +(22) +However, to sample values of ρ we need to use the Metropolis-Hastings algo- +rithm within the Gibbs sampler (Gilks et al., 1995). +7 + +3.4 +GDA-MSM algorithm +The following algorithm describes how to perform Bayesian inference in the +CLR-AR(p) using the Gibbs sampler with the described data augmentation +procedure, GDA-MSM. +Given a data set y = (y1, ..., yT ), possibly with censored observations, the +GDA-MSM algorithm allows the construction of a Markov Chain for the pa- +rameters of the CLR-AR(p) model as follows: +Algorithm 1: Gibbs sampler with Data augmentation (GDA) +1. Initialize with y, L ∈ R, N ∈ Z and θ(0) = (β(0), σ2(0), ρ(0)) +2. Set z(0) = y +3. For i = 1, ..., N +4. +Sample β(i) ∼ π(β|σ2(i−1), ρ(i−1), y, z(i−1)) +5. +Sample σ2(i) ∼ π(σ2|β(i), ρ(i−1), y, z(i−1)) +6. +Sample ρ(i) ∼ π(ρ|β(i), σ2(i), y, z(i−1)) +7. +For t = 1, ..., T +8. +If yt ≤ L +9. +For j = 1, ..., m +10. +Sample ztj ∼ F(zt|y, θ(i), zt ≤ L) × I(zt≤L) +11. +z(i) +t +:= 1 +m +m +� +j=1 +ztj +12. +Else +13. +z(i) +t +:= yt +14. Return Θ = [θ(1) · · · θ(N)]′ and z(N). +The MCMC estimates of the model parameters θ are usually obtained by +calculating the sample mean of the GDA output Θ , unless the marginal pos- +terior density indicates a highly skewed distribution; in this case it is more +appropriate to use the sample median. The resulting augmented data z(N) = +(z(N) +1 +, . . . , z(N) +T +) can be regarded as observations on the latent variable for fur- +ther inferences (Tanner and Wong, 1978; Law and Jackson, 2017). 
+3.5 +Illustration with synthetic data sets +The performance of the above procedure is illustrated with censored time se- +ries simulated from the CLR-AR(1) model with and without explanatory vari- +bles, several positive and negative values for the lag 1 correlation, namely, +ρ1 = −0.8, −0.48, −0.15, 0.15, 0.48, 0.8, three different scenarios of censorship +5%, 20% and 40% and three sample sizes 100, 500 and 1000. Values for the +model parameters β0, β1, σ2 were chosen based on the papers Schumacher et al. +(2017) and Wang and Chan (2018) and are given in Table 1. Note that the +model designated as M1 corresponds to an AR(1) with mean β0/(1 − ρ1). The +total number of models is eighteen, leading to 18 × 3 × 3 = 162 simulation sce- +8 + +narios. The simulation allows control of the degree of censorship and of serial +correlation. +Table 1: Parameters of the CLR-AR(1) model in the simulation study +Model +Parameter +β0 +β1 +σ2 +M1 +2 +0 +2 +M2 +2 +1 +2 +M3 +0. 2 +0.4 +0.607 +The procedure is implemented in R (R Core Team, 2020) and, in particular, +the packages MASS (Ripley et al., 2021) and invgamma (Kahle and Stamey, +2017) are used to sample from the multivariate normal and from the inverted +gamma distributions. The algorithm is iterated N = 4 × 104 times, the b = +2 × 104 initial burn-in iterations were discarded and only every 20th value of +the last iterations is kept to reduce the autocorrelation within the chain. The +convergence of the MCMC algorithm was duly analysed with the usual diagnos- +tic tests available in package Coda (Plummer et al., 2006; Robert and Casella, +2010). The initial estimates are obtained by FGLS estimates and the model +parameters are estimated by posterior means from the remaining M = 1 × 103 +values in the chain. The number of simulated values for the data augmentation +is m = 5 as used by Hopke et al. (2001). +The overall results and performance of the method are illustrated by the +posterior densities for M2 with ρ1 = 0.48 under the three censorship scenarios +in Figures 1–3. The plots illustrate the efficiency and Bayes consistency of the +GDA–MMS method: the marginal posterior distributions are, in general, con- +centrated on sets containing the true values of the parameters (vertical dashed +red lines), with the variability decreasing as the sample sizes increase, for all the +scenarios of censorship considered. Illustration of posterior densities for other +values of ρ1 and the other models are presented in Appendix A. +To further study the properties of the method, the 100 realizations of each +the 162 scenarios is generated and the results are summarized in Tables 4 to 7 +in Appendix B. +Thus, the approach works well at estimating censored regression models with +AR errors. +4 +Model assessment in censored data +This section presents criteria for model assessment and model selection in the +context of Bayesian analysis of censored regressions with autocorrelated errors. 
4 Model assessment in censored data

This section presents criteria for model assessment and model selection in the context of Bayesian analysis of censored regressions with autocorrelated errors.

First define the jackknife one-step-ahead residual at t + 1 (Harrison and West, 1991; Shiffrin et al., 2008) as

d^J_{t+1} = \frac{z_{t+1} - E[z_{t+1} \mid z^t]}{\sqrt{Var[z_{t+1} \mid z^t]}},    (23)

which is calculated by adopting the leave-future-out cross-validation (LFO-CV) method (Burkner et al., 2020), a modification of the popular leave-one-out cross-validation method that leaves out all future observations in order to assess predictive performance in time series models. In practice, the sample t = 1, ..., T is partitioned into a training set with n observations, which grows as t advances, and a test set with the remaining T − n observations (Wagenmakers et al., 2006). The value of n is chosen so that estimation is consistent.

The mean, E[z_{t+1}|z^t], and variance, Var[z_{t+1}|z^t], in equation (23) are approximated by their sample counterparts using M values generated from the one-step-ahead predictive distribution of z_{t+1},

f(z_{t+1} \mid z^t) = \int_{\Theta} f(z_{t+1} \mid z^t, \theta)\, \pi(\theta \mid z^t)\, d\theta,    (24)

which is not available in closed form. Therefore

E[z_{t+1} \mid z^t] \approx \frac{1}{M}\sum_{j=1}^{M} z^{(j)}_{t+1},    (25)

Var[z_{t+1} \mid z^t] \approx \frac{1}{M-1}\sum_{j=1}^{M}\left(z^{(j)}_{t+1} - E[z_{t+1} \mid z^t]\right)^2,    (26)

where the z^{(j)}_{t+1} are simulated from N(µ, σ²), with µ as in equation (6), given an MCMC output θ(j) generated from π(θ|z^t).

In the context of censored data, the computation of the residuals d^J_t in equations (23), (25), (26) and the subsequent model assessment are achieved in a two-step procedure:

Step 1: Given a (possibly) censored data set y1, ..., yT, fit the model via the GDA-MSM algorithm and obtain an augmented data set z = (z1, ..., zT); choose n < T.

Step 2: For each t = n + 1, ..., T:
  2.1 Generate θ(j)_t by applying the GDA-MSM algorithm to (y1, ..., yt) and then generate z^(j)_t ∼ N(µ, σ²), with µ given by equation (6), for j = 1, ..., M.
  2.2 Approximate the expectation and variance of the predictive distribution using equations (25) and (26).
  2.3 Regarding z = (z1, ..., zT) as the actually observed data, compute (23) (see Gourieroux et al. (1987) and Law and Jackson (2017)).
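A compact R sketch of this two-step computation, reusing the gda_msm sketch from Section 3.4, is shown below. The function name lfo_residuals and the choice of M are illustrative, refitting the sampler at every t is written for clarity rather than speed, and the one-step-ahead predictive mean uses equation (6) for the AR(1) case only.

# Jackknife (LFO-CV) residuals, equations (23), (25), (26); illustrative sketch.
lfo_residuals <- function(y, X, L, n, M = 200, fit_gda = gda_msm) {
  z_full <- fit_gda(y, X, L)$z                          # Step 1: augmented data from the full fit
  k <- ncol(X); res <- numeric(0)
  for (t in n:(length(y) - 1)) {
    fit_t <- fit_gda(y[1:t], X[1:t, , drop = FALSE], L) # Step 2.1: refit on the first t observations
    draws <- tail(fit_t$theta, M)
    mu <- X[t + 1, ] %*% t(draws[, 1:k]) +              # equation (6) with p = 1
          draws[, "rho"] * (z_full[t] - X[t, ] %*% t(draws[, 1:k]))
    zpred <- rnorm(M, mean = as.numeric(mu), sd = sqrt(draws[, "sigma2"]))
    res <- c(res, (z_full[t + 1] - mean(zpred)) / sd(zpred))   # (23) with (25) and (26)
  }
  res
}

For the cloud ceiling analysis of Section 5 the training size is n = 600, so a call of the form lfo_residuals(y, X, L, n = 600) mirrors the computation reported there, subject to the caveats above.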
The standardized Bayesian residuals (23) thus obtained may be used not only to assess the quality of the fitted model but also to compare competing models in terms of their predictive performance via, e.g., their sum of squares (or of absolute values); in that case, models with smaller values are favoured.

Regarding model selection, the most popular Bayesian criteria are the Deviance Information Criterion, DIC (Spiegelhalter et al., 2002), and the Widely Applicable Information Criterion, WAIC (Watanabe, 2013). Expressions for DIC and WAIC can be found in Appendix C. Given an MCMC output, an approximate value for pw in the WAIC measure (30) is calculated by

p_w \approx -2\sum_{t=1}^{T}\left[\frac{1}{M}\sum_{j=1}^{M}\ln f(y_t \mid \theta^{(j)}) - \ln\left(\frac{1}{M}\sum_{j=1}^{M} f(y_t \mid \theta^{(j)})\right)\right].    (27)

As in the residual analysis, the augmented data set z = (z1, ..., zT) obtained from the GDA algorithm is used to evaluate the likelihood function f(y|θ), by replacing yt by zt, t = 1, ..., T.
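The following R sketch shows one way to evaluate DIC, pw and WAIC from the GDA output for the Gaussian AR(1) case, evaluating the pointwise likelihood at the augmented series z as described above. The function names and the handling of the first observation (through its regression mean only) are assumptions of this illustration, not part of the paper.

# Pointwise conditional log-likelihoods ln f(z_t | z^{t-1}, theta) for a CLR-AR(1) fit.
pointwise_loglik <- function(theta, z, X) {
  k <- ncol(X); beta <- theta[1:k]; s2 <- theta["sigma2"]; rho <- theta["rho"]
  mu <- drop(X %*% beta)
  m_t <- c(mu[1], mu[-1] + rho * (z[-length(z)] - mu[-length(z)]))   # equation (6), p = 1
  dnorm(z, mean = m_t, sd = sqrt(s2), log = TRUE)
}

# DIC (28)-(29), pw (27)/(31) and WAIC (30) from an M x (k+2) matrix of retained draws
# with named columns as produced by the gda_msm sketch above.
information_criteria <- function(draws, z, X) {
  ll <- t(apply(draws, 1, pointwise_loglik, z = z, X = X))           # M x T log-likelihood matrix
  theta_hat <- colMeans(draws)
  dic  <- -4 * mean(rowSums(ll)) + 2 * sum(pointwise_loglik(theta_hat, z, X))
  lppd <- sum(log(colMeans(exp(ll))))                                # sum_t ln E[f(z_t | theta)]
  pw   <- -2 * sum(colMeans(ll) - log(colMeans(exp(ll))))
  c(DIC = dic, pw = pw, WAIC = -2 * lppd + 2 * pw)
}

Together with the sum of squared standardized jackknife residuals from the previous subsection, these are the quantities reported for each model in Table 3.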
5 Analysis of cloud ceiling height time series

Consider the meteorological time series of cloud ceiling height, previously analyzed in Park et al. (2007) and Schumacher et al. (2017). Cloud ceiling height is defined as the distance from the ground to the bottom of a cloud and is measured in hundreds of feet. According to Park et al. (2007), an accurate determination of the cloud ceiling height is important mainly because it is one of the major factors contributing to weather-related accidents and one of the major causes of flight delays. The recording device has a detection limit of 12000 feet, so the observed data can be considered a right-censored time series.

The data were originally collected by the National Center for Atmospheric Research (NCAR), based on hourly observations in San Francisco during the month of March 1989, and consist of 716 observations, 41.7% of which are censored. The log-transformed data are available in the R package ARCensReg (Schumacher et al., 2016). A plot of the data is shown in Figure 4.

[Figure 4: Censored time series of log-transformed hourly cloud ceiling height in San Francisco during March 1989.]

In the absence of explanatory variables, the CLR-AR(p) models correspond to censored AR(p) models, and two values, p = 1, 2, are considered.

The burn-in is set to b = 2 × 10^4 for the CLR-AR(1) and to b = 4 × 10^4 for the CLR-AR(2) as a result of monitoring the convergence of the chains (Appendix D). After discarding the burn-in, the z-scores of the Geweke test (Geweke, 1992) are {-0.302, 0.350, 0.624} and {0.272, 0.137, 0.787, -0.991} for p = 1 and p = 2, respectively, suggesting convergence of the chains (see Appendix D). In order to reduce the autocorrelation in the MCMC outputs and to obtain subsamples of length M = 1 × 10^3 for computing the estimates, lag = 80 and N = 1 × 10^5 are used for p = 1, while for p = 2 these values are lag = 180 and N = 2.2 × 10^5. Plots of the autocorrelation function (ACF) in Appendix D suggest no significant autocorrelation in these subsamples. The resulting posterior densities for the parameters are presented in Figure 5.

[Figure 5: Posterior densities; top: AR(1) model (β0, σ², ρ); bottom: AR(2) model (β0, σ², ρ1, ρ2).]

To compute the parameter estimates, the sample means of the retained MCMC subsamples were calculated. Other summary statistics of those subsamples are also provided in Table 2, namely the median, the standard error (SE) and the HPD credible interval (CI) with probability 0.95. The CIs were calculated using the R package HDInterval (Meredith and Kruschke, 2018).

To obtain the jackknife forecast residuals, the size of the initial training sample was set to n = 600; the corresponding mean and variance were, respectively, 0.007 and 1.257 for the AR(1) model, and 0.061 and 0.95 for the AR(2) model. These values are close to 0 and 1, respectively, and the plots in Figure 6 (top) emphasize that these residuals are distributed around zero and show no significant correlation (Figure 6, bottom).

[Figure 6: Residuals and corresponding ACF for p = 1 (left) and p = 2 (right).]

Table 2: Parameter estimates of the LR model with AR(p) error term for the log-transformed cloud ceiling height data.
  p   Stats     β̂1               σ̂²               ρ̂1               ρ̂2
  1   Mean      4.136            0.862            0.822            -
      Median    4.134            0.858            0.823            -
      SE        0.205            0.053            0.022            -
      CI        [3.771, 4.543]   [0.768, 0.970]   [0.776, 0.862]   -
  2   Mean      4.070            0.839            0.697            0.160
      Median    4.068            0.836            0.695            0.160
      SE        0.248            0.053            0.037            0.037
      CI        [3.620, 4.576]   [0.738, 0.942]   [0.626, 0.766]   [0.087, 0.231]
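The summaries in Table 2 are straightforward to reproduce from the retained subsample; the short helper below is an illustrative sketch (the object names are assumptions), using HDInterval::hdi for the 0.95 HPD interval as indicated above.

# Posterior summaries (mean, median, SE, 0.95 HPD interval) for each column of a
# thinned MCMC subsample, e.g. sub <- fit$theta[seq(b + 1, N, by = lag), ].
posterior_summary <- function(sub, cred = 0.95) {
  hpd <- apply(sub, 2, function(x) HDInterval::hdi(x, credMass = cred))
  data.frame(Mean   = colMeans(sub),
             Median = apply(sub, 2, median),
             SE     = apply(sub, 2, sd),
             CI_low = hpd[1, ],
             CI_upp = hpd[2, ])
}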
The values of DIC, WAIC and the sum of squared standardized jackknife residuals (SSJR) are given in Table 3. Since the model with AR(2) errors presents the lowest values of DIC, WAIC and SSJR, the AR(2) model is the chosen one. This conclusion, and the values of the model parameters, are identical to those obtained by Schumacher et al. (2017) when analysing this dataset.

Table 3: Information criteria for model assessment
  Model        DIC     WAIC       SSJR
  CLR-AR(1)    684.1   416497.4   144.6
  CLR-AR(2)    590.6   377259     109.8

The augmented data corresponding to the estimated model with AR(2) errors are represented in Figure 7 (blue line) against the observed data (red line).

[Figure 7: Observed vs augmented data of the censored time series of log-transformed hourly cloud ceiling height in San Francisco during March 1989.]

6 Conclusions

This work proposes a Bayesian approach to perform inference in a linear regression model with AR(p) errors for censored data (the CLR-AR(p) model). Ignoring the censorship pattern in the data and applying the usual estimation methods results in biased estimates. The algorithm proposed implements a Gibbs sampler with data augmentation. The novelty stems from the data augmentation with the mean of multiple simulations (GDA-MSM), which improves the accuracy of the algorithm. In fact, the GDA-MSM algorithm works well even when the proportion of censored values is large (40%).

Note that in the simulations and in the empirical example Jeffreys priors were used. However, if information about the data is available, other priors with appropriate hyperparameters may be used; in particular, for (β, σ²) a multivariate Normal-Inverted Gamma distribution may be considered.

Here the censoring threshold was considered known. An open issue to be considered in future work is to model the data under an unknown censoring level.

Acknowledgements

This work is supported by Fundação Calouste Gulbenkian and the Center for Research and Development in Mathematics and Applications (CIDMA) through the Portuguese Foundation for Science and Technology (FCT - Fundação para a Ciência e a Tecnologia), reference UIDB/04106/2020.

References

Beach, C., MacKinnon, J., 1978. Full maximum likelihood estimation of second-order autoregressive error models. Journal of Econometrics 7, 187–198.

Buckley, J., James, I., 1979. Linear regression with censored data. Biometrika 66, No. 3, 429–436.

Burkner, P.-C., Gabry, J., Vehtari, A., 2020. Approximate leave-future-out cross-validation for Bayesian time series models. Journal of Statistical Computation and Simulation 90, No. 14, 2499–2523.

Casella, G., George, E., 1992. Explaining the Gibbs sampler. The American Statistician 46, No. 3, 167–174.

Chib, S., 1992. Bayes inference in the Tobit censored regression model. Journal of Econometrics 51, 79–99.

Fridley, B., Dixon, P., 2007. Data augmentation for a Bayesian spatial model involving censored observations. Environmetrics 18, 107–123.

Friel, N., Wyse, J., 2012. Estimating the statistical evidence: a review. Statistica Neerlandica 66, 288–308.

Gelfand, A.E., Smith, A.F.M., 1990. Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association 85, No. 410, 398–409.

Geweke, J., 1992. Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments (with discussion), in: Bernardo, J., Berger, J., Dawid, A., Smith, A. (Eds.), Bayesian Statistics 4. Oxford University Press, Oxford, pp. 169–193.

Gilks, W.R., Best, N.G., Tan, K.K.C., 1995. Adaptive rejection Metropolis sampling within Gibbs sampling. Journal of the Royal Statistical Society 44, 4, 455–472.

Gourieroux, C., Monfort, A., Renault, E., Trognon, A., 1987. Simulated residuals. Journal of Econometrics 34, 201–252.

Harrison, J., West, M., 1991. Dynamic linear model diagnostics. Biometrika 78, 4, 797–808.

Hopke, P., Liu, C., Rubin, D., 2001. Multiple imputation for multivariate data with missing and below-threshold measurements: Time-series concentrations of pollutants in the Arctic. Biometrics 57, 22–33.
Houseman, E.A., Virji, M.A., 2017. A Bayesian approach for summarizing and modeling time-series exposure data with left censoring. Annals of Work Exposures and Health 61, No. 7, 773–783.

Kahle, D., Stamey, J., 2017. R package 'invgamma': The inverse gamma distribution. CRAN Repository.

Law, M., Jackson, D., 2017. Residual plots for linear regression models with censored outcome data: A refined method for visualizing residual uncertainty. Communications in Statistics - Simulation and Computation 46, No. 4, 3159–3171.

Meredith, M., Kruschke, J., 2018. Package 'HDInterval'. CRAN Repository, 1–7.

Mohammad, N.M., 2014. Censored Time Series Analysis. PhD Thesis, The University of Western Ontario, Ontario.

Park, J., Genton, M., Ghosh, S., 2007. Censored time series analysis with autoregressive moving average models. The Canadian Journal of Statistics 35, 1, 151–168.

Plummer, M., Best, N., Cowles, K., Vines, K., 2006. CODA: Convergence diagnosis and output analysis for MCMC. R News 6, 7–11.

Prais, S., Winsten, C., 1954. Trend estimators and serial correlation. Cowles Commission Discussion Paper: Statistics, No. 383.

R Core Team, 2020. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.

Ripley, B., Venables, B., Hornik, K., Gebhardt, A., Firth, D., 2021. R package 'MASS': Support functions and datasets for Venables and Ripley's MASS. CRAN Repository.

Robert, C., Casella, G., 2010. Introducing Monte Carlo Methods with R. Springer, New York.

Schumacher, F., Lachos, V., Galarza, C., 2016. R package 'ARCensReg': Fitting univariate censored linear regression model with autoregressive errors. CRAN Repository.

Schumacher, F.L., Lachos, V., Dey, D., 2017. Censored models with autoregressive errors: A likelihood-based perspective. The Canadian Journal of Statistics 45, 68, 375–392.

Shiffrin, R.M., Lee, M.D., Kim, W., Wagenmakers, E.J., 2008. A survey of model evaluation approaches with a tutorial on hierarchical Bayesian methods. Cognitive Science 32, 1248–1284.

Spiegelhalter, D.J., Best, N.G., Carlin, B.P., van der Linde, A., 2002. Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society: Series B 64, 583–639.

Tanner, M., Wong, W., 1987. The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association 82, No. 398, 528–540.

Wagenmakers, E.J., Grunwald, P., Steyvers, M., 2006. Accumulative prediction error and selection of time series models. Journal of Mathematical Psychology 50, 149–166.

Wang, C., Chan, K., 2018. Quasi-likelihood estimation of a censored autoregressive model with exogenous variables. Journal of the American Statistical Association 113, No. 523, 1135–1145.

Watanabe, S., 2013. A widely applicable Bayesian information criterion. Journal of Machine Learning Research 14, 867–897.

Wei, G.C.G., Tanner, M.A., 1990. Posterior computations for censored regression data. Journal of the American Statistical Association 85, 829–839.

Zangari, P., Tsurumi, H., 1996. A Bayesian analysis of censored autocorrelated data on exports of Japanese passenger cars to the United States. Advances in Econometrics 11, Part A, 111–143.

Zeger, S., Brookmeyer, R., 1986. Regression analysis with censored autocorrelated data. Journal of the American Statistical Association 81, 722–729.
A Posterior Densities

[Figure 8: Model M1 with ρ = 0.8 (top three rows) and ρ = 0.48 (bottom three rows): posterior density of the model parameters (β0, σ², ρ) for n = 100, 500 and 1000 under the three censorship scenarios.]

[Figure 9: Model M2 with ρ = 0.8: posterior density of the model parameters (β0, β1, σ², ρ) for n = 100, 500 and 1000 under the three censorship scenarios.]
[Figure 10: Model M3 with ρ = −0.5: posterior density of the model parameters (β0, β1, σ², ρ) for n = 100, 500 and 1000 under the three censorship scenarios.]

B Simulation Results

Table 4: Model 2: results, mean (standard deviation), based on 100 simulations of the simple CLR-AR(1) model under different sample sizes and censorship, ρ > 0.
  n     % of cen   β0 = 2         β1 = 1         σ² = 2         ρ = 0.15
  100   5%         2.027(0.066)   0.995(0.026)   2.036(0.107)   0.134(0.011)
        20%        2.025(0.070)   0.995(0.025)   1.962(0.107)   0.133(0.012)
        40%        2.069(0.100)   0.980(0.030)   1.756(0.165)   0.127(0.016)
  500   5%         1.996(0.018)   1.006(0.006)   2.014(0.017)   0.148(0.002)
        20%        2.009(0.018)   1.002(0.006)   1.921(0.025)   0.147(0.002)
        40%        2.109(0.026)   0.968(0.007)   1.659(0.135)   0.140(0.003)
  1000  5%         2.007(0.009)   0.995(0.003)   2.004(0.009)   0.149(0.001)
        20%        2.026(0.011)   0.987(0.003)   1.911(0.017)   0.145(0.001)
        40%        2.119(0.025)   0.956(0.005)   1.659(0.125)   0.141(0.001)

  n     % of cen   β0 = 2         β1 = 1         σ² = 2         ρ = 0.48
  100   5%         2.046(0.085)   0.992(0.023)   2.045(0.106)   0.460(0.008)
        20%        2.045(0.098)   0.992(0.024)   1.985(0.120)   0.452(0.010)
        40%        2.098(0.119)   0.983(0.028)   1.767(0.179)   0.450(0.012)
  500   5%         2.001(0.024)   1.006(0.006)   2.015(0.017)   0.477(0.002)
        20%        2.020(0.027)   1.000(0.007)   1.929(0.023)   0.472(0.002)
        40%        2.125(0.038)   0.966(0.008)   1.696(0.115)   0.463(0.002)
  1000  5%         2.009(0.013)   0.995(0.003)   2.004(0.009)   0.477(0.001)
        20%        2.031(0.016)   0.987(0.003)   1.924(0.016)   0.471(0.001)
        40%        2.130(0.032)   0.957(0.006)   1.689(0.108)   0.462(0.001)

  n     % of cen   β0 = 2         β1 = 1         σ² = 2         ρ = 0.8
  100   5%         2.110(0.354)   0.992(0.019)   2.106(0.105)   0.780(0.004)
        20%        2.130(0.369)   0.999(0.019)   1.991(0.111)   0.777(0.004)
        40%        2.290(0.397)   0.968(0.027)   1.755(0.214)   0.762(0.007)
  500   5%         2.022(0.099)   1.005(0.005)   2.016(0.017)   0.793(0.001)
        20%        2.054(0.101)   1.000(0.005)   1.942(0.022)   0.789(0.001)
        40%        2.200(0.119)   0.979(0.007)   1.714(0.107)   0.779(0.002)
  1000  5%         2.013(0.058)   0.995(0.002)   2.005(0.009)   0.795(0.000)
        20%        2.054(0.101)   1.000(0.005)   1.942(0.022)   0.789(0.001)
        40%        2.200(0.119)   0.979(0.007)   1.714(0.107)   0.779(0.002)

Table 5: Model 2: results based on 100 simulations of the simple CLR-AR(1) model under different sample sizes and censorship, using GDA with the mean of multiple samples, ρ < 0.
  n     % of cen   β0 = 2         β1 = 1         σ² = 2         ρ = −0.15
  100   5%         2.013(0.056)   1.000(0.023)   2.041(0.109)   −0.161(0.009)
        20%        2.006(0.060)   1.003(0.023)   1.960(0.098)   −0.160(0.012)
        40%        2.066(0.076)   0.980(0.024)   1.760(0.162)   −0.160(0.014)
  500   5%         1.994(0.015)   1.007(0.006)   2.014(0.017)   −0.152(0.002)
        20%        2.011(0.016)   1.000(0.006)   1.914(0.026)   −0.151(0.002)
        40%        2.090(0.023)   0.974(0.006)   1.672(0.125)   −0.151(0.003)
  1000  5%         2.005(0.007)   0.997(0.002)   2.005(0.009)   −0.151(0.001)
        20%        2.011(0.016)   1.000(0.006)   1.917(0.025)   −0.151(0.002)
        40%        2.119(0.025)   0.956(0.005)   1.655(0.128)   −0.148(0.001)

  n     % of cen   β0 = 2         β1 = 1         σ² = 2         ρ = −0.48
  100   5%         2.003(0.045)   1.004(0.017)   2.055(0.115)   −0.481(0.007)
        20%        1.992(0.057)   1.009(0.018)   1.989(0.102)   −0.479(0.008)
        40%        2.054(0.065)   0.985(0.018)   1.837(0.123)   −0.467(0.009)
  500   5%         1.992(0.011)   1.007(0.004)   2.021(0.017)   −0.479(0.002)
        20%        2.003(0.011)   1.003(0.004)   1.919(0.030)   −0.475(0.002)
        40%        2.090(0.023)   0.972(0.005)   1.741(0.086)   −0.474(0.002)
  1000  5%         2.001(0.005)   0.999(0.002)   2.011(0.009)   −0.479(0.001)
        20%        2.012(0.006)   0.995(0.002)   1.940(0.014)   −0.476(0.001)
        40%        2.097(0.018)   0.965(0.004)   1.740(0.078)   −0.468(0.001)

  n     % of cen   β0 = 2         β1 = 1         σ² = 2         ρ = −0.8
  100   5%         1.992(0.033)   1.007(0.012)   2.089(0.120)   −0.797(0.003)
        20%        1.994(0.052)   1.003(0.015)   2.097(0.134)   −0.793(0.003)
        40%        2.065(0.103)   0.969(0.029)   2.059(0.141)   −0.780(0.005)
  500   5%         1.991(0.010)   1.007(0.003)   2.033(0.018)   −0.793(0.001)
        20%        1.998(0.010)   1.007(0.003)   2.025(0.021)   −0.794(0.001)
        40%        2.071(0.023)   0.977(0.006)   1.947(0.030)   −0.786(0.001)
  1000  5%         1.999(0.004)   1.000(0.001)   2.019(0.009)   −0.797(0.000)
        20%        1.992(0.010)   1.006(0.003)   2.025(0.021)   −0.794(0.001)
        40%        2.061(0.013)   0.977(0.003)   1.959(0.016)   −0.784(0.001)

Table 6: Model 1: results based on 100 simulations of the CLR-AR(1) model without explanatory variables, under different sample sizes and censorship, using GDA with the mean of multiple samples, ρ > 0.
  n     % of cen   β0 = 2         σ² = 2         ρ = 0.15
  100   5%         2.021(0.021)   2.037(0.108)   0.133(0.010)
        20%        2.021(0.021)   1.974(0.111)   0.131(0.011)
        40%        2.066(0.023)   1.693(0.208)   0.125(0.014)
  500   5%         2.006(0.058)   2.015(0.017)   0.149(0.002)
        20%        2.011(0.006)   1.931(0.023)   0.147(0.002)
        40%        2.053(0.008)   1.681(0.120)   0.142(0.003)
  1000  5%         2.000(0.003)   2.005(0.009)   0.148(0.001)
        20%        2.005(0.003)   1.923(0.016)   0.148(0.001)
        40%        2.046(0.004)   1.673(0.116)   0.143(0.001)

  n     % of cen   β0 = 2         σ² = 2         ρ = 0.48
  100   5%         2.034(0.055)   2.044(0.105)   0.461(0.008)
        20%        2.037(0.055)   1.993(0.135)   0.456(0.008)
        40%        2.087(0.056)   1.753(0.195)   0.444(0.012)
  500   5%         2.010(0.015)   2.015(0.016)   0.477(0.002)
        20%        2.018(0.015)   1.946(0.023)   0.473(0.002)
        40%        2.067(0.017)   1.725(0.096)   0.462(0.002)
  1000  5%         2.001(0.008)   2.001(0.009)   0.477(0.001)
        20%        2.008(0.008)   1.938(0.014)   0.472(0.001)
        40%        2.058(0.010)   1.717(0.090)   0.461(0.001)

  n     % of cen   β0 = 2         σ² = 2         ρ = 0.8
  100   5%         2.097(0.368)   2.056(0.111)   0.781(0.004)
        20%        2.126(0.346)   2.000(0.122)   0.774(0.004)
        40%        2.266(0.341)   1.708(0.231)   0.762(0.006)
  500   5%         2.027(0.097)   2.018(0.019)   0.793(0.001)
        20%        2.054(0.095)   1.939(0.021)   0.788(0.001)
        40%        2.191(0.109)   1.673(0.133)   0.778(0.002)
  1000  5%         2.003(0.054)   2.009(0.009)   0.795(0.000)
        20%        2.036(0.053)   1.926(0.016)   0.789(0.001)
        40%        2.157(0.065)   1.682(0.115)   0.780(0.001)

Table 7: Model 1: results based on 100 simulations of the CLR-AR(1) model without explanatory variables, under different sample sizes and censorship, using GDA with the mean of multiple samples, ρ < 0.
  n     % of cen   β0 = 2         σ² = 2         ρ = −0.15
  100   5%         2.017(0.012)   2.035(0.107)   −0.163(0.010)
        20%        2.011(0.013)   1.972(0.104)   −0.162(0.011)
        40%        2.051(0.015)   1.721(0.174)   −0.169(0.013)
  500   5%         2.004(0.003)   2.017(0.018)   −0.152(0.002)
        20%        2.008(0.003)   1.932(0.023)   −0.152(0.002)
        40%        2.046(0.005)   1.688(0.115)   −0.154(0.003)
  1000  5%         2.000(0.002)   2.004(0.009)   −0.151(0.001)
        20%        2.004(0.002)   1.925(0.015)   −0.150(0.001)
        40%        2.045(0.004)   1.674(0.115)   −0.152(0.001)

  n     % of cen   β0 = 2         σ² = 2         ρ = −0.48
  100   5%         2.013(0.007)   2.048(0.112)   −0.487(0.007)
        20%        2.008(0.009)   1.996(0.114)   −0.486(0.008)
        40%        2.026(0.013)   1.850(0.139)   −0.478(0.008)
  500   5%         2.003(0.002)   2.021(0.018)   −0.480(0.002)
        20%        2.007(0.003)   1.954(0.021)   −0.478(0.002)
        40%        2.047(0.005)   1.714(0.100)   −0.481(0.002)
  1000  5%         2.000(0.001)   2.008(0.009)   −0.479(0.001)
        20%        2.004(0.001)   1.947(0.013)   −0.476(0.001)
        40%        2.040(0.003)   1.755(0.070)   −0.471(0.001)

  n     % of cen   β0 = 2         σ² = 2         ρ = −0.8
  100   5%         2.002(0.005)   2.068(0.108)   −0.790(0.003)
        20%        1.993(0.007)   2.051(0.118)   −0.787(0.004)
        40%        2.012(0.013)   1.953(0.122)   −0.783(0.004)
  500   5%         2.000(0.001)   2.029(0.019)   −0.797(0.001)
        20%        2.003(0.002)   1.984(0.020)   −0.796(0.001)
        40%        2.029(0.004)   1.882(0.039)   −0.793(0.001)
  1000  5%         1.999(0.001)   2.015(0.010)   −0.797(0.000)
        20%        2.001(0.001)   1.982(0.011)   −0.795(0.000)
        40%        2.025(0.002)   1.885(0.025)   −0.791(0.000)

C Measures of predictive performance

DIC

DIC is a measure of fit quality widely used in the Bayesian approach and is calculated as follows:

DIC = -4\, E_{\theta \mid y}[\ln f(y \mid \theta)] + 2 \ln f(y \mid \hat{\theta}),    (28)

where E_{θ|y}[ln f(y|θ)] is the posterior mean of the log-likelihood function, given by

E_{\theta \mid y}[\ln f(y \mid \theta)] = \frac{1}{M}\sum_{j=1}^{M} \ln f(y \mid \theta^{(j)}),    (29)

and f(y|θ̂) is the likelihood function evaluated at the Bayesian parameter estimates.
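As a consistency check (not an additional result from the paper), (28) can be rewritten in the familiar deviance form of Spiegelhalter et al. (2002), with D(θ) = −2 ln f(y|θ):

\bar{D} = E_{\theta\mid y}[D(\theta)] = -2\,E_{\theta\mid y}[\ln f(y\mid\theta)], \qquad
p_D = \bar{D} - D(\hat{\theta}) = -2\,E_{\theta\mid y}[\ln f(y\mid\theta)] + 2\ln f(y\mid\hat{\theta}),

so that DIC = \bar{D} + p_D = -4\,E_{\theta\mid y}[\ln f(y\mid\theta)] + 2\ln f(y\mid\hat{\theta}), which is exactly (28).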
WAIC

WAIC is another measure of predictive accuracy, more closely tied to the Bayesian approach than the previous criterion (Watanabe, 2013), and is given by

WAIC = -2\sum_{t=1}^{T} \ln E_{\theta \mid y}[f(y_t \mid \theta)] + 2 p_w,    (30)

where p_w is the correction term, analogous to the one used in the DIC criterion, defined as follows:

p_w = -2\sum_{t=1}^{T}\left\{E_{\theta \mid y}[\ln f(y_t \mid \theta)] - \ln E_{\theta \mid y}[f(y_t \mid \theta)]\right\}.    (31)

D Analysis of the convergence of the chains

[Figure 11: Top: Geweke plots and ACF functions of the subsamples used to compute the parameter estimates for the CLR-AR(1). Bottom: Geweke plots and ACF functions of the subsamples used to compute the parameter estimates for the CLR-AR(2).]

[Figure 12: Evolution of the 1st and 3rd quantiles of the corresponding MCMC outputs; top: AR(1) model; bottom: AR(2) model.]
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The algorithm developed here considers the Gibbs sampler with data augmentation (GDA), in which, at each it- eration, both the model parameters and the latent variables are sampled.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The data augmentation is achieved from multiple sampling of the latent variables from the corresponding conditional distributions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' A suitable variable transformation allows the full likelihood to be obtained.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' A sim- ulation study indicates that the proposed approach produces estimates with a high accuracy even in scenarios where the proportion of censored observations is large.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The method is further illustrated in a real data of cloud ceiling height, including model checking and selection for censored time series data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' keywords: Censored Data, Linear Regression, Autocorrelation, Bayesian Analysis, Gibbs sampler, Data augmentation 1 Introduction Censored observations arise when explicit limits are placed on the observed data and occur in several fields including environmental monitoring, economics, medical and social sciences.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The censoring may due to measuring device lim- itations, such as detection limits in air pollution or mineral concentration in water, Hopke et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' (2001)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' In economics, censoring occurs when constraints or regulations are imposed, such as on observations in international trade where exports and imports are subject to trade barriers, Zangari and Tsurumi (1996).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' ∗Corresponding author: rodney@ua.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='pt, University of Aveiro, Portugal & CIDMA †isabel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='pereira@ua.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='pt, Departamento de Matemática, Universidade de Aveiro, Portugal & CIDMA ‡mesilva@fep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='up.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='pt, Faculdade de Economia, Universidade do Porto, Portugal & LIADD- INESC §Brendan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='Mccabe@liverpool.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='uk, Managment School, Chatham Building Chatham Street, University of Liverpool, L69 7ZH 1 Since the work of Buckley and James (1979) an extensive body of literature on regression analysis with censored responses has been developed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' In addition to censoring, the data often exhibit serial correlation, leading to the adoption of dynamic censored models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' In the time series regression context, censoring has been addressed by several authors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The first methodological approach to estimation of censored regressions with autocorrelated errors was proposed by Zeger and Brookmeyer (1986), who presented the exact likelihood function for this model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The likelihood is con- structed based on blocks of data of variable dimensions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' As the block size usually increases with the censoring rate, maximum likelihood quickly becomes numerically intractable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Acknowledging this issue, the authors suggest an ap- proximate approach based on a pseudo-likelihood.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Park et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' (2007) introduced an imputation method to estimate an ARMA model from a censored time series.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The potentially censored values are imputed from random values simulated from their conditional distribution given the observed data and the censoring infor- mation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The resulting time series is considered complete and may be analysed with the usual time series methods.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Mohammad (2014) proposed a quasi-EM algorithm to fit ARMA models in the presence of censoring with the particu- larity of treating missing data as a special case of censoring.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Schumacher et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' (2017) suggests using a Stochastic Approximation of the EM technique, SAEM, based on the unconditional likelihood function of the linear regression models with AR(p) errors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' These authors have shown via simulations that their method yields consistent estimates even when the proportion of censored values is large (≈ 40%).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Houseman and Virji (2017) proposed a Bayesian approach to handle exposure time series data subject to left censoring, where the autocorrelation is modelled by a spline-based method in order to account for non-stationary auto- correlation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Wang and Chan (2018) suggested a quasi-likelihood method based on a system of equations and performed model checking based on simulated residuals (Gourieroux et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=', 1987).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The problem of estimating regression models with autocorrelated errors from censored observations has also been addressed in a Bayesian framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Zangari and Tsurumi (1996) considered three Bayesian procedures for censored regression models with AR(1) errors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The authors derive posterior densities for the parameters of the model building on the work of Zeger and Brookmeyer (1986), using Laplace approximations, a Gibbs sampler with data augmentation and a quadrature numerical integration procedure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' However, the authors found that the Gibbs sampler using a data augmentation algorithm failed to converge for moderate censoring percentages (10-15%) and strongly correlated distur- bances.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Later, Wei and Tanner (1990) considered a censored autoregression of order p with exogenous variables (censored ARX(p)) and developed a sampling scheme for the conditional posterior distributions of the censored data, success- fully applying the Gibbs sampler with data augmentation.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' This procedure also builds on the Zeger and Brookmeyer (1986) decomposition of the likelihood.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The present work proposes a Bayesian approach to estimate censored regres- sion models with AR(p) errors, as it is acknowledged that the coefficients of these models have the usual interpretation and thus are easier to explicate in com- 2 parison with ARX models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The algorithm developed here considers the Gibbs sampler with data augmentation (GDA), in which, at each iteration, both the model parameters and the latent variables are sampled.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The data augmentation is achieved by multiple sampling of the latent variables from the correspond- ing conditional distributions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The censored observations are thus replaced by a mean of multiple samples leading to faster convergence of the algorithm and more accurate estimates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Under data augmentation, the computation of the likelihood function reduces to that of the likelihood of a multivariate Gaussian sample.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' In time series analysis it is usual to resort to the conditional likelihood.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' However, in the current situation a suitable variable transformation allows the full likelihood to be obtained.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Additionally, a procedure for model selection and model assessment in this Bayesian framework based on data augmentation is proposed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The relative performance of competing models can be assessed using the Bayes factors, based on the ratio of normalising constants under each model, referred to as evidence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' A review of some commonly used methods of estimating the model evidence is given in Friel and Wyse (2012).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The current paper further contributes to the literature by showing that GDA is useful for model selec- tion using measures of predictive performance, traditionally named information criteria, allowing for forecast evaluation through leave-one-out cross-validation suitable for time series data.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Empirical experiments with synthetic and real data sets indicate that the proposed approach overcomes the bias introduced by the censoring even when the censoring rate is high (40 Finally, note that attention here is restricted to left censoring in the devel- opment of the procedure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' This is, however, easily adapted and extended to the right censoring case as shown in its application to a time series of cloud ceiling heights, thus demonstrating the flexibility of the procedure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The paper is organized as follows: Section 2 defines the model under study;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Section 3 describes the proposed Bayesian approach with data augmentation, detailing the steps all the required steps and illustrates the performance of the method under three different censorship scenarios using synthetic data sets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Section 4 discusses model assessment when using censored data and Section 5 analyses a time series of cloud ceiling heights, previously analysed by Park et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' (2007) and by Schumacher et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' (2017), which was originally collected by the National Center for Atmospheric Research (NCAR).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The data consists of 716 hourly observations in San Francisco, during the month of March 1989, of which 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='7% are censored.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Some final remarks and possible future extensions are given in the conclusion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' 2 Censored Linear Regression with Autocorre- lated Errors A latent variable w is said to be left censored at L if only the values above L are recorded, while the values less or equal to this limit are reported as L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The observed variable y is, then, defined as 3 y = � w, if w > L L, if w ≤ L (1) or, equivalently, y = max(w, L).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Similarly, if w is right censored, the recorded values will be y = min(w, L).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' L may be thought of as a detection limit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Now consider the classic linear regression model with serially correlated er- rors defined as an AR(p) process, denoted as LR-AR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The discrete time repre- sentation of this model for the response variable wt at time t is given by wt = xtβ + ut (2) ut = ρ1ut−1 + ρ2ut−2 + .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' + ρput−p + εt, εt ∼ N(0, σ2 ǫ) where xt = (1, xt2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' , xtk) is a 1×k vector of explanatory variables or features, β = (β0, β1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' , βk−1) is the k vector of regression coefficients, ut is a stationary AR(p) process with Gaussian innovations εt and AR coefficients ρ = (ρ1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' , ρp), satisfying the usual stationarity conditions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Assume now that we observe possibly censored values yt = max(wt, L), where L is a known censoring limit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Then we write the Censored Linear Regression model with AR errors, CLR-AR as yt = max(wt, L) (3) wt = xtβ + ut, ut = ρ1ut−1 + ρ2ut−2 + .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' + ρput−p + εt, εt ∼ N(0, σ2 ε) Henceforward, y = (y1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' , yT ) represents the actual recorded values which are possibly censored while w = (w1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' , wT ) denotes the corresponding latent process wt, X represents the T × k matrix of the regressors and θ = (β, ρ, σ2 ǫ) the parameter vector.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The history of a process ωt up to time t is represented by ωt = (ω1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' , ωt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=') 3 Bayesian inference with data augmentation A natural approach to inference for censored data in the Bayesian framework is based on the Gibbs sampler (Gelfand and Smith, 1990;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Casella and George, 1992) with data augmentation, GDA (Tanner and Wong, 1978;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Fridley and Dixon, 2007;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Chib, 1992).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' This approach can be described as a two-step procedure in each iteration: (i) the (possibly) censored observations are imputed with values generated from a truncated conditional distribution thus originating an aug- mented data set that is considered complete;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' (ii) the model parameters are generated from their full conditional distributions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' First, we describe the approach to the data augmentation procedure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' 4 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='1 Data augmentation Usually the data augmentation step relies on the simulation of a single value for the censored observation at each iteration, a procedure that does not account for the variance of the truncated distribution (Hopke et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=', 2001).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' In order to overcome this problem, this work proposes a new approach to the GDA algorithm in which the censored observation is imputed with the mean of several, say m, values simulated from the truncated distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Numerical studies with synthetic data, see Section 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='5, show that this approach, denoted by GDA- MSM, leads to posterior distributions for the parameters with good location and dispersion properties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' The other important issue relates to the truncated distribution from which we impute the (possibly) censored observations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Given a data set y = (y1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=', yT ) possibly with censored observations the augmented data set is defined as defined as follows, zt = � yt, if yt > L zt ∼ F(zt|y, θ, zt ≤ L), if yt = L, (4) where F(zt|y, θ, zt ≤ L) is the truncated distribution corresponding to the cen- sored values of the latent variable, with support in ] − ∞, L].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Specifically, under the Gaussian assumption f(zt|y, θ, zt ≤ L) = 1 σ × φ � zt−µt|zt−1 σ � Φ � L−µt|zt−1 σ � × I(−∞,L)(zt), (5) with φ(.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=') and Φ(.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=') denoting, respectively, the pdf and cdf of the standard normal distribution and µt| zt−1 =ρ1zt−1 + ρ2zt−2 + .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' + ρpzt−p (6) + (xt − ρ1xt−1 − ρ2xt−2 − .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='..' 
The resulting vector of augmented data z = (z1, ..., zT) is regarded as hypothetical observations of the latent variable which satisfy the model expressed in equation (2) and is the object of the ensuing Bayesian analysis. The following sections introduce the elements required for the Bayesian analysis: likelihood and full conditional distributions.

3.2 Complete Likelihood

To compute the complete likelihood function

    L(z \mid y, \theta) = f(z_1, \dots, z_T \mid y, \theta)    (7)

consider the following variable transform

    z^* = Qz,    (8)

where the T × T matrix Q is such that Q′Q is proportional to the inverse of the variance-covariance matrix of u.
In fact, Q is a matrix of the form

    Q = \begin{bmatrix}
    q_{11}  & 0            & \cdots & 0       & 0        & 0       & \cdots & 0        & 0 \\
    q_{21}  & q_{22}       & \cdots & 0       & 0        & 0       & \cdots & 0        & 0 \\
    \vdots  &              &        &         &          &         &        &          & \vdots \\
    q_{p1}  & q_{p2}       & \cdots & q_{pp}  & 0        & 0       & \cdots & 0        & 0 \\
    -\rho_p & -\rho_{p-1}  & \cdots & -\rho_1 & 1        & 0       & \cdots & 0        & 0 \\
    0       & -\rho_p      & \cdots & -\rho_2 & -\rho_1  & 1       & \cdots & 0        & 0 \\
    \vdots  &              &        &         &          &         & \ddots &          & \vdots \\
    0       & 0            & \cdots & 0       & 0        & -\rho_p & \cdots & -\rho_1  & 1
    \end{bmatrix}    (9)

with the elements qij obtained under the restriction expressed in equation (11). In fact, this transform is induced by the following relationship between ε = (ε1, ε2, ..., εT) and u = (u1, u2, ..., uT) in model (2),

    ε = Qu.    (10)

Let Σε = σ²εI and Σu denote the variance-covariance matrices of ε and u, respectively, where I is the identity matrix. Since Σε = σ²εI = QΣuQ′, then Q must satisfy

    σ²ε (Q′Q)⁻¹ = Σu.    (11)

Therefore

    z^*_t = \begin{cases}
    q_{11} z_1, & t = 1 \\
    q_{21} z_1 + q_{22} z_2, & t = 2 \\
    \quad\vdots \\
    q_{p1} z_1 + \dots + q_{pp} z_p, & t = p \\
    z_t - \rho_1 z_{t-1} - \dots - \rho_p z_{t-p}, & t = p + 1, \dots, T.
    \end{cases}    (12)

Define X* = QX, which results in

    x^*_{tj} = \begin{cases}
    q_{11} x_{1j}, & t = 1 \\
    q_{21} x_{1j} + q_{22} x_{2j}, & t = 2 \\
    \quad\vdots \\
    q_{p1} x_{1j} + \dots + q_{pp} x_{pj}, & t = p \\
    x_{tj} - \rho_1 x_{t-1,j} - \dots - \rho_p x_{t-p,j}, & t = p + 1, \dots, T
    \end{cases}    (13)

for j = 2, ..., k and x*_{t1} = 1, t = 1, ..., T. Then the likelihood function (7) is equivalent to

    L(z^* \mid y, \theta) = |Q| \left(\frac{1}{2\pi\sigma^2}\right)^{T/2} \exp\left\{-\frac{1}{2\sigma^2} \sum_{t=1}^{T} (z^*_t - x^*_t \beta)^2\right\}.    (14)
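As a concrete illustration (not taken from the paper's implementation), the sketch below applies the transform of equations (12)-(13) for the AR(1) case, using the standard choice q11 = sqrt(1 − ρ1²) that satisfies (11); for general p the first p rows of Q would come from the stationary distribution of the initial values. Note that this sketch transforms every column of X, including the intercept, whereas the paper keeps x*_{t1} = 1:

```r
# z* = Q z and X* = Q X for an AR(1) error process (eqs. (12)-(13) with p = 1).
# Illustrative sketch; assumes |rho| < 1 so that q11 = sqrt(1 - rho^2) satisfies (11).
ar1_transform <- function(z, X, rho) {
  T_len <- length(z)
  q11 <- sqrt(1 - rho^2)                      # scales the first observation
  z_star <- c(q11 * z[1], z[-1] - rho * z[-T_len])
  X_star <- rbind(q11 * X[1, , drop = FALSE],
                  X[-1, , drop = FALSE] - rho * X[-T_len, , drop = FALSE])
  list(z_star = z_star, X_star = X_star)
}

# Small usage example with an intercept and one regressor
set.seed(2)
X <- cbind(1, rnorm(10))
z <- rnorm(10)
str(ar1_transform(z, X, rho = 0.48))
```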
3.3 Full Conditional Distributions

Bayesian analysis involves formal consideration of prior information, and inferences about the model parameters are obtained from the posterior distribution, π(θ|y), defined by

    \pi(\theta \mid y) \propto L(y \mid \theta) \times \pi(\theta),    (15)

where θ is the parameter vector, L(y|θ) is the likelihood function of the observed data and π(θ) represents the joint prior distribution of the parameters.
In the absence of prior information, noninformative prior distributions are considered, assuming that β, σ² and ρ are independent variables with the following prior specifications

    \pi(\beta) \propto c_1, \qquad \pi(\sigma^2) \propto \frac{1}{\sigma^2}, \qquad \pi(\rho) \propto c_2 \times I_{\rho \in S_\rho},    (16)

where c1, c2 are constants, Sρ is the region of stationarity of the process ut and I(·) denotes the indicator function. By combining (14) and (16), the posterior distribution with the augmented data is written as follows:

    \pi(\theta \mid y, z) \propto |Q| \left(\frac{1}{\sigma^2}\right)^{T/2 + 1} \exp\left\{-\frac{1}{2\sigma^2} \sum_{t=1}^{T} (z^*_t - x^*_t \beta)^2\right\} \times I_{\rho \in S_\rho}.    (17)

From (17) it follows that the full conditional distributions for the model parameters are given by

    \pi(\beta \mid \sigma^2, \rho, y, z) \propto \exp\left\{-\frac{1}{2} (\beta - \hat{\beta})' \frac{1}{\sigma^2} (X^{*\prime} X^*) (\beta - \hat{\beta})\right\},    (18)

    \pi(\sigma^2 \mid \beta, \rho, y, z) \propto \left(\frac{1}{\sigma^2}\right)^{T/2 + 1} \exp\left\{-\frac{1}{2\sigma^2} \sum_{t=1}^{T} (z^*_t - x^*_t \beta)^2\right\},    (19)

    \pi(\rho \mid \beta, \sigma^2, y, z) \propto |Q| \exp\left\{-\frac{1}{2\sigma^2} \sum_{t=1}^{T} (z^*_t - x^*_t \beta)^2\right\} \times I_{\rho \in S_\rho},    (20)

where β̂ = (X*′X*)⁻¹X*′z* is the Feasible Generalized Least Squares (FGLS) estimator. The functional forms of (18) and (19) show that

    \beta \mid \sigma^2, \rho, y, z \sim N\left(\hat{\beta}, \sigma^2 (X^{*\prime} X^*)^{-1}\right),    (21)

    \sigma^2 \mid \beta, \rho, y, z \sim IG\left(\frac{T}{2}, \frac{1}{2}(z^* - X^*\beta)'(z^* - X^*\beta)\right).    (22)

However, to sample values of ρ we need to use the Metropolis-Hastings algorithm within the Gibbs sampler (Gilks et al., 1995).
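For illustration only (object and function names are ours, not the authors'), one Gibbs update of β and σ² from the conjugate full conditionals (21)-(22) could look as follows, with MASS::mvrnorm for the multivariate normal and an inverse-gamma draw taken as the reciprocal of a gamma variate; the ρ update would be a separate Metropolis-Hastings step with (20) as its target:

```r
library(MASS)  # mvrnorm

# One Gibbs update of beta and sigma^2 given the transformed data z_star, X_star (eqs. (21)-(22)).
draw_beta_sigma2 <- function(z_star, X_star, sigma2) {
  XtX   <- crossprod(X_star)                                               # X*'X*
  b_hat <- solve(XtX, crossprod(X_star, z_star))                           # FGLS estimate (X*'X*)^{-1} X*' z*
  beta  <- mvrnorm(1, mu = as.vector(b_hat), Sigma = sigma2 * solve(XtX))  # eq. (21)
  resid <- z_star - drop(X_star %*% beta)
  # sigma^2 | . ~ IG(T/2, SSR/2), drawn as 1 / Gamma(shape = T/2, rate = SSR/2)  (eq. (22))
  sigma2_new <- 1 / rgamma(1, shape = length(z_star) / 2, rate = sum(resid^2) / 2)
  list(beta = as.vector(beta), sigma2 = sigma2_new)
}
```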
3.4 GDA-MSM algorithm

The following algorithm describes how to perform Bayesian inference in the CLR-AR(p) model using the Gibbs sampler with the described data augmentation procedure, GDA-MSM. Given a data set y = (y1, ..., yT), possibly with censored observations, the GDA-MSM algorithm allows the construction of a Markov chain for the parameters of the CLR-AR(p) model as follows (an R skeleton of this loop is sketched after the algorithm):

Algorithm 1: Gibbs sampler with Data augmentation (GDA)
 1. Initialize with y, L ∈ R, N ∈ Z and θ(0) = (β(0), σ²(0), ρ(0))
 2. Set z(0) = y
 3. For i = 1, ..., N
 4.   Sample β(i) ∼ π(β | σ²(i−1), ρ(i−1), y, z(i−1))
 5.   Sample σ²(i) ∼ π(σ² | β(i), ρ(i−1), y, z(i−1))
 6.   Sample ρ(i) ∼ π(ρ | β(i), σ²(i), y, z(i−1))
 7.   For t = 1, ..., T
 8.     If yt ≤ L
 9.       For j = 1, ..., m
10.         Sample ztj ∼ F(zt | y, θ(i), zt ≤ L) × I(zt ≤ L)
11.       z(i)t := (1/m) Σ_{j=1}^{m} ztj
12.     Else
13.       z(i)t := yt
14. Return Θ = [θ(1) · · · θ(N)]′ and z(N).

The MCMC estimates of the model parameters θ are usually obtained by calculating the sample mean of the GDA output Θ, unless the marginal posterior density indicates a highly skewed distribution; in this case it is more appropriate to use the sample median.
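A compact skeleton of Algorithm 1 for the AR(1) case, reusing the sketches above (ar1_transform, draw_beta_sigma2, impute_censored) and assuming a hypothetical helper draw_rho_mh() that performs the Metropolis-Hastings step for ρ with (20) as target; it illustrates the control flow only and is not the authors' implementation:

```r
# GDA-MSM skeleton for CLR-AR(1); draw_rho_mh() is a hypothetical MH update for rho.
gda_msm <- function(y, X, L, N = 4e4, m = 5,
                    theta0 = list(beta = rep(0, ncol(X)), sigma2 = 1, rho = 0)) {
  z <- y
  theta <- theta0
  Theta <- matrix(NA_real_, nrow = N, ncol = ncol(X) + 2)        # stores (beta, sigma2, rho)
  for (i in seq_len(N)) {
    tr <- ar1_transform(z, X, theta$rho)                         # z* = Qz, X* = QX
    bs <- draw_beta_sigma2(tr$z_star, tr$X_star, theta$sigma2)   # steps 4-5
    theta$beta   <- bs$beta
    theta$sigma2 <- bs$sigma2
    theta$rho    <- draw_rho_mh(z, X, theta)                     # step 6 (hypothetical helper)
    for (t in seq_along(y)) {                                    # steps 7-13: data augmentation
      if (y[t] <= L) {
        if (t == 1) {
          mu_t <- drop(X[1, ] %*% theta$beta)                    # t = 1 uses the regression mean (our simplification)
        } else {
          mu_t <- theta$rho * z[t - 1] +
                  drop((X[t, ] - theta$rho * X[t - 1, ]) %*% theta$beta)   # eq. (6) with p = 1
        }
        z[t] <- impute_censored(mu_t, sqrt(theta$sigma2), L, m)
      } else {
        z[t] <- y[t]
      }
    }
    Theta[i, ] <- c(theta$beta, theta$sigma2, theta$rho)
  }
  list(Theta = Theta, z = z)                                     # step 14
}
```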
The resulting augmented data z(N) = (z(N)1, ..., z(N)T) can be regarded as observations on the latent variable for further inferences (Tanner and Wong, 1978; Law and Jackson, 2017).

3.5 Illustration with synthetic data sets

The performance of the above procedure is illustrated with censored time series simulated from the CLR-AR(1) model with and without explanatory variables, several positive and negative values for the lag 1 correlation, namely ρ1 = −0.8, −0.48, −0.15, 0.15, 0.48, 0.8, three different scenarios of censorship (5%, 20% and 40%) and three sample sizes (100, 500 and 1000). Values for the model parameters β0, β1, σ² were chosen based on the papers Schumacher et al. (2017) and Wang and Chan (2018) and are given in Table 1. Note that the model designated as M1 corresponds to an AR(1) with mean β0/(1 − ρ1). The total number of models is eighteen, leading to 18 × 3 × 3 = 162 simulation scenarios. The simulation allows control of the degree of censorship and of serial correlation.

Table 1: Parameters of the CLR-AR(1) model in the simulation study

    Model    β0     β1     σ²
    M1       2      0      2
    M2       2      1      2
    M3       0.2    0.4    0.607

The procedure is implemented in R (R Core Team, 2020) and, in particular, the packages MASS (Ripley et al., 2021) and invgamma (Kahle and Stamey, 2017) are used to sample from the multivariate normal and from the inverted gamma distributions. The algorithm is iterated N = 4 × 10^4 times, the b = 2 × 10^4 initial burn-in iterations are discarded, and only every 20th value of the remaining iterations is kept to reduce the autocorrelation within the chain (see the post-processing sketch below). The convergence of the MCMC algorithm was duly analysed with the usual diagnostic tests available in the package Coda (Plummer et al., 2006; Robert and Casella, 2010). The initial estimates are obtained by FGLS, and the model parameters are estimated by the posterior means of the remaining M = 1 × 10^3 values in the chain. The number of simulated values for the data augmentation is m = 5, as used by Hopke et al. (2001).

The overall results and performance of the method are illustrated by the posterior densities for M2 with ρ1 = 0.48 under the three censorship scenarios in Figures 1-3. The plots illustrate the efficiency and Bayes consistency of the GDA-MSM method: the marginal posterior distributions are, in general, concentrated on sets containing the true values of the parameters (vertical dashed red lines), with the variability decreasing as the sample sizes increase, for all the scenarios of censorship considered. Illustrations of the posterior densities for other values of ρ1 and for the other models are presented in Appendix A.
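A hedged sketch of that post-processing (the object Theta is a stand-in for the sampler output; all names are illustrative, not from the paper's code) is:

```r
library(coda)

Theta <- matrix(rnorm(4e4 * 4), ncol = 4)       # stand-in for the N x d matrix of GDA-MSM draws
N <- 4e4; burn_in <- 2e4; thin <- 20

keep <- seq(burn_in + thin, N, by = thin)       # every 20th draw after the burn-in -> 10^3 values
Theta_thinned <- Theta[keep, ]

post_mean   <- colMeans(Theta_thinned)          # posterior means used as the parameter estimates
post_median <- apply(Theta_thinned, 2, median)  # preferred when a marginal posterior is highly skewed

chain <- mcmc(Theta_thinned)                    # coda object for the usual convergence diagnostics
```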
To further study the properties of the method, 100 realizations of each of the 162 scenarios are generated and the results are summarized in Tables 4 to 7 in Appendix B. Thus, the approach works well at estimating censored regression models with AR errors.

[Figure 1: Model 2 with ρ1 = 0.48: Posterior density of the model parameters for n = 100, 500 and 1000 under 5% of censorship.]

[Figure 2: Model 2 with ρ1 = 0.48: Posterior density of the model parameters for n = 100, 500 and 1000 under 20% of censorship.]

[Figure 3: Model 2 with ρ1 = 0.48: Posterior density of the model parameters for n = 100, 500 and 1000 under 40% of censorship.]

4 Model assessment in censored data

This section presents criteria for model assessment and model selection in the context of Bayesian analysis of censored regressions with autocorrelated errors. First, define the jackknife one-step-ahead residuals at t + 1 (Harrison and West, 1991; Shiffrin et al., 2008) as

    d^J_{t+1} = \frac{z_{t+1} - E[z_{t+1} \mid z^t]}{\sqrt{Var[z_{t+1} \mid z^t]}},    (23)

which are calculated by adopting the leave-future-out cross-validation (LFO-CV) method (Bürkner et al., 2020), a modification of the popular leave-one-out cross-validation method that leaves out all future observations, to assess predictive performance in time series models. In practice, the sample t = 1, ..., T is partitioned into a training set with n observations, which grows continuously, and a test set with the remaining T − n observations (Wagenmakers et al., 2006). The value for n is chosen so that estimation is consistent.

In practice, the mean E[z_{t+1} | z^t] and variance Var[z_{t+1} | z^t] in equation (23) are approximated by their sample counterparts using M values generated from the one-step-ahead predictive distribution of z_{t+1},

    f(z_{t+1} \mid z^t) = \int_{\Theta} f(z_{t+1} \mid z^t, \theta) \, \pi(\theta \mid z^t) \, d\theta,    (24)

which is not available in closed form. Therefore,

    E[z_{t+1} \mid z^t] \approx \frac{1}{M} \sum_{j=1}^{M} z^{(j)}_{t+1},    (25)
    Var[z_{t+1} \mid z^t] \approx \frac{1}{M-1} \sum_{j=1}^{M} \left(z^{(j)}_{t+1} - E[z_{t+1} \mid z^t]\right)^2,    (26)

where the z^{(j)}_{t+1} are simulated from N(µ, σ²) with µ as in equation (6), given an MCMC output θ(j) generated from π(θ | z^t).

In the context of censored data, the computation of the residuals d^J_t in equations (23), (25) and (26), and the subsequent model assessment, is achieved in a two-step procedure (an R sketch of Step 2 follows the procedure):

Step 1: Given a (possibly) censored data set y1, ..., yT, fit the model via the GDA-MSM algorithm and obtain an augmented data set z = (z1, ..., zT); choose n < T.

Step 2: For each t = n + 1, ..., T
  2.1 Generate θ(j)t by applying the GDA-MSM algorithm to (y1, ..., yt) and then generate z(j)t ∼ N(µ, σ²), with µ given by equation (6), for j = 1, ..., M.
  2.2 Approximate the expectation and variance of the predictive distribution using equations (25) and (26).
  2.3 Regarding z = (z1, ..., zT) as the actually observed data, compute (23) (see Gourieroux et al. (1987), Law and Jackson (2017)).
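A minimal sketch of Step 2 for a single time point, under the assumptions that p = 1 and that the GDA-MSM run on (y1, ..., yt) has produced M retained draws stored in fit$beta (M × k), fit$sigma2 and fit$rho (all names are illustrative, not from the paper's code):

```r
# One-step-ahead jackknife residual d^J_{t+1} (eqs. (23), (25), (26)) for the AR(1) case.
# 'z' is the augmented series from Step 1 (treated as the observed data in Step 2.3).
lfo_residual <- function(z, X, t, fit) {
  M <- length(fit$sigma2)
  z_pred <- numeric(M)
  for (j in seq_len(M)) {
    mu <- fit$rho[j] * z[t] +
          drop((X[t + 1, ] - fit$rho[j] * X[t, ]) %*% fit$beta[j, ])  # conditional mean, eq. (6) with p = 1
    z_pred[j] <- rnorm(1, mean = mu, sd = sqrt(fit$sigma2[j]))        # draw from the predictive (24)
  }
  (z[t + 1] - mean(z_pred)) / sd(z_pred)   # eq. (23); sd() uses the 1/(M-1) divisor of eq. (26)
}
```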
The standardized Bayesian residuals (23) thus obtained may now be used to assess not only the quality of the fitted model but also to comparatively evaluate competing models in terms of their predictive performance via, e.g., the sum of squares (or of absolute values) of the residuals, in which case models with smaller values are favoured.

Regarding model selection, the most popular Bayesian criteria are the Deviance Information Criterion, DIC (Spiegelhalter et al., 2002), and the Widely Applicable Information Criterion, WAIC (Watanabe, 2013). Expressions for DIC and WAIC can be found in Appendix C. Given an MCMC output, an approximate value for pw in the WAIC measure (30) is calculated by

    p_w \approx -2 \sum_{t=1}^{T} \left[ \frac{1}{M} \sum_{j=1}^{M} \ln f(y_t \mid \theta^{(j)}) - \ln\left( \frac{1}{M} \sum_{j=1}^{M} f(y_t \mid \theta^{(j)}) \right) \right].    (27)

As in the residual analysis, the augmented data set z = (z1, ..., zT), obtained from the GDA algorithm, is used to evaluate the likelihood function f(y|θ), by replacing yt by zt, t = 1, ..., T.
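For illustration (assuming an M × T matrix loglik of pointwise log-densities ln f(z_t | θ^(j)), one row per retained draw and one column per observation, evaluated at the augmented data; names are ours), pw in (27) can be approximated as follows:

```r
# Approximate p_w of eq. (27) from an M x T matrix of pointwise log-likelihoods.
waic_pw <- function(loglik) {
  # ln( (1/M) * sum_j f(y_t | theta_j) ), computed in a log-sum-exp fashion for numerical stability
  log_mean_dens <- apply(loglik, 2, function(l) max(l) + log(mean(exp(l - max(l)))))
  mean_log_dens <- colMeans(loglik)             # (1/M) * sum_j ln f(y_t | theta_j)
  -2 * sum(mean_log_dens - log_mean_dens)       # eq. (27); non-negative by Jensen's inequality
}
```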
As in residual analysis, the augmented data set z = (z1, . . . , zT) is used to evaluate the likelihood function f(y|θ), by replacing yt by zt, t = 1, . . . , T, obtained from the GDA algorithm.
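One augmentation sweep can be illustrated with a small sketch. This is a minimal illustration, not the paper's implementation: it assumes right censoring at a known detection threshold, a Gaussian conditional with user-supplied mean and standard deviation for each censored point, and it imputes each censored entry by the mean of several truncated-normal draws, in the spirit of the mean-of-multiple-simulations idea discussed in the conclusions. The function and argument names (augment_censored, cond_mean, cond_sd, n_draws) and the numeric values are purely illustrative.

import numpy as np
from scipy.stats import truncnorm

def augment_censored(y, censored, cond_mean, cond_sd, threshold, n_draws=10):
    """Replace right-censored entries of y by the mean of several draws from a
    normal distribution truncated below at the detection threshold."""
    z = np.array(y, dtype=float)
    for t in np.where(censored)[0]:
        a = (threshold - cond_mean[t]) / cond_sd  # standardized lower bound
        draws = truncnorm.rvs(a, np.inf, loc=cond_mean[t], scale=cond_sd, size=n_draws)
        z[t] = draws.mean()  # mean of multiple simulations
    return z

# Illustrative call: five observations, the third one censored at a hypothetical threshold
y = np.array([3.1, 4.0, 4.79, 3.5, 2.8])
censored = np.array([False, False, True, False, False])
z = augment_censored(y, censored, cond_mean=np.full(5, 3.8), cond_sd=1.0, threshold=4.79)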
5 Analysis of cloud ceiling height time series
Consider the meteorological time series of cloud ceiling height, previously analyzed in Park et al. (2007) and Schumacher et al. (2017). Cloud ceiling height is defined as the distance from the ground to the bottom of a cloud and is measured in hundreds of feet. According to Park et al. (2007), an accurate determination of the cloud ceiling height is important mainly because it is one of the major factors contributing to weather-related accidents and one of the major causes of flight delays. The recording device has a detection limit of 12000 feet, so the observed data can be considered a right-censored time series. The data were originally collected by the National Center for Atmospheric Research (NCAR) based on hourly observations in San Francisco during the month of March 1989, consisting of 716 observations, 41.7% of which are censored. The log-transformed data is available in the package ARCensReg (Schumacher et al., 2016) of the software R. A plot of the data is shown in Figure 4.
Figure 4: Censored time series of log-transformed hourly cloud ceiling height in San Francisco during March 1989.
In the absence of explanatory variables, the CLR-AR(p) models correspond to censored AR(p) models and two values, p = 1 and p = 2, are considered. The burn-in is set to b = 2 × 10^4 for the CLR-AR(1) and to b = 4 × 10^4 for the CLR-AR(2) as a result of monitoring the convergence of the chains (Appendix D). After discarding the burn-in, the z-scores of the Geweke test (Geweke, 1992) are {-0.302, 0.350, 0.624} and {0.272, 0.137, 0.787, -0.991} for p = 1 and p = 2, respectively, suggesting the convergence of the chains (see Appendix D). In order to reduce the autocorrelation in the MCMC outputs and obtain subsamples of length M = 1 × 10^3 to compute the estimates, lag = 80 and N = 1 × 10^5 are set for p = 1, while for p = 2 those values are lag = 180 and N = 2.2 × 10^5. Plots of the autocorrelation function (ACF) in Appendix D suggest no significant autocorrelation in these subsamples. The resulting posterior densities for the parameters are presented in Figure 5.
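The post-processing just described (discard a burn-in, thin by a fixed lag, check a Geweke-type statistic) can be sketched as follows. This is a simplified illustration, not the paper's code: the z-score below uses naive standard errors for the means of the first and last fractions of the chain, rather than the spectral estimates of Geweke (1992), and the burn-in, lag and window fractions are just the quoted p = 1 settings.

import numpy as np

def thin_chain(chain, burn_in, lag):
    """Drop the burn-in and keep every lag-th draw."""
    return chain[burn_in::lag]

def geweke_z(chain, first=0.1, last=0.5):
    """Simplified Geweke-type z-score comparing the means of the first and last
    fractions of the chain (naive standard errors, not spectral estimates)."""
    a = chain[: int(first * len(chain))]
    b = chain[int((1 - last) * len(chain)):]
    se2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(se2)

# Example with the p = 1 settings above: N = 1e5 draws, burn-in 2e4, lag 80 -> M = 1e3 retained
rng = np.random.default_rng(1)
raw = rng.normal(size=100_000)
sub = thin_chain(raw, burn_in=20_000, lag=80)
print(len(sub), geweke_z(sub))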
Figure 5: Posterior densities, top: AR(1) model (π(β0|y,z,σ2,ρ), π(σ2|y,z,β0,ρ), π(ρ|y,z,β0,σ2)); bottom: AR(2) model (π(β0|y,z,σ2,ρ), π(σ2|y,z,β0,ρ), π(ρ1|y,z,β0,σ2), π(ρ2|y,z,β0,σ2)).
To compute the parameter estimates, the sample means of the retained MCMC subsamples were calculated. Moreover, other summary statistics of those subsamples are also provided in Table 2, namely the median, the standard error (SE) and the HPD credible interval (CI), with probability 0.95. The CIs were calculated using the R package HDInterval (Meredith and Kruschke, 2018). To obtain the jackknife forecast residuals, the size of the initial training sample was set to n = 600, and the corresponding mean and variance were, respectively, 0.007 and 1.257 for the AR(1) model, and 0.061 and 0.95 for the AR(2) model. These values are close to 0 and 1, respectively, and their plots in Figure 6 (top) emphasize that these residuals are distributed around zero and show no significant correlation (Figure 6, bottom).
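The summaries reported in Table 2 below can be reproduced from any retained subsample with a few lines. A minimal sketch, assuming a one-dimensional array of posterior draws (not the paper's code): the HPD interval is computed as the shortest interval containing 95% of the sorted draws, the usual convention for unimodal samples (as in the HDInterval package), and the SE is taken as the posterior standard deviation of the draws.

import numpy as np

def hpd_interval(draws, prob=0.95):
    """Shortest interval containing `prob` of the draws (unimodal samples)."""
    x = np.sort(draws)
    n_in = int(np.ceil(prob * len(x)))
    widths = x[n_in - 1:] - x[: len(x) - n_in + 1]
    i = np.argmin(widths)
    return x[i], x[i + n_in - 1]

def summarize(draws, prob=0.95):
    """Mean, median, posterior standard deviation (reported as SE) and HPD CI."""
    return {
        "mean": draws.mean(),
        "median": np.median(draws),
        "se": draws.std(ddof=1),
        "ci": hpd_interval(draws, prob),
    }

# Illustrative use on a subsample of 1000 draws
rng = np.random.default_rng(2)
print(summarize(rng.normal(loc=4.1, scale=0.2, size=1000)))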
Figure 6: Residuals (jackknife and global) and corresponding ACF for: (left) p = 1; (right) p = 2.
Table 2: Parameter estimates of the LR model with AR(p) error term for the log-transformed cloud ceiling height data.
p   Stats    ˆβ1              ˆσ2              ˆρ1              ˆρ2
1   Mean     4.136            0.862            0.822            –
    Median   4.134            0.858            0.823            –
    SE       0.205            0.053            0.022            –
    CI       [3.771, 4.543]   [0.768, 0.970]   [0.776, 0.862]   –
2   Mean     4.070            0.839            0.697            0.160
    Median   4.068            0.836            0.695            0.160
    SE       0.248            0.053            0.037            0.037
    CI       [3.620, 4.576]   [0.738, 0.942]   [0.626, 0.766]   [0.087, 0.231]
The values of DIC, WAIC and the sum of squared standardized jackknife residuals (SSJR) are given in Table 3. Since the model with AR(2) errors presents the lowest values of DIC, WAIC and SSJR, the AR(2) model is the chosen one. This conclusion and the values of the model parameters are identical to those obtained by Schumacher et al. (2017) when analysing this dataset. The augmented data corresponding to the estimated model with AR(2) errors is represented in Figure 7 (blue line) against the observed data (red line).
Table 3: Information criteria for model assessment
Model        DIC     WAIC       SSJR
CLR-AR(1)    684.1   416497.4   144.6
CLR-AR(2)    590.6   377259     109.8
Figure 7: Observed vs augmented data of the censored time series of log-transformed hourly cloud ceiling height in San Francisco during March 1989.
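For completeness, a small sketch of how the SSJR column of Table 3 and the final model choice could be assembled, assuming the standardized jackknife residuals and the DIC/WAIC values are already computed; the numbers below are simply the Table 3 entries re-typed for illustration, and the function and variable names are not from the paper.

import numpy as np

def ssjr(std_jackknife_residuals):
    """Sum of squared standardized jackknife residuals."""
    return float(np.sum(np.asarray(std_jackknife_residuals) ** 2))

# Table 3 entries, re-typed for illustration; lower is better for every criterion
criteria = {
    "CLR-AR(1)": {"DIC": 684.1, "WAIC": 416497.4, "SSJR": 144.6},
    "CLR-AR(2)": {"DIC": 590.6, "WAIC": 377259.0, "SSJR": 109.8},
}
best = min(criteria, key=lambda m: (criteria[m]["DIC"], criteria[m]["WAIC"], criteria[m]["SSJR"]))
print(best)  # CLR-AR(2)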
6 Conclusions
This work proposes a Bayesian approach to perform inference in a linear regression model with AR(p) errors for censored data (the CLR-AR(p) model). Ignoring the censorship pattern in the data and applying the usual estimation methods results in biased estimates. The algorithm proposed implements a Gibbs sampler with data augmentation. The novelty stems from the data augmentation with the mean of multiple simulations (GDA-MMS), which improves the accuracy of the algorithm. In fact, the GDA-MMS algorithm works well even when the proportion of censored values is large (40%).
Note that in the simulation study and in the empirical example the Jeffreys priors were used. However, if information about the data is available, other priors with appropriate hyperparameters may be used; in particular, for (β, σ2) a Multivariate Normal - Inverted Gamma distribution may be considered. Here the censoring threshold was considered known. An open issue to be considered in future work is to model the data under an unknown censoring level.
Acknowledgements
This work is supported by Fundação Calouste Gulbenkian and the Center for Research and Development in Mathematics and Applications (CIDMA) through the Portuguese Foundation for Science and Technology (FCT - Fundação para a Ciência e a Tecnologia), reference UIDB/04106/2020.
References
Beach, C., MacKinnon, J., 1978. Full maximum likelihood estimation of second order autoregressive errors models. Journal of Econometrics 7, 187–198.
Buckley, J., James, I., 1979. Linear regression with censored data. Biometrika 66, No. 3, 429–436.
Bürkner, P.-C., Gabry, J., Vehtari, A., 2020. Approximate leave-future-out cross-validation for Bayesian time series models. Journal of Statistical Computation and Simulation 90, No. 14, 2499–2523.
Casella, G., George, E., 1992. Explaining the Gibbs sampler. The American Statistician 46, No. 3, 167–174.
Chib, S., 1992. Bayes inference in the Tobit censored regression model. Journal of Econometrics 51, 79–99.
Fridley, B., Dixon, P., 2007. Data augmentation for a Bayesian spatial model involving censored observations. Environmetrics 18, 107–123.
Friel, N., Wyse, J., 2012. Estimating the statistical evidence – a review. Statistica Neerlandica 66, 288–308.
Gelfand, A.E., Smith, A.F.M., 1990. Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association 85, No. 410, 398–409.
Geweke, J., 1992. Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments (with discussion), in: Bernardo, J., Berger, J., Dawid, A., Smith, A. (Eds.), Bayesian Statistics 4. Oxford University Press, Oxford, pp. 169–193.
Gilks, W.R., Best, N.G., Tan, K.K.C., 1995. Adaptive rejection Metropolis sampling within Gibbs sampling. Journal of the Royal Statistical Society 44, 4, 455–472.
Gourieroux, C., Monfort, A., Renault, E., Trognon, A., 1987. Simulated residuals. Journal of Econometrics 34, 201–252.
Harrison, J., West, M., 1991. Dynamic linear model diagnostics. Biometrika 78, 4, 797–808.
Hopke, P., Liu, C., Rubin, D., 2001. Multiple imputation for multivariate data with missing and below-threshold measurements: Time-series concentrations of pollutants in the Arctic. Biometrics 57, 22–33.
Houseman, E.A., Virji, M.A., 2017. A Bayesian approach for summarizing and modeling time-series exposure data with left censoring. Annals of Work Exposures and Health 61, No. 7, 773–783.
Kahle, D., Stamey, J., 2017. R package 'invgamma': The inverse gamma distribution. CRAN Repository.
Law, M., Jackson, D., 2017. Residual plots for linear regression models with censored outcome data: A refined method for visualizing residual uncertainty. Communications in Statistics - Simulation and Computation 46:4, 3159–3171.
Meredith, M., Kruschke, J., 2018. Package HDInterval. CRAN Repository, 1–7.
Mohammad, N.M., 2014. Censored Time Series Analysis. PhD Thesis. The University of Western Ontario, Ontario.
Park, J., Genton, M., Ghosh, S., 2007. Censored time series analysis with autoregressive moving average models. The Canadian Journal of Statistics 35, 1, 151–168.
Plummer, M., Best, N., Cowles, K., Vines, K., 2006. CODA: Convergence diagnosis and output analysis for MCMC. R News 6, 7–11.
Prais, S., Winsten, C., 1954. Trend estimators and serial correlation. Cowles Commission Discussion Paper: Statistics, No. 383.
R Core Team, 2020. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
Ripley, B., Venables, B., Hornik, K., Gebhardt, A., Firth, D., 2021. R package 'MASS': Support functions and datasets for Venables and Ripley's MASS. CRAN Repository.
Robert, C., Casella, G., 2010. Introducing Monte Carlo Methods with R. Springer, New York.
Schumacher, F., Lachos, V., Galarza, C., 2016. R package 'ARCensReg': Fitting univariate censored linear regression model with autoregressive errors. CRAN Repository.
Schumacher, F.L., Lachos, V., Dey, D., 2017. Censored models with autoregressive errors: A likelihood-based perspective. The Canadian Journal of Statistics 45, 68, 375–392.
Shiffrin, R.M., Lee, M.D., Kim, W., Wagenmakers, E.J., 2008. A survey of model evaluation approaches with a tutorial on hierarchical Bayesian methods. Cognitive Science 32, 1248–1284.
Spiegelhalter, D.J., Best, N.G., Carlin, B.P., van der Linde, A., 2002. Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society, Series B 64, 583–639.
Tanner, M., Wong, W., 1987. The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association 82, No. 398, 528–540.
Wagenmakers, E.J., Grünwald, P., Steyvers, M., 2006. Accumulative prediction error and selection of time series models. Journal of Mathematical Psychology 50, 149–166.
Wang, C., Chan, K., 2018. Quasi-likelihood estimation of a censored autoregressive model with exogenous variables. Journal of the American Statistical Association 113:523, 1135–1145.
Watanabe, S., 2013. A widely applicable Bayesian information criterion. Journal of Machine Learning Research 14, 867–897.
A Posterior Densities

Figure 8: Model M1 with ρ = 0.8 (top 3 lines) and ρ = 0.48 (bottom 3 lines). Posterior density of the model parameters for n = 100, 500 and 1000 under 3 censorship scenarios. [Plot panels omitted: π(β0|y,σ2,ρ) with β0 = 2; π(σ2|y,β0,ρ) with σ2 = 2; π(ρ|y,β0,σ2) with ρ1 = 0.8 (top) and ρ1 = 0.48 (bottom); each at 5%, 20% and 40% censoring, curves for n = 100, 500, 1000.]

Figure 9: Model M2 with ρ = 0.8: Posterior density of the model parameters for n = 100, 500 and 1000 under 3 censorship scenarios. [Plot panels omitted: π(β0|y,β1,σ2,ρ) with β0 = 2; π(β1|y,β0,σ2,ρ) with β1 = 1; π(ρ|y,β0,β1,σ2) with ρ1 = 0.8; π(σ2|y,β0,β1,ρ) with σ2 = 2; each at 5%, 20% and 40% censoring, curves for n = 100, 500, 1000.]

Figure 10: Model M3 with ρ = −0.5: Posterior density of the model parameters for n = 100, 500 and 1000 under 3 censorship scenarios. [Plot panels omitted: π(β0|y,β1,σ2,ρ) with β0 = 0.2; π(β1|y,β0,σ2,ρ) with β1 = 0.4; π(ρ|y,β0,β1,σ2) with ρ = −0.5; π(σ2|y,β0,β1,ρ) with σ2 = 0.607; each at 5%, 20% and 40% censoring, curves for n = 100, 500, 1000.]
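The conditionals shown in Figures 8-10 and the replication summary in Table 4 below both refer to the CLR-AR(1) simulation design. As a rough, hedged illustration of that design only, the sketch below generates one censored AR(1)-error regression dataset and reports mean (standard deviation) estimates over replications. The left-censoring convention, the standard-normal covariate, and all names here (e.g. simulate_clr_ar1) are assumptions for illustration, not the paper's implementation; plain OLS is used only to show the summary format, not the Bayesian estimators the table reports.

# Minimal sketch (assumed setup, not the authors' code) of a censored linear
# regression with AR(1) errors: y*_t = b0 + b1*x_t + e_t, e_t = rho*e_{t-1} + eta_t,
# eta_t ~ N(0, sigma2), observations left-censored at a threshold chosen to give
# roughly the target censoring rate.
import numpy as np


def simulate_clr_ar1(n, b0, b1, sigma2, rho, cens_rate, rng):
    """Simulate one censored AR(1)-error regression dataset (illustrative)."""
    x = rng.normal(size=n)                      # assumed standard-normal covariate
    eta = rng.normal(scale=np.sqrt(sigma2), size=n)
    e = np.zeros(n)
    e[0] = eta[0] / np.sqrt(1.0 - rho**2)       # stationary start for the AR(1) errors
    for t in range(1, n):
        e[t] = rho * e[t - 1] + eta[t]
    y_star = b0 + b1 * x + e                    # latent (uncensored) responses
    c = np.quantile(y_star, cens_rate)          # threshold giving ~cens_rate censoring
    y = np.maximum(y_star, c)                   # observed responses, censored from below
    censored = y_star < c
    return x, y, censored


def summary_over_reps(n, cens_rate, reps=100, b0=2.0, b1=1.0, sigma2=2.0, rho=0.48, seed=0):
    """Mean (sd) of naive OLS estimates over `reps` replications, in Table 4's format."""
    rng = np.random.default_rng(seed)
    est = []
    for _ in range(reps):
        x, y, _ = simulate_clr_ar1(n, b0, b1, sigma2, rho, cens_rate, rng)
        X = np.column_stack([np.ones(n), x])
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        est.append(beta_hat)
    est = np.asarray(est)
    return {name: f"{est[:, j].mean():.3f}({est[:, j].std():.3f})"
            for j, name in enumerate(["b0", "b1"])}


if __name__ == "__main__":
    # e.g. n = 500 with 20% censoring, mirroring one cell of Table 4's grid
    print(summary_over_reps(n=500, cens_rate=0.20))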
B Simulation Results

Table 4: Model 2: Results, mean (standard deviation), based on 100 simulations of simple CLR-AR(1) model under different sample sizes and censorship and ρ > 0.

n     % of cen   β0 = 2         β1 = 1         σ2 = 2         ρ = 0.15
100   5%         2.027(0.066)   0.995(0.026)   2.036(0.107)   0.134(0.011)
      20%        2.025(0.070)   0.995(0.025)   1.962(0.107)   0.133(0.012)
      40%        2.069(0.100)   0.980(0.030)   1.756(0.165)   0.127(0.016)
500   5%         1.996(0.018)   1.006(0.006)   2.014(0.017)   0.148(0.002)
      20%        2.009(0.018)   1.002(0.006)   1.921(0.025)   0.147(0.002)
      40%        2.109(0.026)   0.968(0.007)   1.659(0.135)   0.140(0.003)
1000  5%         2.007(0.009)   0.995(0.003)   2.004(0.009)   0.149(0.001)
      20%        2.026(0.011)   0.987(0.003)   1.911(0.017)   0.145(0.001)
      40%        2.119(0.025)   0.956(0.005)   1.659(0.125)   0.141(0.001)

n     % of cen   β0 = 2         β1 = 1         σ2 = 2         ρ = 0.48
100   5%         2.046(0.085)   0.992(0.023)   2.045(0.106)   0.460(0.008)
      20%        2.045(0.098)   0.992(0.024)   1.985(0.120)   0.452(0.010)
      40%        2.098(0.119)   0.983(0.028)   1.767(0.179)   0.450(0.012)
500   5%         2.001(0.024)   1.006(0.006)   2.015(0.017)   0.477(0.002)
      20%        2.020(0.027)   1.000(0.007)   1.929(0.023)   0.472(0.002)
      40%        2.125(0.038)   0.966(0.008)   1.696(0.115)   0.463(0.002)
1000  5%         2.009(0.013)   0.995(0.003)   2.004(0.009)   0.477(0.001)
      20%        2.031(0.016)   0.987(0.003)   1.924(0.016)   0.471(0.001)
      40%        2.130(0.032)   0.957(0.006)   1.689(0.108)   0.462(0.001)

n     % of cen   β0 = 2         β1 = 1         σ2 = 2         ρ = 0.8
100   5%         2.110(0.354)   0.992(0.019)   2.106(0.105)   0.780(0.004)
      20%        2.130(0.369)   0.999(0.019)   1.991(0.111)   0.777(0.004)
      40%        2.290(0.397)   0.968(0.027)   1.755(0.214)   0.762(0.007)
500   5%         2.022(0.099)   1.005(0.005)   2.016(0.017)   0.793(0.001)
      20%        2.054(0.101)   1.000(0.005)   1.942(0.022)   0.789(0.001)
      40%        2.200(0.119)   0.979(0.007)   1.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='714(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='107) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='779(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='002) 1000 5% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='013(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='058) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='995(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='002) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='005(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='009) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='795(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='000) 20% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='054(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='101) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='000(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='005) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='942(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='022) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='789(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='001) 40% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='200(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='119) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='979(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='007) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='714(0.' 
Table 5: Model 2: Results based on 100 simulations of simple CLR-AR(1) model under different sample sizes and censorship, using GDA with mean of multiple samples and ρ < 0.
n      % of cen   β0 = 2         β1 = 1         σ2 = 2         ρ = −0.15
100    5%    2.013(0.056)   1.000(0.023)   2.041(0.109)   0.161(0.009)
       20%   2.006(0.060)   1.003(0.023)   1.960(0.098)   0.160(0.012)
       40%   2.066(0.076)   0.980(0.024)   1.760(0.162)   0.160(0.014)
500    5%    1.994(0.015)   1.007(0.006)   2.014(0.017)   0.152(0.002)
       20%   2.011(0.016)   1.000(0.006)   1.914(0.026)   0.151(0.002)
       40%   2.090(0.023)   0.974(0.006)   1.672(0.125)   0.151(0.003)
1000   5%    2.005(0.007)   0.997(0.002)   2.005(0.009)   0.151(0.001)
       20%   2.011(0.016)   1.000(0.006)   1.917(0.025)   0.151(0.002)
       40%   2.119(0.025)   0.956(0.005)   1.655(0.128)   0.148(0.001)
n      % of cen   β0 = 2         β1 = 1         σ2 = 2         ρ = −0.48
100    5%    2.003(0.045)   1.004(0.017)   2.055(0.115)   0.481(0.007)
       20%   1.992(0.057)   1.009(0.018)   1.989(0.102)   0.479(0.008)
       40%   2.054(0.065)   0.985(0.018)   1.837(0.123)   0.467(0.009)
500    5%    1.992(0.011)   1.007(0.004)   2.021(0.017)   0.479(0.002)
       20%   2.003(0.011)   1.003(0.004)   1.919(0.030)   0.475(0.002)
       40%   2.090(0.023)   0.972(0.005)   1.741(0.086)   0.474(0.002)
1000   5%    2.001(0.005)   0.999(0.002)   2.011(0.009)   0.479(0.001)
       20%   2.012(0.006)   0.995(0.002)   1.940(0.014)   0.476(0.001)
       40%   2.097(0.018)   0.965(0.004)   1.740(0.078)   0.468(0.001)
n      % of cen   β0 = 2         β1 = 1         σ2 = 2         ρ = −0.8
100    5%    1.992(0.033)   1.007(0.012)   2.089(0.120)   0.797(0.003)
       20%   1.994(0.052)   1.003(0.015)   2.097(0.134)   0.793(0.003)
       40%   2.065(0.103)   0.969(0.029)   2.059(0.141)   0.780(0.005)
500    5%    1.991(0.010)   1.007(0.003)   2.033(0.018)   0.793(0.001)
       20%   1.998(0.010)   1.007(0.003)   2.025(0.021)   0.794(0.001)
       40%   2.071(0.023)   0.977(0.006)   1.947(0.030)   0.786(0.001)
1000   5%    1.999(0.004)   1.000(0.001)   2.019(0.009)   0.797(0.000)
       20%   1.992(0.010)   1.006(0.003)   2.025(0.021)   0.794(0.001)
       40%   2.061(0.013)   0.977(0.003)   1.959(0.016)   0.784(0.001)
Table 6: Model 1: Results based on 100 simulations of CLR-AR(1) model without explanatory variables, under different sample sizes and censorship, using GDA with mean of multiple samples and ρ > 0.
n      % of cen   β0 = 2         σ2 = 2         ρ = 0.15
100    5%    2.021(0.021)   2.037(0.108)   0.133(0.010)
       20%   2.021(0.021)   1.974(0.111)   0.131(0.011)
       40%   2.066(0.023)   1.693(0.208)   0.125(0.014)
500    5%    2.006(0.058)   2.015(0.017)   0.149(0.002)
       20%   2.011(0.006)   1.931(0.023)   0.147(0.002)
       40%   2.053(0.008)   1.681(0.120)   0.142(0.003)
1000   5%    2.000(0.003)   2.005(0.009)   0.148(0.001)
       20%   2.005(0.003)   1.923(0.016)   0.148(0.001)
       40%   2.046(0.004)   1.673(0.116)   0.143(0.001)
n      % of cen   β0 = 2         σ2 = 2         ρ = 0.48
100    5%    2.034(0.055)   2.044(0.105)   0.461(0.008)
       20%   2.037(0.055)   1.993(0.135)   0.456(0.008)
       40%   2.087(0.056)   1.753(0.195)   0.444(0.012)
500    5%    2.010(0.015)   2.015(0.016)   0.477(0.002)
       20%   2.018(0.015)   1.946(0.023)   0.473(0.002)
       40%   2.067(0.017)   1.725(0.096)   0.462(0.002)
1000   5%    2.001(0.008)   2.001(0.009)   0.477(0.001)
       20%   2.008(0.008)   1.938(0.014)   0.472(0.001)
       40%   2.058(0.010)   1.717(0.090)   0.461(0.001)
n      % of cen   β0 = 2         σ2 = 2         ρ = 0.8
100    5%    2.097(0.368)   2.056(0.111)   0.781(0.004)
       20%   2.126(0.346)   2.000(0.122)   0.774(0.004)
       40%   2.266(0.341)   1.708(0.231)   0.762(0.006)
500    5%    2.027(0.097)   2.018(0.019)   0.793(0.001)
       20%   2.054(0.095)   1.939(0.021)   0.788(0.001)
       40%   2.191(0.109)   1.673(0.133)   0.778(0.002)
1000   5%    2.003(0.054)   2.009(0.009)   0.795(0.000)
       20%   2.036(0.053)   1.926(0.016)   0.789(0.001)
       40%   2.157(0.065)   1.682(0.115)   0.780(0.001)
Table 7: Model 1: Results based on 100 simulations of CLR-AR(1) model without explanatory variables, under different sample sizes and censorship, using GDA with mean of multiple samples and ρ < 0.
n      % of cen   β0 = 2         σ2 = 2         ρ = −0.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='15 100 5% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='017(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='012) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='035(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='107) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='163(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='010) 20% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='011(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='013) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='972(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='104) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='162(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='011) 40% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='051(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='015) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='721(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='174) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='169(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='013) 500 5% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='004(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='003) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='017(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='018) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='152(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='002) 20% 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='008(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='003) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='932(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='023) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='152(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='002) 40% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='046(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='005) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='688(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='115) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='154(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='003) 1000 5% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='000(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='002) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='004(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='009) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='151(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='001) 20% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='004(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='002) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='925(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='015) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='150(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='001) 40% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='045(0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='004) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='674(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='115) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='152(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='001) n % of cen β0 = 2 σ2 = 2 ρ = −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='48 100 5% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='013(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='007) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='048(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='112) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='487(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='007) 20% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='008(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='009) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='996(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='114) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='486(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='008) 40% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='026(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='013) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='850(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='139) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='478(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='008) 500 5% 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='003(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='002) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='021(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='018) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='480(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='002) 20% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='007(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='003) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='954(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='021) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='478(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='002) 40% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='047(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='005) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='714(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='100) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='481(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='002) 1000 5% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='000(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='001) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='008(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='009) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='479(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='001) 20% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='004(0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='001) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='947(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='013) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='476(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='001) 40% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='040(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='003) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='755(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='070) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='471(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='001) n % of cen β0 = 2 σ2 = 2 ρ = −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='8 100 5% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='002(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='005) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='068(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='108) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='790(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='003) 20% 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='993(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='007) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='051(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='118) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='787(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='004) 40% 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='012(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='013) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='953(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='122) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='783(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='004) 500 5% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='000(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='001) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='029(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='019) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='797(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='001) 20% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='003(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='002) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='984(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='020) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='796(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='001) 40% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='029(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='004) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='882(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='039) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='793(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='001) 1000 5% 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='999(0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='001) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='015(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='010) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='797(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='000) 20% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='001(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='001) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='982(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='011) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='795(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='000) 40% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='025(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='002) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='885(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='025) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='791(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='000) 26 C Measures of predictive performance DIC DIC is a measure of fit quality widely used in Bayesian approach and is calculated as follows: DIC = −4 · Eθ|y[lnf(y|θ)] + 2 · lnf(y|ˆθ), (28) where Eθ|y[lnf(y|θ)] is the posterior mean of the log-likelihood function, given by Eθ|y[lnf(y|θ)] = 1 M M � j=1 lnf(y|θ(j)), (29) and f(y|ˆθ) is the likelihood function evaluated at the Bayesian parameters es- timates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' WAIC WAIC is another measure of predictive accuracy, more related to Bayesian approach than previous criterion Watanabe (2013), and is given by WAIC = −2 n � t=T lnEθ|y[f(yt|θ)] + 2pw.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' (30) where pw is the correction term, often as the one used in DIC criterion, defined as follows: pw = −2 T � t=1 {Eθ|y[lnf(yt|θ)] − lnEθ|y[f(yt|θ)]}.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' (31) 27 0 100 200 300 400 500 −2 0 2 First iteration in segment Z−score X 0 100 200 300 400 500 −2 0 1 2 First iteration in segment Z−score X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='1 0 100 200 300 400 500 −2 0 1 2 First iteration in segment Z−score ro 0 5 10 15 20 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='8 lag=80 ACF β0 0 5 10 15 20 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='8 lag=80 ACF σ2 0 5 10 15 20 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='8 lag=80 ACF ρ 0 100 200 300 400 500 −2 0 1 2 First iteration in segment Z−score beta0 0 100 200 300 400 500 −2 0 1 2 First iteration in segment Z−score sigma^2 0 100 200 300 400 500 −2 0 1 2 First iteration in segment Z−score rho1 0 100 200 300 400 500 −2 0 1 2 First iteration in segment Z−score rho2 0 5 10 15 20 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='8 lag=180 ACF β0 0 5 10 15 20 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='8 lag=180 ACF σ2 0 5 10 15 20 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='8 lag=180 ACF ρ1 0 5 10 15 20 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='8 lag=180 ACF ρ2 Figure 11: Top: Geweke plots and ACF functions of the subsamples used to compute the parameters estimates for CLR-AR(1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' Botton: Geweke plots and ACF functions of the subsamples used to compute the parameters estimates for CLR-AR(2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' D Analysis of the convergence of the chains 28 0e+00 4e+04 8e+04 0 2 4 6 Iterations beta0 0e+00 4e+04 8e+04 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='8 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='2 Iterations sigma^2 0e+00 4e+04 8e+04 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='80 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='90 Iterations rho 0 50000 150000 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='6 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='0 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='4 Iterations beta0 0 50000 150000 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='8 Iterations sigma^2 0 50000 150000 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='60 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='70 Iterations rho1 0 50000 150000 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content='20 Iterations rho2 Figure 12: Evolution of 1st and 3rd quantiles corresponding MCMC outputs,top: AR(1) model;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/jNAzT4oBgHgl3EQf4_5S/content/2301.01852v1.pdf'} +page_content=' bottom: AR(2) model.' 
diff --git a/k9AzT4oBgHgl3EQfNfvd/content/2301.01151v1.pdf b/k9AzT4oBgHgl3EQfNfvd/content/2301.01151v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e8190b3a123ae3bc2f019e40bd214eea1bbd52b5 --- /dev/null +++ b/k9AzT4oBgHgl3EQfNfvd/content/2301.01151v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:232279f26d2cd08326fa740100aa621aef586241e93ec3b30c1194da597420b1 +size 13222242 diff --git a/l9FPT4oBgHgl3EQfIDTd/vector_store/index.pkl b/l9FPT4oBgHgl3EQfIDTd/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..ee309f0dacb29aed174e9398e98ab537f6140e03 --- /dev/null +++ b/l9FPT4oBgHgl3EQfIDTd/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc6a0d1abb8a05b8ff6baf2e5656676e7de7be84f577196b58bd6682b298aecf +size 199326 diff --git a/lNE1T4oBgHgl3EQfNwNu/vector_store/index.faiss b/lNE1T4oBgHgl3EQfNwNu/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..6938ab1f95582b24d0c0c39904f326248408dd8f --- /dev/null +++ b/lNE1T4oBgHgl3EQfNwNu/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b894a371f341313a4957202094ce171fbfada3c5d00ac86de6b9ec7a0fe47b15 +size 2949165 diff --git a/ltFLT4oBgHgl3EQfeC95/vector_store/index.faiss b/ltFLT4oBgHgl3EQfeC95/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..08c13bd7ef38cfcdccdb68dd9a6e9bf32758e990 --- /dev/null +++ b/ltFLT4oBgHgl3EQfeC95/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4205d4536511020557a67c67dfa04a335a739e6e3db96ce28f8a7b262e681199 +size 2818093 diff --git a/mtE1T4oBgHgl3EQf1AXN/content/2301.03464v1.pdf b/mtE1T4oBgHgl3EQf1AXN/content/2301.03464v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d25f02070697fa61506b3c09c484b564e6b8d3dd --- /dev/null +++ b/mtE1T4oBgHgl3EQf1AXN/content/2301.03464v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe36ce896dde2dea08f2e0bf26040167eecec0c358a95827c1393e4610dc2e4b +size 143405 diff --git a/mtE1T4oBgHgl3EQf1AXN/vector_store/index.faiss b/mtE1T4oBgHgl3EQf1AXN/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..1d5ca0bdf34c3b379cf66b63a4551aa79b04fb9c --- /dev/null +++ b/mtE1T4oBgHgl3EQf1AXN/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0bb4acf93027acb09f182e83c2bf249d132226f4eab84c35b227ddac01b5526 +size 2162733 diff --git a/mtE1T4oBgHgl3EQf1AXN/vector_store/index.pkl b/mtE1T4oBgHgl3EQf1AXN/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..cb79cc02b649dedc2db13d3edfd3558fd6043d3c --- /dev/null +++ b/mtE1T4oBgHgl3EQf1AXN/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:937c8828257520d86f3c117d96d056fcdeb6ab2a3bb79f7061efe4959d7c3fb9 +size 83829 diff --git a/ndE2T4oBgHgl3EQfewdN/content/2301.03919v1.pdf b/ndE2T4oBgHgl3EQfewdN/content/2301.03919v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7c3072dbfda882b69915b2c5cbd87bfe4a0529da --- /dev/null +++
b/ndE2T4oBgHgl3EQfewdN/content/2301.03919v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d1dd31385745e2b83d2f7d85e3c5a3c51c562a6913cb41d93e5a4e9baa13401 +size 1374152 diff --git a/ntAyT4oBgHgl3EQfy_ls/content/tmp_files/2301.00694v1.pdf.txt b/ntAyT4oBgHgl3EQfy_ls/content/tmp_files/2301.00694v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..f1ebdbcea566fc09329f638cd0503c889822237a --- /dev/null +++ b/ntAyT4oBgHgl3EQfy_ls/content/tmp_files/2301.00694v1.pdf.txt @@ -0,0 +1,906 @@
arXiv:2301.00694v1 [physics.hist-ph] 30 Dec 2022
Redundancies and Profundities
Kyle Singh∗
Guarini School of Graduate and Advanced Studies
64 College Street, Anonymous Hall Suite 102, Hanover, New Hampshire 03755-3563
(Dated: January 3, 2023)
We reevaluate the status of the gauge principle and reposition it as an intermediary structure dependent on the initial conditions we endow on our theory. We explore how gauge symmetry manifests in the context of basic quantum electrodynamics, spontaneous symmetry breaking, and the modern scattering amplitudes program. We also investigate the addition of an auxiliary field in φ⁴ theory and see how the dynamics are altered. Modal language is introduced as a convenient way to articulate the weight that gauge symmetry demands in our theories, alongside the principles of locality and Lorentz invariance. A shifting scale ontology is introduced with regard to the gauge principle and other structures of Quantum Field Theory in general.
I. INTRODUCTION
The gauge principle is widely regarded as the cornerstone of fundamental physics. However, the modern scattering amplitudes program, as well as the discovery of the AdS/CFT correspondence, has highlighted the redundancy that comes from the implementation of the symmetry in Quantum Field Theory and in general. Moreover, independent of such recent developments, the ontological status of the gauge principle has been brought into question. This places the gauge principle in a unique position. It is a concept that is fundamental: one that no doubt leads us to profound physical insight by fixing the structure of fundamental interactions in the standard model while also revealing previously hidden degrees of freedom, elementary particles, that are present in nature. However, in other regimes, its presence masks the simplicity underlying our theory and certain physical aspects of our systems. We need to find a way to categorize that which is both redundant and profound in our physical theory.
More generally, our basic theories are such that intermediary concepts, as we will call them, map onto physical data and are often most important. Consider, for example, the fact that in General Relativity one is free to make a choice of coordinates based on what the physical system demands. Particular coordinate choices, such as in the distinction between the Schwarzschild and Kruskal coordinates for black holes, reveal different aspects of the spacetime structure. In this case, for example, the Kruskal coordinates resolve the nonphysical coordinate singularities endemic to the Schwarzschild geometry. Such freedom can easily lead us to question the role of a background space itself. Here space is instead in an intermediary position: it is fundamental as a background but wholly arbitrary based on the physical system and initial conditions we impose.
∗ kyle.singh.gr@dartmouth.edu
Not only is gauge symmetry profound in our fundamental theories, it is unavoidable. Carlo Rovelli establishes the ubiquity of gauge and claims that "Gauge invariance is not just mathematical redundancy; it is an indication of the relational character of fundamental observables in physics" (Rovelli, 7). He sets up a coupled dynamical system and shows that gauge variables are necessary in order for one to capture the full dynamics. In other words, one cannot decouple the system without the presence of gauge variables.¹ For Rovelli, the gauge symmetry reveals the "relational structure of our world", as gauge interactions describe relative quantities involving more than one object (Rovelli, 7). For example, consider the fact that in ordinary quantum mechanics we can only measure differences in energy. Rovelli insists that treating gauge as pure redundancy ignores this relational structure.
Indeed, claiming that the gauge principle results in pure redundancy, or that it is somehow inherent to our fundamental theory, does not capture the unique position it operates within. We are then left with a view of the gauge principle that is purely intermediary. Our options for how we should treat the gauge symmetry were aptly categorized by Michael Redhead in what has now come to be known as Redhead's trilemma. Redhead purports that we have three options for how we choose to treat the gauge principle. We can either claim that gauge symmetry is physical and motivate physical structures directly representative of gauge fields, try to reformulate the entire theory in terms of quantities that are purely gauge-invariant, or let non-gauge-invariant quantities enter as surplus structure and develop the theory accordingly, adding further surplus structure if necessary to make the theory work.
The third option is the one often taken by physicists for practical purposes and is the stance we will undertake. The question then becomes how we should categorize such intermediary concepts in theoretical physics and, more broadly, in mathematical representation. Redhead's three propositions do not fully articulate such a position; rather, it seems that they each capture some aspect of how we wish to treat the gauge symmetry. His distinctions seem to arise within the context of a particular theory or set of calculations. For example, with respect to his second proposition, we can look to Wilson loops, which are completely gauge-invariant quantities, and work out the dynamics of our QFT in terms of them; however, they lead to non-local physics, as is well evidenced by the Aharonov-Bohm effect. Indeed, parsing out various physical systems and regimes of our QFT and observing what role the gauge principle plays within them will be central to how we choose to speak about its status. Furthermore, this will allow us to set up a shifting scale ontology and classify various pillars of our theory in an ontological way.
Not only do we need to deal with this status of gauge symmetry, we must also find a way to incorporate its inherent ambiguity, both in its implementation and with respect to the physical objects the symmetry comes to represent.
1 See Ref. [2] for a full discussion of this.
We do not directly discuss surplus structure broadly in mathematics, as Guay has suggested we ought to do. It is not so much of a concern in the particular way we choose to position gauge symmetry, since we are teasing out the physical data that the symmetry leads us to and treating its inherent redundancy as a triviality: we can easily choose a particular gauge even though we are given infinitely many choices from which to begin our computations. The redundancies will only be important if they mask the underlying simplicity of our theory. This will be discussed within the context of the modern scattering amplitudes program, and it is within this context that the discussion of surplus structure in general may be apt.
We now begin with a brief summary of the gauge principle and its operation in the context of electrodynamics.
II. THE GAUGE PRINCIPLE
Let us briefly review the gauge principle in the context of classical electromagnetism. We begin with the Lagrangian for complex scalar field theory
L = (∂µψ)†(∂^µψ) − m²ψ†ψ   (1)
The Lagrangian possesses a U(1) symmetry under the following replacement
ψ(x) → ψ(x)e^{iα}   (2)
Note that this symmetry is a global one, namely the field is changed by the same amount at every spacetime point. As is standard, we can then work out the consequences if we demand that our Lagrangian remain invariant under local transformations with the following replacement
ψ(x) → ψ(x)e^{iα(x)}   (3)
Clearly, under such a prescription the Lagrangian is not invariant. In order to restore the local symmetry, we introduce a new field Aµ(x) which cancels out the extra terms resulting from the requirement of local gauge invariance. We must require that Aµ transforms as Aµ(x) → Aµ(x) − (1/q)∂µα(x).² We then introduce the covariant derivative defined as follows
Dµ = ∂µ + iqAµ(x)   (4)
Given this set of transformations, our theory is locally gauge invariant. The introduction of an additional field in order to obtain local gauge invariance is a ubiquitous feature of gauge theories. In the context of electromagnetism, we utilize the following Lagrangian³
L = −(1/4)(∂µAν − ∂νAµ)(∂^µA^ν − ∂^νA^µ) − JµA^µ   (5)
The equations of motion are the first two Maxwell equations,
∂²A^ν − ∂^ν(∂µA^µ) = J^ν   (6)
We can rewrite the transformation of our gauge field in the following more generalized form
Aµ(x) → Aµ(x) − ∂µχ(x)   (7)
Of course, in any physical theory we want the fields we introduce to hold physical significance. The function χ(x) gives us, for all intents and purposes, an infinite number of symmetries, one for each choice of function.⁴ Therefore, we must take further steps in constraining our gauge field to facilitate its representation of a physical object. In the case of electrodynamics, the gauge field contains the dynamics of the photon. The following two conditions, known as choosing a gauge, are imposed to ensure that the gauge field has two degrees of freedom, since the photon can only have two polarizations:
∂µA^µ(x) = 0   (8)
∂0ξ = A′0   (9)
These particular gauges are known as Lorentz gauge and Coulomb gauge, respectively.
2 Here, q is the coupling strength and in this case is just an extra parameter which will have physical significance in other theories.
3 Note that this is the same as writing L = −(1/4)FµνF^µν − JµA^µ.
4 This conundrum in itself is indicative of gauge as redundancy. It would be startling to say that the presence of infinite symmetries would yield an infinite number of conservation laws following from Noether's theorem, for example! Here two states related by a gauge transformation are indeed the same physical state.
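The invariance claim around the covariant derivative (4) can be checked mechanically. The following is a minimal symbolic sketch, not part of the original text, that works in one dimension for brevity and assumes sympy is available; the field and function names are illustrative. It verifies that the kinetic and mass terms built from Dµ are unchanged under ψ → e^{iα(x)}ψ together with Aµ → Aµ − (1/q)∂µα.

import sympy as sp

x = sp.symbols('x', real=True)
q, m = sp.symbols('q m', real=True, positive=True)
psi = sp.Function('psi')(x)                   # complex scalar field
A = sp.Function('A', real=True)(x)            # gauge field (single component)
alpha = sp.Function('alpha', real=True)(x)    # local gauge parameter

def lagrangian(field, gauge):
    # |D field|^2 - m^2 |field|^2 with D = d/dx + i q A
    D = sp.diff(field, x) + sp.I*q*gauge*field
    return sp.conjugate(D)*D - m**2*sp.conjugate(field)*field

L_before = lagrangian(psi, A)
L_after = lagrangian(sp.exp(sp.I*alpha)*psi, A - sp.diff(alpha, x)/q)

# The extra alpha'(x) terms cancel between the two pieces of D,
# so the difference should reduce to zero.
print(sp.simplify(sp.powsimp(sp.expand(L_after - L_before))))   # expected: 0

The same cancellation fails if the ordinary derivative ∂µψ is used in place of Dµψ, which is precisely why the compensating field Aµ has to be introduced in the first place.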
The gauge principle then refers to the procedure of introducing this additional dynamical field to maintain local gauge symmetry. Moreover, as we have just seen, the gauge field dictates the form of the coupling.
Prima facie, there seems to be no reason at all to impose local gauge invariance. Moreover, it seems quite contrived to insist that the gauge field introduced to preserve our desired symmetry must have a precise set of dynamics corresponding to a particular object in a theory; in other words, our designation of the gauge field as a photon was put in by hand and not derived from the gauge principle itself. Even more striking, perhaps, is the fact that we can never measure Aµ. That nature seems to require us to introduce such objects is truly incredible.⁵
Indeed, Martin calls out these difficulties with the gauge principle and writes that the idea that the gauge principle "'dictates' [or] 'determines' the form of fundamental interactions as well as the existence of certain physical fields must be taken with a large grain of salt" (Martin, 233). He argues that, at best, the gauge principle should be taken as heuristic, and he offers a differing approach to the logic of nature, viewing the gauge symmetry not as a fundamental physical principle but rather as a relic of a more fundamental theory, in particular of renormalizable theories, that works in conjunction with other physical requirements such as Lorentz invariance.
It is clear that a proper reevaluation of the status of the gauge principle in our physical theories is necessary. We will expand on Martin's thesis and seek to make more general modal statements relating the conception of local gauge symmetry to our physical theory as one apparatus. In doing so, we do not contradict Rovelli's argument on the ubiquity of gauge or the arbitrariness inherent to our imposition of the gauge principle in itself. Instead, we reconsider its position as a principle in our physical theory altogether.
III. PURE REDUNDANCY
Consider the following Lagrangian in φ⁴ theory,
L = (1/2)(∂µφ)² − (m²/2)φ² − (g/8)φ⁴   (10)
We shift the Lagrangian by adding a new σ field in the following way⁶
L′ = L + (1/2g)(σ − (g/2)φ²)²   (11)
The Lagrangian is then
L′ = (1/2)(∂µφ)² − (m²/2)φ² − (g/8)φ⁴ + (1/2g)(σ − (g/2)φ²)²   (12)
In QFT, the Green's functions of our theory tell us about the dynamics. We can compute these via the generating functional
Z[J] = ∫DφDσ exp[i∫d⁴x (L′ + Jφ)] / ∫DφDσ exp[i∫d⁴x L′]   (13)
If one carries out the above functional integral, it is straightforward to show that the expression written above for our newly defined theory is the same as the original one.
5 All of this may be a false problem, of course, and Rovelli aptly calls into question whether or not we should ask our mathematical procedures to have a purpose, asking us whether we should conclude that it was the purpose of humans to kill large mammals if we were indeed responsible for their deaths.
6 Shifting our Lagrangian in such a fashion is commonly referred to as a Hubbard-Stratonovich transformation.
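At the classical level, this equivalence can be checked in a few lines. The sketch below is illustrative and not from the original text (sympy assumed, symbol names mine): it treats the fields as ordinary symbols, derives the σ constraint that the equation of motion will give, and confirms that substituting it back into L′ returns L exactly. The same computation is carried out analytically just below.

import sympy as sp

phi, sigma, m, g = sp.symbols('phi sigma m g', real=True)
dphi = sp.Symbol('dphi', real=True)           # stands in for the gradient of phi

L  = sp.Rational(1, 2)*dphi**2 - m**2/2*phi**2 - g/8*phi**4
Lp = L + (sigma - g/2*phi**2)**2 / (2*g)      # shifted Lagrangian L'

# L' contains no derivatives of sigma, so its equation of motion is a
# pure constraint rather than a dynamical equation.
constraint = sp.solve(sp.diff(Lp, sigma), sigma)[0]
print(constraint)                              # -> g*phi**2/2

# Substituting the constraint back eliminates sigma and recovers L.
print(sp.simplify(Lp.subs(sigma, constraint) - L))   # -> 0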
To emphasize this, we can compute the equation of motion for the σ field,
∂L′/∂σ − ∂µ[∂L′/∂(∂µσ)] = 0   (14)
and find that
σ = (g/2)φ²   (15)
There are no time derivatives present in the computed equation of motion. Therefore, our newly added field does not contribute to the dynamics of this system. Furthermore, we can eliminate the additional field since it can only provide a constraint on the theory.
We can promote the fields of our system to operators and write down the vacuum-to-vacuum expansion of the S-matrix for our theory. This yields the relevant Feynman diagrams corresponding to various Wick contractions of the fields and further tells us about the role of the σ field. The S-matrix operator reads as follows
⟨0|S|0⟩ = ⟨0| T[exp(−i∫d⁴x (1/2)σφ²)] |0⟩   (16)
Writing out the first couple of terms in the perturbation expansion yields
−(i/2)∫d⁴x ⟨0|T[σxφxφx]|0⟩ + (1/2!)(−i/2)²∫d⁴x d⁴y ⟨0|T[σxφxφxσyφyφy]|0⟩ + ...   (17)
Contractions of the φ field yield the standard free field propagator. As we have shown, since the σ field does not contribute to the dynamics of the theory, nothing can propagate through it. Contractions such as σxσy take two spacetime points and identify them with one another, playing the role of interaction vertices in the various diagrams which arise. This calculation is an example of a procedure resulting in a trivial redundancy. It is clearly distinct from the procedure undertaken in incorporating gauge symmetry into our field equations.
Treating the gauge principle as simply an artifact of pure redundancy does not capture its importance to the dynamics of, say, Quantum Electrodynamics and to the construction of the standard model. The σ field's importance in the constructed example and the role of the gauge field are wholly different in terms of what they represent and what role they play in the theory, although both were introduced in an arbitrary fashion. We must then seek to reposition the status of gauge as it relates to QFT and see what we can say about scientific theories in full generality.
IV. PURE GAUGE
As we have reviewed, any Lagrangian with local symmetry must harbor gauge fields. Consider the following Lagrangian for gauged complex scalar field theory
L = (∂µψ† − iqAµψ†)(∂^µψ + iqA^µψ) + µ²ψ†ψ − λ(ψ†ψ)² − (1/4)FµνF^µν   (18)
It is crucial to note that the sign of the mass term has been flipped. This allows us to invoke the standard symmetry breaking procedure. We insist, again, on local gauge invariance and work in polar coordinates by letting ψ(x) = σ(x)e^{iθ(x)} for a unique ground state phase set at θ(x) = θ0. The fact that we cannot now change the phase of the ground state either locally or globally means that the symmetry is broken in both regimes. Now we observe, as is commonly done, what physical consequences can be derived by computing the particle spectrum of this system. In polar coordinates
∂µψ + iqAµψ = (∂µσ)e^{iθ} + i(∂µθ + qAµ)σe^{iθ}   (19)
where the gauge field is represented in our theory in the following way
Aµ + (1/q)∂µθ ≡ Bµ   (20)
Therefore
(∂µψ† − iqAµψ†)(∂^µψ + iqA^µψ) = (∂µσ)² + σ²q²BµB^µ   (21)
The Lagrangian becomes
L = (∂µσ)² + σ²q²B² + µ²σ² − λσ⁴ − (1/4)FµνF^µν   (22)
We now invoke the standard symmetry breaking procedure. The minima of the potential are at σ = √(µ²/2λ). We break the symmetry by setting σ0 = √(µ²/2λ) and θ0 = 0.
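Before carrying out the expansion around this vacuum in the text, the resulting spectrum can be previewed symbolically. The sketch below is illustrative and not from the original text (sympy assumed, symbol names mine): it confirms that σ0 is a stationary point of the potential V(σ) = −µ²σ² + λσ⁴ and reads off the quadratic terms that reappear in the expansion, namely a +µ²δ² term in the potential for the radial fluctuation δ and an (A²/2)B² mass term for the vector field, with A = q√(µ²/λ).

import sympy as sp

sigma, delta, B, mu, lam, q = sp.symbols('sigma delta B mu lamda q', positive=True)

V = -mu**2*sigma**2 + lam*sigma**4
sigma0 = sp.sqrt(mu**2/(2*lam))                         # claimed minimum
print(sp.simplify(sp.diff(V, sigma).subs(sigma, sigma0)))   # -> 0, so sigma0 is stationary

# Expand around the vacuum, sigma = sigma0 + delta/sqrt(2), and pick out delta^2.
Vexp = sp.expand(V.subs(sigma, sigma0 + delta/sp.sqrt(2)))
print(sp.simplify(Vexp.coeff(delta, 2)))                # -> mu**2, i.e. -mu^2 delta^2 in L

# The vector mass term comes from sigma^2 q^2 B^2 evaluated at the vacuum.
print(sp.simplify((sigma**2*q**2*B**2).subs(sigma, sigma0)))   # -> B**2*mu**2*q**2/(2*lam)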
Expanding the Lagrangian in terms of a new field δ defined as δ/\sqrt{2} = σ - σ_0 and ignoring constants yields the following

\mathcal{L} = \frac{1}{2}(\partial_\mu\delta)^2 - \mu^2\delta^2 - \sqrt{\lambda}\,\mu\,\delta^3 - \frac{\lambda}{4}\delta^4 - \frac{1}{4}F^{\mu\nu}F_{\mu\nu} + \frac{A^2}{2}B^2 + q^2\left(\frac{\mu^2}{\lambda}\right)^{1/2}\delta B^2 + \frac{1}{2}q^2\delta^2 B^2 + \ldots    (23)

Here, A = q\sqrt{\mu^2/\lambda}. Breaking the symmetry surprisingly results in our theory containing the massive vector field B_µ. Meanwhile, the massless excitations of the θ field have disappeared. Note that these excitations were only present in the case of global symmetry breaking. Our theory, which once described two massive scalar degrees of freedom and two massless photon polarizations, now describes one massive scalar and the three polarizations of a massive vector particle. Imposing the gauge transformation A_µ + \frac{1}{q}\partial_\mu\theta = B_µ removes the Goldstone modes. This removal of the Goldstone mode via the prescribed gauge transformation means that it is pure gauge.

Local symmetry breaking yields the massive physical degree of freedom while removing the nonphysical massless one. This procedure, as is well known, is crucial to the Higgs mechanism and places the Higgs boson correctly within the standard model. Local symmetry yields new physical insight, fixing the redundancy of global symmetry breaking. This manifestation of gauge invariance is distinct from the pure redundancy discussed with regard to the addition of an auxiliary scalar field; however, it is an indication of the intermediary role that gauge symmetry plays in a particular regime that we wish to probe in the context of our field theories.

V. MODAL CONSIDERATIONS

Given the preceding discussions, it is now natural to ask ourselves what we can say about modality with respect to the gauge principle. Modal language, even if used loosely for our purposes currently, gives us a convenient way to categorize the gauge principle in relation to the other mechanisms in our QFTs. In order to make modal statements on gauge symmetry, it seems apt to first take one of Redhead's positions on how we should treat it functionally and proceed from there in addressing its status. That the gauge symmetry results in surplus structure, redundant degrees of freedom, means that only a subset of these degrees of freedom can be recognized as physical representations. We can take Redhead's third proposition, that theories should keep the gauge invariance for as long as possible. This means that we do not dispose of the local gauge symmetry and take into account its physical predictive power.7

7 Again, as discussed earlier, we are not so concerned with this subtlety and work with the gauge principle, in many ways, after the question of what gauge to work in has been decided upon.

This conveniently allows us to split up the gauge symmetry into a local piece and a global one.8 This factorization allows us to evaluate any claims of necessity independent of one another.

Global gauge invariance has physical necessity because it carries through as a symmetry in all of our foundational theories, although it is, in most cases, trivially a part of our theories. Any principles that are essential to the structure of the theory are metaphysically necessary, for example Lorentz invariance. Local gauge symmetry then takes on an intermediary role. Since it is not wholly essential in our theories which are more fundamental (this will be discussed in the context of the modern scattering amplitudes program), we can posit that it is metaphysically possible.
It is possible that local gauge invariance is required for the prediction of a photon coupling to an electron; however, it may not be. Perhaps it may carry nomic necessity, but such a claim would require that we know the true origin of why we must impose local gauge symmetry at all. This would mean that it would be attached to some underlying law of nature that we clearly do not know of now. For now, in the context of how we are exploring and seeking to categorize the gauge symmetry, it is enough to make the modal distinction we have made above in the hopes of clarifying how we wish to treat gauge within the broader construction of QFT.9

VI. SCATTERING AMPLITUDES AND SIMPLICITY

The modern scattering amplitudes movement has given us a method to compute amplitudes in a way that forgoes gauge redundancy altogether, revealing aspects of a more foundational QFT. The standard polarization vectors responsible for describing redundant massless particle states are replaced by spinor-helicity variables, which are trivially gauge invariant.10 This modern incarnation of the S-matrix bootstrap imposes the fundamental principles of locality and unitarity to determine amplitudes. What one finds is that calculations which were once extremely complicated in the traditional Feynman diagrammatic approach become tremendously simplified and almost trivial, thanks to a cancellation of redundancies. Taking the fermion states to be massless for these calculations, we are working in the high-energy scattering limit, which constitutes a theory at a more fundamental energy scale.

8 It is safe to say that use of the word "principle" in gauge principle is misleading and not indicative of how we treat gauge symmetry at large. Although the gauge principle in itself refers to the local gauge symmetry, the use of the word would be better served when looking at the role of gauge symmetry in particular as a whole.

9 It would be interesting to explore whether such physical concepts, including gauge, tied to physical objects can be categorized within a more rigorous formal modal structure.

10 For a review of the spinor-helicity formalism refer to [4].

Let us compute the color-ordered 4-gluon amplitude, A_4[1^-2^-3^+4^+], at tree level. Recall that such partial amplitudes are trivially gauge invariant. Utilizing the standard Feynman rule for the 4-gluon vertex allows us to write11

A_4 = \frac{(-i\sqrt{2}\,g^2)\left[(\epsilon_1\cdot p_2)\,\epsilon_2 - (p_1\cdot\epsilon_2)\,\epsilon_1\right]\cdot\left[(\epsilon_3\cdot p_4)\,\epsilon_4 - (p_3\cdot\epsilon_4)\,\epsilon_3\right]}{(p_1+p_2)^2}    (24)

Translating this into the spinor-helicity formalism, one finds the following expression

A_4 = \frac{-2g^2}{\langle 12\rangle[12]}\;\frac{\langle 12\rangle[34]}{\langle 13\rangle[24]}\;\frac{\langle 12\rangle[24]}{\sqrt{2}\,[14]}\;\frac{\langle 13\rangle[34]}{\sqrt{2}\,\langle 14\rangle}    (25)

Applying momentum conservation and simplifying this expression gives us the following simple amplitude

A_4 = \frac{\langle 12\rangle^4}{\langle 12\rangle\langle 23\rangle\langle 34\rangle\langle 41\rangle}    (26)

Indeed, one finds the following simple expression for all tree-level Yang-Mills amplitudes of this maximally-helicity-violating (MHV) type, with exactly two negative-helicity gluons.12

A_n[1^+\ldots i^-\ldots j^-\ldots n^+] = \frac{\langle ij\rangle^4}{\langle 12\rangle\langle 23\rangle\cdots\langle n1\rangle}    (27)

It is a remarkably simple expression that is fully general. In the traditional perturbative formalism, computing a seven-gluon amplitude would require the calculation of 154 separate diagrams, with the amplitude still boiling down to the result above. Without the extra gauge redundancies clouding the fundamental structure of the scattering amplitudes, we can ask ourselves what principles we are left with.
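The bracket manipulations used between Eqs. (24) and (26) can also be checked numerically. The sketch below is our own and is not taken from the paper; it assumes numpy, uses complex momenta so that momentum conservation can be imposed without reality constraints, follows the two-component conventions summarized in Appendix A (indices 0-3 label particles 1-4), and takes Eq. (25) in the form reconstructed above with the coupling g set to 1.

    # Numerical check of the spinor-helicity steps from Eq. (24) to Eq. (26)
    # (our sketch; numpy assumed; complex momenta make conservation easy to impose).
    import numpy as np

    rng = np.random.default_rng(0)

    lam = rng.normal(size=(2, 4)) + 1j * rng.normal(size=(2, 4))    # columns: |i>
    lamt = np.zeros((2, 4), dtype=complex)                          # columns: |i]
    lamt[:, :2] = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    # Solve sum_i |i>[i| = 0 for the last two square spinors.
    lamt[:, 2:] = np.linalg.solve(lam[:, 2:], -(lam[:, :2] @ lamt[:, :2].T)).T

    ang = lambda i, j: np.linalg.det(lam[:, [i, j]])                # <ij>
    sqr = lambda i, j: np.linalg.det(lamt[:, [i, j]])               # [ij]
    P = lambda i: np.outer(lam[:, i], lamt[:, i])                   # momentum bispinor

    # (p1 + p2)^2 = <12>[12]: the propagator denominator of Eq. (24).
    print(np.isclose(np.linalg.det(P(0) + P(1)), ang(0, 1) * sqr(0, 1)))

    # Momentum conservation: <12>[24] = -<13>[34], used to simplify Eq. (25).
    print(np.isclose(ang(0, 1) * sqr(1, 3), -ang(0, 2) * sqr(2, 3)))

    # The product of fractions in Eq. (25) (g = 1) equals the Parke-Taylor
    # form of Eq. (26), up to the overall coupling that Eq. (26) suppresses.
    eq25 = (-2 / (ang(0, 1) * sqr(0, 1))
            * ang(0, 1) * sqr(2, 3) / (ang(0, 2) * sqr(1, 3))
            * ang(0, 1) * sqr(1, 3) / (np.sqrt(2) * sqr(0, 3))
            * ang(0, 2) * sqr(2, 3) / (np.sqrt(2) * ang(0, 3)))
    eq26 = ang(0, 1)**4 / (ang(0, 1) * ang(1, 2) * ang(2, 3) * ang(3, 0))
    print(np.isclose(eq25, eq26))

Only brackets enter the final expression; no polarization vectors or reference spinors survive, which is the concrete sense in which the result is trivially gauge invariant.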
Consider the following ansatz for 3-particle amplitudes

A_3(1^{h_1}2^{h_2}3^{h_3}) = c\,\langle 12\rangle^{x_{12}}\langle 13\rangle^{x_{13}}\langle 23\rangle^{x_{23}}    (28)

Under little-group scaling, on-shell amplitudes transform in the following way, with helicity h_i

A_n(|1\rangle, |1], h_1, \ldots, t_i|i\rangle, t_i^{-1}|i], h_i, \ldots) = t_i^{-2h_i}\, A_n(\ldots, |i\rangle, |i], h_i, \ldots)    (29)

This fixes the following

-2h_1 = x_{12} + x_{13}    (30a)
-2h_2 = x_{12} + x_{23}    (30b)
-2h_3 = x_{13} + x_{23}    (30c)

11 Note that one needs to specify a particular set of reference spinors. We have chosen the q's such that the t-channel diagram vanishes and all ǫ_i·ǫ_j's vanish except ǫ_2·ǫ_3.

12 This can be derived using the standard BCFW recursion relations.

Solving the system of equations, we can rewrite the ansatz as follows

A_3(1^{h_1}2^{h_2}3^{h_3}) = c\,\langle 12\rangle^{h_3-h_1-h_2}\langle 13\rangle^{h_2-h_1-h_3}\langle 23\rangle^{h_1-h_2-h_3}    (31)

Now, we can consider a 3-gluon amplitude with the following helicity configuration

A_3(g_1^- g_2^- g_3^+) = g\,\frac{\langle 12\rangle^3}{\langle 13\rangle\langle 23\rangle}    (32)

Little-group scaling fixes the form of the amplitude (a short symbolic check of this counting is given in the sketch below). Moreover, the amplitude is fixed by locality, namely that it is compatible with a term of the form AA\partial A in the Lagrangian \mathrm{Tr}\,F_{\mu\nu}F^{\mu\nu} and not with a term that goes like g'\,AA\frac{\partial}{\Box}A.

We are now in a position to ask ourselves what we are left with in this high-energy theory. We are left with the principles of locality, unitarity, and Lorentz invariance, as outlined in the simple calculations above. Gauge symmetry plays a trivial role in this regime, where calculations are simplified and where a more foundational structure of our amplitudes, and perhaps our QFT, is revealed. Coupling this insight with our previous modal claims, we can revise and categorize a new ontological status for the gauge principle.

VII. THE ONTOLOGICAL STATUS OF GAUGE SYMMETRY

We begin with a set of principles and regard them as the basis for our ontology. Then the gauge principle, as is customarily defined, cannot fit into our ontological construction. Instead, we can take its factorized local piece to be one step removed from the fundamental principles of locality, unitarity and Lorentz invariance. Local gauge symmetry, stated as a principle in the way we utilize it, is a part of our ontology in the theory that is less fundamental, at lower energy scales. It is a projection onto the more fundamental theory that becomes necessary at lower energy and resolves itself by exiting the picture at higher energies.

We have, then, a direct manifestation of Occam's razor at higher energy, whereby the theory seems to become simpler and where our ontological stakes become more defined. Indeed, as we have seen, we are left with a set of principles embodied within the higher-energy theory that dictate all of its tenets. It is quite likely that such a theory is wholly inaccessible to experiment, as has been the experience with String Theory and its exploration in the past several decades. And so, it is not simply enough to say that our ontology, as determined by principles, should be determined by the theory which inhabits higher energy scales. Instead, there is a sense in which our ontology resolves itself at various scales. The base principles carry throughout the scales. Physics is local for quarks, for baseballs and for nuclei. It must be local even for strings if they exist empirically. Certain principles, then, get added to our ontology as we lower our energy scale.
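Returning briefly to the little-group counting of Eqs. (28)-(31) referenced above, the linear system for the exponents can be solved mechanically. The following short sketch is ours (sympy assumed) and also reproduces the helicity assignment used in Eq. (32).

    # Little-group exponent counting of Eqs. (28)-(31) (our sketch; sympy assumed).
    import sympy as sp

    h1, h2, h3, x12, x13, x23 = sp.symbols('h1 h2 h3 x12 x13 x23')
    sol = sp.solve([sp.Eq(-2*h1, x12 + x13),
                    sp.Eq(-2*h2, x12 + x23),
                    sp.Eq(-2*h3, x13 + x23)], [x12, x13, x23])
    print(sol)   # {x12: h3 - h1 - h2, x13: h2 - h1 - h3, x23: h1 - h2 - h3}, as in Eq. (31)

    # The MHV choice h1 = h2 = -1, h3 = +1 of Eq. (32):
    print({k: v.subs({h1: -1, h2: -1, h3: 1}) for k, v in sol.items()})
    # {x12: 3, x13: -1, x23: -1}  ->  A3 ~ <12>^3 / (<13><23>)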
The gauge principle, then, is a principle in the sense that its imposition seems to be necessary to obtain the relevant physical phenomena in the effective renormalizable theory, and it thus becomes a part of our ontology in that setting. However, our classical field equations, for example, only carry global gauge invariance. Local gauge invariance does not carry through, nor is it necessary for telling us about the physical data of our classical equations.

It is important to reconcile the fact that there are an infinite number of ways we could utilize our gauge freedom in setting up our equations. This brings us, somewhat loosely, to Quine's idea of the proxy function, in which the various choices of gauge can yield the same correct physical result. There is no "true gauge" specified by even more fundamental principles. In other words, local gauge symmetry in particular can be treated as a proxy for all the various gauge constraints we impose on our equations in the theory where local gauge invariance maps onto physical entities. Therefore, it is not strictly ontological, as global gauge symmetry may be taken to be; rather, it is that which masks the fundamental principles in lieu of redundancies, but it also plays an ontological role in a particular regime.13

13 As arbiters of Quantum Field Theory, we can make the claim that photons exist independent of whether or not they arise as a result of gauge symmetry. That being said, the mathematical representations of the photon all exhibit this symmetry and thus we take the abstract entities that map onto physical data to be one and the same.

We can think of this shifting scale ontology as if we were trying to resolve the pixels of a computer screen. Various details will come in and out of focus as one zooms in and out of the screen. Our ontological commitments must be modified accordingly. If we are committed to principles, those principles will shift and resolve themselves in accordance with what the physical phenomena necessitate.

VIII. CONCLUSIONS

The gauge principle as an intermediary has been explored. We have set up a variety of systems, in various contexts, and exhibited the importance of the gauge principle in each instance. We have also shown an instance where our additions to the theory, in the case of an auxiliary field, result in no new physical insight whatsoever, exemplifying the difference between the gauge principle and simply adding extra degrees of freedom to our system with the hopes of new physical information. Moreover, in the case of symmetry breaking, our basic example shows that the Higgs mechanism is also a result of following our nose after realizing the importance of local gauge symmetry.

A consideration of this principle poses great foundational problems that have yet to be resolved and warrant further exploration. It is an open question whether we can extend the idea of intermediaries and shifting scale ontology to a larger system; therefore, it would be worthwhile to explore these ideas as they relate to the broader architecture of the standard model. An application to accidental symmetries and group symmetries in QFT immediately comes to mind, as well as an application to the renormalization group, where our equations are fully derived from various scalings in energy.
This is also particularly relevant given the current landscape of theoretical physics, in which QFTs are seen as effective field theories only relevant up to a certain energy scale. This has resulted in a long search for the theory that is more fundamental and which will resolve the decades-old problem, still outstanding, of quantizing the gravitational force. Moreover, such an approach will surely have consequences for a broader metaphysical setup, which can extend itself into larger epistemological considerations.

One can conceive of a philosophical system that is spurred by the conception of intermediaries, which present themselves as fundamental as various initial conditions are presented.

ACKNOWLEDGMENTS

We wish to thank Aden Evens and Erkki Wilho Mackey for useful conversations and for reading the initial draft of this work. We also wish to thank Carlo Rovelli for useful clarifications via e-mail correspondence, as well as Laura Reutsche for helpful resources as this work was being completed.

Appendix A: Spinor Helicity Conventions

We introduce

\sigma^\mu = (1, \sigma^i), \qquad \bar{\sigma}^\mu = (1, -\sigma^i),    (A1)

where \sigma^i are the standard Pauli matrices and

\gamma^\mu = \begin{pmatrix} 0 & (\sigma^\mu)_{\dot{a}b} \\ (\bar{\sigma}^\mu)^{a\dot{b}} & 0 \end{pmatrix}.    (A2)

Here, \gamma^\mu are the usual gamma matrices obeying the Clifford algebra

\{\gamma^\mu, \gamma^\nu\} = -2\eta^{\mu\nu}.    (A3)

Defining

p_{\dot{a}b} \equiv \frac{1}{\sqrt{2}}\,p_\mu(\sigma^\mu)_{\dot{a}b} = \frac{1}{\sqrt{2}}\begin{pmatrix} -p^0 + p^3 & p^1 - ip^2 \\ p^1 + ip^2 & -p^0 - p^3 \end{pmatrix},
p^{a\dot{b}} \equiv \frac{1}{\sqrt{2}}\,p_\mu(\bar{\sigma}^\mu)^{a\dot{b}} = -\frac{1}{\sqrt{2}}\begin{pmatrix} p^0 + p^3 & p^1 - ip^2 \\ p^1 + ip^2 & p^0 - p^3 \end{pmatrix},    (A4)

we obtain expressions for null momenta in terms of two-component spinor-helicity variables:

p_{\dot{a}b} = -|p]_{\dot{a}}\langle p|_b = -\tilde{\lambda}_{\dot{a}}\lambda_b, \qquad p^{a\dot{b}} = -|p\rangle^a[p|^{\dot{b}} = -\lambda^a\tilde{\lambda}^{\dot{b}}.    (A5)

Indices are raised and lowered with the Levi-Civita symbol:

[p|^{\dot{a}} = \epsilon^{\dot{a}\dot{b}}|p]_{\dot{b}}, \qquad |p\rangle^a = \epsilon^{ab}\langle p|_b,    (A6)

where

\epsilon^{ab} = \epsilon^{\dot{a}\dot{b}} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.    (A7)

Finally, in these conventions

p_i \cdot p_j = \langle ij\rangle[ij],    (A8)

which can be readily verified using the identity

\sigma^\mu_{\dot{a}a}\,\bar{\sigma}^{\nu\,a\dot{a}} = -2\eta^{\mu\nu}.    (A9)

[1] Quine, W. V. O., "Things and Their Place in Theories", The Belknap Press of Harvard University Press, 1999.
[2] Rovelli, Carlo, "Why Gauge?", Foundations of Physics, volume 44, pages 91-104 (2014), arXiv:1308.5599.
[3] Weinberg, Steven, "The Quantum Theory of Fields", Cambridge University Press, 1995.
[4] Peskin, Michael E.; Schroeder, Daniel V., "An Introduction to Quantum Field Theory", CRC Press, 2019.
[5] Elvang, Henriette; Huang, Yu-tin, "Scattering Amplitudes in Gauge Theory and Gravity", Cambridge University Press, 2015.
[6] Arkani-Hamed, Nima; Rodina, Laurentiu; Trnka, Jaroslav, "Locality and Unitarity of Scattering Amplitudes from Singularities and Gauge Invariance", Phys. Rev. Lett. 120, 231602 (2018).
[7] Redhead, Michael, "The Interpretation of Gauge Symmetry", Chapter 7 in Symmetries in Physics: Philosophical Reflections, pp. 124-139, edited by Katherine Brading and Elena Castellani, Cambridge University Press, 2003.
[8] Guay, Alexandre, "The Arbitrariness of Local Gauge Symmetry", philsci-archive.pitt.edu.
[9] Lyre, Holger, "The Principles of Gauging", Philosophy of Science, Vol. 68, No. 3, Supplement: Proceedings of the 2000 Biennial Meeting of the Philosophy of Science Association, Part I: Contributed Papers (Sep. 2001), pp. S371-S381, The University of Chicago Press.
[10] Martin, Christopher A., "Gauge Principles, Gauge Arguments and the Logic of Nature", Philosophy of Science, Vol. 69, No. S3 (September 2002), pp.
S221-S234, The University of Chicago Press.
[11] Resnik, Michael D., "Quine, the Argument from Proxy Functions, and Structuralism", Philosophical Topics, Vol. 24, No. 1, Metaphysics (Spring 1996), pp. 129-148, University of Arkansas Press.
[12] Roberts, Alexander, "From Physical to Metaphysical Necessity", Mind, Oxford Academic, 2021.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=', ti |i⟩ , t−1 i |i], hi, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=') = t−2hi i An(.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content='|i⟩ , |i], hi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=') (29) This fixes the following − 2h1 = x12 + x13 (30a) 11 Note that one needs to specify a particular set of reference spinors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' We have chosen q’s such that the t-channel diagram vanishes and all ǫi · ǫj’s vanish except ǫ2 · ǫ3 12 This can be derived using the standard BCFW recursion rela- tions 6 − 2h2 = x12 + x23 (30b) − 2h3 = x12 + x33 (30c) Solving the system of equations we can rewrite the ansatz as follows A3(1h12h23h3) = c ⟨12⟩h3−h1−h2 ⟨13⟩h2−h1−h3 ⟨23⟩h1−h2−h3 (31) Now,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' we can consider a 3-gluon amplitude with the fol- lowing helicity configuration A3(g− 1 g− 2 g+ 3 ) = g ⟨12⟩3 ⟨12⟩ ⟨23⟩ (32) Little group scaling fixes the form of the amplitude.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Moreover, the amplitude is fixed by locality, namely that it is compatible with a term of the form AA∂A in the Lagrangian TrFµνF µν and not a term that goes like g′AA ∂ □A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' We are now in position to ask ourselves what we are left with in this high energy theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' We are left with the principles of locality, unitarity, and Lorentz invariance, as outlined in the simple calculations above.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Gauge symme- try plays a trivial role in this regime, where calculations are simplified and where a more foundational structure of our amplitudes, and perhaps our QFT, is revealed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Coupling this insight with our previous modal claims, we can revise and categorize a new ontological status for the gauge principle.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' VII.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' THE ONTOLOGICAL STATUS OF GAUGE SYMMETRY We begin with a set of principles and regard them as the basis for our ontology.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Then the gauge principle, as is customarily defined, cannot fit into our ontological con- struction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Instead, we can take its factorized local piece to be one step removed from the fundamental principles of locality, unitarity and Lorentz invariance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Local gauge symmetry, stated as a principle in the way we utilize it, is a part of our ontology in the theory that is less fun- damental, at lower energy scales.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' It is a projection onto the more fundamental theory that becomes necessary at lower energy and resolves itself by exiting the picture at higher energies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' We have, then, a direct manifestation of Occam’s razor at higher energy, whereby the theory seems to become simpler and where our ontological stakes become more defined.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Indeed, as we have seen we are left with a set of principles embodied within the higher energy theory that dictate all of its tenets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' It is quite likely that such a theory is wholly inaccessible to experiment, as has been predicated by String Theory and its exploration in the past several decades.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' And so, it is not simply enough to say that our ontology, as determined by principles, should be determined by the theory which inhabits higher energy scales.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Instead, there is a sense in which our ontol- ogy resolves itself at various scales.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' The base principles carry throughout the scales.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Physics is local for quarks, for baseballs and for nuclei.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' It must be local even for strings if they exist empirically.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Certain principles then, get added to our ontology as we lower our energy scale.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' The gauge principle then, is a principle in the sense that its imposition seems to be necessary to obtain the relevant physical phenomena in the effective renormaliz- able theory and thus becomes a part of our ontology in that setting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' However, our classical field equations for example only carry global gauge invariance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Local gauge invariance does not carry through nor is it necessary to tell us about the physical data about our classical equa- tions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' It is important to reconcile the fact that there are an in- finite number of ways we could utilize our gauge freedom in setting up our equations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' This brings us to Quine’s idea of the proxy function somewhat loosely, in which the various choices of gauge can yield the same correct physical result.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' There is no ”true gauge” specified by even more fundamental principles.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' In other words, lo- cal gauge symmetry in particular can be treated as a proxy for all the various gauge constraints we impose on our equations in the theory where local gauge invariance maps onto physical entities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Therefore, it is not strictly ontological, as global gauge symmetry may be taken to be, rather it is which masks the fundamental principles in lieu of redundancies, but also plays an ontological role in a particular regime.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' 13 We can think of this shifting scale ontology as if we were trying to resolve the pixels of a computer screen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Various details will come in and out of focus as one zooms in and out of the screen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Our ontological commitments must be modified accordingly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' If we are committed to principles, those principles will shift and resolve them- selves in accordance to what the physical phenomena ne- cessitate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' VIII.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' CONCLUSIONS The gauge principle as an intermediary has been ex- plored.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' We have set up a variety of systems, in various contexts and exhibited the importance of the gauge prin- ciple in each instance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' We have also shown an instance where our additions to the theory, in the case of an aux- iliary field, result in no new physical insight whatsoever, exemplifying the difference between the gauge principle and simply adding extra degrees of freedom to our system 13 As arbiters of Quantum Field Theory, we can make the claim that photons exist independent of whether or not they arise as a result of gauge symmetry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' That being said, the mathematical representations of the photon all exhibit this symmetry and thus we take the abstract entities that map onto physical data to be one and the same.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' 7 with the hopes of new physical information.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Moreover, in the case of symmetry breaking, our basic example shows that the Higgs mechanism is also a result of following our nose after realizing the importance of local gauge sym- metry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' A consideration of this principle poses great founda- tional problems that have yet to be resolved and warrant further exploration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' It is an open question if we can ex- tend the idea of intermediaries and shifting scale ontology to a larger system;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' therefore, it would be worthwhile to explore these ideas as they relate to the broader architec- ture of the standard model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' An application to accidental symmetries and group symmetries in QFT immediately comes to mind as well as an application to the renormal- ization group where our equations are fully derived from various scalings in energy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' This is also particularly rele- vant given the current landscape of theoretical physics in which QFTs are seen as effective field theories only rele- vant up to a certain energy scale.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' This has resulted in a long search for the theory that is more fundamental and which will resolve the decades old problem, still with- standing, of quantizing the gravitational force.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' More- over, such an approach will surely have consequences for a broader metaphysical set up which can extend itself into larger epistemological considerations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' One can conceive of a philosophical system that is sparred by the conception of intermediaries which present themselves as fundamental as various initial conditions are presented.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' ACKNOWLEDGMENTS We wish to thank Aden Evens and Erkki Wilho Mackey for useful conversations and for reading the initial draft of this work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' We also wish to thank Carlo Rovelli for useful clarifications via e-mail correspondence as well as Laura Reutsche for helpful resources as this work was being completed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Appendix A: Spinor Helicity Conventions We introduce σµ = (1, σi), ¯σµ = (1, −σi), (A1) where σi are the standard Pauli matrices and γµ = � 0 (σµ) ˙ab (¯σµ)a˙b 0 � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' (A2) Here, γµ are the usual gamma matrices obeying the Clif- ford algebra {γµ, γν} = −2ηµν.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' (A3) Defining p ˙ab ≡ 1 √ 2pµ(σµ) ˙ab = 1 √ 2 � −p0 + p3 p1 − ip2 p1 + ip2 −p0 − p3 � ,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' pa˙b ≡ 1 √ 2pµ(¯σµ)a˙b = − 1 √ 2 � p0 + p3 p1 − ip2 p1 + ip2 p0 − p3 � ,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' (A4) we obtain expressions for null momenta in terms of two- component spinor helicity variables: p ˙ab = −|p] ˙a⟨p|b = −˜λ˙aλb,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' pa˙b = −|p⟩a[p| ˙b = −λa˜λ ˙b,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' (A5) Indices are raised and lowered with the Levi-Civita symbol: [p| ˙a = ǫ ˙a˙b|p]˙b,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' |p⟩a = ǫab⟨p|b (A6) where ǫab = ǫ ˙a˙b = � 0 1 −1 0 � .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' (A7) Finally, in these conventions pi · pj = ⟨ij⟩[ij], (A8) which can be readily verified using the identity σµ ˙aa ¯σν a ˙a = −2ηµν.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' (A9) [1] Quine, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content='V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content='O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' ”Things and Their Place in Theories” The Belknap Press of Harvard University Press, 1999 [2] Rovelli, Carlo ”Why Gauge?”' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Foundations of Physics vol- ume 44, pages 91–104 (2014), arXiv:1308.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content='5599 [3] Weinberg, Steven ”The Quantum Theory of Fields” Cam- bridge University Press, 1995 [4] Peskin, Michael E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Schroeder, Daniel V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' ”An Introduc- tion To Quantum Field Theory” CRC Press, 2019 [5] Elvang, Henriette;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Huang, Yu-tin ”Scattering Amplitudes in Gauge Theory and Gravity” Cambridge University Press, 2015 [6] Arkani-Hamed, Nima;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Rodina, Laurentiu;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Trnka, Jaroslav ”Scattering Amplitudes in Gauge Theory and Gravity” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' 120, 231602 – Published 8 June 2018 [7] Redhead, Michael”Symmetries in Physics Philosophical Reflections” Chapter 7 - The interpretation of gauge sym- metry pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' 124-139, Edited by Katherine Brading, Elena Castellani, Cambridge University Press, 2003 [8] Guay, Alexandre”The arbitrariness of local gauge 8 symmetry”philsci-archive.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content='pitt.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content='edu [9] Lyre, Holger”The Principles of Gauging” Philosophy of Science Vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' 68, No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' 3, Supplement: Proceedings of the 2000 Biennial Meeting of the Philosophy of Science As- sociation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' Part I: Contributed Papers (Sep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=', 2001), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' S371-S381, The University of Chicago Press [10] Martin, Christopher A.”Gauge Principles, Gauge Argu- ments and the Logic of Nature” Philosophy of Science Vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' 69, No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' S3 (September 2002), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' S221-S234, The University of Chicago Press [11] Resnik, Michael D.”Quine, the Argument from Proxy Functions, and Structuralism” Philosophical Topics , SPRING 1996, Vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' 24, No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ntAyT4oBgHgl3EQfy_ls/content/2301.00694v1.pdf'} +page_content=' 1, Metaphysics (SPRING 1996), pp.' 
diff --git a/oNE1T4oBgHgl3EQfOwM_/vector_store/index.faiss b/oNE1T4oBgHgl3EQfOwM_/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..9ebbe59674aa578c1c7203730723d7dc4bb6ba62
--- /dev/null
+++ b/oNE1T4oBgHgl3EQfOwM_/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05b01b147bd8fdf9b7fded03bce24bedc2b54445460b8731301ce79e1bdab057
+size 2949165
diff --git a/otAyT4oBgHgl3EQfzPl9/content/2301.00698v1.pdf b/otAyT4oBgHgl3EQfzPl9/content/2301.00698v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2b3e64ae7b07423cb399f8515009d9dcae90b4bc
--- /dev/null
+++ b/otAyT4oBgHgl3EQfzPl9/content/2301.00698v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b12856df5d478b677235be4e6cb2531bccf17053f487cf2658874777ce36b08
+size 707618
diff --git a/otAyT4oBgHgl3EQfzPl9/vector_store/index.faiss b/otAyT4oBgHgl3EQfzPl9/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..7df59fbde45528874d20b91191a70a84e9908f21
--- /dev/null
+++ b/otAyT4oBgHgl3EQfzPl9/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0c600011a0c9b2132a4efdf991593791b7ff4cf608ec04eea8d244c09ed51f97
+size 5963821
diff --git a/otAyT4oBgHgl3EQfzPl9/vector_store/index.pkl b/otAyT4oBgHgl3EQfzPl9/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..043a441d78f744a6eb8c55211b2a4089ff3547ff
--- /dev/null
+++ b/otAyT4oBgHgl3EQfzPl9/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8977c90a6e0d7a61d5daf49f3f81d8c004449c74541dddb6abe4b0661f8c7d83
+size 215660
diff --git a/pNAzT4oBgHgl3EQf5f6s/vector_store/index.pkl b/pNAzT4oBgHgl3EQf5f6s/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..87f7c1e3ab04e755132742bdd941ad1ec38a3888
--- /dev/null
+++ b/pNAzT4oBgHgl3EQf5f6s/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0fdc92848f307cb331bfc89c358ecc5b566ded4b3905c09c055afc034d0a6340
+size 69941
diff --git a/qNAyT4oBgHgl3EQfzfnr/content/tmp_files/2301.00704v1.pdf.txt b/qNAyT4oBgHgl3EQfzfnr/content/tmp_files/2301.00704v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..e93e307000dd00c63df5e7e8021f3ea4fc8df2e7
--- /dev/null
+++ b/qNAyT4oBgHgl3EQfzfnr/content/tmp_files/2301.00704v1.pdf.txt
@@ -0,0 +1,1212 @@
Muse: Text-To-Image Generation via Masked Generative Transformers
Huiwen Chang* Han Zhang* Jarred Barber† AJ Maschinot† José Lezama Lu Jiang Ming-Hsuan Yang
Kevin Murphy William T. Freeman Michael Rubinstein† Yuanzhen Li† Dilip Krishnan†
Google Research
Abstract
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models.
Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality, etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at http://muse-model.github.io.

1. Introduction

Generative image models conditioned on text prompts have taken an enormous leap in quality and flexibility in the last few years (Ramesh et al., 2022; Nichol et al., 2021; Saharia et al., 2022; Yu et al., 2022; Rombach et al., 2022; Midjourney, 2022). This was enabled by a combination of deep learning architecture innovations (Van Den Oord et al., 2017; Vaswani et al., 2017); novel training paradigms such as masked modeling for both language (Devlin et al., 2018; Raffel et al., 2020) and vision tasks (He et al., 2022; Chang et al., 2022); new families of generative models such as diffusion (Ho et al., 2020; Rombach et al., 2022; Saharia et al., 2022) and masking-based generation (Chang et al., 2022); and finally, the availability of large-scale image-text paired datasets (Schuhmann et al., 2021).

In this work, we present a new model for text-to-image synthesis using a masked image modeling approach (Chang et al., 2022). Our image decoder architecture is conditioned on embeddings from a pre-trained and frozen T5-XXL (Raffel et al., 2020) large language model (LLM) encoder. In agreement with Imagen (Saharia et al., 2022), we find that conditioning on a pre-trained LLM is crucial for photorealistic, high-quality image generation. Our models (except for the VQGAN quantizer) are built on the Transformer (Vaswani et al., 2017) architecture.

We have trained a sequence of Muse models, ranging in size from 632M parameters to 3B parameters (for the image decoder; the T5-XXL model has an additional 4.6B parameters). Each model consists of several sub-models (Figure 3). First, we have a pair of VQGAN "tokenizer" models (Esser et al., 2021b), which can encode an input image to a sequence of discrete tokens as well as decode a token sequence back to an image. We use two VQGANs, one for 256x256 resolution ("low-res") and another for 512x512 resolution ("high-res"). Second, we have a base masked image model, which contains the bulk of our parameters. This model takes a sequence of partially masked low-res tokens and predicts the marginal distribution for each masked token, conditioned on the unmasked tokens and a T5-XXL text embedding. Third, we have a "superres" transformer model which translates (unmasked) low-res tokens into high-res tokens, again conditioned on T5-XXL text embeddings. We explain our pipeline in detail in Section 2.

*Equal contribution. †Core contribution. Correspondence to: Huiwen Chang, Han Zhang, Dilip Krishnan.
arXiv:2301.00704v1 [cs.CV] 2 Jan 2023

[Figure 1: a grid of 512 × 512 generations; the prompt shown under each image: "Two cats doing research." / "Astronauts kicking a football in front of Eiffel tower." / "A fluffy baby sloth with a knitted hat trying to figure out a laptop, close up." / "Manhattan skyline made of bread." / "A large array of colorful cupcakes, arranged on a maple table to spell MUSE." / "A storefront with 'Apollo' written on it, in front of Matterhorn Zermatt." / "3D mesh of Titanic floating on a water lily pond in the style of Monet." / "A storefront with 'Muse' written on it, in front of Matterhorn Zermatt." / "Three dogs celebrating Christmas with some champagne." / "A cake made of macarons in a unicorn shape." / "A futuristic city with flying cars." / "A sheep in a wine glass." / "A surreal painting of a robot making coffee."]
Figure 1. Muse text-to-image generation (512 × 512 resolution). Under each generated image, the corresponding caption is shown, exhibiting a variety of styles, captions and understanding. Each image was generated in 1.3s on a TPUv4 chip.
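To make the three-stage pipeline described above concrete, the following is a minimal, schematic sketch of the inference path. It is not the authors' code: the component interfaces (text_encoder, base_model, superres_model, vqgan_decoder) and the MASK_ID value are hypothetical stand-ins for the models described in Section 2.

```python
import torch

MASK_ID = 8192  # assumed: a reserved id one past an 8192-entry VQGAN codebook


@torch.no_grad()
def muse_generate(prompt, text_encoder, base_model, superres_model, vqgan_decoder):
    """Schematic inference path: text -> 16x16 tokens -> 64x64 tokens -> 512x512 image."""
    # 1. Frozen LLM text encoder: a sequence of embeddings used for cross-attention.
    text_emb = text_encoder(prompt)                      # (1, n_text, d_model)

    # 2. Base model: start from an all-[MASK] 16x16 token grid; masked positions
    #    are filled in over a small number of parallel decoding steps (Section 2.8).
    low_res = torch.full((1, 16 * 16), MASK_ID)
    low_res = base_model.decode(low_res, text_emb)       # (1, 256) discrete token ids

    # 3. Super-resolution transformer translates the 16x16 token map into a
    #    64x64 token map, again conditioned on the text embeddings.
    high_res = superres_model.decode(low_res, text_emb)  # (1, 4096) token ids

    # 4. The f=8 VQGAN decoder maps the high-res tokens back to pixels.
    return vqgan_decoder(high_res.view(1, 64, 64))       # (1, 3, 512, 512)
```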
Compared to Imagen (Saharia et al., 2022) or DALL-E 2 (Ramesh et al., 2022), which are built on cascaded pixel-space diffusion models, Muse is significantly more efficient due to the use of discrete tokens; it can be thought of as a discrete diffusion process with the absorbing state ([MASK]) (Austin et al., 2021). Compared to Parti (Yu et al., 2022), a state-of-the-art autoregressive model, Muse is more efficient due to the use of parallel decoding. Based on comparisons on similar hardware (TPU-v4 chips), we estimate that Muse is more than 10x faster at inference time than either Imagen-3B or Parti-3B models and 3x faster than Stable Diffusion v1.4 (Rombach et al., 2022) (see Section 3.2.2). All these comparisons are made on images of the same size: either 256 × 256 or 512 × 512. Muse is also faster than Stable Diffusion (Rombach et al., 2022), in spite of both models working in the latent space of a VQGAN. We believe that this is due to the use of a diffusion model in Stable Diffusion v1.4, which requires a significantly higher number of iterations at inference time.

[Figure 2: zero-shot editing examples shown as input/output pairs, grouped into "Inpainting + Outpainting" and "Mask-free Editing". Edit prompts: "A funny big inflatable yellow duck", "On the ring of Saturn", "Hot air balloons", "A futuristic Streamline Moderne building", "London skyline", "A wildflower bloom at Mountain Rainier", "A woman wearing a dress", "A man wearing a blue t-shirt with 'hello world' written on it" (NegPrompt: "A man wearing a t-shirt"), "A man wearing a christmas sweater".]
Figure 2. Examples of zero-shot text-guided image editing using Muse. We show examples of a number of editing applications using the Muse text-to-image generative model, on real input images, without fine-tuning. All edited images are generated at 512 × 512 resolution.

The efficiency improvement of Muse, however, does not come at a loss of generated image quality or semantic understanding of the input text prompt. We evaluate our output on multiple criteria, including CLIP score (Radford et al., 2021) and FID (Heusel et al., 2017). The former is a measure of image-text correspondence, and the latter a measure of image quality and diversity. Our 3B parameter model achieves a CLIP score of 0.32 and an FID score of 7.88 on the COCO (Lin et al., 2014) zero-shot validation benchmark, which compares favorably with that of other large-scale text-to-image models (see Table 2). Our 632M (base) + 268M (super-res) parameter model achieves a state-of-the-art FID score of 6.06 when trained and evaluated on the CC3M (Sharma et al., 2018) dataset, which is significantly lower than all other reported results in the literature (see Table 1). We also evaluate our generations on the PartiPrompts (Yu et al., 2022) evaluation suite with human raters, who find that Muse generates images better aligned with its text prompt 2.7x more often than Stable Diffusion v1.4 (Rombach et al., 2022).
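As a rough illustration of the CLIP alignment score cited above, the sketch below measures image-text correspondence as the cosine similarity between L2-normalized image and text embeddings. The clip_image_encoder and clip_text_encoder callables are placeholders for a pretrained CLIP model; this is not the exact evaluation code used in the paper.

```python
import torch
import torch.nn.functional as F


def clip_score(image: torch.Tensor, caption: str,
               clip_image_encoder, clip_text_encoder) -> float:
    """Cosine similarity between CLIP image and text embeddings (higher = better aligned)."""
    img_emb = F.normalize(clip_image_encoder(image), dim=-1)   # (1, d)
    txt_emb = F.normalize(clip_text_encoder(caption), dim=-1)  # (1, d)
    return (img_emb * txt_emb).sum(dim=-1).item()
```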
Muse generates images that reflect different parts of speech in input captions, including nouns, verbs and adjectives. Furthermore, we present evidence of multi-object property understanding, such as compositionality and cardinality, as well as image style understanding. See Figure 1 for a number of these examples and our website http://muse-model.github.io for more examples. The mask-based training of Muse lends itself to a number of zero-shot image editing capabilities. A number of these are shown in Figure 2, including zero-shot, text-guided inpainting, outpainting and mask-free editing. More details are in Section 3. Our contributions are:

1. We present a state-of-the-art model for text-to-image generation which achieves excellent FID and CLIP scores (quantitative measures of image generation quality, diversity and alignment with text prompts).
2. Our model is significantly faster than comparable models due to the use of quantized image tokens and parallel decoding.
3. Our architecture enables out-of-the-box, zero-shot editing capabilities including inpainting, outpainting, and mask-free editing.

[Figure 3: training pipeline diagram; example text prompt shown in the diagram: "A cat looking at a dog".]
Figure 3. Muse Framework: We show the training pipeline for our model, with the T5-XXL pre-trained text encoder, base model and super-resolution model depicted on the three rows. The text encoder generates a text embedding that is used for cross-attention with image tokens for both base and super-res Transformer layers. The base model uses a VQ Tokenizer that is pre-trained on lower resolution (256 × 256) images and generates a 16 × 16 latent space of tokens. This sequence is masked at a variable rate per sample and then the cross-entropy loss learns to predict the masked image tokens. Once the base model is trained, the reconstructed lower-resolution tokens and text tokens are passed into the super-res model that then learns to predict masked tokens at a higher resolution.

2. Model

Our model is built on a number of components. Here, we provide an overview of each of those components in the order of their training, while relegating many details of the architecture and parameters to the Appendix. Figure 3 provides an overview of the model architecture.

2.1. Pre-trained Text Encoders

Similar to the findings in (Saharia et al., 2022), we find that leveraging a pre-trained large language model (LLM) is beneficial to high-quality image generation. The embeddings extracted from an LLM such as T5-XXL (Raffel et al., 2020) carry rich information about objects (nouns), actions (verbs), visual properties (adjectives), spatial relationships (prepositions), and other properties such as cardinality and composition. Our hypothesis is that the Muse model learns to map these rich visual and semantic concepts in the LLM embeddings to the generated images; it has been shown in recent work (Merullo et al., 2022) that the conceptual representations learned by LLMs are roughly linearly mappable to those learned by models trained on vision tasks. Given an input text caption, we pass it through the frozen T5-XXL encoder, resulting in a sequence of 4096-dimensional language embedding vectors. These embedding vectors are linearly projected to the hidden size of our Transformer models (base and super-res).
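A minimal sketch of this conditioning path is shown below, using the Hugging Face transformers T5 encoder. A smaller public checkpoint stands in for the frozen T5-XXL model, and the projection width (2048) is an assumed hidden size, not the paper's configuration.

```python
import torch
from torch import nn
from transformers import AutoTokenizer, T5EncoderModel

# Smaller public checkpoint as a stand-in for the frozen T5-XXL encoder.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
encoder = T5EncoderModel.from_pretrained("google/flan-t5-base").eval()

# Linear projection from the LLM embedding width to the image transformer width.
proj = nn.Linear(encoder.config.d_model, 2048)  # 2048 is an assumed hidden size

caption = "A sheep in a wine glass."
inputs = tokenizer(caption, return_tensors="pt")
with torch.no_grad():
    text_emb = encoder(**inputs).last_hidden_state  # (1, seq_len, d_model); encoder stays frozen
cond = proj(text_emb)                               # (1, seq_len, 2048), used for cross-attention
```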
2.2. Semantic Tokenization using VQGAN

A core component of our model is the use of semantic tokens obtained from a VQGAN (Esser et al., 2021b) model. This model consists of an encoder and a decoder, with a quantization layer that maps an input image into a sequence of tokens from a learned codebook. We build our encoder and decoder entirely with convolutional layers to support encoding images from different resolutions. The encoder has several downsampling blocks to reduce the spatial dimension of the input, while the decoder has the corresponding number of upsampling blocks to map the latents back into the original image size. Given an image of size H × W, the encoded token map is of size H/f × W/f, with downsampling ratio f. We train two VQGAN models: one with downsampling ratio f = 16 and the other with downsampling ratio f = 8. We obtain tokens for our base model using the f = 16 VQGAN model on 256 × 256 pixel images, thus resulting in tokens with spatial size 16 × 16. We obtain the tokens for our super-resolution model using the f = 8 VQGAN model on 512 × 512 images; the corresponding token map has spatial size 64 × 64. As mentioned in previous work (Esser et al., 2021b), the resulting discrete tokens after encoding capture higher-level semantics of the image, while ignoring low-level noise. Furthermore, the discrete nature of these tokens allows us to use a cross-entropy loss at the output to predict masked tokens in the next stage.

2.3. Base Model

Our base model is a masked transformer (Vaswani et al., 2017; Devlin et al., 2018), where the inputs are the projected T5 embeddings and image tokens. We leave all the text embeddings unmasked, randomly mask a varying fraction of image tokens (see Section 2.6) and replace them with a special [MASK] token (Chang et al., 2022). We then linearly map image tokens into image input embeddings of the required Transformer input/hidden size, along with learned 2D positional embeddings. Following previous transformer architectures (Vaswani et al., 2017), we use several transformer layers, each including a self-attention block, a cross-attention block and an MLP block, to extract features. At the output layer, an MLP is used to convert each masked image embedding to a set of logits (corresponding to the VQGAN codebook size), and a cross-entropy loss is applied with the ground-truth token label as the target. During training, the base model is trained to predict all masked tokens at each step. However, at inference, mask prediction is performed in an iterative manner, which significantly increases quality. See Section 2.8 for details.
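The sketch below illustrates the training objective just described: sample a masking rate (anticipating the arccos-distributed rate of Section 2.6), replace the selected image tokens with [MASK], and apply a cross-entropy loss at the masked positions. The transformer is abstracted as base_model, the codebook size is an assumption, and for simplicity logits are computed at every position rather than only at masked ones.

```python
import math
import torch
import torch.nn.functional as F

CODEBOOK_SIZE = 8192       # assumed VQGAN codebook size
MASK_ID = CODEBOOK_SIZE    # reserved [MASK] id, one past the codebook


def masked_token_loss(image_tokens: torch.Tensor, text_emb: torch.Tensor, base_model):
    """image_tokens: (B, 256) ids from the f=16 VQGAN; text_emb: (B, n_text, d)."""
    B, N = image_tokens.shape

    # Variable masking rate (Section 2.6): r = cos(pi/2 * u) with u ~ U(0, 1)
    # has density (2/pi)(1 - r^2)^(-1/2) and expected value ~0.64.
    u = torch.rand(B, 1)
    r = torch.cos(math.pi / 2 * u)

    # Mask a fraction r of the image tokens; text embeddings are never masked.
    masked = torch.rand(B, N) < r
    inputs = torch.where(masked, torch.full_like(image_tokens, MASK_ID), image_tokens)

    # Predict logits over the codebook, conditioned on the text embeddings.
    logits = base_model(inputs, text_emb)          # (B, N, CODEBOOK_SIZE)

    # Cross-entropy against the ground-truth ids at the masked positions only.
    return F.cross_entropy(logits[masked], image_tokens[masked])
```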
2.4. Super-Resolution Model

[Figure 4: architecture diagram (left) and two example generations (right), shown at low resolution (256 × 256) and high resolution (512 × 512), for the prompts "A high contrast portrait photo of a fluffy hamster wearing an orange beanie and sunglasses holding a sign that says 'Let's PAINT!'" and "A bear riding a bicycle, with a bird perched on the handlebars."]
Figure 4. Super-resolution Model. On the left is shown the architecture of the super-resolution model. Low-resolution tokens are passed into a series of self-attention Transformer layers, and the resulting output embeddings are concatenated with text embeddings extracted from the conditioning text prompt. Following this, cross-attention is applied from these concatenated embeddings to the masked high-resolution tokens; the loss learns to predict these masked tokens conditioned on the low-resolution and text tokens. On the right are shown two examples of the improvement brought about by the super-resolution model.

We found that directly predicting 512 × 512 resolution leads the model to focus on low-level details over large-scale semantics. As a result, we found it beneficial to use a cascade of models: first a base model that generates a 16 × 16 latent map (corresponding to a 256 × 256 image), followed by a super-resolution model that upsamples the base latent map to a 64 × 64 latent map (corresponding to a 512 × 512 image). The super-res model is trained after the base model has been trained.
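A schematic sketch of the latent-map "translation" described above and in Figure 4: low-res token embeddings pass through self-attention, are concatenated with the text embeddings, and are cross-attended to by the masked high-res token sequence. Layer counts, widths and the codebook size are illustrative assumptions, not the paper's configuration, and the text embeddings are assumed to be already projected to the model width.

```python
import torch
from torch import nn


class SuperResSketch(nn.Module):
    """Translate a 16x16 low-res token map into a 64x64 high-res token map."""

    def __init__(self, codebook: int = 8192, d: int = 512, heads: int = 8):
        super().__init__()
        self.low_emb = nn.Embedding(codebook, d)
        self.high_emb = nn.Embedding(codebook + 1, d)  # +1 for the [MASK] id
        self.self_attn = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, heads, batch_first=True), num_layers=2)
        self.cross_attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.to_logits = nn.Linear(d, codebook)

    def forward(self, low_tokens, text_emb, masked_high_tokens):
        # Encode low-res tokens with self-attention, then concat with text embeddings.
        low = self.self_attn(self.low_emb(low_tokens))   # (B, 256, d)
        cond = torch.cat([low, text_emb], dim=1)         # (B, 256 + n_text, d)

        # Masked high-res tokens (queries) attend to the conditioning sequence (keys/values).
        high = self.high_emb(masked_high_tokens)         # (B, 4096, d)
        high, _ = self.cross_attn(high, cond, cond)
        return self.to_logits(high)                      # (B, 4096, codebook)
```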
In contrast with autoregressive approaches, which learn +conditional distributions P(xi|x